---
abstract: 'The dynamics of spiral galaxies derived from a given surface mass density was obtained earlier in a classic paper. We transform the integral involving the singular elliptic function into a compact integral involving only the regular elliptic function. Solvable models are also considered as an expansion basis for rotation curve (RC) data. The result makes the corresponding numerical evaluations easier and an analytic analysis possible. It is applied to the study of the dynamics of the Newtonian system and of MOND as well. Careful treatment is shown to be important in dealing with the cut-off of the input data.'
author:
- |
W. F. Kao[^1]\
Institute of Physics, Chiao Tung University, HsinChu, Taiwan
title: Mass distribution of highly flattened galaxies and modified Newtonian dynamics
---
INTRODUCTION
============
Rotation curve (RC) observations indicate that less than 10 % of the gravitational mass of spiral galaxies can be accounted for by their luminous part. This is the first evidence calling for the existence of unknown dark matter and dark energy. In the meantime, an alternative approach, Modified Newtonian dynamics (MOND) proposed by Milgrom [@milgrom], has been shown to agree with many rotation curve observations [@milgrom; @milgrom1; @milgrom2; @sanders; @sanders1; @sanders2].
Milgrom argues that dark matter is redundant in the approach of MOND. The missing part is instead proposed to arise from the conjecture that the gravitational field deviates from the Newtonian $1/r^2$ form when the field strength $g$ is weaker than a critical value $g_0 \sim 0.9 \times 10^{-8} $ cm s$^{-2}$ [@sanders2].
In MOND the gravitational field is related to the Newtonian gravitational field $g_N$ by the following relation: $$\label{1} g \cdot \mu_0 ({g \over g_0}) =g_N$$ with the function $\mu_0$ considered as a modified inertia function. Milgrom shows that the model with $$\label{2}
\mu_0 (x) = {x \over \sqrt{1+x^2}}$$ agrees with the RC data of many spiral galaxies [@milgrom; @milgrom1; @milgrom2; @sanders; @sanders1; @sanders2; @kao05]. This alternative theory could be compatible with the spatial inhomogeneity of the general relativity theory. Various approaches to deriving the collective effect of MOND have been an active research topic recently [@mond1; @mond2].
Recently, it was shown that a simpler inertial function of the following form [@fam; @fam1] $$\label{03}
\mu_0 (x) = {x \over 1+x}$$ fits the RC data of the Milky Way and NGC3198 better. Indeed, one can show from Eq. (1) that the MOND field strength $g$ is $$\label{04}
g={ \sqrt{{g_N}^2+4g_0g_N} +g_N \over 2} .$$ The model given by Eq. (2) will be denoted as the Milgrom model, while the model (3) will be denoted as the FB model. We will study and compare the results of these two models in this paper.
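As a quick numerical illustration (a sketch, not part of the original derivation), the following Python snippet inverts the defining relation $g\,\mu_0(g/g_0)=g_N$ for both inertial functions and checks the deep-MOND and Newtonian limits; the value of $g_0$ used is only illustrative.

```python
# A minimal sketch of the two inertial functions.  The closed-form inversion for the
# Milgrom form mu_0(x) = x/sqrt(1+x^2) follows from g^4 = g_N^2 (g0^2 + g^2); the FB
# form mu_0(x) = x/(1+x) gives the relation quoted above.  g0 is illustrative only.
import numpy as np

g0 = 1.2e-8  # cm s^-2, illustrative value of the critical acceleration

def g_milgrom(gN):
    """MOND field for mu_0(x) = x/sqrt(1+x^2)."""
    return np.sqrt(0.5 * (gN**2 + np.sqrt(gN**4 + 4.0 * gN**2 * g0**2)))

def g_fb(gN):
    """MOND field for mu_0(x) = x/(1+x): g = (sqrt(gN^2 + 4 g0 gN) + gN)/2."""
    return 0.5 * (np.sqrt(gN**2 + 4.0 * g0 * gN) + gN)

for f in (g_milgrom, g_fb):
    # deep-MOND regime: g -> sqrt(g0 gN)
    assert np.isclose(f(1e-14), np.sqrt(g0 * 1e-14), rtol=1e-2)
    # Newtonian regime: g -> gN
    assert np.isclose(f(1e-2), 1e-2, rtol=1e-2)
```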
Accumulated evidence shows that the theory of MOND is telling us a very important message. Either the Newtonian force law requires modification in the weak field limit, or the theory of MOND may just represent some collective effect of the cosmic dark matter awaiting discovery. In both cases, the theory of MOND deserves more attention in order to reveal the complete underlying physics hinted at by these successful fitting results. The known problem with momentum conservation has been resolved by alternative covariant theories [@mh; @bek; @mond1; @g05; @mond2; @romero; @kao06].
It was also pointed out that the proposed inertial function $\mu_0$ could be a functional of the whole Newtonian field and could be more complicated than the ones shown earlier [@bek84; @brada; @milgrom3]. Successful fitting with the rotation curves of many existing spiral galaxies indicates, however, that the most important physics has probably already been captured by the simple inertial functions shown in this paper. One probably should take these models more seriously in order to generate more clues to the final theory.
If the theory of MOND, with some inertial function $\mu_0(x)$, is the final theory of gravitation without any dark matter in large scale systems, one should be able to derive the precise mass distribution from the measured RC provided that the distance $D$ is known. Earlier on, the mass distribution derived from the rotation curve measurement could only be used to predict how much dark matter is required in order to save the Newtonian force law. In the case of MOND, one should be able to plot a dynamical profile of the $\Gamma$ function ${\Gamma}(r) \equiv M(r)/L(r)$ with the measured luminosity function $L(r)$ following the theory of MOND. The $\Gamma$ function should then provide us useful information about the detailed distribution of stars with different luminous spectra obeying the well-known $L \sim
M^\alpha$ relation.
The dynamical profile $\Gamma(r)$ can be treated as important information regarding the detailed distribution of stars with different mass-luminosity relations within each spiral galaxy [@jk]. Even though one still does not know a reliable way to derive the inertial function $\mu_0(x)$, the successful models shown earlier are useful candidates for the theory of MOND. It would be interesting to investigate the dependence of the $M/L$ profiles on these models. Hopefully, information from these comparisons will provide us with clues to the final theory.
Therefore, we will first derive the mass density for the Milgrom and Famaey-Binney (FB) models in detail. Certain boundary constraints need to be relaxed for the asymptotically flat rotation curve boundary condition, which is treated differently in the original derivation of these formulae [@toomre]. It is also important to compare the effects of the exterior contribution between the Milgrom and FB models for a more precise test of the fitting application.
We will first review briefly how to obtain the surface mass density $\mu(r)$ from a given Newtonian gravitational field $g_N$ with the help of the elliptic function $K(r)$ in section II. The integral involving the Bessel functions is also derived in detail for heuristic reasons in this section.
In addition, a series of integrable models in the case of the Newtonian model, the Milgrom model, as well as the FB model of MOND will be presented in section III. These solvable models will be shown to be a good expansion basis for the RC data of spiral galaxies. In practice, this expansion method will help us better understand the analytic properties of the spiral galaxies.
We will also convert the formula shown in section II into a simpler form, making numerical integration more accessible, in section IV. The apparently singular elliptic function $K(r)$ is also converted to combinations of the regular elliptic function $E(r)$ by a properly managed integration by parts.
In section V, one derives the interior mass contribution $\mu(r<R)$ from the possibly unreliable data $v(r>R)$ both in the case of MOND and in Newtonian dynamics. The singularity embedded in the useful formula is taken care of with great caution. Similarly, one derives the formulae relating $g_N$ to a given $\mu(r)$ in section VI. One also presents a simple model of the exterior mass density $\mu(r >R)$ in this section. The corresponding result in the theory of MOND is also presented. Finally, we draw some concluding remarks in section VII.
Newtonian dynamics of a highly flattened galaxy
===============================================
Given the surface mass density $\mu(r)$ of a flattened spiral galaxy, one can perform the Fourier-Bessel transform to convert $\mu(r)$ to $\mu(k)$ in $k$-space via the following equations [@arfken] $$\begin{aligned}
\mu(r)&=& \int_0^\infty k dk \; \mu(k) J_0(kr) , \label{3} \\
\mu(k)&=& \int_0^\infty r dr \; \mu(r) J_0(kr) \label{4}\end{aligned}$$ with $J_m(x)$ the Bessel functions. Note that the closure relation $$\label{5}
\int_0^\infty k dk J_m(kx) J_m(kx') = {1 \over x} \delta(x-x')$$ can be used to convert Eq. (\[4\]) to Eq. (\[3\]) and vice versa with a similar formula integrating over $dr$. The Green function of the equation $$\nabla^2 G (x)= -4 \pi G \delta (r) \delta(z)$$ can be read off from the identity $${1 \over \sqrt{r^2 +z^2}} = \int_0^\infty dk \; \exp[-k |z|]
J_0(kr) .$$ Therefore, the Newtonian potential $\phi_N$ can be shown to be [@toomre; @kent] $$\phi_N (r,z) = 2 \pi G \int_0^\infty dk \; \mu(k)
J_0(kr)\exp[-k |z|]$$ with a given surface mass density $\mu(r)$. It follows that $$\label{9}
g_N(r)= - \partial_r \phi_N (r, z=0) =2 \pi G \int_0^\infty k dk
\; \mu(k) J_1(kr) .$$ The Newtonian gravitational field $g_N(r)= v_N^2(r) /r$ can be readily derived with a given function of rotation velocity $v_N(r)$. Here one has used the recurrence relation $J_1(x)=-J_0'(x)$ in deriving Eq. (\[9\]). Furthermore, from the closure relation (\[5\]) of Bessel function, one can show that the function $g_N(r)$ satisfies the following conversion equation $$g_N(r) = \int_0^\infty k dk \int_0^\infty r' dr' \; g_N(r')
J_1(kr) J_1(kr') . \label{10}$$ Therefore, one has $$\label{11}
\mu(k) = {1 \over 2 \pi G} \int_0^\infty r dr \; g_N(r) J_1(kr)
.$$
Hence, with a given surface mass density $\mu(r)$ for a flattened spiral galaxy, one can show that $$\label{121}
\mu(r)={1 \over 2 \pi G} \int_0^\infty k dk \int_0^\infty dr' \;
r' g_N(r') J_0(kr) J_1(kr') .$$
Assuming that $\lim_{r \to 0} rg_N(r) \to 0$ and $\lim_{r \to
\infty} rg_N(r) < \infty $ hold as the boundary conditions, one can perform an integration-by-part and show that $$\mu(r)={1 \over 2 \pi G} \int_0^\infty dk \int_0^\infty dr' \;
\partial_{r'} [r' g_N(r')] J_0(kr) J_0(kr')$$ with the help of the asymptotic property of Bessel function: $J_m(r \to \infty) \to 0$. Here we have used the recurrence relation $J_1(x)= -J'_0(x)$ in deriving above equation.
Note that the boundary terms can be eliminated under the prescribed boundary conditions. In fact, the limit $\lim_{r \to \infty} rg_N(r) < \infty $ is slightly different from the original boundary conditions given in [@toomre]. The difference is intended to make the system consistent with the flattened RC measurements, which imply that $\lim_{r \to \infty} rg_N(r) = v^2_N(r\to \infty) \to $ constant $< \infty$. In summary, one only needs to modify the asymptotic boundary condition in order to eliminate the surface term during the integration by parts. We have relaxed this boundary condition to accommodate any system with a flat rotation curve.
One can further define $$\label{14}
H(r, r')=\int_0^\infty dk J_0(kr) J_0(kr')$$ and write the function $\mu(r)$ as $$\mu(r)= {1 \over 2 \pi G} \int_0^\infty dr' \; \partial_{r'} [r'
g_N(r')] H(r, r').$$ Note that the function $H(r, r')$ can be shown to be proportional to the elliptic function $K(x)$: $$\label{161}
H(r, r')= {2
\over \pi r_>}K({r_< \over r_>})$$ with $r_>$ ($r_<$) the larger (smaller) of $r$ and $r'$.
The proof is quite straightforward. Since some properties of the Bessel functions are very important in the dynamics of the spiral disk, as well as of many disk-like systems, we will show the proof briefly for heuristic reasons. One purpose of this derivation is to clarify that different textbooks use different definitions of the elliptic functions. Confusion may arise if the formulae are applied in the wrong way.
Note first that the Bessel function $J_0(x)$ has an integral representation [@arfken; @bessel] $$\label{171}
J_0(x)={1 \over \pi} \int_0^\pi \cos [x \sin \theta ]d\theta .$$ Given the delta function represented by the plane wave expansion, $$\delta(x)= {1 \over 2 \pi} \int_{-\infty}^\infty dk \exp [ikx] ,$$ one can show, with the integral representation of $J_0$, that $$\label{19}
\int_0^\infty dx \cos [kx] \; J_0(x) = {1 \over \sqrt{1-k^2}}$$ for $k <1$. In contrast, the above integral vanishes for $k >1$. Therefore, one can apply the above equation to show that $$\label{151}
\int_0^\infty dk J_0(kx) J_0(k) = {2 \over \pi} \int_0^{\pi /2}
{d \theta \over \sqrt{1-x^2 \sin ^2 \theta}} = {2 \over \pi} K(x)$$ with the help of Eq. (\[171\]) again. The last equality in the above equation follows directly from the definition of the elliptic function $K$.
There is an important remark here. Note that the elliptic functions $E(k)$ and $K(k)$ used in this paper, and Ref. [@toomre; @arfken] as well, are defined as $$\begin{aligned}
\label{20}
K(x) &\equiv & \int_0^{\pi/2} (1-x^2 \sin ^2 \theta)^{-1/2} d
\theta ,\\ \label{21} E(x) &\equiv & \int_0^{\pi/2} (1-x^2 \sin ^2
\theta)^{1/2} d \theta\end{aligned}$$ which differ from the conventions used in certain textbooks. Some textbooks, and similarly some computer programs, prefer to define the above integrals as $K(x^2)$ and $E(x^2)$ instead. In fact, one can check the differential equations satisfied by $E(x)$ and $K(x)$, which will be shown explicitly in section IV. Before adopting the equations presented in any text, one should check the definition of these elliptic functions carefully for consistency.
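Since the remark above concerns computer programs, a concrete example may help: in SciPy the complete elliptic integrals take the parameter $m=x^2$, so the $K(x)$ and $E(x)$ of Eqs. (\[20\])-(\[21\]) correspond to `ellipk(x**2)` and `ellipe(x**2)`, not `ellipk(x)`. A minimal check (a sketch, not part of the original text):

```python
# Convention check: SciPy's ellipk/ellipe take the parameter m = x^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

def K_def(x):
    """K(x) exactly as defined in Eq. (20)."""
    return quad(lambda t: 1.0 / np.sqrt(1.0 - x**2 * np.sin(t)**2), 0.0, np.pi / 2)[0]

def E_def(x):
    """E(x) exactly as defined in Eq. (21)."""
    return quad(lambda t: np.sqrt(1.0 - x**2 * np.sin(t)**2), 0.0, np.pi / 2)[0]

for x in (0.1, 0.5, 0.9):
    assert np.isclose(K_def(x), ellipk(x**2))   # argument must be x^2, not x
    assert np.isclose(E_def(x), ellipe(x**2))
```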
After a proper rescaling $k \to k'r'$ and writing $x=r/r' <1$, one can readily prove the assertion (\[161\]).
Note that the series expansion of the elliptic function $K(x)$ is $$\label{A}
K(x) = {\pi \over 2} \sum_{n=0}^\infty C_n^2x^{2n}$$ with $C_0=1$ and $C_n = (2n-1)!!/(2n)!!$ for all $n \ge 1$. Therefore, one can show that it agrees with the series expansion of the hypergeometric function $F({1 \over 2}, {1 \over 2}, 1;
{r_<^2 \over r_>^2})$, up to a factor $\pi /2$. Indeed, one has: $$\label{B}
F(a,b,c; {x^2}) = \sum_{n=0}^\infty {(a)_n (b)_n \over (c)_n n!}
x^{2n},$$ with $(a)_n \equiv \Gamma(n+a)/\Gamma(a)$. It is straightforward to show that the expansion coefficients of Eq.s (\[A\]) and (\[B\]) agree term by term. Therefore, the derivation shown above agrees with the result shown in [@bessel; @toomre]: $$H(r,r')={1 \over r_>} F({1 \over 2}, {1 \over 2}, 1; {r_<^2 \over
r_>^2}) = {2 \over \pi r_>}K({r_< \over r_>}).$$ with $r_>$ ($r_<$) the larger (smaller) of $r$ and $r'$.
As a result, one has $$\label{17}
\mu(r)={1 \over 2 \pi G} \int_0^\infty \partial_{r'} [v_N^2(r')]
\; H(r, r')dr'$$ given the identification $g_N = v_N^2/r$.
Some Simple integrable models
=============================
One can show that $\mu(r)$ is integrable for a Newtonian velocity described by $$\label{vN2}
v_N^2(r) = {C_0^2 a \over \sqrt{ r^2 + a^2}},$$ with $C_0$ and $a$ some constants of parametrization. Note that the velocity function $v_N(r)$ vanishes at spatial infinity, while $v_N(0) \to C_0$ takes a non-vanishing value. One notes that the observed velocity at $r=0$ is expected to be zero from symmetry considerations. We will come back to this point later for a resolution. Following Eq. (\[121\]), one can write $$\label{12}
\mu(r)={1 \over 2 \pi G} \int_0^\infty k dk \Lambda(k) J_0(kr)$$ with $\Lambda(k)$ defined as $$\label{lamdak}
\Lambda(k) \equiv \int_0^\infty {C_0^2 a \over \sqrt{r^2 +
a^2}}J_1(kr)dr.$$ To evaluate $\Lambda(k)$, one will need the following formula: $$\label{31}
\int_0^\infty dk \exp [-kx] J_0(k)={1 \over \sqrt{1+ x^2}}$$ which follows from Eq. (\[19\]) by replacing $x \to ix$ with the help of an analytic continuation. In addition, writing $ \exp
[-kx]\, dk$ as $-d (\exp [-kx])/x$ and performing an integration-by-part, one can derive $$\label{311}
\int_0^\infty dk \exp [-kx] J_1(k)=1- {x \over \sqrt{1+ x^2}}.$$ Here we have used the identity $J_0'=-J_1$ and the fact that $J_0(0)=1$. With the help of above equation, one can show that $$\int_0^\infty dk (1-\exp [-2k]) J_1(kx)= {2 \over x \sqrt{x^2+4}
}.$$ Multiplying both sides of the above equation by $J_1(k'x)x\,dx$ and integrating, one obtains $$\int_0^\infty dx J_1(kx) {2 \over \sqrt{x^2+4}} = {1- \exp[-2k]
\over k }$$ with the help of the closure relation (\[5\]). After a redefinition of parameters $k \to ka/2$ and $x \to 2r/a$, one can show that $$\int_0^\infty dr J_1(kr) {a \over \sqrt{r^2+a^2}} = {1- \exp[-ak]
\over k }$$ and hence $$\Lambda(k)= {C_0^2 \over k} \left ( 1-\exp [-ak] \right).$$ In addition, equation (\[31\]) can also be written as $$\label{31_1}
\int_0^\infty dk \exp [-kx] J_0(ka)={1 \over \sqrt{ x^2+a^2}}$$ after proper reparametrization. Therefore, one has $$\begin{aligned}
\mu(r)&=&{C_0^2 \over 2 \pi G} \int_0^\infty dk (1-\exp [-ak])
J_0(kr) \nonumber \\
&=& {C_0^2 \over 2 \pi G} \left [ {1 \over r} - {1 \over \sqrt{
r^2 + a^2}} \right ]. \label{38_05}\end{aligned}$$ Hence this model is integrable as promised. Moreover, with a small $C_0$ and a properly adjusted $a$, this model can describe the velocity profile reasonably well in the case of MOND.
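As a quick check of the identities used in this derivation, the following sketch (not part of the original text) verifies Eqs. (\[31\]) and (\[311\]) numerically; both integrals converge absolutely for $x>0$.

```python
# Numerical check of the Laplace-Bessel identities, Eqs. (31) and (311).
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def lhs_J0(x):
    return quad(lambda k: np.exp(-k * x) * j0(k), 0.0, np.inf)[0]

def lhs_J1(x):
    return quad(lambda k: np.exp(-k * x) * j1(k), 0.0, np.inf)[0]

for x in (0.3, 1.0, 3.0):
    assert np.isclose(lhs_J0(x), 1.0 / np.sqrt(1.0 + x**2), atol=1e-6)
    assert np.isclose(lhs_J1(x), 1.0 - x / np.sqrt(1.0 + x**2), atol=1e-6)
```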
One can eliminate the non-vanishing constant by two different methods. The first method is simply to subtract two integrable $v_N^2(r, C_0, a_i)$, namely, to define the new Newtonian velocity as $$\label{vN22}
v_{N0}^2(r) = C_0^2\left( {a_1 \over \sqrt{ r^2 + a_1^2}} -{a_2
\over \sqrt{ r^2 + a_2^2}} \right ),$$ with $C_0$ and $a_1 > a_2$ some constants of parameterization. This new velocity function is also integrable due to the linear dependence of Eq. (\[17\]) on the function $v_N^2(r)$. In addition, it vanishes at $r=0$ and approaches $0$ at spatial infinity $r \to \infty$. Therefore, one has $$\begin{aligned}
\mu_0(r)&=& {C_0^2 \over 2 \pi G} \left [ {1 \over \sqrt{ r^2 +
a_2^2}} - {1 \over \sqrt{ r^2 + a_1^2}} \right ] \label{38_051}\end{aligned}$$ directly from Eq. (\[38\_05\]). In addition, one can also show that [@toomre] higher derivative models defined by $$v_{Nn}^2(r) =-C_n^2 (-{ \partial \over \partial a^2})^n {a \over
\sqrt{ r^2 + a^2}}= \sum_{k=1}^n {C^n_k (2k-1)! (2n-2k)!
\over 2^{2n-1} (k-1)!(n-k)! (2k-1)
}a^{1-2k}(r^2 + a^2)^{-n+k-1/2}-{(2n)! \over 2^{2n} n! }a (r^2 +
a^2)^{-n+1/2}$$ are also integrable and give the mass density as $$\begin{aligned}
\mu_n(r)&=& -{C_n^2 \over 2 \pi G} (-{ \partial \over \partial
a^2})^n \left [ {1 \over r} - {1 \over \sqrt{ r^2 + a^2}} \right
]={C_n^2 \over 2 \pi G} (-{ \partial \over \partial a^2})^n \left
[ {1 \over \sqrt{ r^2 + a^2}} \right ] = {C_n^2 \over 2 \pi G}
{(2n)! \over 2^{2n} n! } (r^2 + a^2)^{-n-1/2}.\end{aligned}$$ Note that both $v_{Nn}^2$ and $\mu_n(r)$ are in fact appropriate combinations of functions of the form $(r^2 + a^2)^{-k-1/2}$. For example, given the Newtonian velocity $$v_{N1}^2(r)
={C_1^2 r^2 \over a ( r^2 + a^2)^{3/2}}= C_1^2 \left [ { 1 \over a
( r^2 + a^2)^{1/2}} - { a \over ( r^2 + a^2)^{3/2}} \right ],
\label{42_05}$$ the corresponding mass density will be given by $$\label{tmmu1}
\mu_1(r)= {C_1^2 \over 2 \pi G( r^2 + a^2)^{3/2}}.$$ For convenience, we have absorbed a common factor $1/2$ into $C_1^2$. Note that $v_{N1}^2(r \to \infty) \to 0$ and $v_{N1}(r=0)=0$ in this model.
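The $n=1$ relations quoted above can be checked symbolically; the sketch below (not part of the original text) verifies that one derivative with respect to $a^2$, taken with the sign that keeps both quantities positive and with the factor $1/2$ absorbed into $C_1^2$ as stated, reproduces Eqs. (\[42\_05\]) and (\[tmmu1\]).

```python
# Symbolic check of the n = 1 higher derivative mode.
import sympy as sp

r, u = sp.symbols('r u', positive=True)        # u plays the role of a^2
a = sp.sqrt(u)

vN0_over_C2 = a / sp.sqrt(r**2 + u)            # v_N0^2 / C_0^2
mu0_bracket = 1 / r - 1 / sp.sqrt(r**2 + u)    # bracket of Eq. (38_05)

# one derivative with respect to a^2, times 2 to absorb the common factor 1/2
vN1_over_C2 = sp.simplify(2 * sp.diff(vN0_over_C2, u))
mu1_bracket = sp.simplify(2 * sp.diff(mu0_bracket, u))

assert vN1_over_C2.equals(r**2 / (a * (r**2 + u)**sp.Rational(3, 2)))   # Eq. (42_05)
assert mu1_bracket.equals((r**2 + u)**sp.Rational(-3, 2))               # Eq. (tmmu1)
```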
Newtonian Model
---------------
Consider the case of the Newtonian model in which the observed velocity $v$ and the Newtonian velocity $v_N$ are identical. This model is known to require the presence of dark matter [@gentile]. Since the velocity has to vanish at $r=0$ and approach a constant at spatial infinity, we will show that adding a constant term to $v_N^2$ resolves both requirements at the same time. Indeed, another way to eliminate the non-vanishing velocity at $r=0$ is to introduce a constant velocity by noting that $$\label{12}
\mu(r)={1 \over 2 \pi G} \int_0^\infty k dk \Lambda(k)
J_0(kr)={C_0^2 \over 2 \pi G r}$$ with $\Lambda(k)$ defined as $$\label{lamdak}
\Lambda(k) \equiv C_0^2 \int_0^\infty J_1(kr)dr={C_0^2 \over k}.$$ Here we have used the identity $$\label{Jnkr}
\int_0^\infty dk J_n(kr)={1 \over r}$$ which follows directly from Eq. (\[31\]-\[311\]).
Therefore, one can show that the Newtonian velocity given by the form $$\label{vN2N}
v_0^2(r)= v_{N0}^2(r) =C_0^2\left[ 1- { a \over \sqrt{ r^2 + a^2}}
\right ]$$ is induced by the mass density of the following form $$\begin{aligned}
\mu_0(r)&=& {C_0^2 \over 2 \pi G} {1 \over \sqrt{ r^2 + a^2}}.
\label{38_05N}\end{aligned}$$ Here $C_0$ and $a$ are some constants of parametrization. Note that the Newtonian velocity (\[vN2N\]) vanishes at $r=0$ and $v_N \to C_0$ at spatial infinity, as promised. Therefore, this velocity profile follows the observed RC of the spirals, and the model can be used to simulate the dynamics of spiral galaxies that require the presence of dark matter. We will call this model, with $v=v_N$, the Newtonian model; it normally requires the existence of dark matter.
This zero mode is, however, the only integrable mode in this family with a non-vanishing asymptotic velocity. Successively differentiating $v_N^2$ in this model simply turns off the constant term $C_0^2$ in Eq. (\[vN2N\]). Therefore, higher derivative modes derived from further differentiation of the Newtonian velocity $v_N^2$ with respect to $-a^2$, $$v_{n}^2 \equiv { C_n^2 \over C_0^2 }(-\partial_{a^2})^n v_0^2=
C_n^2 (-\partial_{a^2})^n \left[ { a \over \sqrt{ r^2 + a^2}} \right
]$$ will simply take away the leading constant term from the higher order modes. Therefore, velocity profiles $v_n(r)$ will be quite different from $v_0$ in this case. Note again that the velocity $v_n$ is induced by the corresponding mass density $$\begin{aligned}
\mu_n(r) = {C_n^2 \over 2 \pi G} {(2n-1)!! \over 2^n } (r^2 +
a^2)^{-n-1/2}.\end{aligned}$$ One of the advantages of these well-behaved smooth velocity functions is that they can be used as an expansion basis for the simulation of velocity profiles. Thanks to the linear dependence of the mass density function $\mu(r)$ on $v_N^2(r)$, one can freely combine integrable velocity modes to obtain all possible combinations of integrable models. For example, the model with $$v^2(r) = v_N^2(r)= \sum_{i,j} v_{0}^2(r,C_{0i}, a_j) \equiv
\sum_{i,j} C_{0i}^2\left[ 1- { a_j \over \sqrt{ r^2 + a_j^2}}
\right ]$$ is integrable and can be shown to be derived by the mass density $$\mu(r)= \sum_i \mu_0(r, C_{0i}, a_j) \equiv \sum_i {C_{0i}^2 \over
2 \pi G} {1 \over \sqrt{ r^2 + a_j^2}}$$ with $C_{0i}$ and $a_j$ all constants of parametrization. Here the summation over $i,j$ runs over all possible modes with different spectra described by $C_{0i}$ and $a_j$. This velocity vanishes at $r=0$ and goes to $$v^2(r)\to \sum_{i,j} C_{0i}^2$$ at spatial infinity. One can also add higher derivative terms $v_n^2(r)$ to the velocity profile $v^2(r)$. Since the higher derivative velocities $v_n$ vanish both at $r=0$ and at spatial infinity, these additional terms will not affect the behavior of $v^2$ at $r=0$ or its asymptotic value. Therefore, one needs to keep at least one zeroth order term in order for $v$ to be compatible with the RC data.
To be more specific, one can consider the model $$v^2(r) = v_N^2(r)= \sum_{i,j,n} v_{n}^2(r,C_{ni}, a_j)\equiv
\sum_{i,j} C_{0i}^2\left[ 1- { a_j \over \sqrt{ r^2 + a_j^2}}
\right ]+ \sum_{k,l,n} C_{nk}^2 (-\partial_{a_l^2})^n \left[ { a_l
\over \sqrt{ r^2 + a_l^2}} \right ]$$ which can be derived from the mass density $$\mu(r)= \sum_i \mu_n(r, C_{ni}, a_j) \equiv \sum_{i,j} {C_{0i}^2
\over 2 \pi G} {1 \over \sqrt{ r^2 + a_j^2}} + \sum_{n,k,l}
{C_{nk}^2 \over 2 \pi G} {(2n-1)!! \over 2^n } (r^2 +
a_l^2)^{-n-1/2}$$ with $C_{0i}$ and $a_j$ all constants of parameterization. Here the summation over $n$ runs over all $n \ge 1$. The velocity of these models vanishes at $r=0$ and goes to $$v^2(r)\to \sum_{i,j} C_{0i}^2$$ at spatial infinity. Therefore, these models turn out to be a good expansion basis for any RC data in Newtonian dynamics.
In practice, one may expand $v^2$ in these modes in order to analyze the RC in terms of this basis, as sketched below. This makes the analytical understanding of the spiral galaxies more transparent. The properties of each mode are easy to understand because the modes are integrable. The corresponding coefficients $C_{ni}$ and $a_j$ determine the contribution of each mode to a given galaxy. One will then be able to construct tables of the corresponding coefficients of each mode for spiral galaxies. Hopefully, this expansion method, originally developed in [@toomre], will provide us with a new way to look at the major dynamics of the spiral galaxies.
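As an illustration of this fitting strategy, the following sketch expands a rotation curve in two zero modes with a standard least-squares routine; the data and parameter values are purely synthetic and hypothetical.

```python
# Fit a (synthetic) rotation curve with the zero-mode basis of the Newtonian model.
import numpy as np
from scipy.optimize import curve_fit

def v2_basis(r, C1, a1, C2, a2):
    """Two zero modes of the Newtonian model, Eq. (vN2N)."""
    return (C1**2 * (1.0 - a1 / np.sqrt(r**2 + a1**2))
            + C2**2 * (1.0 - a2 / np.sqrt(r**2 + a2**2)))

rng = np.random.default_rng(0)
r = np.linspace(0.5, 30.0, 60)                    # radii in arbitrary units
v2_true = v2_basis(r, 180.0, 2.0, 120.0, 8.0)     # synthetic "observed" v^2
v2_obs = v2_true * (1.0 + 0.02 * rng.standard_normal(r.size))

popt, _ = curve_fit(v2_basis, r, v2_obs, p0=[150.0, 1.0, 100.0, 5.0])
# each fitted pair (C_i, a_i) then contributes C_i^2/(2 pi G sqrt(r^2 + a_i^2))
# to mu(r), term by term, via Eq. (38_05N)
```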
Milgrom model
-------------
For the case of Milgrom model (2), the Newtonian velocity $v_N$ and the observed velocity $v$ are related by the following equation $$\label{Mv}
v^2(r) = \left [{{V_N^4(r) + \sqrt{V_N^8(r) +4 V_N^4(r)g_0^2r^2}
\over 2 } }\right ]^{1/2}.$$ Therefore, one can show that a galaxy with a rotation curve given by $$v_0^4(r)= {C_0^4a^2 \over 2( r^2 + a^2)} \left \{ 1 + \sqrt{ 1+ {
4g_0^2 r^2 ( r^2 + a^2) \over C_0^4a^2 } } \right \}$$ is the corresponding velocity induced by the Newtonian velocity $$\label{vN2}
v_{N0}^2(r) = {C_0^2 a \over \sqrt{ r^2 + a^2}}.$$ Here $C_0$ and $a$ are some constants of parametrization. Therefore, the corresponding mass density is given by Eq. (\[38\_05\]) $$\mu_0(r)= {C_0^2 \over 2 \pi G} \left [ {1 \over r} - {1 \over \sqrt{
r^2 + a^2}} \right ].$$ Note that $v_0^4$ approaches a constant $g_0 C_0^2a$ at spatial infinity. This asymptotically flat pattern of the velocity is compatible with many observations of the spirals. This velocity does not, however, vanish at the origin. Indeed, one can show that $v(0) \to C_0 \ne 0$. Therefore this would not be a good expansion basis for the physical spirals. The non-vanishing behavior of $v$ at $r=0$ can be cured by considering the refined model (\[vN22\]): $$v_{N0}^2(r) = C_0^2\left( {a_1 \over \sqrt{ r^2 + a_1^2}} -{a_2 \over
\sqrt{ r^2 + a_2^2}} \right )$$ with $C_0$, $a_1>a_2$ some constants of parameterizations. The corresponding velocity function $v$ can be shown to be $$v_0^4(r)= {C_0^4 \over 2} \left [ { a_1 \over \sqrt{ r^2 +
a_1^2}}- { a_2 \over \sqrt{ r^2 + a_2^2}} \right ]^2 \left \{ 1 +
\left [{ 1+ { 4g_0^2 r^2 ( r^2 + a_1^2) ( r^2 + a_2^2) \over C_0^4
\left (a_1\sqrt{ r^2 + a_2^2}- a_2\sqrt{ r^2 + a_1^2} \right )^2 }
} \right ]^{1/2} \right \}.$$ This new velocity function $v$ is hence induced by the Newtonian velocity (\[vN22\]). In addition, note that $v^4$ approaches a constant $g_0 C_0^2(a_1-a_2)$ at spatial infinity and vanishes at $r=0$. This agrees with the main feature of the observed asymptotically flat rotation curves of many spirals. As a result, the corresponding mass density of this model is given by Eq. (\[38\_051\]): $$\mu_0(r)= {C_0^2 \over 2 \pi G} \left [ {1 \over \sqrt{ r^2 +
a_2^2}} - {1 \over \sqrt{ r^2 + a_1^2}} \right ].$$ In addition, a model with a velocity, in the case of Milgrom model, of the form $$\label{tmvn1}
v_1^4 = { C_1^4 r^4 + \sqrt{C_1^8 r^8 + 4 C_1^4 r^6 g_0^2
a^2(r^2+a^2)^3 } \over 2 a^2(r^2+a^2)^3}$$ can be shown to be induced by the Newtonian velocity of the following form given by Eq. (\[42\_05\]): $$v_{N1}^2(r)
={C_1^2 r^2 \over a ( r^2 + a^2)^{3/2}} . \nonumber$$ Therefore, this model is derived by the mass density (\[tmmu1\]): $$\nonumber
\mu_1(r)= {C_1^2 \over 2 \pi G( r^2 + a^2)^{3/2}}.$$ Note that $v_1^2(r \to \infty) \to C_1 \sqrt{g_0/a}$ and $v_1(r=0)=0$ in this Milgrom model. In addition, the corresponding Newtonian velocity $v_{N1}(r)$ vanishes both at $r=0$ and as $r \to \infty$. Therefore, this model also appears to be a realistic model in agreement with the observed asymptotically flat rotation curves.
Note again that further differentiation of the Newtonian velocity $v_N^2$ with respect to $-a^2$ generates integrable higher derivative models: $$v_{Nn}^2 \equiv
C_n^2 (-\partial_{a^2})^n \left[ { a \over \sqrt{ r^2 + a^2}} \right
].$$ This velocity is generated by the mass density $$\begin{aligned}
\mu_n(r) = {C_n^2 \over 2 \pi G} {(2n-1)!! \over 2^n } (r^2 +
a^2)^{-n-1/2}.\end{aligned}$$
One of the advantages of these well-behaved smooth velocity functions is that they can be used as an expansion basis for the simulation of velocity profiles. Thanks to the linear dependence of the mass density function $\mu(r)$ on $v_N^2(r)$, one can freely combine integrable velocity modes to obtain all possible combinations of integrable models. For example, the model with $$v_N^2(r)= \sum_{i,j} v_{N0}^2(r,C_{0i}, a_j, b_j) \equiv
\sum_{i,j} C_{0i}^2\left[{ a_j \over \sqrt{ r^2 + a_j^2}}- { b_j
\over \sqrt{ r^2 + b_j^2}} \right ]$$ is integrable and can be shown to be derived by the mass density $$\mu(r)= \sum_i \mu_0(r, C_{0i}, a_j) \equiv \sum_i {C_{0i}^2 \over
2 \pi G} \left [ {1 \over \sqrt{ r^2 + b_j^2}} - {1 \over \sqrt{
r^2 + a_j^2}} \right ]$$ with $C_{0i}$ and $a_j>b_j$ all constants of parametrization. The velocity $v_N^2$ also vanishes at $r=0$ and goes to $$v_N^2(r)\to \sum_{i,j} C_{0i}^2 {a_j-b_j \over r}$$ at spatial infinity. This will in turn make the corresponding observed Milgrom velocity $v^2$ approach the asymptotic value $v^2_\infty \to [\sum_{i,j}C_{0i}^2g_0 (a_j-b_j)]^{1/2}$. One can also add higher derivative terms $v_{Nn}^2(r)$ to the velocity profile $v_N^2(r)$. Since the higher derivative velocities $v_n$ go to zero faster than the zeroth order term at spatial infinity, these additions will not affect the asymptotic behavior of $v^2$. Therefore, the leading order terms determine the asymptotic value of $v$. To be more specific, one can consider the model $$v_N^2(r)= \sum_{i,j,n} v_{n}^2(r,C_{ni}, a_j)\equiv \sum_{i,j}
C_{0i}^2\left[ { a_j \over \sqrt{ r^2 + a_j^2}}- { b_j \over
\sqrt{ r^2 + b_j^2}} \right ]+ \sum_{k,l,n} C_{nk}^2
(-\partial_{a_l^2})^n \left[ { a_l \over \sqrt{ r^2 + a_l^2}}
\right ]$$ which can be derived from the mass density $$\mu(r)= \sum_i \mu_n(r, C_{ni}, a_j) \equiv \sum_{i,j} {C_{0i}^2
\over 2 \pi G} \left [ {1 \over \sqrt{ r^2 + b_j^2}} - {1 \over
\sqrt{ r^2 + a_j^2}} \right ] + \sum_{n,k,l} {C_{nk}^2 \over 2 \pi
G} {(2n-1)!! \over 2^n } (r^2 + a_l^2)^{-n-1/2}$$ with $C_{0i}$ and $a_j$ all constants of parametrization. Note again that the velocity $v^2$ of these models also vanishes at $r=0$, while $$v_N^2(r)\to \sum_{i,j} C_{0i}^2 {a_j-b_j \over r},$$ corresponding to $$v^2(r)\to \left [g_0 \sum_{i,j} C_{0i}^2 {(a_j-b_j)} \right ]^{1/2}$$ at spatial infinity. Therefore, these models turn out to be a good expansion basis for the $v_N^2(r)$ obtained from any RC data in the Milgrom model.
In practice, one may convert the RC data from $v$ to $v_N$ following Eq. (\[Mv\]) and then expand the resulting $v_N^2$ in these modes in order to analyze the RC in terms of this basis, as sketched below. This makes the analytical understanding of the spiral galaxies more transparent. The properties of each mode are easy to understand because the modes are integrable. The corresponding coefficients $C_{ni}$ and $a_j$ determine the contribution of each mode to a given galaxy. One will then be able to construct tables of the corresponding coefficients of each mode for spiral galaxies. Hopefully, this expansion method, originally developed in [@toomre], will provide us with a new way to look at the major dynamics of the spiral galaxies.
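A minimal sketch of this conversion for the Milgrom model is given below; it uses illustrative units with $g_0=C_0=a=1$, and the inversion of Eq. (\[Mv\]) used here is the relation that reappears as Eq. (\[32\]) in section IV.

```python
# Convert between the observed velocity v and the Newtonian velocity v_N in the
# Milgrom model, and check the asymptotic value v^4 -> g0 C0^2 a of the zero mode.
import numpy as np

g0, C0, a = 1.0, 1.0, 1.0                       # illustrative units

def v2_from_vN(r, vN2):
    """Eq. (Mv): v^2 = sqrt[(v_N^4 + sqrt(v_N^8 + 4 v_N^4 g0^2 r^2))/2]."""
    return np.sqrt(0.5 * (vN2**2 + np.sqrt(vN2**4 + 4.0 * vN2**2 * g0**2 * r**2)))

def vN2_from_v(r, v):
    """Inverse relation: v_N^2 = v^4 / sqrt(v^4 + g0^2 r^2)."""
    return v**4 / np.sqrt(v**4 + g0**2 * r**2)

r = np.logspace(-1, 3, 200)
vN2 = C0**2 * a / np.sqrt(r**2 + a**2)          # zero-mode Newtonian velocity
v2 = v2_from_vN(r, vN2)

assert np.allclose(vN2_from_v(r, np.sqrt(v2)), vN2)      # the two maps are inverse
assert np.isclose(v2[-1]**2, g0 * C0**2 * a, rtol=1e-2)  # v^4 -> g0 C0^2 a
```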
Famaey and Binney model
-----------------------
For the case of FB model (\[03\]), the Newtonian velocity $v_N$ and the observed velocity $v$ are related by the following equation $$\label{FB04v}
v^2={ \sqrt{{v_N}^4+4g_0rv_N^2} +v_N^2 \over 2} .$$ Therefore, one can show that a galaxy with a rotation curve given by $$\label{44_05}
v_0^2(r)= {C_0^2a \over 2\sqrt{r^2 + a^2}} \left \{ 1 + \left [ {
1+ { 4g_0 r \sqrt{ r^2 + a^2} \over C_0^2a } } \right ]^{1/2}
\right \}$$ is the corresponding velocity induced by the Newtonian velocity $$\label{vN2}
v_{N0}^2(r) = {C_0^2 a \over \sqrt{ r^2 + a^2}}.$$ Here $C_0$ and $a$ are some constants of parametrization. Therefore, the corresponding mass density is given by Eq. (\[38\_05\]) $$\mu_0(r)= {C_0^2 \over 2 \pi G} \left [ {1 \over r} - {1 \over \sqrt{
r^2 + a^2}} \right ].$$ Note that $v_0^4$ approaches a constant $g_0 C_0^2a$ at spatial infinity. This asymptotically flat pattern of the velocity is compatible with many observations of the spirals. This velocity does not, however, vanish at the origin. Indeed, one can show that $v_0(0) \to C_0 \ne 0$. Therefore this would not be a good expansion basis for most physical spirals. The non-vanishing behavior of $v$ at $r=0$ can be cured by considering the refined model (\[vN22\]): $$v_{N0}^2(r) = C_0^2\left( {a_1 \over \sqrt{ r^2 + a_1^2}} -{a_2 \over
\sqrt{ r^2 + a_2^2}} \right )$$ with $C_0$, $a_1>a_2$ some constants of parametrization. The corresponding velocity function $v$ can be shown to be $$v_0^2(r)= {C_0^2 \over 2} \left [ { a_1 \over \sqrt{ r^2 +
a_1^2}}- { a_2 \over \sqrt{ r^2 + a_2^2}} \right ] \left \{ 1 +
\left [{ 1+ { 4g_0 r ( r^2 + a_1^2)^{1/2} ( r^2 + a_2^2)^{1/2}
\over C_0^2 \left (a_1\sqrt{ r^2 + a_2^2}- a_2\sqrt{ r^2 + a_1^2}
\right ) } } \right ]^{1/2} \right \}.$$ This new velocity function $v_0$ is hence induced by the Newtonian velocity (\[vN22\]). In addition, note that $v_0^4$ approaches a constant $g_0 C_0^2(a_1-a_2)$ at spatial infinity and vanishes at $r=0$. This fits the main feature of the asymptotically flat rotation curves of the spirals. As a result, the corresponding mass density of this model is given by Eq. (\[38\_051\]): $$\mu_0(r)= {C_0^2 \over 2 \pi G} \left [ {1 \over \sqrt{ r^2 +
a_2^2}} - {1 \over \sqrt{ r^2 + a_1^2}} \right ].$$ In addition, a model with a velocity, in the case of FB model, of the form $$v_1^2 = { C_1^2 r^2 \over 2 a(r^2+a^2)^{3/2}} \left [ 1 + \left
[{1 + { 4 g_0 a(r^2+a^2)^{3/2} \over C_1^2 r} } \right ]^{1/2}
\right ]$$ can be shown to be induced by the Newtonian velocity of the following form given by Eq. (\[42\_05\]): $$v_{N1}^2(r)
={C_1^2 r^2 \over a ( r^2 + a^2)^{3/2}} . \nonumber$$ Therefore, this model is derived by the mass density (\[tmmu1\]): $$\nonumber
\mu_1(r)= {C_1^2 \over 2 \pi G( r^2 + a^2)^{3/2}}.$$ Note that $v_1^2(r \to \infty) \to C_1 \sqrt{g_0/a}$ and $v_1(r=0)=0$ in this FB model. The corresponding Newtonian model also has the same limit: $v_{N1}(r)$ goes to $0$ in both $r=0$ and $r \to \infty$ limits. Therefore, this model appears to be a more realistic model compatible with the flat rotation curve being observed.
Note again that further differentiation of the Newtonian velocity $v_N^2$ with respect to $-a^2$ generates integrable higher derivative models: $$v_{Nn}^2 \equiv
C_n^2 (-\partial_{a^2})^n \left[ { a \over \sqrt{ r^2 + a^2}} \right
].$$ This velocity can be shown to be generated by the mass density $$\begin{aligned}
\mu_n(r) = {C_n^2 \over 2 \pi G} {(2n-1)!! \over 2^n } (r^2 +
a^2)^{-n-1/2}.\end{aligned}$$
One of the advantages of these well-behaved smooth velocity functions is that they can be used as an expansion basis for the simulation of velocity profiles. Thanks to the linear dependence of the mass density function $\mu(r)$ on $v_N^2(r)$, one can freely combine integrable velocity modes to obtain all possible combinations of integrable models. For example, the model with $$v_N^2(r)= \sum_{i,j} v_{N0}^2(r,C_{0i}, a_j, b_j) \equiv
\sum_{i,j} C_{0i}^2\left[{ a_j \over \sqrt{ r^2 + a_j^2}}- { b_j
\over \sqrt{ r^2 + b_j^2}} \right ]$$ is integrable and can be shown to be derived by the mass density $$\mu(r)= \sum_i \mu_0(r, C_{0i}, a_j) \equiv \sum_i {C_{0i}^2 \over
2 \pi G} \left [ {1 \over \sqrt{ r^2 + b_j^2}} - {1 \over \sqrt{
r^2 + a_j^2}} \right ]$$ with $C_{0i}$ and $a_j>b_j$ all constants of parameterization. The velocity vanishes at $r=0$ and goes to $$v_N^2(r)\to \sum_{i,j} C_{0i}^2 {a_j-b_j \over r}$$ at spatial infinity. This will in turn make the corresponding observed FB velocity $v$ approach the asymptotic value $v^2_\infty \to [\sum_{i,j}C_{0i}^2g_0 (a_j-b_j)]^{1/2}$. One can also add higher derivative terms $v_{Nn}^2(r)$ to the velocity profile $v_N^2(r)$. Since the higher derivative velocities $v_n$ go to zero faster than the zeroth order term at spatial infinity, these additions will not affect the asymptotic behavior of $v^2$ at spatial infinity. Therefore, the leading order terms determine the asymptotic behavior of the RC.
To be more specific, one can consider the model $$v_N^2(r)= \sum_{i,j,n} v_{n}^2(r,C_{ni}, a_j)\equiv \sum_{i,j}
C_{0i}^2\left[ { a_j \over \sqrt{ r^2 + a_j^2}}- { b_j \over
\sqrt{ r^2 + b_j^2}} \right ]+ \sum_{n,k,l} C_{nk}^2
(-\partial_{a_l^2})^n \left[ { a_l \over \sqrt{ r^2 + a_l^2}}
\right ]$$ which can be derived from the mass density $$\mu(r)= \sum_i \mu_n(r, C_{ni}, a_j) \equiv \sum_{i,j} {C_{0i}^2
\over 2 \pi G} \left [ {1 \over \sqrt{ r^2 + b_j^2}} - {1 \over
\sqrt{ r^2 + a_j^2}} \right ] + \sum_{n,k,l} {C_{nk}^2 \over 2 \pi
G} {(2n-1)!! \over 2^n } (r^2 + a_l^2)^{-n-1/2}$$ with $C_{0i}$ and $a_j$ all constants of parameterization. The velocity of these models vanishes at $r=0$ and goes to $$v_N^2(r)\to \sum_{i,j} C_{0i}^2 {a_j-b_j \over r},$$ corresponding to $$v^2(r)\to \left [g_0 \sum_{i,j} C_{0i}^2 {(a_j-b_j)} \right ]^{1/2}$$ at spatial infinity. Therefore, these models turn out to be a good expansion basis for the $v_N^2$ obtained from any RC data in the FB model.
In practice, one may convert the RC data from $v$ to $v_N$ following Eq. (\[FB04v\]) and then expand the resulting $v_N^2$ in these modes in order to analyze the RC in terms of this basis, as sketched below. This makes the analytical understanding of the spiral galaxies more transparent. The properties of each mode are easy to understand because the modes are integrable. The corresponding coefficients $C_{ni}$ and $a_j$ determine the contribution of each mode to a given galaxy. One will then be able to construct tables of the corresponding coefficients of each mode for spiral galaxies. Hopefully, this expansion method, originally developed in [@toomre], will provide us with a new way to look at the major dynamics of the spiral galaxies.
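The analogous sketch for the FB model follows; again the units $g_0=C_0=a=1$ are purely illustrative, and the inversion of Eq. (\[FB04v\]) used below is the relation that reappears as Eq. (\[FB32\]) in section IV.

```python
# FB model conversion and asymptotic check: v^2 -> sqrt(g0 C0^2 a) for the zero mode.
import numpy as np

g0, C0, a = 1.0, 1.0, 1.0                                      # illustrative units
r = np.logspace(-1, 3, 200)

vN2 = C0**2 * a / np.sqrt(r**2 + a**2)                         # zero-mode v_N^2
v2 = 0.5 * (np.sqrt(vN2**2 + 4.0 * g0 * r * vN2) + vN2)        # Eq. (FB04v)

assert np.allclose(v2**2 / (v2 + g0 * r), vN2)                 # inversion, Eq. (FB32)
assert np.isclose(v2[-1], np.sqrt(g0 * C0**2 * a), rtol=1e-2)  # asymptotic value
```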
compact and regular expression
==============================
In order to put the integral in a numerically accessible form, Eq. (\[17\]) for $\mu(r)$ can be written in a more compact form with a compact integration domain $x \in [0,1]$: $$\begin{aligned}
\label{18}
&& \mu(r)={1 \over \pi^2 G r}\\ && \times \left [ \int_0^1
\partial_x [v_N^2(rx)] \; K(x)dx \nonumber
- \int_0^1
\partial_y [v_N^2({r \over y})] \; K(y)y dy \right ].\end{aligned}$$ Here one has substituted $x=r'/r$ and $y=r/r'$ in the above integral. One of the advantages of this expression is that the numerical analysis involves only a compact integration domain $0 \le x \le 1$ instead of the open, infinite domain $0 \le r' < \infty$. Even though most integrands vanish quickly enough that the large-$r'$ region is harmless, the compact expression makes both the numerical and the analytical implications more transparent.
Note that the function $K(x)$ diverges at $x=1$. It is, however, easy to show that the contribution of $K(x)dx$ near $x=1$ is negligible. Usually, one can manually drop the negligible part of the integration involving the elliptic function $K(x \to 1)$ to avoid a numerical breakdown due to the apparent singularity.
It will, however, be easier to perform analytic and/or numerical analysis with an equation that is free of any apparently singular functions in the integrand. Indeed, this can be done by transforming the singular elliptic function $K(x)$ into the regular elliptic function $E(x)$. This transformation will be used to evaluate approximate results in the next sections. Therefore, one converts the apparently singular function $K(x)$ into the singularity-free function $E(x)$ by a properly managed integration by parts.
One will need a few identities satisfied by the elliptic functions $E$ and $K$. Indeed, it is straightforward to show that $E(x)$ and $K(x)$ satisfy the following equations that will be useful in converting the integrals into more accessible form: $$\begin{aligned}
\label{22}
x (1-x^2) K''(x)+(1-3x^2)K'(x)-x K(x)&=&0 ,\\
x (1-x^2) E''(x)+(1-x^2)E'(x)+x E(x)&=&0, \label{23}\end{aligned}$$ and also $$\begin{aligned}
x E'(x)+K(x)&=&E(x), \label{24} \\
E'(x)+{x \over 1-x^2} E(x)&=& K'(x), \label{25} \\
x K'(x)+K(x)&=& {1 \over 1-x^2} E(x). \label{26}\end{aligned}$$ Note that Eq. (\[24\]) can be derived directly from differentiating the definition of the elliptic integrals Eq.s (\[20\]-\[21\]). In addition, Eq.s (\[25\]-\[26\]) can also be derived with the help of the Eq. (\[23\]).
When one is given a set of data as a numerical function $v_N(rx)$, it is much easier to compute $dv_N(k=rx)/dk$ than $\partial_x v_N(rx)$. Therefore, one will need the following conversion formulae: $$\partial_x [v_N^2(rx)]= r [v_N^2(rx)]',$$ $$\partial_y [v_N^2({r \over y})]= -{r \over y^2} [v_N^2({r \over
y})]'.$$
Therefore, one is able to write the Eq. (\[18\]) as: $$\mu(r)={1 \over \pi^2 G } \left [ \int_0^1 dv_N^2(rx) \; K(x)dx +
\int_0^1 dv_N^2({r \over y}) \; K(y) y^{-1} dy \right ]
\label{261}$$
Here $dv_N^2(r) \equiv \partial_r [v_N^2(r)] \equiv [v_N^2(r)]'$, with the prime denoting differentiation with respect to the argument $r$, or $rx$.
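The following sketch (in units with $G=1$ and with illustrative parameters) evaluates Eq. (\[261\]) for the solvable difference model of Eqs. (\[vN22\]) and (\[38\_051\]); the logarithmic singularity of $K(x)$ at $x=1$ is integrable, so an adaptive quadrature copes with it directly.

```python
# Surface density from the compact form, Eq. (261), checked against Eq. (38_051).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk          # K(x) of Eq. (20) is ellipk(x**2) in SciPy

G, C0, a1, a2 = 1.0, 1.0, 3.0, 1.0        # illustrative values

def dvN2(u):
    """d v_N^2/du for v_N^2(u) = C0^2 [a1/sqrt(u^2+a1^2) - a2/sqrt(u^2+a2^2)]."""
    return C0**2 * (-a1 * u / (u**2 + a1**2)**1.5 + a2 * u / (u**2 + a2**2)**1.5)

def mu_compact(r):
    inner = quad(lambda x: dvN2(r * x) * ellipk(x**2), 0.0, 1.0)[0]
    outer = quad(lambda y: dvN2(r / y) * ellipk(y**2) / y, 0.0, 1.0)[0]
    return (inner + outer) / (np.pi**2 * G)

def mu_exact(r):
    """Closed form of Eq. (38_051)."""
    return C0**2 / (2 * np.pi * G) * (1 / np.sqrt(r**2 + a2**2) - 1 / np.sqrt(r**2 + a1**2))

for r in (0.5, 2.0, 5.0):
    assert np.isclose(mu_compact(r), mu_exact(r), rtol=1e-4)
```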
Note that the part involving the integral with $dv_N^2(r/y)$ is related to the information in the region $r' \ge r$. Here $r$ is the point at which the information, such as $\mu(r)$, is derived, and $r'$ is the source point of the observation $v(r')$ in the integrand. Therefore, the part with the source function of $r/y$ contains information exterior to the target point $r$. In contrast, the source term with the function of $rx$ represents the information interior to the target point $r$.
Most of the time, the source information beyond a certain observation limit $r'=R$ becomes unreliable or unavailable due to the precision limit of the observation instruments. One will therefore need to supply the missing data manually in order to make the integration result free of any singular contributions due to the boundary effect. One will come back to this point in section V.
In addition, Eq. (\[261\]) can be used to derive the total mass distribution $M(r)$ of the spiral galaxy via the following equation: $$M(r)= 2 \pi \int_0^r r' dr' \; \mu(r').$$
Note that the velocity function $v_N$ shown previously in this paper is the rotation velocity associated with the total mass of the system in Newtonian dynamics. One can derive the velocity $v(r)$ associated with the dynamics of MOND from the relation given by Eqs. (\[1\]) and (\[2\]). In short, $v \to v_N$ in the limit of Newtonian dynamics.
Throughout the rest of this paper, we will discuss the application of these formulae both in the case of Newtonian dynamics with dark matter and in the case of MOND. Therefore, we will first derive the velocity function $v_N(r)$ from $v(r)$ in the case of MOND. As mentioned above, they follow the relation given by Eqs. (\[1\]) and (\[2\]).
Two different models will be studied later:
Case I: Milgrom model
Indeed, if one has $g(r)={v^2(r) / r}$ in the case of the Milgrom model, one can show that $$\label{32}
v_N^2(r)= { v^4(r) \over \sqrt{v^4(r) +g_0^2 r^2} } .$$ In dealing with the exterior part involving $r/y$, one has to compute $dv_N^2(r)$ at large $r$. By assuming $v(r) \to v_R \equiv
v(r=R)$, one can show that: $$dv_N^2(r \ge R) \to - { v_R^4 g_0^2 r \over (v_R^4 +g_0^2
r^2)^{3/2}} .$$ Here $R$ is the radius of the luminous galactic boundary. In most cases, the flattened region of the RC becomes manifest beyond $r \ge R$. In addition, $v_N(rx)$ and $v_N(r/y)$ take the following forms: $$\label{34}
v_N^2(rx)= { v^4 (rx) \over \sqrt{v^4 (rx) +g_0^2 r^2x^2} },$$ $$\label{35}
v_N^2({r \over y})= { v^4 ({r / y}) y \over \sqrt{v^4({r / y})
y^2+ g_0^2 r^2 } }.$$
Therefore, the surface mass density $\mu(r)$ can be written as, with the velocity $v_N(r)$ given above, $$\begin{aligned}
\label{36}
&& \mu(r)={1 \over 2 \pi G} \int_0^\infty
\partial_{r'} [{ v^4 \over \sqrt{v^4
+g_0^2 r'^2} }] \; H(r, r')dr'
%- {r_b v_b^4 \over r'\sqrt{v_b^4+g_0^2r_b^2}} \nonumber\end{aligned}$$ in the case of Milgrom model.
Case II: FB model
Similarly, if one has $g(r)={v^2(r) / r}$ in the case of the FB model, one can show that $$\label{FB32}
v_N^2(r)= { v^4(r) \over v^2(r) +g_0 r } .$$ By assuming $v(r) \to v_R \equiv v(r=R)$, one can also show that $$\label{FB33}
dv_N^2(r \ge R) \to - { v_R^4 g_0 \over (v_R^2 +g_0 r)^2} .$$ In addition, $v_N(rx)$ and $v_N(r/y)$ take the following form: $$\label{FB34}
v_N^2(rx)= { v^4 (rx) \over {v^2 (rx) +g_0 rx} },$$ $$\label{FB35}
v_N^2({r \over y})= { v^4 ({r / y}) y \over {v^2({r / y}) y+ g_0
r } }.$$
Therefore, the surface mass density $\mu(r)$ can be written as, with the velocity $v_N(r)$ given above, $$\begin{aligned}
\label{FB36}
&& \mu(r)={1 \over 2 \pi G} \int_0^\infty
\partial_{r'} [{ v^4 \over v^2
+g_0 r' }] \; H(r, r')dr'
%- {r_b v_b^4 \over r'\sqrt{v_b^4+g_0^2r_b^2}} \nonumber\end{aligned}$$ in the case of FB model. We will try to estimate the exterior contribution of these two models shortly.
As mentioned above, it is easier to handle the numerical evaluation involving the regular function $E(x)$ than the singular function $K(x)$. Therefore, one can perform an integration by parts and convert the integral in Eq. (\[18\]) into an integral free of the singular function $K(x)$. The result reads, with $\mu(r)=\Theta(r)/( \pi^2 Gr)$, $$\Theta(r)= \int_0^1 {V_N^2({r / x})-x V_N^2(r x) \over 1-x^2} E(x)
dx -\int_0^1 E'(x) V_N^2(rx) dx .$$ The last term on the right-hand side of the above equation can be integrated by parts again to give $$\begin{aligned}
\Theta(r) &=& \int_0^1 {V_N^2({r / x})-x V_N^2(r x) \over
1-x^2} E(x) dx \nonumber \\
&+& \int_0^1 E(x) \partial_x V_N^2(rx) dx -V_N^2(r) .\end{aligned}$$ Hence one has $$\begin{aligned}
\mu(r) &=& {1 \over \pi^2
Gr} [ \int_0^1 {V_N^2({r / x})-x V_N^2(r x) \over
1-x^2} E(x) dx \nonumber \\
&+& \int_0^1 E(x) \partial_x V_N^2(rx) dx -V_N^2(r) ] \label{39}.\end{aligned}$$ Note again that the integral involving $v_N(rx)$ carries the information $r' \le r$ while the integral with $v_N(r/x)$ represents the contribution from $r'> r$ by the fact that $0 \le x
\le 1$. As promised, one has transformed the singular $K$ function into the regular $E$ function.
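The same check can be repeated with the regular form, Eq. (\[39\]); the sketch below (same illustrative units and test model as before) needs no special treatment at $x=1$ because the combination $[v_N^2(r/x)-x v_N^2(rx)]/(1-x^2)$ stays finite there.

```python
# Surface density from the regular form, Eq. (39), checked against Eq. (38_051).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe          # E(x) of Eq. (21) is ellipe(x**2) in SciPy

G, C0, a1, a2 = 1.0, 1.0, 3.0, 1.0        # illustrative values

def vN2(u):
    return C0**2 * (a1 / np.sqrt(u**2 + a1**2) - a2 / np.sqrt(u**2 + a2**2))

def dvN2(u):
    return C0**2 * (-a1 * u / (u**2 + a1**2)**1.5 + a2 * u / (u**2 + a2**2)**1.5)

def mu_regular(r):
    first = quad(lambda x: (vN2(r / x) - x * vN2(r * x)) / (1.0 - x**2) * ellipe(x**2),
                 0.0, 1.0)[0]
    second = quad(lambda x: ellipe(x**2) * r * dvN2(r * x), 0.0, 1.0)[0]
    return (first + second - vN2(r)) / (np.pi**2 * G * r)

def mu_exact(r):
    return C0**2 / (2 * np.pi * G) * (1 / np.sqrt(r**2 + a2**2) - 1 / np.sqrt(r**2 + a1**2))

for r in (0.5, 2.0, 5.0):
    assert np.isclose(mu_regular(r), mu_exact(r), rtol=1e-4)
```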
Although there are still singular contributions like $1/(1-x^2)$ in the integrand, they are easier to handle, since we know this elementary function better than the $K$ function, of which we only have a formal definition through an integral representation. Even though we have a rough picture of the form of $K(x)$ and $K'(x)$, numerical and analytical analysis is more difficult than when dealing with a well-known function like $1/(1-x^2)$. We will show explicitly one of the advantages of this equation in the next section, when estimating the contribution from a model describing the missing part of the observation.
contribution from the asymptotic region
=======================================
In practice, measurements in the far-out region are normally difficult and unable to provide reliable information beyond the sensitivity limit of the observation instrument. One can often only measure the energy flux and rotation curve within a few hundred kpc from the center of the galaxy. Beyond that range, the signal is normally too weak to yield any reliable data. Therefore, one has to rely on various models to extrapolate the required information further out.
It is known that, contrary to a spherically symmetric system, the exterior mass does contribute to the field in the inner region. Therefore, it is important to estimate the exterior contribution carefully with various models. In this section, we will study a velocity model with a flat asymptotic form and its contribution to the inner part in both the Newtonian and the MOND cases. One of the purposes of doing this is to demonstrate the advantage of the regular function formulae derived earlier.
For a highly flattened galaxy, the formulae obtained in the previous section have been shown to be a very useful tool for predicting the dynamics of spiral galaxies. They are also a good tool for error estimation. Possible deviations due to the interpolating data can be estimated analytically more easily with the equations involving only the regular elliptic function $E(x)$. Note again that another advantage of Eq. (\[39\]) is that the $r$-dependence of the mass density $\mu(r)$ has been transferred entirely to the function $v_N$. This makes the analytical analysis easier too.
Assuming that the observation data $v$ is only known for the region $r \le R $, the following part of Eq. (\[39\]) $$\label{40}
\delta \mu(r \le R) ={1 \over \pi^2 G r} \int_0^{r/R} dx \left [
{v_N^2(r/x) \over 1-x^2} \; E(x) \right ]$$ represents the contribution of $\mu(r\le R)$ from the unavailable data $v_N(r')$ beyond the point $r = R$. To be more precise, one can write $\mu(r \le R) = \mu_<(r)+ \delta \mu(r)$ with the mass density $\mu_< (r)$ being contributed solely from the $v_N(r')=v_N(r/x)$ data between $0 \le r' \le R$ or equivalently $r/R \le x \le 1$. Explicitly, $\mu_<(r)$ can be expressed as $$\begin{aligned}
\mu_<(r) &=& {1 \over \pi^2
Gr} [ \int_{r/R}^1 {V_N^2({r / x}) \over
1-x^2} E(x) dx -\int_0^1 {x V_N^2(r x) \over 1-x^2} E(x) dx \nonumber \\
&+& \int_0^1 E(x) \partial_x V_N^2(rx) dx -V_N^2(r) ] \label{411}.\end{aligned}$$
As mentioned above, one will need a model for the unavailable data to estimate the contribution shown in Eq. (\[40\]). We will show that a simple cutoff with $v(r>R)=0$ will introduce a logarithmic divergence into the surface density $\mu(r=R)$. The divergence comes from the singular denominator $1-x^2$ in Eq. (\[39\]). The factor $1/(1-x^2)$ diverges at $x=1$, or equivalently at $r'=r$. A smooth velocity function $v_N(r)$ connecting the region $R-\epsilon < r < R +\epsilon$ is required to make the combination $[V_N^2({r/x})-x V_N^2(r x)] / (1-x^2)$ regular at $x=1$. Here $\epsilon$ is an infinitesimal constant.
Case I: Milgrom model
Let us first study the case of the Milgrom model. Most spiral galaxies have a flat rotation curve $v(r \gg R) \to v_R $ with a constant velocity $v_R$. For our purpose, let us assume that $v_R = v(r=R)$ for simplicity. Hence the deviation (\[40\]) can be evaluated accordingly. Note again that this simple model agrees very well with many known spirals.
Note first that the Newtonian velocity $v_N(r)$ is given by Eqs. (\[34\]) and (\[35\]) with $v(r >R)=v_R$. After some algebra, one can show that $$\begin{aligned}
\label{451}
&& \delta \mu_M(r<R) = { 1 \over \pi^2 G r} \int_0^{r/R} dx \left
[
{v_N^2(r/x) \over 1-x^2} \; \right ]E(x) \\
&=& {E_M \over \pi^2 G r} \int_0^{r/R} dx \left [ {v_N^2(r/x)
\over 1-x^2} \; \right ] \\ &=& {E_M v_R^4 \over \pi^2 Gr
\sqrt{v_R^4 +g_0^2r^2}}\nonumber \\ && \times \left [ \ln {r
\sqrt{v_R^4 +g_0^2R^2} + R \sqrt{v_R^4 +g_0^2r^2} \over
(g_0r+\sqrt{v_R^4 +g_0^2r^2})\sqrt{R^2-r^2}}
\right ]\label{42}\end{aligned}$$ Note that $\pi/2 \ge E(x) \ge 1$ is a monotonically decreasing function with a rather smooth slope. The rest of the integrand is also positive definite. Therefore, one can evaluate the integral by applying the mean value theorem for the integral (\[451\]) with $E_M \equiv E(x=x_M)$ the averaged value of $E(x)$ evaluated at $x_M$ somewhere in the range $0 \le x_M \le 1$.
Case II: FB model.
Let us now study the case of the FB model. Let us also assume that $v_R = v(r=R)$ for simplicity. Note first that the Newtonian velocity $v_N(r)$ is given instead by Eqs. (\[FB34\]) and (\[FB35\]) with $v(r >R)=v_R$. After some algebra, one can show that $$\begin{aligned}
\label{FB451}
&& \delta \mu_F(r<R) = {E_F \over \pi^2 G r} \int_0^{r/R} dx
\left [ {v_N^2(r/x) \over 1-x^2} \; \right ] \nonumber \\ &=&
{{E_F v_R^4 \over \pi^2 G r} } \int_0^{r/R} dx \left [ {1 \over
(1-x^2)(v_R^2 +g_0r/x)} \; \right ] \equiv {{E_F v_R^2 \over
\pi^2 G r} } I \label{FB42}\end{aligned}$$ with $$I= \int_0^{r/R} dx \left [ {x \over (1-x^2)(x +g_0r/v_R^2)} \;
\right ]$$ Note that $\pi/2 \ge E(x) \ge 1$ is a monotonically decreasing function with a rather smooth slope. The rest of the integrand is also positive definite. Therefore, one can evaluate the integral by applying the mean value theorem for the integral (\[FB451\]) with $E_F \equiv E(x=x_F)$ the averaged value of $E(x)$ evaluated at $x_F$ somewhere in the range $0 \le x_F \le 1$. After some algebra, one can show that $$\delta \mu_F(r<R) = {{E_F v_R^4 \over 2 \pi^2 G r} } \left [ {1
\over v_R^2-g_0r} \ln (1+r/R) - {1 \over v_R^2+g_0r} \ln (1-r/R) -
{2g_0r \over v_R^4-g_0^2r^2} \ln [1+v_R^2/(g_0R)] \right ].$$
Case III: Newtonian case.
Similarly, one can also evaluate the mass density in the case of Newtonian dynamics, where dark matter is needed. Let us assume $v_N(r \ge R) = v_R$ for simplicity again. As a result, the deviation of the mass density required to generate the rotation curve $v_N(r) = v(r)$ can be shown to be $$\begin{aligned}
\delta \mu_N(r<R) &=& {E_N \over \pi^2 G r} \int_0^{r/R} dx
\left [ {v_N^2(r/x) \over 1-x^2} \; \right ] \nonumber \\
&=& {E_N v_R^2 \over 2 \pi^2 Gr } \ln {R+r \over R-r} \label{dmuN}\end{aligned}$$ after some algebra. Note that, in deriving above equation, one also applies the mean value theorem with $E_N \equiv E(x=x_N)$ the averaged value of $E(x)$ evaluated at $x_N$ somewhere in the range $0 \le x_N \le 1$.
To summarize, one has shown clearly, with a simple model for the asymptotic rotation velocity, that the formulae with the regular $E$ function are easier to analyze. This is mainly due to the fact that $E(x)$ is a smooth and monotonically decreasing function over the whole domain $x \in [0,1]$. In most cases, the mean value theorem is very useful in both numerical and analytical evaluations.
In addition, one notes that the mild logarithmically divergent terms that appear in the above final results are due to the cut-off at $r=R$. A negative and equal contribution from the interior data will cancel this singularity at $r=R$. To be more precise, if we turn off the $v$ function abruptly at the point $r=R$ by ignoring the exterior contribution, a logarithmic divergence will show up at $r=R$ accordingly. The appearance of the logarithmic divergence also emphasizes that the boundary condition of these physical observables at $r=R$ should be taken care of carefully to avoid such unphysical divergences. In practice, one normally adds a quickly decreasing $v_N(r >R)$ to account for the missing pieces of information and to avoid this singularity. Numerical computation may, however, produce a small peak near the boundary $r=R$ if the matching of $v_N$ at the cutoff is not smooth enough. One will also show that similar singular behavior appears in the final expression of the velocity function derived from a given mass distribution in the next section. This evidence shows that the exterior contribution should be treated carefully in order to provide a meaningful fitting result.
In order to compare the difference of $\delta \mu$ for the different models, we find it convenient to write $B \equiv v_R^2/(g_0R)$ and $s\equiv r/R$, so that both become dimensionless parameters. Note that $B \sim 1.1$ if we take $v_R \sim 250\,
km/s$ and $R \sim 4.97 \times 10^4 \, ly$ from the data of the Milky Way. Therefore, $B \sim 1.1$ is typically a number slightly larger than 1. Hence, one can write the above equations as $$\begin{aligned}
\label{A42}
\delta \mu_M(s<1) &=& {E_M B^2 g_0 \over \pi^2 Gs \sqrt{B^2
+s^2}}\left [ \ln {s \sqrt{B^2+1} + \sqrt{B^2 +s^2} \over
(s+\sqrt{ B^2+s^2})\sqrt{1-s^2}}
\right ], \\
\delta \mu_F(s<1) &=& {E_F B^2 g_0 \over 2 \pi^2 Gs }\left [ {
\ln (1+s) \over B-s} - {\ln (1-s) \over B+s} + {2s \ln (1+B)
\over s^2-B^2} \right ], \\
\delta \mu_N(s<1)
&=& {E_N B g_0\over 2 \pi^2 Gs } \ln {1+s \over 1-s}.
\label{AdmuN}\end{aligned}$$ In addition, one can estimate the deviation $\delta \mu$ at small $s$ where $s \ll 1$, or $r \ll R$. The leading terms read: $$\begin{aligned}
\label{A79}
\delta \mu_M(s \ll 1) &=& {E_M g_0 \over \pi^2 G} \left [
\sqrt{B^2+1} -1 \right ] + {\rm O}(s), \\
\delta \mu_F(s \ll 1) &=& {E_F g_0 \over \pi^2 G } \left [ B-
\ln (B+1) \right ] + {\rm O}(s), \\
\delta \mu_N(s \ll 1 ) &=& {E_N g_0\over \pi^2 G }B + {\rm O}(s) \label{A81}\end{aligned}$$ at small $r$. Note that $g_0/G \sim 0.18$. Therefore, the most important part of the exterior contribution is near the boundary at $r=R$. Our result shows that special care must be taken near the boundary of the available data. Appropriate matching data beyond this boundary are needed to eliminate the naive logarithmic divergence. The compact expression also makes a reliable estimation of the deviation possible.
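The following sketch evaluates Eqs. (\[A42\])-(\[AdmuN\]) with the mean values $E_M=E_F=E_N$ set to 1 for illustration (they lie between 1 and $\pi/2$) and checks the small-$s$ limits of Eqs. (\[A79\])-(\[A81\]) for $B=1.1$.

```python
# Exterior-data deviations delta-mu in units g0/(pi^2 G) = 1 with E_M = E_F = E_N = 1.
import numpy as np

B = 1.1

def dmu_milgrom(s):
    arg = (s * np.sqrt(B**2 + 1) + np.sqrt(B**2 + s**2)) / \
          ((s + np.sqrt(B**2 + s**2)) * np.sqrt(1 - s**2))
    return B**2 / (s * np.sqrt(B**2 + s**2)) * np.log(arg)

def dmu_fb(s):
    return B**2 / (2 * s) * (np.log(1 + s) / (B - s) - np.log(1 - s) / (B + s)
                             + 2 * s * np.log(1 + B) / (s**2 - B**2))

def dmu_newton(s):
    return B / (2 * s) * np.log((1 + s) / (1 - s))

s = 1e-4   # small-s check against the leading terms of Eqs. (A79)-(A81)
assert np.isclose(dmu_milgrom(s), np.sqrt(B**2 + 1) - 1, rtol=1e-3)
assert np.isclose(dmu_fb(s), B - np.log(B + 1), rtol=1e-3)
assert np.isclose(dmu_newton(s), B, rtol=1e-3)
```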
gravitational field derived from a given mass density
=====================================================
One can measure the flux from a spiral galaxy and try to obtain the mass density with the $M/L=$ constant law [@tf77]. Even though the $M/L$ law is more or less an empirical law, it does help us with a fair estimate of the mass density distribution. We will focus again on the physics of a highly flattened spiral galaxy. Once the mass density function is known for the range $ 0 \le r \le R$, one can compute the gravitational field $g_N(r)$ from the given mass density function $\mu(r)$. Once the function $g_N(r)$ is derived, one can derive $g(r)$ following the relation given by Eqs. (\[1\]) and (\[2\]) in the case of MOND.
Indeed, one can show that $g_N(r)$ is given by $$g_N (r) =2 \pi G \int_0^\infty k dk \; \int_0^\infty r' dr' \;
\mu(r') J_0(kr') J_1(kr)$$ from the Eq.s (\[4\]) and (\[9\]). Therefore one has $$g_N (r) =2 \pi G \int_0^\infty r' dr' \; \mu(r') H_1(r, r')$$ with $$H_1(r, r')=\int_0^\infty k dk J_1(kr) J_0(kr')=-\partial_r H(r,
r').$$ By differentiating Eq. (\[161\]), one can further show that $H_1(r, r')$ becomes $$\begin{aligned}
H_1(r,r') &=& {2 \over \pi r r'} \left [ K({r \over r'}) - {r'^2
\over r'^2-r^2} E({r \over r'}) \right ] \; { \rm for} \; r<r'
\label{47} \\
&=& {2 \over \pi (r^2-r'^2)} E({r' \over r}) \; \; { \rm for} \;
r> r' . \label{48}\end{aligned}$$ Note that one has used the differential equations obeyed by $E$ and $K$ shown in Eq.s (\[24\])-(\[26\]). Following the method shown in section IV, one can rewrite the equation as $$\begin{aligned}
&& g_N(r)=4G \int_0^1 dx \nonumber \\ && \left [ \mu(rx) {x
E(x) \over 1-x^2} + \mu({r / x}) [{K(x) \over x^2} - {E(x) \over
x^2(1-x^2)}] \; \right ] \label{gNK}\end{aligned}$$ after some algebra. In order to eliminate the singular function $K(x)$, we can also convert $K(x)$ into the regular function $E(x)$ following a similar method. The result is $$\begin{aligned}
\label{50}
&& g_N(r)=4G \int_0^1 dx \\ && \left [ \mu(rx) {x E(x) \over
1-x^2} - \mu({r / x}) [{E(x) \over 1-x^2} + {E'(x) \over x} ] \;
\right ] \nonumber.\end{aligned}$$ With an integration-by-part, one can convert $E'(x)$ to a regular function $E(x)$. The result is $$\begin{aligned}
\label{51}
&& g_N(r)=2G \mu_*(r) - 2 \pi G \mu (r)+ 4G \int_0^1 dx \\ &&
\left [ {E(x) \over x^2(1-x^2)} [x^3 \mu(rx)- \mu({r / x})] -
{E(x) r \mu'(r/x) \over x^3} \; \right ] \nonumber.\end{aligned}$$ Here $\mu_*(r) \equiv \lim_{x \to 0} 2 \mu(r/x)/x$. If $\mu(r \to
\infty)$ falls off at least as fast as $1/r$, $\mu_*(r)$ will vanish or remain finite. For example, if $\mu(r>R)= 2\mu_R Rr /( R^2+r^2)$, one can show that $\mu_*(r) = 4
\mu_RR/r$. Here $\mu_R \equiv \mu(r=R)$. We will be back with this model in a moment.
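The compact form (\[51\]) is straightforward to evaluate numerically. The following sketch (ours, with an assumed exponential-disc profile in code units) illustrates one possible implementation with standard quadrature; note that SciPy's `ellipe(m)` takes the parameter $m=k^2$, that $\mu_*$ vanishes for this profile, and that the removable $0/0$ of the integrand at $x=1$ is avoided by trimming the endpoints.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe   # complete elliptic integral E(m), m = k^2

# Assumed toy profile: exponential disc, code units with G = 1
mu0, rd, G = 1.0, 1.0, 1.0
mu  = lambda r: mu0 * np.exp(-r / rd)
dmu = lambda r: -mu0 / rd * np.exp(-r / rd)      # mu'(r)

def g_N(r, eps=1e-8):
    """Newtonian field of the thin disc from Eq. (51); mu_*(r) = 0 here
    because mu falls off faster than 1/r."""
    def integrand(x):
        E = ellipe(x**2)                         # E(x) with modulus x
        term1 = E * (x**3 * mu(r * x) - mu(r / x)) / (x**2 * (1.0 - x**2))
        term2 = E * r * dmu(r / x) / x**3
        return term1 - term2
    val, _ = quad(integrand, eps, 1.0 - eps, limit=200)
    return -2.0 * np.pi * G * mu(r) + 4.0 * G * val
```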
Similar to the argument shown in section IV, one can show that the terms with $\mu(r/x)$ in Eq. (\[51\]) will contribute $$\begin{aligned}
\label{52}
\delta g_{N0}(r\le R )=-4G \int_0^{r/R} dx E(x) \left [ {
\mu(r/x ) \over x^2 (1-x^2)} + {r \mu'(r/x) \over x^3} \right ]\end{aligned}$$ to the function $g_N(r)$ due to the unavailable data $\mu(r > R)$. Note, however, that part of the exterior region contribution has been evaluated via the integration-by-part process in deriving Eq. (\[51\]) involving $\mu(r/x)$ and $E'(x)$. Therefore, one should start with the complete version (\[50\]) in evaluating the deviation of $g_N$ due to the exterior part. Hence one should have $$\begin{aligned}
\label{53}
\delta g_N(r \le R)=-4G \int_0^{r/R} dx \mu({r \over x}) \left
[ {E(x) \over 1-x^2} + {E'(x) \over x} \right ] .\end{aligned}$$ For simplicity, we will assume the following form of mass distribution in the region $r >R$, $$\mu(r>R)= \mu_R {2 Rr \over
R^2 +r^2}$$ with $\mu_R \equiv \mu(r=R)$. Note that continuity of $\mu$ across the matching point $r=R$ is maintained in this model. Moreover, $\mu(r \to \infty) \to 0$ is expected to hold for the luminous mass of the spiral galaxies. After some algebra, one can show that $$\begin{aligned}
\label{55}
\delta g_N(r)= -4G \mu_R {R \over r}\left [ E({r \over R})-\pi +
A(r) \right ]\end{aligned}$$ with $A(r)$ given by the integral $$\begin{aligned}
\label{61}
A &=& \int_0^{1/b} dy \left [ {1 \over (1-y)(1+by)} + {2b \over
(1+by)^2} \right ] E(\sqrt{y}) \\
&=& E_1 \int_0^{1/b} dy \left [ {1 \over (1-y)(1+by)} + {2b \over
(1+by)^2} \right ]\end{aligned}$$ with the first two terms in Eq. (\[55\]) the contribution from integration-by-part of $E'(x)$. Here one has defined $y=x^2$ and $b = R^2/r^2$ for convenience.
In addition, one also applies the mean value theorem to the integral (\[61\]) by noting again that (i) $\pi/2 \ge E(x) \ge
1$ is a monotonically decreasing function with a rather smooth slope, and (ii) the integrand is a positive function throughout the integration range. Therefore, the integral $A(r)$ can be evaluated by applying the mean value theorem with $E_1 \equiv
E(x=x_1)$ the averaged value of $E(x)$ evaluated at $x_1$ somewhere in the range $0 \le x_1 \le 1$.
The remaining integral in $A$ can be evaluated in a straightforward way and one finally has $$\begin{aligned}
\label{dgN}
&& \delta g_N(r)= \\ && -4G \mu_R {R \over r}\left [ E_1 [1+ {r^2
\over R^2 +r^2} \ln {2 R^2 \over R^2 - r^2 } ] +E({r \over R})-\pi
\right ] \nonumber .\end{aligned}$$ This is the contribution to the Newtonian field $g_N(r \le R)$ due to the unknown exterior mass. Note that a similar singularity also appears in $\delta g_N(r=R)$, which means that a careful treatment in modelling the unknown exterior mass is needed.
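For completeness, a hedged numerical sketch of Eq. (\[dgN\]) is given below; the mean value $E_1$ is an assumed constant between $1$ and $\pi/2$, and $G$, $\mu_R$ and $R$ are placeholders in code units. The logarithmic divergence as $r\to R$ is evident.

```python
import numpy as np
from scipy.special import ellipe   # complete elliptic integral E(m), m = k^2

G, mu_R, R = 1.0, 1.0, 1.0   # placeholder code units; mu_R = mu(r=R)
E1 = 1.2                     # assumed mean value E(x_1), between 1 and pi/2

def delta_g_N(r):
    """Exterior-mass correction of Eq. (dgN) for the assumed tail
    mu(r>R) = 2 mu_R R r/(R^2+r^2); diverges logarithmically as r -> R."""
    r = np.asarray(r, dtype=float)
    bracket = (E1 * (1.0 + r**2 / (R**2 + r**2)
                     * np.log(2.0 * R**2 / (R**2 - r**2)))
               + ellipe((r / R)**2) - np.pi)
    return -4.0 * G * mu_R * (R / r) * bracket
```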
Case I: Milgrom model.
For the case of Milgrom model, one can show that $$g(r) = \sqrt{{g_N^2(r) + \sqrt{g_N^4(r) +4 g_N^2(r)g_0^2} \over 2
} }.$$ Therefore, the deviation $\delta g^2(r)$ can be shown to be $$\delta g^2(r) = \left [ { 1 \over 2} + {g_N^2(r) + 2 g_0^2 \over 2
\sqrt{g_N^4(r) +4 g_N^2(r)g_0^2} } \right ] \delta g_N^2(r)$$ by solving the algebraic equations (\[1\])-(\[2\]).
Case II: FB model.
For the FB model, one can show that $$\label{FB04}
g={ \sqrt{{g_N}^2+4g_0g_N} +g_N \over 2} .$$ Therefore, the deviation $\delta g(r)$ can be shown to be $$\delta g(r) = { 1 \over 2} \left [ 1 + {g_N(r) + 2 g_0 \over
\sqrt{g_N^2(r) +4 g_N(r)g_0} } \right ] \delta g_N(r)$$ by solving the algebraic equations (\[FB04\]).
Therefore, the deviation $\delta g(r)$ can be evaluated from the above equations to first order in $\delta g_N(r)$ with all $g_N(r)$ replaced by $g_{N<}(r)$. Here $g_{N<}(r)$ is defined as the contribution of the interior mass to the Newtonian field $g_N(r)$, namely, $$g_N(r \le R)=g_{N<}(r) + \delta g_N(r) .$$
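A minimal sketch (ours) of how a tabulated $g_N$ and its exterior correction $\delta g_N$ could be propagated to the MOND field for the two inertial functions considered here is given below; the numerical value of $g_0$ and the units are placeholders.

```python
import numpy as np

g0 = 1.0   # MOND acceleration scale; placeholder value in code units

def g_milgrom(gN):
    """MOND field for mu0(x) = x/sqrt(1+x^2) (Case I)."""
    return np.sqrt((gN**2 + np.sqrt(gN**4 + 4.0 * gN**2 * g0**2)) / 2.0)

def g_fb(gN):
    """MOND field for mu0(x) = x/(1+x), the FB model (Case II)."""
    return (np.sqrt(gN**2 + 4.0 * g0 * gN) + gN) / 2.0

def delta_g_fb(gN_int, dgN):
    """First-order FB-model deviation, with g_N replaced by its
    interior part g_{N<} as prescribed in the text."""
    return 0.5 * (1.0 + (gN_int + 2.0 * g0)
                  / np.sqrt(gN_int**2 + 4.0 * gN_int * g0)) * dgN
```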
To summarize, one has shown that the formulae with the regular $E$ function appear to be helpful in deriving the gravitational field strength for numerical and analytical purposes.
conclusion
==========
We have briefly reviewed how to obtain the surface mass density $\mu(r)$ from a given Newtonian gravitational field $g_N$ with the help of the elliptic function $K(r)$. The integral involving the Bessel functions is derived in detail for heuristic reasons. In addition, a series of integrable models in the case of MOND is also presented in this paper.
One has also converted these formulae into simpler compact integrals, making numerical integration more accessible and analytical estimates possible. The apparently singular elliptic function $K(r)$ is also converted to combinations of the regular elliptic function $E(r)$ by a properly managed integration-by-part.
As a physical application, one derives the effect on the interior mass contribution $\mu(r<R)$ of the possibly unreliable data $v(r >R)$ both in the case of MOND and in Newtonian dynamics. Detailed results are presented for the Milgrom model and the FB model in the case of MOND. In particular, the singularity embedded in these formulae is shown to be a delicate problem requiring great precaution. Similarly, one derives the corresponding results for $g_N$ from a given $\mu(r)$. One also presents a simple model of the exterior mass density $\mu(r >R)$ as a demonstration. The corresponding result in the theory of MOND is also presented in this paper.
In section III, one has studied many solvable models in detail. The analysis is generalized to Newtonian models, Milgrom models, as well as the FB models. In practice, one may convert the RC data from $v$ to $v_N$ following the transformation formula of either the Milgrom model or the FB model. For the case of the Newtonian model, the RC data gives exactly the Newtonian velocity, namely, $v=v_N$. One can then fit the resulting $v_N^2$ as an expansion in these modes in order to analyze the RC in terms of these basis modes. This makes the analytical understanding of the spiral galaxies more transparent. The properties of each mode can be easily understood because they are integrable. The corresponding coefficients $C_{ni}$ and $a_j$ will determine the contributions of each mode to any galaxy. One will then be able to construct tables for spiral galaxies with the corresponding coefficients of each mode. Hopefully, this expansion method will provide us with a new way to look at the major dynamics of the spiral galaxies.
In summary, the compact expressions (\[40\]) and (\[51\]) have been shown in this paper to be useful in the estimate of the mass density $\mu(r <R)$ and $g(r <R)$ from the exterior data at $r'
>R$. Explicit models are presented in this paper.
One also presents a more detailed derivation involving the definition of the elliptic functions $E$ and $K$. Various useful formulae are also presented for heuristic purposes.
One also focuses on the application in the case of modified Newtonian dynamics for two different successful models: the Milgrom model and the FB model. The theory of MOND appears to be a very successful model representing a possible alternative to the dark matter approach. Nonetheless, MOND could also be the collective effect of some quantum fields under active investigation [@mond1; @mond2]. The method and examples shown in this paper should be of help in resolving this puzzle.
This work is supported in part by the National Science Council of Taiwan. We thank professor Zhao H.S. for helpful suggestions and comments.
Arfken, G. B. and Weber, H. J., 2005, Mathematical Methods for Physicists (Burlington, Elsevier Academic Press);
Bekenstein, Jacob D., Relativistic gravitation theory for the modified Newtonian dynamics paradigm, Phys. Rev. D70 (2004) 083509;
Bekenstein, Jacob; Magueijo, Joao, MOND habitats within the solar system, astro-ph/0602266;
Bekenstein, J.; Milgrom, M., Does the missing mass problem signal the breakdown of Newtonian gravity?, ApJ (1984) 286, 7;
Brada, Rafael; Milgrom, Mordehai, Exact solutions and approximations of MOND fields of disc galaxies, MNRAS (1995) 276, 453;
Famaey, B. and Binney, J., Modified Newtonian dynamics in the Milky Way, astro-ph/0506723;
Giannios, D., Spherically symmetric, static spacetimes in a tensor-vector-scalar theory, Phys. Rev. D71 (2005) 103511;
Jiang, Ing-Guey and Kao, W.F., Dynamical M/L profile of spiral galaxies, in preparation;
Kao, W.F., Modified Newtonian dynamics in dimensionless form, astro-ph/0504009;
Kao, W.F., Modified Newtonian dynamics and induced gravity, 2005, gr-qc/0512062;
Kao, W.F., MOND as an extra dimensional effect, astro-ph/0603311;
Kent, S. M., Dark matter in spiral galaxies. I. Galaxies with optical rotation curves, 1986, Astron. J. 91, 1301;
Mak, M. K. and Harko, T., Can the galactic rotation curves be explained in brane world models?, Phys. Rev. D70 (2004) 024010;
Milgrom, M., A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis, 1983, ApJ, 270, 365;
Milgrom, M., A modification of the Newtonian dynamics: Implications for galaxies, 1983, ApJ, 270, 371;
Milgrom, M., The shape of 'dark matter' haloes of disc galaxies according to MOND, MNRAS (2001) 326, 1261;
Milgrom, M., MOND as modified inertia, astro-ph/0510117;
Romero, Juan M. and Zamora, Adolfo, Alternative proposal to modified Newtonian dynamics, Phys. Rev. D 73, 027301 (2006);
Sanders, R.H., Verheijen, M.A.W., Rotation curves of UMa galaxies in the context of modified Newtonian dynamics, 1998, ApJ, 503, 97;
Sanders, R.H., The formation of cosmic structure with modified Newtonian dynamics, 2001, ApJ (in press), astro-ph/0011439;
Sanders, R. H. and McGaugh, S. S., Modified Newtonian Dynamics as an alternative to dark matter, 2002, Ann. Rev. Astron. Astrophys., 40, 263-317, astro-ph/0204521;
Sanders, R. H., A tensor-vector-scalar framework for modified dynamics and cosmic dark matter, astro-ph/0502222;
Toomre, A., On the distribution of matter within highly flattened galaxies, ApJ (1963) 385;
Tully, R.B. and Fisher, J.R., 1977, A&A, 54, 661;
Watson, G. N., 1944, Theory of Bessel Functions (2nd ed.; Cambridge, Cambridge University Press);
Zhao, H. S.; Famaey, B., Refining the MOND interpolating function and TeVeS Lagrangian, ApJ (2006) 638L;
Gentile, G., Salucci, P., Klein, U., Vergani, D., Kalberla, P., 2004, MNRAS, 351, 903.
[^1]: gore@mail.nctu.edu.tw
---
abstract: |
Working in infinite dimensional linear spaces, we deal with support for closed sets without interior. We generalize the Convexity Theorem for closed sets without interior. Finally we study the infinite dimensional version of Jordan hypersurfaces. Our whole work never assumes smoothness and is based exclusively on non-differential Convex Analysis tools and, in particular, on theory of convex cones. A crucial mathematical tool for our results is obtained solving the decomposition problem for non-closed non pointed cones.
AMS Classification: 52A07; 53D99; 53B10
author:
- |
Paolo d’Alessandro\
Former Professor at the Dept of Math Third Univ. of Rome\
pdalex45@gmail.com
title: 'Support, Convexity Conditions and Convex Hypersurfaces in Infinite Dimension'
---
Introduction
============
If we consider a convex set $C$ in a real linear topological space $E$ and assume that it has interior, then, by a well known topological separation principle, each point in $\mathcal{B}(C)$ is a support point for $C$.
It is also well known that, conversely, if each point of the boundary of a body $C$ has support, then $C$ is convex. These two results form what we call the Convexity Theorem for Bodies (i.e. sets with interior).
The hypothesis of the Theorem is considered of global nature in the literature. And a main stream of literature is that of deducing the same conclusion from purely local conditions. Usually smoothness is assumed, the local hypothesis is provided by curvature and, naturally, the machinery of differential geometry takes over. Since the present paper does not follow this stream of research we just cite [@JONKER1972] as an example.
Here we move with a different orientation. First and foremost we never assume smoothness of surfaces and stay clear of manifold theory. In this non-smooth setting we lean instead only on non-differential Convex Analysis techniques and find our main toolset in the theory of convex cones. This approach will allow us to give support results for closed sets without interior, generalize the Convexity Theorem to closed sets without interior, and, in a Hilbert space setting, develop a non-linear range space theory of convex hypersurfaces. Moreover, in the final Section, we introduce and investigate the concept of convexification of a hypersurface.
We believe that the basic concept of support should not be considered global, simply because it is non-differential. Indeed it is a local non-smooth condition, a fact which we try to substantiate within our theory.
The conical techniques we use are geared to polarity, and, in our non-smooth setting, the concept of tangent cone is the simplest possible. We have no use here for its differential version, namely the famous Bouligand cone.
Interestingly, within the present vision of the geometry of convex sets, linear and non-linear range space theory show some overlap. Moreover, working on these topics, a connection between topology and convexity, typical of Convex Analysis, does persist.
Some Mathematical Premises
==========================
In this Section we make precise some technicalities of Vector Topology, that make possible starting a formal treatment.
When we consider a linear topological space (lts) or locally convex space (lcs), we will assume for simplicity that the space is real. It is instead for substantive reasons, which will appear shortly in connection with boundedness, that we will soon restrict ourselves to Hausdorff lts.
A terminological warning. As in [@KNLTS] we call sphere what in geometry is called ball, and what in geometry is called sphere is for us the boundary of a ball.
Also in this paper all cones are intended to be convex. That is, $C$ is a cone in a lts $E$ if $C+C\subset C$ and $\alpha C\subset C$ for any real $\alpha \geq 0$.
We now recall some well known concepts relating to Hausdorff lts that are important to us.
\[NSCCONDITIONHAUSS\] Let $E$ be a lts, then $$\{0\}^{-}=\cap \{U:\text{ }U\text{ is a neighborhood of }0\}$$Moreover, $\{0\}^{-}$ is always a linear subspace. The space $E$ is Hausdorff if and only if:$$\{0\}^{-}=\{0\}$$
(An explicit proof is given, for example, in [@DALEXVECTORTOP1977] at the end of p. 29.) Thus if a space is not Hausdorff, then $\{0\}^{-}$ is a non-trivial linear subspace, so that $\{0\}^{-}\neq \{0\}$.
Incidentally, what one could (and should) do when the space $E$ is not Hausdorff is to consider the topological quotient space $E/\{0\}^{-}$, which is a Hausdorff lts and is in fact the right Hausdorff representation of $E$. Even more incidentally, this is the correct formalization of the a.e. concept in function spaces. An in-depth analysis might be advisable though, because this is not an issue to be dismissed as lightly as usual. One wants, for example, to verify that the linear subspace $\{0\}^{-}$ stays fixed across the various topologies considered for a given space. But the facts stated by Theorem 8 in the above reference deserve even more serious consideration.
In the present context these facts are relevant because of the following consequence of the equation $\{0\}^{-}=\{0\}$, characterizing Hausdorff spaces, which regards boundedness.
Notice in this respect that boundedness is translation independent. Also we call *ray* (emanating from the origin) the conical extension (or hull) of a non-zero vector.
\[BOUNDEDNESSCOROL\] Consider a Hausdorff lts $E$. Then a necessary condition for a set $C$ in $E$ to be bounded is that it does not contain any ray or any translated ray.
Without restrictions of generality we may assume that $0\in C$. Suppose that $C$ is bounded but contains a ray. Then because $C$ is absorbed by every neighborhood of the origin, it follows that all neighborhoods must contain the same ray. But then the intersection of all such neighborhoods contains such a ray and this contradicts the hypothesis that the space be Hausdorff, because the equation $\{0\}^{-}=\{0\}$ cannot hold.
Naturally, we will make use in what follows of many elementary computations on convexity; see Theorem 13.1 in [@KNLTS]. We recall a few statements that will be of use here. Let $E$ be a lts and $A$ a subset of $E$, then:$$A\text{ convex }\Rightarrow A^{i}\text{, }A^{-}\text{ convex}$$$$A\text{ convex body}\Rightarrow tA^{i}+(1-t)A^{-}\subset A^{i}\text{, }0<t\leq 1$$$$A\text{ convex body}\Rightarrow A^{i}=A^{\Diamond }\text{;}A^{i-}=A^{-}\text{;}A^{-i}=A^{i}$$where $A^{\Diamond }$ denotes the radial kernel of $A$. And for any set $B$:$$\mathcal{C}^{-}(B)=\mathcal{C}(B)^{-}$$where $\mathcal{C}(B)$ is the convex extension (or hull) of $B$ and $\mathcal{C}^{-}(B)$ is the closed convex extension (or hull) of $B$.
We will make of course intensive use of Separation Principles. We recall two statements from [@KNLTS], so that we can bear in mind the exact hypotheses of each major separation result.
Suppose $E$ is a lts and $A$ and $B$ (non-void) convex sets. Suppose $A$ has interior and that $B\cap A^{i}=\phi $. Then there exists a linear continuous functional that separates $A$ and $B$.
Notice that $E$ is not supposed to be Hausdorff in this particular result.
Second Theorem. This time we speak of locally convex spaces (again not necessarily Hausdorff):
Suppose $E$ is a lcs and $A$ and $B$ (non-void) disjoint convex sets, with $A $ compact and $B$ closed. Then there exists $f\in E^{\ast }$ that strongly separates $A$ and $B$.
The Basic Convexity Theorem
===========================
We consider a convex body $C$ (that is, a convex set with interior) in a real Hausdorff lts $E$, and prove the following Theorem, which gives a little additional information with respect to the Convexity Theorem cited at the beginning. In this respect it is useful to bear in mind that whereas any point of $\mathcal{B}(C)$ is a support point of $C$ in view of the first Separation Theorem, it is instead obvious that no point of $C^{i}$ can be a support point, because each such point has a neighborhood entirely contained in $C^{i}$.
\[BODYCONVTH\] Consider a closed body $C$ in a lts. Then $C$ is convex if and only if each of its boundary points is a support point for $C$, and, if this is the case, then $C$ is the intersection of any family of closed semispaces obtained choosing a single support closed semispace for each point of $\mathcal{B}(C)$ itself.
The only if part is an immediate consequence of the first cited separation Theorem. Then we prove that if each of the boundary points of $C$ is a support point for $C$ itself, then $C$ is the intersection specified in the statement, and consequently also a closed convex body. Without restriction of generality, and since translation is a homeomorphism that leaves invariant all convexity properties, we can assume that $0\in C^{i}$. Consider the intersection $\Psi
$ specified in the statement. Then $\Psi $ is a closed convex set containing $C$ and thus a closed convex body too. Suppose there is a point $y$ in $\Psi
$ such that $y\notin C$, that is, $y\in \overline{C}$. Since $\overline{C}$ is an open set, there is a neighborhood $U$ of $y$ contained in $\overline{C}$ itself. Also there is a neighborhood $W$ of the origin with $W\subset C^{i}$. Consider the ray $\rho $ generated by $y$. By radiality of neighborhoods there is a segment $[0:z]\subset W$ with $z\neq 0$ and $[0:z]\subset \rho $. We take $v=\beta y$ with $\beta =\sup \{\gamma :\gamma y\in C^{i}\}$. This sup exists because $y\notin C^{i}$ and because $C^{i}$ is convex and so no point $\gamma y$ with $\gamma >1$ can be in $C^{i}$. Also $\beta >0$ in view of the existence of the vector $z$. Moreover, $v$ cannot be in $C^{i}$ because otherwise it would have a neighborhood contained in $C^{i}$ and, since neighborhoods of $v$ are radial at $v$, we would contradict that $\beta $ is the given sup. But then it must be $v\in \mathcal{B}(C)$, since there is a net (in $\rho \cap C^{i}$) that converges to $v$ by construction, and therefore by the first Separation Theorem it is a support point for $C$. At this point if $v=y$ we are done, because we have found a contradiction. If not, it must be $\beta <1$. Consider the separating continuous linear functional $f$ corresponding to $v$. We have $f(0)=0$, and to fix the ideas and without restriction of generality, $f(v)=\beta f(y)>0$. But then, because $f(y)>f(v)$, the supporting functional leaves the origin and $y$ on opposite open semispaces. This would imply that $y\notin \Psi $, which is again a contradiction. Thus the proof is completed.
Of course, a fortiori, $C$ is the intersection of all the closed supporting semispaces of $C$.
Notice that this Convexity Theorem is an instance of the loose connection with topology often met in Convex Analysis. In fact if a closed set has no interior we may well try to strengthen the topology of the ambient space to the effect of forcing the interior to appear. Another example of this loose connection, but going the other way around, is the celebrated Krein-Milman Theorem, where we may weaken the topology in an effort to induce compactness of a convex set, to the effect of making the KM Theorem applicable.
A Refinement of The Convexity Theorem, Cones and Support for Void Interior Sets
===============================================================================
We start by making more explicit, from the preceding proof, an important property of convex bodies.
\[RAYSSINGLETONINTERSECTION\] Suppose $C$ is a closed convex body in a lts and that, without restriction of generality, $0\in C^{i}$. Then$$C=\cup \{l\cap C\text{, }l\text{ is a ray}\}$$and, if a ray $l$ meets $\mathcal{B}(C)$, the intersection is a singleton. Therefore if $\{z\}$ is such an intersection $[0:z)\subset C^{i}$.
The first part is obvious, since if $y\in C$ then $[0:y]\subset C$. As to the second statement suppose that $z,w\in \rho \cap \mathcal{B}(C)$ and assume for example that $z$ is closer to the origin than $w.$ Then, since $C$ has support at $z$, the supporting hyperplane will leave the origin and $w$ on opposite open semispaces. This is a contradiction and so the second part is proved too. Finally the last part is a direct consequence of elementary computations on convex sets.
With reference to the above Theorem, we prove, regarding boundedness and unboundedness the following
\[RAYSINTERSECTION\] Suppose $C$ is a closed convex body in a Hausdorff lts and that, without restriction of generality, $0\in C^{i}$. If there exists a ray, that does not meet $\mathcal{B}(C)$, then $C$ is unbounded. Thus if $C$ is bounded all the rays meet $\mathcal{B}(C)$ (in a unique point).
Consider a ray $\rho $ which does not meet $\mathcal{B}(C)$ and suppose that there exists a vector $z\in \rho \cap \overline{C}$. Because $\overline{C}$ is open $\rho \cap \overline{C}$ is open in $\rho $. Consider $$\beta =\inf \{\alpha :\alpha z\in \rho \cap \overline{C}\}$$Notice that $\beta >0$ since $C$ has interior. It cannot be $w=\beta z\in
\rho \cap \overline{C}$, because there would be a neighborhood of $w$ contained in $\overline{C}$ and, by radiality of neighborhoods in a lts, we would contradict the definition of $\beta $. Therefore $w\in C$ and, more precisely, $w\in \mathcal{B}(C)$ since each of its neighborhoods meets $\overline{C}$. This contradicts that the ray does not meet $\mathcal{B}(C)$, and therefore such a vector $z$ cannot exist. Thus, if a ray $\rho $ does not meet $\mathcal{B}(C)$, then $\rho \subset C$. But then by Corollary [BOUNDEDNESSCOROL]{}, $C$ is unbounded. This completes the proof.
Theorem \[RAYSSINGLETONINTERSECTION\] has a Corollary which, in a way, generalizes the finite dimensional fact asserting that if $C$ is compact $\mathcal{C}^{-}(C)=\mathcal{C}(C)$. This peculiar fact for finite dimension can be proved leaning on the celebrated Caratheodory’s Theorem, which is finite dimensional only as well.
\[PLAINCEXTWITHOUTCOMPACT\] Suppose $C$ is a closed bounded convex body in a Hausdorff lts and that, without restriction of generality, $0\in C^{i}$. Let $y(\rho )$ be the unique point where each ray $\rho $ meets $\mathcal{B}(C)$. Then$$C=\cup \{[0:y(\rho )]\}$$$$\mathcal{B}(C)=\{y(\rho ):\rho \text{ is a ray}\}$$$$C=\cup \{[x:y]:x,y\in \mathcal{B}(C)\}$$and$$\mathcal{C}^{-}(\mathcal{B}(C))=\mathcal{C}(\mathcal{B}(C))=C$$
The first three statements are either obvious consequences of the preceding Theorem or of immediate proof. As to the last statement, to simplify notations, let $\Psi =\cup \{[x:y]:x,y\in \mathcal{B}(C)\}$ and notice that:$$C\supset \mathcal{C}^{-}(\mathcal{B}(C))\supset \mathcal{C}(\mathcal{B}(C))\supset \Psi =C$$and so we are done.
This result allow us to rephrase the Convexity Theorem for bounded bodies in an interesting way.
\[BODYCONVEXTH2\] Consider a closed bounded body $C$ in a Hausdorff lts. Then $C$ is convex if and only each point of $\mathcal{B}(C)$supports $\mathcal{B}(C)$ itself, and if this is the case, then $C$ is the intersection of any family of semispaces obtained choosing a single closed semispace supporting $\mathcal{B}(C)$ for each point of $\mathcal{B}(C)$.
the proof follows from the fact that $C=\mathcal{C}(\mathcal{B}(C))$, and thus a closed supporting semispace contains $C$ if and only if it contains $\mathcal{B}(C)$.
Again a fortiori $C$ is the intersection of all semispaces that support $\mathcal{B}(C)$.
That a set (not necessarily convex and possibly without interior) has support at a point in its boundary is equivalent to the fact that the normal cone at such point is not trivial (meaning that such a cone is $\neq \{0\}$). The normal cone is nothing but the polar of the tangent cone. We recall a few basic concepts and properties first about cones and then about polarity.
Let $V$ be a cone in a lts $E$. The lineality space of $V$ is the linear subspace $lin(V)$ :$$lin(V)=V\cap (-V)$$Clearly $lin(V)$ is the maximal subspace contained in $V$. We say that $V$ is pointed if $lin(V)=\{0\}$.
Beware that Phelps ([@Phelps2001]) calls proper cone what is for us a pointed cone. Here instead a proper cone is a cone whose closure is a proper subset of the ambient lts.
A cone in a lts $E$ is proper if it is not dense in the whole space. Thus a closed cone is proper if it is not the whole space, or, equivalently, is a proper subset of $E$.
We next cite from [@DALEXCUBO2011] a first major result regarding support for void interior convex sets, illustrating the mathematical facts and reasoning from which it originates. These recalls will be instrumental in the sequel.
\[properclosedcone\] A cone in a lcs $E$ is proper (equivalently its closure is a proper subset) if and only if it is contained in a closed half-space.
Sufficiency is obvious. As to necessity, let $C$ be a closed proper cone. Then there is a singleton $\{y\}$ disjoint from $C$. Singletons are convex and compact and therefore the second Separation Theorem cited at the beginning applies. Thus there exists a continuous linear functional $f$ such that:$$f(x)<f(y)\text{, }\forall x\in C$$and since $C$ is a cone this implies:$$f(x)\leq 0\text{, }\forall x\in C$$Thus the condition is also necessary and we are done.
\[CLOSUREPOINTEDCONE\]The closure of a pointed cone in a linear topological space is a proper cone.
Suppose that it is not true, that is there is a pointed cone $C$ in a linear topological space $E$, such that $C^{-}=E$. Consider a finite dimensional subspace $F$, with its (unique) relative topology, which intersect $C$ in a non trivial, necessarily pointed, cone. Actually we can take instead of $F$, its subspace $\mathcal{L}(F\cap C)$, without restriction of generality. For simplicity we leave the symbol $F$ unchanged. Next notice that, as is well known, because $F$ is the finite dimensional, the pointed convex cone $\Upsilon =F\cap C$ has interior. Thus it can be separated by a continuous linear functional from the origin and therefore it is contained in a closed semi-space. It follows that the closure of $\Upsilon $ in $F$ is contained in a closed half-space and therefore is a proper cone. But by Theorem 1.16 in [@KELLEY55], such closure is $C^{-}\cap F$. By the initial assumption $C^{-}\cap F=E\cap F=F$. This is a contradiction and therefore the proof is finished.
Let $C$ be a closed subset of a lts $E$ and $x\in \mathcal{B}(C)$. Then the Tangent cone $\mathcal{T}_{C}(x)$ to $C$ at $x$ is the cone:$$\mathfrak{T}_{C}(x)=\mathcal{C}o(C-x)$$
We now turn to lcs spaces, because we start using duality for linear topological spaces.
Let $C$ be a subset of a lcs $E$. Then the polar of $C$ is the (always closed) convex cone:$$C^{p}=\{f:f\in E^{\ast },f(x)\leq 0,\forall x\in C\}$$
Let $C$ be a closed subset of a lcs $E$ and $x\in \mathcal{B}(C)$. Then the Normal cone $\mathfrak{N}_{C}(x)$ to $C$ at $x$ is the cone:$$\mathfrak{N}_{C}(x)=\mathfrak{T}_{C}(x)^{p}$$
Also we will use the other important notion of Normal Fan. Let $sp(C)$ the set of all points at which a closed set $C$ has support.
Let $C$ be a closed subset of a lcs $E$. Then the Normal Fan $NF(C)$ is the set:$$NF(C)=\cup \{\mathfrak{N}_{C}(x):x\in sp(C)\}$$
The following Theorem has an immediate proof; nevertheless, in conjunction with the above Lemma \[properclosedcone\] and Theorem \[CLOSUREPOINTEDCONE\], it allows us to state a major result concerning support for sets with void interior.
\[EXTREMEEQUIVPOINTED\] Consider a closed set $C$ in a lcs. Then a point $y\in \mathcal{B}(C)$ is extreme if and only if the tangent cone $\mathfrak{T}_{C}(y)$ is pointed.
Straightforward consequence of the definition of extreme point.
Let us gather, for locally convex spaces, some important consequences of the analysis developed in this Section. The fact that the closure of a pointed cone is a proper cone (Theorem \[CLOSUREPOINTEDCONE\]) and Lemma \[properclosedcone\] (valid for locally convex spaces) imply that the polar cone of a pointed cone is always non-trivial. At this point we can apply Theorem \[EXTREMEEQUIVPOINTED\] to conclude that for a closed set (without interior), if a point in its boundary is extreme, then the tangent cone at that point is pointed and hence its polar, the normal cone at the point in question, is non-trivial. We state this result for sets without interior, since if there were a non-void interior, support would be ensured anyway. We summarize the above discussion in the following
\[EXTREMEEQUALSUPPORT\]Consider a closed set without interior in a lcs. Then it has support at all points of its boundary that are extreme.
This is a first fundamental result on support for convex closed sets with void interior. It has been applied by the author to the Theory of Maximum Principle as outlined in the following
This result (Theorem \[EXTREMEEQUALSUPPORT\]) was stated in [DALEXCUBO2011]{}, where it was used in a Hilbert Space setting to demonstrate that the Maximum Principle holds for all candidate targets. We briefly sketch the meaning of this assertion. Consider a linear PDE in a Hilbert space setting and suppose that a target $\zeta $ is reachable from the origin in an interval $[0,T]$. Then that a minimum norm control $u_{o}$ (steering the origin to $\zeta $ in the time interval $[0,T]$) exists is an immediate consequence of the fact that the forcing operator $L_{T}$ is continuous and of the Projection Theorem. Let $\rho $ be the norm of $u_{o}$. The Maximum Principle holds if $\zeta $ is a support point for $L_{T}(S_{\rho })$ (where $S_{\rho }$ is the closed sphere of radius $\rho $ in the $L_{2}$ space of input functions), which is a closed convex set without interior in general. Now it is shown, in the cited paper, that $\zeta $ is always a support point for $L_{T}(S_{\rho })$ because $\zeta $ is always extreme, and hence Theorem \[EXTREMEEQUALSUPPORT\] applies. The fact that $\zeta $ is extreme is proved exploiting a property that all Hilbert space have, namely strict convexity. For details proofs and much more the interested reader is referred to the cited paper
Support for sets with void interior in Hilbert Spaces
======================================================
We saw that the closure of a pointed cone is proper. What if the cone is not pointed? We will investigate this issue in the real Hilbert Space environment (in the sequel all Hilbert spaces are assumed to be real without further notice), and then, after developing the appropriate mathematical tools, we will introduce a completely general necessary and sufficient condition of support for closed set with void interior.
Naturally in a Hilbert space environment we can take advantage of the Riesz Theorem and so we can consider Normal Cones as subsets of the Hilbert space itself.
Let $C$ be a closed subset of a lcs $E$. Then the Applied Normal Fan $ANF(C)$ is the set:$$ANF(C)=\cup \{x+\mathfrak{N}_{C}(x):x\in sp(C)\}$$
To begin with we recall some relevant material from [@DALEXAJMAA2].
We start from an elementary Lemma.
\[sumorthosets\] Suppose that $F$ and $G$ are closed subspaces of Hilbert space $H$, that $\ F$ $\bot $ $G$, and that, for two non-void subsets $C$ and $D$ of $H$, $C\subset F$ and $D\subset G$. Then $C+D$ is closed if and only if both $C$ and $D$ are closed. Moreover:$$y=P_{F}y+P_{G}y\in C+D\Longleftrightarrow P_{F}y\in C\text{ and }P_{G}y\in D$$and if $C$ and/ or $D$ are not closed$$(C+D)^{-}=C^{-}+D^{-}$$
We can assume without restriction of generality that $G=F^{\bot }$ because, if this were not the case we can take in lieu of $H$ the Hilbert space $H_{1}=F+G$, which is in fact a closed subspace of $H$. Suppose that $C$ and $D$ be closed and consider a sequence $\{z_{i}\}$ in $C+D$ with $\{z_{i}\}\rightarrow z$. We can write in a unique way:$$z_{i}=P_{F}z_{i}+P_{G}z_{i}$$with $P_{F}z_{i}\in C$ and $P_{G}z_{i}\in D$. By continuity of projection and the assumption that $C$ and $D$ be closed , $\{P_{F}z_{i}\}\rightarrow
P_{F}z\in C$ and $\{P_{G}z_{i}\}\rightarrow P_{G}z\in D$, but since $P_{F}z+P_{G}z=z$ it follows $z\in C+D$ and we are done. Conversely suppose that, for example $D$ is not closed so that there exists a sequence $\{d_{i}\}$ in $D$, that converges to $d\notin D$, but, of course, $d\in G$. Take a vector $c\in C$. The sequence $\{c+d_{i}\}$ converges to $c+d$. But $c=P_{F}(c+d)$ and $d=P_{G}(c+d)$, and therefore, by uniqueness of the decomposition, it is not possible to express $c+d$ as a sum of a vector in $C $ plus a vector in $D$. It follows that $c+d\notin C+D$. The second statement follows immediately from uniqueness of decomposition of a vector $y $ in the sum $P_{F}y+P_{G}y$, and since the proof of the last statement is immediate, we are done.
Next it is in order to recall from the same source the fundamental theorem on decomposition of non-pointed cones. We slightly expand the statement and include the proof for subsequent reference.
\[conedecompgeneral\]Consider a convex cone $C$ in a Hilbert space $H$ and assume that its lineality space be closed. Then $$(lin(C)^{\bot }\cap C)=P_{lin(C)^{\bot }}C$$where the cone $lin(C)^{\bot }\cap C$ is pointed. Consequently, if $C$ is closed the cone $P_{lin(C)^{\bot }}C$ is closed too. Moreover, the cone $C$ can be expressed as:$$C=lin(C)+(lin(C)^{\bot }\cap C)=lin(C)+P_{lin(C)^{\bot }}C$$Finally if a cone $C$ has the form $C=F+V$, where $F$ is a closed subspace and $V$ a pointed cone contained in $F^{\bot }$, then $F$ is its lineality space and the given expression coincides with the above decomposition of the cone.
First we prove that $$C=lin(C)+(lin(C)^{\bot }\cap C)$$That the rhs is contained in the lhs is obvious. Consider any vector $x\in C$ and for brevity let $\Gamma $ $=lin(C)$. Decompose $x$ as follows:$$x=x_{\Gamma }+x_{\Gamma ^{\bot }} \label{ortdec}$$where $x_{\Gamma }\in \Gamma $ and $x_{\Gamma ^{\bot }}\in \Gamma ^{\bot }$. Next $x_{\Gamma ^{\bot }}=x-x_{\Gamma }$ as sum of two vectors in $C$ is in $C$ and hence in $\Gamma ^{\bot }\cap C$. Thus we have proved that the $lhs$ is contained in the $rhs$. Next we show that the cone $lin(C)^{\bot }\cap C$ is pointed. Suppose that both a vector $x\neq 0$ and its opposite $-x$ belong to $lin(C)^{\bot }\cap C$ and decompose $x$ as above (\[ortdec\]). Because $lin(C)^{\bot }\cap C\subset lin(C)^{\bot }$, $x_{\Gamma }=0$, so that $x=x_{\Gamma ^{\bot }}\neq 0$. Do the same for $-x$, to conclude that $x_{\Gamma ^{\bot }}$ and $-x_{\Gamma ^{\bot }}$ are in $C$ (but obviously not in $lin(C)$). Because this is a contradiction, $lin(C)^{\bot }\cap C$ is pointed. Finally we prove that:$$lin(C)^{\bot }\cap C=P_{lin(C)^{\bot }}C$$In fact,$$P_{\Gamma ^{\bot }}(\Gamma ^{\bot }\cap C)=\Gamma ^{\bot }\cap C\subset
P_{\Gamma ^{\bot }}(C)$$On the other hand if $z\in P_{\Gamma ^{\bot }}(C)$, for some $w\in C$, $z=P_{\Gamma ^{\bot }}w=w-P_{\Gamma }w$ so that $z\in C$. Hence $z\in
\Gamma ^{\bot }\cap C$ and so $P_{\Gamma ^{\bot }}(C)\subset \Gamma ^{\bot
}\cap C$. Finally, let $x\in C=F+V$ as defined in the last statement. Write $x=x_{F}+x_{F^{\bot }}$. According to Lemma \[sumorthosets\] $x_{F}\in F$ and $x_{F^{\bot }}\in V$. Now $-x=-x_{F}-x_{F^{\bot }}$ and, by the same Lemma, $-x\in C$ if and only if $-x_{F}\in F$ and $-x_{F^{\bot }}\in V$. But $V$ is assumed to be pointed and so $-x_{F^{\bot }}=0$, which shows that $F=lin(F+V)$. On the other hand the pointed cone of the decomposition is $P_{F^{\bot }}(F+V)=V$ and thus the proof is complete.
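A simple finite-dimensional illustration (ours, not part of the original argument) may help fix ideas. In $H=\mathbb{R}^{3}$ consider the closed non-pointed cone $$C=\{(x,y,z):z\geq |y|\}.$$ Then $lin(C)=C\cap (-C)$ is the $x$-axis, $lin(C)^{\bot }$ is the $y$-$z$ plane, and $$lin(C)^{\bot }\cap C=P_{lin(C)^{\bot }}C=\{(0,y,z):z\geq |y|\},$$ which is pointed, so that the decomposition of the Theorem reads $$C=\{(x,0,0):x\in \mathbb{R}\}+\{(0,y,z):z\geq |y|\}.$$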
In general, and in particular for tangent cones, we cannot make any closedness assumption. Thus we need to develop a mathematical tool which allows us to deal with the case of non-closed cones. We will tackle this problem momentarily, right after taking a short break to dwell on polarity.
Polarity can be viewed as the counterpart of orthogonality in the context of cone theory, as already seen from the following basic computations for polars. We do not pursue this parallel in depth here, limiting ourselves to what is needed here (for more details see [@DALEXAJMAA2]). Many of the following formulas are for generic sets, however we will be interested mostly in the special cases of polars of cones.
\[ELCOMPPOLARS\]The following formulas regarding polars hold where $C$ and $D$ are arbitrary sets $$(-C)^{p}=-C^{p}$$$$(C^{p})^{p}=C^{-}$$$$C^{-p}=C^{p}$$$$C\subset D\Rightarrow D^{p}\subset C^{p}$$$$(C+D)^{p}=C^{p}\cap D^{p}$$Moreover, if $F$ is a linear subspace (closed or non closed it doesn’t matter) then$$F^{p}=F^{\bot }$$
The polar cone of a convex cone is the normal cone at the origin to the given convex cone. Also, the polar cone of a closed convex cone is the set of all points in the space, whose projection on the cone, coincides with the origin.
We still have to handle the case where the lineality space of a cone is not closed, and here comes our solution for this issue.
In the next Theorem we exclude the case that the cone $C$ is contained in $lin(C)$, for in this case since the reverse inclusion always holds we have $C=linC$ that is, the cone is a linear subspace. Such case is trivial since we have only to ask, for our purposes, that the cone is not dense.
\[CLOSUREOFLINEALITY\]Consider a non pointed non closed cone $C$ and let $\Gamma $ be its lineality space, which is assumed to be not dense. We also assume that $C$ is not contained in $\Gamma $ and that $\Gamma $ is not closed. Consider the set$$\Psi =((C\backslash \Gamma ^{-})\cup \{0\})\cap \Gamma ^{\bot }\subset C$$Then $\Psi $ is a non-void pointed cone, and if $$\Delta \mathfrak{=}\Gamma ^{-}+\Psi$$then$$lin\Delta \mathfrak{=}\Gamma ^{-}$$(so that Theorem \[conedecompgeneral\] applies to $\Delta $) and: $$C\subset \Delta \mathfrak{\subset }C^{-}$$ Moreover, $C$ and $\Delta $ have the same polar cone.
We first show that $\Psi $ is a cone. Referring to non-zero vectors to avoid trivialities, first of all it is obvious that $x\in \Psi $ implies $\alpha
x\in \Psi $ $\forall \alpha \geq 0$. If $x,y\in \Psi $ then $z=x+y\in
\Gamma ^{\bot }$ and $z\in C$, and these two imply that $z\in \Psi $. Thus $\Psi $ is a cone. At this point we also know that $\Delta $ is a cone, because it is the sum of two cones. Next if there where two non-zero opposite vectors in $\Psi $ they would also be in $\ C\backslash \Gamma $ (since $C\backslash \Gamma ^{-}\subset C\backslash \Gamma $) contradicting that $\Gamma $ is the lineality space of $C$. Therefore $\Psi $ is pointed. Now we look at the intersection:$$(\Gamma ^{-}+\Psi )\cap (-\Gamma ^{-}-\Psi )=(\Gamma ^{-}+\Psi )\cap (\Gamma
^{-}-\Psi )$$with the aid of Lemma \[sumorthosets\]. Clearly, for a vector with a component in $\Psi $, to be in the intersection, it is required that we find two opposite vectors in $\Psi $, which is impossible because $\Psi $ is pointed. Thus a non-zero vector in the intersection can only be in $\Gamma
^{-}$ and so we have proved that $lin\Delta \mathfrak{=}\Gamma ^{-}$. As the first inclusion in the statement (namely $C\subset \Delta $), suppose $x\neq
0$ is in $C$. Either $x\in \Gamma ^{-}$ and thus $x\in \Delta $ or $x\in
C\backslash \Gamma ^{-}$. In this latter case we write $x=x_{\Gamma
^{-}}+x_{\Gamma ^{\bot }}$. Now arguing as in the proof of Theorem [conedecompgeneral]{}, $P_{\Gamma ^{\bot }}(C)\subset \Gamma ^{\bot }\cap C$ (by the way this also shows that $\Psi $ is non-void) and so $x_{\Gamma
^{\bot }}\in \Gamma ^{\bot }\cap C$. Thus it must be $x_{\Gamma ^{\bot }}\in
\Psi $. So we have proved that in any case $x\in \Delta $ and so we have proved that $C\subset \Delta $. Next, applying Lemma \[sumorthosets\] to $\Delta $, we get:$$\Delta ^{-}=\Gamma ^{-}+\Psi ^{-}$$But $\Gamma \subset C\Rightarrow \Gamma ^{-}\subset C^{-}$ and, similarly, $\Psi \subset C\Rightarrow \Psi ^{-}\subset C^{-}$. Thus $\Delta ^{-}$ is the sum of two cones, both contained in $C^{-}$, and therefore we got $\Delta
^{-}\subset C^{-}$ and, a fortiori, $\Delta \mathfrak{\subset }C^{-}$. This completes the proof of the statement about inclusions. Now applying the elementary computations on polars we have:$$C^{-p}\subset \Delta ^{p}\subset C^{p}$$and since, again by the elementary computations, $C^{-p}=C^{p}$ it follows:$$C^{p}=\Delta ^{p}$$Thus the proof is finished.
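The following concrete example (ours) in $\ell _{2}$ illustrates the construction. Let $F=\{x\in \ell _{2}:x_{1}=0\}$ and let $\Gamma $ be the subspace of finitely supported sequences in $F$, so that $\Gamma $ is neither closed nor dense, $\Gamma ^{-}=F$ and $\Gamma ^{\bot }=\mathrm{span}\{e_{1}\}$. For the cone $$C=\Gamma +\{te_{1}:t\geq 0\}$$ one checks that $lin(C)=\Gamma $, that $\Psi =\{te_{1}:t\geq 0\}$, and hence $$\Delta =\Gamma ^{-}+\Psi =\{x\in \ell _{2}:x_{1}\geq 0\},$$ so that $C\subset \Delta \subset C^{-}$ and $C^{p}=\Delta ^{p}=\{-te_{1}:t\geq 0\}$.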
Sometimes, when a cone is contained in a subspace, it is convenient to consider the polar of a cone within the subspace itself, regarded as the ambient space. When we do so we put a subscript, indicating the subspace, under the symbol of the polar.
We now establish another important result on polar cones.
\[GENERALPOLARLINCLOSED\] Suppose that a cone $C$ has the form $F+\Psi $, where $F$ is a closed proper subspace and $\Psi $ is a pointed cone in $F^{\bot }$. Then$$C^{p}=\Psi _{F^{\bot }}^{p}$$Therefore the polar of the cone is contained in $F^{\bot }$ and in view of Theorem \[CLOSUREPOINTEDCONE\] cannot be trivial.
By direct computation. First of all (by elementary computations on polars) $C^{p}=F^{\bot }\cap \Psi ^{p}$, which implies $C^{p}\subset F^{\bot }$. Thus if $x\in C^{p}$ then $x=x_{F^{\bot }}$. Therefore for $y\in C$, the inequality $(x,y)\leq 0$ implies $x\in \Psi _{F^{\bot }}^{p}$, as we wanted to prove.
Applying the last two Theorems, we reach the conclusion contained in the statement of the following Theorem, which obviously requires no proof:
Consider a non pointed non closed cone $C$ and let $\Gamma $ be its lineality space, which is assumed to be not dense. We also assume that $C$ is not contained in $\Gamma $ and that $\Gamma $ is not closed. Then by Theorem \[CLOSUREOFLINEALITY\] it is true that:$$C^{p}=\Delta ^{p}$$where$$\Delta \mathfrak{=}\Gamma ^{-}+\Psi$$and $\Psi $ is the pointed cone contained in $\Gamma ^{\bot }$ defined in the statement of \[CLOSUREOFLINEALITY\]. Thus $$C^{p}=\Psi _{\Gamma ^{\bot }}^{p}$$in view of Theorem \[GENERALPOLARLINCLOSED\].
The mathematical tools that we have been developing so far, put together, allow us to state the following fundamental Theorem on support for closed sets with void interior in Hilbert spaces.
Consider a closed set $C$ with void interior in a Hilbert space $H$. Then a point $y\in \mathcal{B}(C)$ has support if and only if the tangent cone $\mathfrak{T}_{C}(y)$ is either pointed (that is, $y$ is an extreme point) or is not pointed but its lineality space is not dense.
The proof is contained in the theory developed so far and does not require any additional effort.
Generalizing the Convexity Theorem to Sets without Interior
===========================================================
We recalled at the beginning the Convexity Theorem asserting that a closed body is convex if and only if each point of its boundary has support.
Here, sticking to the Hilbert space environment we remove the condition that the set has interior. Throughout the rest of the paper the ambient space is always understood to be a real infinite dimensional Hilbert space (with the only exception of our very last Theorem).
We start with a Lemma.
Consider a closed set $C$ without interior (so that $C=\mathcal{B}(C)$) and suppose $x\in C$ has support. Then for all the points of $x+\mathfrak{N}_{C}(x)$ the minimum distance problem from both $C$ and $\mathcal{C}^{-}(C)$ has the unique solution given by $x$.
By definition of normal cone, for any point $z$ in $x+\mathfrak{N}_{C}(x)$ the minimum distance problem of $z$ from $x+\mathfrak{T}_{C}(x)^{-}$ has a unique solution given by $x$. Let $\delta =\Vert z-x\Vert $. Now, since $C\subset x+\mathfrak{T}_{C}(x)^{-}$, $\Vert w-z\Vert \geq \delta $, $\forall w\in C$, with equality only for $w=x$. Therefore $x$, which is in $C$, is also the unique minimum distance solution of $z$ from $C$. Since $\mathcal{C}^{-}(C)\subset x+\mathfrak{T}_{C}(x)^{-}$ the same argument shows that $x$, which is also in $\mathcal{C}^{-}(C)$, is the unique minimum distance solution of $z$ from $\mathcal{C}^{-}(C)$. This completes the proof.
Consider a closed set $C$ without interior (so that $C=\mathcal{B}(C)$) and suppose $x,y\in C$ (with $x\neq y$) have support. Then $(x+\mathfrak{N}_{C}(x))\cap (y+\mathfrak{N}_{C}(y))=\phi $.
In fact if the two translated normal cones had a non-void intersection, then for points in the intersection the minimum distance from $C$ problem would have a non unique solution. But this in view of the preceding Lemma is a contradiction and therefore our statement is proved.
The last Theorem and Corollary show that the support condition can arguably be viewed as local. There may be a closed set that at a single point, or in a whole region, behaves locally like a convex set, although it is not convex at all.
The two above results imply the following:
Consider a closed set $C$ without interior and suppose the set $sp(C)$ of all support points of $C$ is such that the set $ANF(sp(C))$ covers $\overline{C}$. Then the projection Theorem holds good for $C$.
This line of arguing makes it possible to state the following generalization of the Convexity Theorem for closed sets without interior.
\[VOIDINTERIORCONVEXITYTHEOREM\]Consider a closed set $C$ without interior. Then $C$ is convex if and only if the set $sp(C)$ of all support points of $C$ is such that the set $ANF(sp(C))$ covers $\overline{C}$. And if this is the case $C$ is equal to the intersection of the family of closed supporting semispaces of $C$.
The only if part is an immediate consequence of the Projection Theorem. Conversely let $\Psi $ be the intersection of semispaces defined in the statement of the Theorem, which is obviously a closed convex set containing $C$. Suppose there exists $y\in \Psi $ such that $y\notin C$. Then by the preceding Theorem we can project $y$ on $C$. The unique projection $z\in C$ has support because $y-z$ is in the normal cone to $C$ at $z$ and so $(y-z,w-z)\leq 0$, $\forall w\in C$. But then the functional $(y-z,.)$ leaves $y$ in an open semispace opposite to the closed one containing $C$, thereby contradicting that $y\in \Psi $. Thus $\Psi =C$ and we are done.
The Bounded Case in Hilbert Spaces and Hypersurfaces
====================================================
For the case where $C$ is a bounded body in a Hilbert space, we will study convex non-smooth hypersurfaces. We will prove, in our infinite dimensional and non-smooth setting, that there exists a homeomorphism of the boundary of any convex bounded body onto the boundary of the sphere. Thus the boundary of any convex bounded body is what we will define to be a convex hypersurface.
Our final Theorem in the Section will state a sort of converse of this result, showing that if we consider a convex bounded hypersurface, this is exactly the boundary of both its convex and its closed convex extension (assumed to have interior), and that these two sets are the same.
We note that we will make no use of the weak topology and compactness, since weak topology seems to be of no help in the present matter. Also notice that in our approach we have a *global* description of a hypersurface as the range of a non-linear function (the homeomorphism) defined on a set that we can fix to be the boundary of the unit sphere. In this sense we talk of a (non-linear) range space theory. We will comment at the end on the fact that a linear range space theory can also be used for the same purposes.
We define now the closed hypersurface, where closed stands for “without boundary”. The word closed will be tacitly understood in the sequel to avoid confusion with topological closedness (and actually an hypersurface is assumed to be closed). On the other hand we will not deal here with hypersurfaces with boundary.
In finite dimension the definition can assume that there is a one to one continuous mapping on the surface of a sphere onto the hypersurface. Then the hypersurface is actually homeomorphic to the surface of the sphere for free, since there is a Theorem in [@KELLEY55], stating that a continuous one to one function on a compact set to a Hausdorff topological space has a continuous inverse. Thus in the present infinite dimensional case it seems natural to assume a homeomorphism from scratch.
Also we will deal with the case where convex sets have interior and this seems intuitively founded just in view of such homeomorphism to the boundary of a convex body like the sphere.
A (Jordan) hypersurface $\Sigma $ in a Hilbert space is a set which is homeomorphic (in the strong topology sense) to the boundary of a closed sphere (and therefore it is a closed set). A hypersurface is convex if it has support at each of its points.
Our first aim is to show that if we consider a bounded convex body $C$ then its boundary $\mathcal{B}(C)$ is a bounded convex hypersurface.
\[BOUNDEDCONVEXBODYVSHYPERSURF\] Let $C$ be a bounded convex body. Then the boundary $\mathcal{B}(C)$ of $C$ is a bounded convex hypersurface.
The proof is constructive in the sense that we will exhibit an appropriate homeomorphism. First of all, as usual, we may assume that $0\in C^{i}$ and so we may consider a closed sphere $S_{r}$ around the origin of radius say $r>0$, entirely contained in $C^{i}$. Each ray $\rho $ emanating from the origin meets obviously the boundary of $S_{r}$ (which we denote by $\Phi $) in a unique point, but also meets $\mathcal{B}(C)$ (which we denote by $\Omega )$ in another unique point thanks to Theorem \[RAYSINTERSECTION\]. Now define by $\varphi $ the map that associates each point $y$ of $\Phi $ with the unique point $\varphi (y)$ which is the intersection of the ray generated by $y$ with $\Omega $. The map $\varphi $ is obviously onto, since each point of $\Omega $ generates a ray on its own. It is also one to one because distinct rays meet only at the origin, and so two different values of $\varphi $ in $\Omega $ can only be generated by distinct vectors in $\Phi $. Next we show that the map is continuous. In fact consider any point $z\in \Omega $. We can define a base of the neighborhood system of $z$, intersecting closed spheres around $z$ with $\Omega $. Consider one of those spheres $\Sigma _{\varepsilon }$ with radius $\varepsilon >0$. Notice that $\mathcal{C}o(\Sigma _{\varepsilon })\supset \Sigma _{\varepsilon }$. Now if we take the cone generated by a closed sphere around $r\frac{z}{\Vert
z\Vert }$ of radius $\frac{\varepsilon }{2}\frac{r}{\Vert z\Vert }$, this cone will contain a sphere of radius $\varepsilon /2$ around $z$. This proves continuity of $\varphi $. On the other hand $\varphi ^{-1}(z)=r\frac{z}{\Vert z\Vert }$, and since $\Vert z\Vert $ is bounded from above and from below on $\Omega $, $\varphi ^{-1}$ is continuous too. This completes the proof.
Next we tackle the problem of stating a result going in the reverse direction, that is, from the convex hypersurface to the appropriate convex body.
\[HYPERSURFASBOUNDCONVBODY\] Suppose $\Omega $ is a bounded convex hypersurface and assume that the (necessarily bounded) set $\mathcal{C}^{-}(\Omega )$ has interior. Then $$\Omega =\mathcal{B}(\mathcal{C}^{-}(\Omega ))$$Moreover, (in view of Corollary \[PLAINCEXTWITHOUTCOMPACT\])$$\mathcal{C}^{-}(\Omega )=\mathcal{C}(\Omega )$$and so also $$\Omega =\mathcal{B}(\mathcal{C}(\Omega ))$$
We know from the preceding Theorem that $\mathcal{B}(\mathcal{C}^{-}(\Omega ))$ is a bounded convex hypersurface, and elementary computations in [@KNLTS] show that $\mathcal{C}^{-}(\Omega )^{i}$ is a convex set. Furthermore, because each point of $\Omega $ is a support point for $\Omega $ itself and hence also for $\mathcal{C}^{-}(\Omega )$, it follows that $\Omega \subset \mathcal{B}(\mathcal{C}^{-}(\Omega ))$. As usual we assume without restriction of generality that $0\in \mathcal{C}^{-}(\Omega )^{i}$. In order to simplify notations in what follows, and again without restriction of generality, we assume that $\mathcal{C}^{-}(\Omega )^{i}$ contains a closed sphere of radius $2$ around the origin (if not, we multiply everything by a positive scalar factor and nothing changes for the purposes of the present proof). From our previous theory we know that each ray emanating from the origin meets $\mathcal{B}(\mathcal{C}^{-}(\Omega ))$ (and hence possibly $\Omega \subset \mathcal{B}(\mathcal{C}^{-}(\Omega ))$) in a unique point. Let $\varphi $ be the homeomorphism of $\mathcal{B}(S_{2})$ onto $\Omega $. What we want to do now is to extend this homeomorphism to a homeomorphism of the whole of $S_{2}$ onto the set $[0:1]\Omega $. This will be done in two pieces: first we extend it to $S_{2}\backslash S_{1}^{i}$, and then we will take the identity map on $S_{1}^{i}$. We call $\psi $ the extended map. Bear in mind that$$M\geq \Vert z\Vert \geq m>2\text{, }\forall z\in \Omega$$for some $M>m$. Thus also:$$M\geq \Vert z\Vert \geq m>2\text{, }\forall z\in \mathcal{C}^{-}(\Omega )$$ For $y\in S_{2}\backslash S_{1}^{i}$, to simplify notations we put:$$\gamma (y)=\varphi (2\frac{y}{\Vert y\Vert })$$which is clearly a continuous function. Then on $S_{2}\backslash S_{1}^{i}$, $\psi $ is defined by:$$\psi (y)=(\Vert y\Vert -1)\gamma (y)+(2-\Vert y\Vert )\frac{y}{\Vert y\Vert }$$In this way, when $y\in \mathcal{B}(S_{2})$, i.e. $\Vert y\Vert =2$, we have that $\psi (y)=\gamma (y)$. When $y\in \mathcal{B}(S_{1})$, i.e. $\Vert y\Vert =1$, then $\psi (y)=y$. In between we have a convex combination of these two vectors. In the rest of $S_{2}$, $\psi $ is defined to be the identity map. Now it is readily seen that $\psi $ is one to one, and the inverse function on $([0:1]\Omega )\backslash S_{1}$ is:$$\psi ^{-1}(z)=\frac{z}{\Vert z\Vert }(1+\frac{\Vert z\Vert -1}{\Vert \gamma (z)\Vert -1})$$It is easily verified by inspection that both $\psi $ and $\psi ^{-1}$ are continuous. Thus $\psi $ is a homeomorphism and it maps $S_{2}$ onto the set $[0:1]\Omega $. Because its range is bounded, the image of $S_{2}^{i}$, which is an open set containing the origin and whose boundary is $\Omega $, must be bounded. Thus each ray emanating from the origin must meet $\Omega $ and so, in view of Corollary \[PLAINCEXTWITHOUTCOMPACT\], it must be $\Omega =\mathcal{B}(\mathcal{C}^{-}(\Omega ))$. This completes the proof.
There is another sort of range space theory, developed in [@DALEXAJMAA2], that still studies convex structures, but using the range of a linear (instead of non-linear) continuous function, that is, of an operator. We know that a convex closed set is the intersection of closed half spaces. Now in a separable Hilbert space environment (which may be represented by the space $l_{2}$) it is well known that we can limit the intersection to a countable number of semispaces. Under mild assumptions the corresponding countable set of inequalities takes on the form $Lx\leq v$ where $L$ is an operator and $v$ is a vector (called the bound vector). The range space point of view consists in looking at this equation from the side of the range $\mathcal{R}(L)$ of $L$, instead of the side of the unknown $x$. Just to give a taste of it, from the range space point of view, this inequality has a solution if and only if $v\in \mathcal{R}(L)+P$, where $P$ is the closed pointed cone of all non-negative (componentwise) vectors (which, incidentally, has void interior!). Note that $\mathcal{R}(L)+P$ is a cone, albeit a non-pointed one. With this range space approach an extensive theory of geometry, feasibility, optimization and approximation has been developed. For details, proofs and more the interested reader may look at the paper cited above.
Convexification of Hypersurfaces
================================
Intuitively, what is meant here by convexification of a bounded hypersurface can be illustrated colloquially, in three dimensions, by this image: wrap a non-convex bounded hypersurface with a plastic kitchen pellicle, and the pellicle then takes the form of a surface, which is the convexification of the original surface. Here is a formal definition.
Consider a closed bounded hypersurface $\Phi $ in a real Hilbert space and assume that $\mathcal{C}^{-}(\Phi )=\mathcal{C}(\Phi )$ (where the equality follows from Corollary \[PLAINCEXTWITHOUTCOMPACT\]) has non-void interior. Then $\Omega =\mathcal{B}(\mathcal{C}(\Phi ))$, which by Theorem \[BOUNDEDCONVEXBODYVSHYPERSURF\] is a bounded convex hypersurface, is called the convexification of $\Phi $.
Naturally the convexification of a bounded convex hypersurface satisfying the above condition is the surface itself. So to speak, convexity is the fixed point of the convexification operator.
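The “plastic pellicle” picture can be reproduced numerically. The sketch below is only an illustration (the star-shaped test curve and the use of SciPy's `ConvexHull` are choices made for the example, not part of the theory): it convexifies a sampled non-convex closed curve in the plane and checks that applying the operation a second time changes nothing, i.e. that convex hypersurfaces are its fixed points.

```python
import numpy as np
from scipy.spatial import ConvexHull

# A made-up non-convex closed hypersurface Phi in R^2: a star-shaped curve.
t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
rad = 1.0 + 0.4 * np.cos(5 * t)                    # oscillating radius -> non-convex
Phi = np.column_stack([rad * np.cos(t), rad * np.sin(t)])

# C(Phi) is approximated by the convex hull of the sample; its boundary
# B(C(Phi)) is the convexification Omega ("the wrapped pellicle").
hull = ConvexHull(Phi)
Omega = Phi[hull.vertices]                         # the points of Phi lying on B(C(Phi))
print("points sampled on Phi:", len(Phi))
print("points of Phi on its convexification:", len(Omega))

# Idempotence (up to numerical tolerance): convexifying Omega again adds nothing.
hull2 = ConvexHull(Omega)
print("vertices after a second convexification:", len(hull2.vertices))
```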
A straightforward consequence of the work carried out so far is the following:
Let $\Phi $ be a closed bounded hypersurface in a real Hilbert space and let $\Omega $ be its convexification. Then $\Phi $ is homeomorphic to $\Omega $.
Let us indicate by the symbol $\sim $ the relation “is homeomorphic to” and by $S$ the closed unit sphere around the origin. Then we know that, by definition, $\Phi \sim \mathcal{B}(S)$. But by Theorem \[BOUNDEDCONVEXBODYVSHYPERSURF\] we also know that $\Omega \sim \mathcal{B}(S)$. Because $\sim $ is an equivalence relation, it follows that $\Omega \sim \Phi $.
The above (and recurring) hypothesis that $\mathcal{C}(\Phi )$ has interior is not needed in finite dimension. In fact we state the following Theorem. By the way, recall that in finite dimension, from the mere fact that $\Phi $ is compact it follows that $\mathcal{C}(\Phi )$ is compact too.
\[MAINTHCLOSEDSURFACE\] Let $\Omega $ be a compact convex hypersurface in $R^{n}$. Then $\mathcal{A}(\Omega )=R^{n}$, where $\mathcal{A}(\Omega )$ is the affine extension of $\Omega $. Thus $\mathcal{C}(\Omega )$ is a convex body.
In finite dimension every convex set has relative interior. Thus we can argue momentarily in $E=\mathcal{L}(\mathcal{C}(\Omega ))$, where $\mathcal{C}(\Omega )$ has interior. In this space, in view of Theorem \[HYPERSURFASBOUNDCONVBODY\], we can say that:$$\Omega =\mathcal{B}(\mathcal{C}^{-}(\Omega ))$$$$\mathcal{C}^{-}(\Omega )=\mathcal{C}(\Omega )$$and $$\Omega =\mathcal{B}(\mathcal{C}(\Omega ))$$Next, as we did before, we extend the homeomorphism $\varphi $ of $\mathcal{B}(S)$ onto $\Omega $ to the whole of $S$. Thus $\Pi =\psi (S^{i})$ is an open set and $\Omega $ is its boundary. We intersect each coordinate axis with $\Pi $ to obtain a relatively open subset, and hence a segment of the form $[0:z^{i})$ with $z^{i}\neq 0$ lying on the $i$th coordinate axis. But then $z^{i}$ is in the boundary of $\Pi $, that is, in $\Omega $. Thus $\mathcal{A}(\Omega )=R^{n}$ and so we have established our thesis.
[9]{} J.L. Kelley, I. Namioka et al., *Linear Topological Spaces*, Graduate Texts in Mathematics, Springer, New York, 1963.
P. d’Alessandro, *Optimization and Approximation for Polyhedra in Separable Hilbert Spaces*, AJMAA, Vol. 12, Issue 1, Article 7, pp. 1-27, May 2015.
R. R. Phelps, *Lectures on Choquet’s Theorem*, Springer, New York, 2001.
J. Stoer, C. Witzgall, *Convexity and Optimization in Finite Dimensions I*, Springer-Verlag, Berlin-Heidelberg-New York, 1970.
P. d’Alessandro, *Closure of Pointed Cones and Maximum Principle in Hilbert Spaces*, CUBO A Mathematical Journal, Vol. 13, No. 2, June 2011.
Leo Jonker, *Hypersurfaces of Nonnegative Curvature in a Hilbert Space*, Transactions of the American Mathematical Society, Vol. 169 (Jul., 1972), pp. 461-474.
P. d’Alessandro, *Vector Topologies and the Representation of Spaces of Random Variables*, Applied Mathematics and Optimization, Vol. 4, pp. 27-40, 1977.
J.L. Kelley, *General Topology*, Springer-Verlag, Berlin-Heidelberg-New York, 1955.
---
abstract: 'A pseudofinite group satisfying the uniform chain condition on centralizers up to finite index has a big finite-by-abelian subgroup.'
address: 'Université Lyon 1; CNRS; Institut Camille Jordan UMR 5208, 21 avenue Claude Bernard, 69622 Villeurbanne-cedex, France'
author:
- 'Frank O. Wagner'
date: '1/9/2015'
title: 'Pseudofinite $\widetilde{\mathfrak M_c}$-groups'
---
[^1]
Introduction {#introduction .unnumbered}
============
We generalize the results of Elwes, Jaligot, MacPherson and Ryten [@ER08; @EJMR11] about pseudofinite superstable groups of small rank to the pseudofinite context, possibly of infinite rank.
Rank
====
A [*dimension*]{} on a theory $T$ is a function $\dim$ from the collection of all interpretable subsets of a monster model to $\R^{\ge0}\cup\{\infty\}$ satisfying
- [*Invariance:*]{} If $a\equiv a'$ then $\dim(\p(x,a))=\dim(\p(x,a'))$.
- [*Algebraicity:*]{} If $X$ is finite, then $\dim(X)=0$.
- [*Union:*]{} $\dim(X\cup Y)=\max\{\dim(X),\dim(Y)\}$.
- [*Fibration:*]{} If $f:X\to Y$ is an interpretable surjection whose fibres have constant dimension $r$, then $\dim(X)=\dim(Y)+r$.
For a partial type $\pi$ we put $\dim(\pi)=\inf\{\dim(\p):\pi\proves\p\}$. We write $\dim(a/B)$ for $\dim(\tp(a/B))$. It follows that $X\subseteq Y$ implies $\dim(X)\le\dim(Y)$, and $\dim(X\times
Y)=\dim(X)+\dim(Y)$. Moreover, any partial type $\pi$ can be completed to a complete type $p$ with $\dim(\pi)=\dim(p)$. A dimension is [*additive*]{} if it satisfies
- [*Additivity:*]{} $\dim(a,b/C)=\dim(a/b,C)+\dim(b/C)$.
Additivity is clearly equivalent to [*fibration*]{} for type-definable maps. Examples of an additive dimension include:
1. coarse pseudofinite dimension (in the expansion by cardinality comparison quantifiers, in order to ensure invariance);
2. Lascar rank, $SU$-rank or þ-rank, possibly localised at some $\emptyset$-invariant family of types;
3. for any ordinal $\alpha$, the coefficient of $\omega^\alpha$ in one of the ordinal-valued ranks in (2) above.
In any of the examples, we put $\dim(X)=\infty$ if $\dim(X)>n$ for all $n<\omega$. Note that additivity in the above examples follows from the Lascar inequalities for the corresponding rank in examples (2) and (3), and from [@Hr12] in example (1).

Let $G$ be a type-definable group with an additive dimension, and suppose $0<\dim(G)<\infty$. A partial type $\pi(x)$ implying $x\in G$ is [*wide*]{} if $\dim(\pi)=\dim(G)$. It is [*broad*]{} if $\dim(\pi)>0$. An element $g\in G$ is [*wide*]{}/[*broad*]{} over some parameters $A$ if $\tp(g/A)$ is. We say that $\pi$ is [*negligible*]{} if $\dim(\pi)=0$.

An additive dimension is invariant under definable bijections. Indeed, let $f$ be an $A$-definable bijection, and $a\in\text{dom}(f)$. Then $\dim(f(a)/A,a)=0$ and $\dim(a/A,f(a))=0$, whence $$\dim(a/A)=\dim(a,f(a)/A)=\dim(f(a)/A).\qquad\qed$$ If $\dim(a/A)<\infty$, we say that $a\indd_AB$ if $\dim(a/A)=\dim(a/AB)$. It follows that $\indd$ satisfies transitivity. Moreover, for any partial type $\pi$ there is a complete type $p\supseteq\pi$ (over any set of parameters containing dom$(\pi)$) with $\dim(p)=\dim(\pi)$, so $\indd$ satisfies extension.

Let $\dim$ be an additive dimension. If $\dim(a/A)<\infty$ and $\dim(b/A)<\infty$, then $a\indd_Ab$ if and only if $b\indd_Aa$, if and only if $\dim(a,b/A)=\dim(a/A)+\dim(b/A)$. This is obvious from additivity.
\[generics\] Let $G$ be a type-definable group with an additive dimension, and suppose $0<\dim(G)<\infty$. If $g$ is wide over $A$ and $h$ is wide over $A,g$, then $gh$ and $hg$ are wide over $A,g$ and over $A,h$.

As dimension is invariant under definable bijections, $$\dim(gh/A,g)=\dim(h/A,g)=\dim(G).$$ The other statements follow similarly, possibly using symmetry.
Big abelian subgroups
=====================
\[centralizer\] Let $G$ be a pseudofinite group, and $\dim$ an additive dimension on $G$. Then there is an element $g\in G\setminus\{1\}$ such that $\dim(C_G(g))\ge\frac13\dim(G)$.

Suppose first that $G$ has no involution. If $G\equiv\prod_I G_i/\U$ for some family $(G_i)_I$ of finite groups and some non-principal ultrafilter $\U$, then $G_i$ has no involution for almost all $i\in I$, and is soluble by the Feit-Thompson theorem. So there is $g_i\in G_i\setminus\{1\}$ such that $\langle g_i^{G_i}\rangle$ is commutative. Put $g=[g_i]_I\in
G\setminus\{1\}$. Then $\langle
g^G\rangle$ is commutative and $g^G\subseteq C_G(g)$. As $g^G$ is in definable bijection with $G/C_G(g)$, we have $$\dim(C_G(g))\ge\dim(g^G)=\dim(G/C_G(g))=\dim(G)-\dim(C_G(g)).$$ In particular $\dim(C_G(g))\ge\frac12\dim(G)$.
Now let $i\in G$ be an involution and suppose $\dim(C_G(i))<\frac13\dim(G)$. Then $$\dim(i^G)=\dim(G/C_G(i))=\dim(G)-\dim(C_G(i))>\frac23\dim(G)$$ and there is $j=i^g\in i^G$ with $\dim(j)\ge\frac23\dim(G)$. Note that $$\dim(C_G(j))=\dim(C_G(i)^g)=\dim(C_G(i))<\frac13\dim(G).$$ Then $$\dim(j^Gj)=\dim(G/C_G(j))=\dim(G)-\dim(C_G(j))>\frac23\dim(G)$$ and there is $h=j^{g'}j\in j^Gj$ with $\dim(h/j)\ge\frac23\dim(G)$. Note that $h$ is inverted by $j$. By additivity, $$\dim(j/h)=\dim(j,
h)-\dim(h)\ge\dim(h/j)+\dim(j)-\dim(G)>\frac13\dim(G).$$ If $H=\{x\in G:h^x=h^{\pm1}\}$, then $H$ is an $h$-definable subgroup of $G$, and $C_G(h)$ has index two in $H$. Since $j\in H$, we have $$\dim(C_G(h))=\dim(H)\ge\dim(j/h)>\frac13\dim(G).\qed$$
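The counting identity driving these estimates, $\dim(g^G)=\dim(G)-\dim(C_G(g))$, is the dimension-theoretic form of the orbit-stabilizer relation $|g^G|\cdot|C_G(g)|=|G|$ in the finite groups $G_i$. The short Python sketch below (using $S_4$ as an arbitrary small stand-in; it is only an illustration, not part of the proof) verifies this relation exhaustively and reports the non-trivial element with the largest centralizer, i.e. the one making $\dim(C_G(g))$ as large as possible when $\dim$ is read as log-cardinality.

```python
from itertools import permutations
from math import log

# Small stand-in example: the symmetric group S_4, elements as tuples of images on {0,1,2,3}.
def compose(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))
e = tuple(range(4))

def centralizer(g):
    return [x for x in G if compose(x, g) == compose(g, x)]

def conjugacy_class(g):
    return {compose(compose(x, g), inverse(x)) for x in G}

# Orbit-stabilizer: |g^G| * |C_G(g)| = |G| for every g, which is the finite
# counterpart of dim(g^G) = dim(G) - dim(C_G(g)).
assert all(len(conjugacy_class(g)) * len(centralizer(g)) == len(G) for g in G)

# Non-identity element whose centralizer is largest (log-cardinality as a toy "dimension").
g_best = max((g for g in G if g != e), key=lambda g: len(centralizer(g)))
print("largest |C_G(g)| for g != 1:", len(centralizer(g_best)), "out of |G| =", len(G))
print("log|C_G(g)| / log|G| =", round(log(len(centralizer(g_best))) / log(len(G)), 3))
```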
\[abelian\] Let $G$ be a pseudofinite $\widetilde{\mathfrak M_c}$-group, and $\dim$ an additive dimension on $G$. Then $G$ has a definable broad finite-by-abelian subgroup $Z$. More precisely, $Z=\Z(C)$ where $C$ is a minimal broad centralizer (up to finite index) of a finite tuple.

By the condition, there is a broad centralizer $C$ of some finite tuple, such that $C_C(g)$ is not broad for any $g\in C\setminus\Z(C)$. Put $Z=\Z(C)$, a definable finite-by-abelian normal subgroup of $C$. If $Z$ is broad, we are done. Otherwise $\dim(Z)=0$, and $$\dim(C/Z)=\dim(C)-\dim(Z)=\dim(C).$$ For $g\in C\setminus Z$ we have $\dim(C_C(g))=0$, whence for $\bar g=gZ$ we have $$\begin{aligned}\dim(\bar g^{C/Z})&=\dim(g^CZ/Z)\ge\dim(g^C)-\dim(Z)\\
&=\dim(C)-\dim(C_C(g))-\dim(Z)=\dim(C/Z).\end{aligned}$$ Hence for all $\bar g\in (C/Z)\setminus\{\bar1\}$ we have $$\dim(C_{C/Z}(\bar g))=\dim(C/Z)-\dim(\bar g^{C/Z})=0.$$ As $C$ and $Z$ are definable, $C/Z$ is again pseudofinite, contradicting Proposition \[centralizer\].
Theorem \[abelian\] holds in particular for any pseudofinite $\widetilde{\mathfrak M_c}$-group with the pseudofinite counting measure. Note that the broad finite-by-abelian subgroup is defined in the pure group, using centralizers and almost centres. Moreover, the $\widetilde{\mathfrak M_c}$-condition is just used in $G$, not in the section $C/Z$.
\[corTh\] A superrosy pseudofinite group with $\Ut(G)\ge\omega^\alpha$ has a definable finite-by-abelian subgroup $A$ with $\Ut(A)\ge\omega^\alpha$.

A superrosy group is $\widetilde{\mathfrak M_c}$. If $\alpha$ is minimal with $\Ut(G)<\omega^{\alpha+1}$ and we put $$\dim(X)\ge n\quad\text{if}\quad\Ut(X)\ge\omega^\alpha\cdot n,$$ then $\dim$ is an additive dimension with $0<\dim(G)<\infty$. The assertion now follows from Theorem \[abelian\].
For any $d,d'<\omega$ there is $n=n(d,d')$ such that if $G$ is a finite group without elements $(g_i:i\le d')$ such that $$|C_G(g_i:i<j):C_G(g_i:i\le j)|\ge d$$ for all $j\le d'$, then $G$ has a subgroup $A$ with $|A'|\le n$ and $n\,|A|^n\ge|G|$.

If the assertion were false, then given $d,d'$, there would be a sequence $(G_i:i<\omega)$ of finite groups satisfying the condition, such that $G_i$ has no subgroup $A_i$ with $|A_i'|\le i$ and $i\,|A_i|^i\ge|G_i|$. But any non-principal ultraproduct $G=\prod G_n/\U$ is a pseudofinite $\widetilde{\mathfrak M_c}$-group, and has a definable subgroup $A$ with $A'$ finite and $\dim(A)\ge\frac1n\dim(G)$ for some $n<\omega$. Unravelling the definition of the pseudofinite counting measure (and possibly increasing $n$) we get $|A_i'|\le n$ and $n\cdot|A_i|^n\ge|G_i|$ for almost all $i<\omega$, a contradiction for $i\ge n$.
Pseudofinite $\widetilde{\mathfrak M_c}$-groups of dimension $2$
=====================================
\[dim2\] Let $G$ be a pseudofinite $\widetilde{\mathfrak M_c}$-group, and $\dim$ an additive integer-valued dimension on $G$. If $\dim(G)=2$, then $G$ has a broad definable finite-by-abelian subgroup $N$ whose normalizer is wide.

Note that since $\dim$ is integer-valued, a broad subgroup has dimension at least $1$. By Corollary \[corTh\], if $C$ is a minimal broad centralizer (up to finite index), then $A=\Z(C)$ is broad and finite-by-abelian.
If $A$ is commensurate with $A^g$ for all $g\in G$, then the commensurability is uniform by compactness. So by Schlichting’s Theorem there is a normal subgroup $N$ commensurate with $A$. But then $\Z(N)$ contains $A\cap N$, has finite index in $N$ and is broad; since it is characteristic in $N$, it is normal in $G$ and we are done.
Suppose now that $g\in G$ is such that $A$ is not commensurable with $A^g$. Then clearly $C$ cannot be commensurable with $C^g$, and $$\dim(A\cap A^g)\le\dim(C\cap C^g)=0.$$ Hence $$\dim(AA^g)\ge\dim(AA^g/(A\cap A^g))=\dim(A/(A\cap
A^g))+\dim(A^g/(A\cap A^g))=2\,\dim(A).$$ As $A$ is broad and $\dim(G)=2$, we have $\dim(A)=1$.
Choose some $d$-independent wide $a,b_0,c_0$ in $A$ over $g$. Then $\dim(a^gb_0/g,c_0)=2$, and $$2\ge\dim(c_0a^gb_0/g)\ge\dim(c_0a^gb_0/g,c_0)=\dim(a^gb_0/g,
c_0)=\dim(a^gb_0/g)=2$$ and $c_0\indd_gc_0a^gb_0$. Thus $c_0$ is wide in $A$ over $g,c_0a^gb_0$. Similarly, $b_0$ is wide in $A$ over $g,c_0a^gb_0$.
Choose $d,b_1,c_1\equiv_{c_0a^gb_0,g}a,b_0,c_0$ with $d,b_1,c_1\indd_{c_0a^gb_0,g}a,b_0,c_0$. Then $c_0a^gb_0=c_1d^gb_1$, whence $$a^gb=cd^g,$$ where $c=c_0^{-1}c_1$ and $b=b_0b_1^{-1}$. Moreover, $$\dim(b/a,g)\ge\dim(b_0b_1^{-1}/a,b_0,c_0,g)\ge\dim(b_1/a,b_0,c_0,
g)=\dim(b_1/c_0a^gb_0,g)=1,$$ so $b$ is wide in $A$ over $g,a$. Similarly, $c$ is wide in $A$ over $g,d$.
Let $x$ and $y$ be two d-independent wide elements of $C_A(a,b,c,d)$ over $a,b,c,d,g$. Then they are d-independent wide in $A$, and for $z=xgy$ we have $$\dim(z/a,b,c,d,g)=2,$$ so $z$ is wide in $G$ over $g,a,b,c,d$. Moreover $$a^zb=a^{xgy}b=a^{gy}b^y=(a^gb)^y=(cd^g)^y=c^yd^{gy}=cd^{xgy}=cd^z.$$ Choose $z'\in G$ with $z'\equiv_{a,b,c,d}z$ and $z'\indd_{a,b,c,d}z$, and put $r=z'^{-1}z$. Then $r$ is wide in $G$ over $g,a,b,c,d,z$, and $$a^zb^r=a^{z'r}b^r=(a^{z'}b)^r=(cd^{z'})^r=c^rd^{z'r}=c^rd^z.$$ Hence $$c^{-1}a^zb=d^z=c^{-r}a^zb^r.$$ Putting $b'=bb^{-r}$ and $c'=cc^{-r}$, we obtain $$a^zb'=c'a^z,$$ where $a$ is wide in $A$. As $r\indd_{g,a,b,c,d}z$ we have $$\dim(z/a,b',c')\ge\dim(z/a,b,c,d,r)=\dim(z/a,b,c,d)=2,$$ and $z$ is wide in $G$ over $a,b',c'$. If $z''\equiv_{a,b',c'}z$ with $z''\indd_{a,b',c'}z$, then $a^{z''}b'=c'a^{z''}$, whence $$b'^{a^z}=c'=b'^{a^{z''}}$$and $a^za^{-z''}=a'^z$ commutes with $b'$, where $a'=aa^{-z''z^{-1}}$. We claim that $\dim(b')\ge1$, $\dim(a'/b')\ge1$, $\dim(z/a',b')\ge2$ and $\dim(a'^z/b')\ge1$. If $\dim(b'/b)=0$, then $\dim(r/b,b')=\dim(r/b)=2$. Choose $r'\equiv_{b,b'}r$ with $r'\indd_{b,b'}r$. So $b^r=b'^{-1}b=b^{r'}$ and $r'r^{-1}\in C_G(b)$. Since $r'r^{-1}$ is wide in $G$ over $b$, so is $C_G(b)$, and it has finite index in $G$ by minimality. Thus $b\in\Z(G)$ and $\dim\Z(G)\ge1$, so we can take $N=\Z(G)$. Thus we may assume $\dim(b')\ge\dim(b'/b)\ge1$.
Next, note that $z$ and $z''z^{-1}$ are wide and $d$-independent over $a,b',c'$ by Lemma \[generics\]. An argument similar to the first paragraph yields $\dim(a'/b')\ge\dim(a'/a,b',z''z^{-1})\ge1$ and $\dim(a'^z/b')\ge1$.
To finish, note that $\dim(C_G(b'))\ge\dim(a'^z/b')\ge1$. If $\dim(C_G(b'))=2$, then $b'\in\Z(G)$ and $\dim(\Z(G))\ge\dim(b')\ge1$, so we are done again. Otherwise $\dim(C_G(b'))=1$. By the -condition there is a broad centralizer $D\le C_G(b')$ of some finite tuple, minimal up to finite index, and $\dim(D)=\dim(C_G(b'))=1$, whence $\dim(C_G(b')/D)=0$. Choose $z^*\in G$ with $z^*\equiv_{a',b'}z$ and $z^*\indd_{a',b'}z$, and put $h=z^{*-1}z$. Then $a'^{z^*}\in C_G(b')$, so $a'^z\in C_G(b',b'^h)$. Moreover, $h$ is wide in $G$ over $a',b'$ and $d$-independent of $z$, so $$\dim(C_G(b',b'^h))\ge\dim(a'^z/b',h)=\dim(a'^z/b')\ge1$$ and $\dim(C_G(b')/C_G(b',b'^h))=0$. It follows that $$\begin{aligned}\dim(D/D\cap D^h)&=\dim(D/(D\cap C_G(b'^h)))+\dim((D\cap
C_G(b'^h))/(D\cap D^h))\\
&\le\dim(C_G(b')/C_G(b',b'^h))+\dim(C_G(b'^h)/D^h)=0,\end{aligned}$$ whence $\dim(D\cap D^h)=1$, and $D^h$ is commensurable with $D$ by minimality. Thus $$\dim(\N_G(D))\ge\dim(h/b')=2$$ and $\N_G(D)$ is wide in $G$. Since $D$ is finite-by-abelian by Corollary \[corTh\], we finish as before, using Schlichting’s Theorem.
\[soluble\] Let $G$ be a pseudofinite group whose definable sections are $\widetilde{\mathfrak M_c}$, and $\dim$ an additive integer-valued dimension on $G$. If $\dim(G)=2$, then $G$ has a definable wide soluble subgroup.

By Theorem \[dim2\], there is a definable finite-by-abelian group $N$ such that $N_G(N)$ is wide. Replacing $N$ by $C_N(N')$, we may assume that $N$ is (finite central)-by-abelian. If $\dim(N)=2$ we are done. Otherwise $\dim(N_G(N)/N)=1$; by Corollary \[corTh\] there is a definable finite-by-abelian subgroup $S/N$ with $\dim(S/N)=1$. As above we may assume that $S/N$ is (finite central)-by-abelian, so $S$ is soluble. Moreover, $$\dim(S)=\dim(N)+\dim(S/N)=1+1=2,$$ so $S$ is wide in $G$.
A pseudofinite superrosy group $G$ with $\omega^\alpha\cdot2\le\Ut(G)<\omega^\alpha\cdot3$ has a definable soluble subgroup $S$ with $\Ut(S)\ge\omega^\alpha\cdot2$.

Superrosiness implies that all definable sections of $G$ are $\widetilde{\mathfrak M_c}$. We put $$\dim(X)=n\quad\Leftrightarrow\quad\omega^\alpha\cdot
n\le\Ut(X)<\omega^\alpha\cdot(n+1).$$ This defines an additive dimension with $\dim(G)=2$. The result now follows from Corollary \[soluble\].
[99]{} Richard Elwes and Mark Ryten. , Math. Log. Quart. 54, No. 4, 374–386 (2008).
R. Elwes, E. Jaligot, H.D. Macpherson and M. Ryten, , Proc. London Math. Soc. (3) 103 (2011), 1049-1082.
E. Hrushovski. , J. Amer. Math. Soc. 25(1):189–243, 2012.
[^1]: Partially supported by ValCoMo (ANR-13-BS01-0006)
Ahmed S. Hassan, Masahiro Imachi[^1] [ e-mail:imac1scp@mbox.nc.kyushu-u.ac.jp]{}, Norimasa Tsuzuki[^2] [ e-mail:tsuz1scp@mbox.nc.kyushu-u.ac.jp]{}
and
Hiroshi Yoneyama$^{\dagger}$[^3] [ e-mail:yoneyama@math.ms.saga-u.ac.jp]{}
Department of Physics
Kyushu University
Fukuoka, 812 JAPAN
$\dagger$ Department of Physics
Saga University
Saga, 840 JAPAN
[**Abstract**]{}
The two dimensional $CP^1$ model with $\theta$ term is simulated. We compute the topological charge distribution $P(Q)$ by employing the “set method" and “trial function method", which are effective in the calculations for a very wide range of $Q$ and large volume. The distribution $P(Q)$ shows Gaussian behavior in the small $\beta$ (inverse coupling constant) region and deviates from it in the large $\beta$ region. The free energy and its moment are calculated as a function of $\theta$. For small $\beta$, the partition function is given by the elliptic theta function, and the distribution of its zeros on the complex $\theta$ plane leads to the first order phase transition at $\theta=\pi$. In the large $\beta$ region, on the other hand, this first order phase transition disappears, but a definite conclusion concerning the transition is not reached due to large errors.
The two dimensional $CP^{N-1}$ model is a suitable laboratory to study the dynamics of QCD. Topology is expected to play an important role in the non-perturbative dynamics of such theories. Recently numerical study of the topological aspects of the $CP^{N-1}$ model has made much progress \[\] \[\]. However a full understanding of the dynamics of the model requires study of an additional contribution of the imaginary part of the action, i.e., the $\theta$ term. The degeneracy of the different topological sectors is resolved into a unique vacuum labeled by a parameter $\theta$. As shown by some analytic studies, various models with the $\theta$ term, in general, exhibit a rich phase structure \[\] \[\] \[\] \[\]. It is then worthwhile to study effects of the $\theta$ term on the $CP^{N-1}$ model \[\]. From a realistic point of view also, it is important to clarify the matter of strong CP violation in QCD.
Introduction of the $\theta$ term does not allow ordinary simulations because of the complex Boltzmann factor. An idea to circumvent the problem is to introduce the constrained updating of the fields, in which the topological charge, being a functional of the dynamical fields, is constrained to take a given value $Q$. So the phase factor $e^{i \theta Q}$ is factored out so that the partition function is given by the summation of the probability distribution $P(Q)$ weighted by $e^{i \theta Q}$ over all possible values of the topological charge $Q$. This algorithm was adopted in simulating the two dimensional U(1) gauge model $\W$.
So far topological aspects of the $CP^1$ model have been studied considerably well both theoretically and numerically. Most works are, however, limited to the theory without the $\theta$ term. This is one of our motives for studying effects of the $\theta$ term on the model by means of Monte Carlo simulations. We present here the results for $P(Q)$, the free energy $F(\theta)$ and its moments as a function of $\theta$, surveying a comparatively wide range of $Q$. Two techniques are involved here; one is the set method $\ref\BBCS{G.~Bhanot, S.~Black, P.~Carter and
R.~Salvador,
\PL{183}, 331 (1987).}$, and the other is to update by modifying the action to the effective action by introducing trial probability distributions in each set. These enable one to reach very large $Q$’s. In ref. \[\], the model was investigated, and the nature of the dilute gas approximation was clarified. In the present paper, we are interested in the phase structure in $\theta$ and $\beta$ ( inverse coupling constant ) space. We do simulations extensively for various $\beta$ in larger volume $V$ and wider range of $Q$ using the techniques.
From a viewpoint of condensed matter physics as well, the phase structure of the $CP^1$ model is worthwhile to study. The antiferromagnetic quantum Heisenberg chain with spin $s$ is, in the large spin limit, mapped to the two dimensional O(3) non-linear sigma model ($CP^1$ model), as an effective theory describing the low energy dynamics. The topological nature appears through the $\theta$ term with $\theta =2\pi s$ \[\], and then its effect is expected to distinguish the dynamics between the $s=$ integer and half-odd integer cases. This feature is stated as the Haldane conjecture that the $s=$ integer antiferromagnetic Heisenberg chain develops a gap, while the $s=$ half-odd integer one is gapless \[F. D. M. Haldane, Phys. Rev. Lett. 50, 1153 (1983).\].
The above mapping is based on spin wave approximation in the large $s$ limit. However the approximation is believed to be good even for very small $s$. It is well known that $s=1/2$ antiferromagnetic Heisenberg model is gapless, and correspondingly the non-linear sigma model with $\theta =\pi$ would be critical. There have been analytic arguments assuring this \[\]. So far, however, only a few works have been done in terms of numerical calculations \[\]. The present paper also concerns this issue numerically.
As shown in this paper, $P(Q)$ shows qualitatively quite different behavior for small and large coupling constants; it clearly shows Gaussian behavior in the small $\beta$ (inverse coupling constant) region. We will discuss the possible first order phase transition deduced from the Gaussian distribution. The discussion is based upon an analytical treatment using the third elliptic theta function and the Poisson sum formula. In the large $\beta$ region, $P(Q)$ systematically deviates from the Gaussian. We show how $F(\theta)$ and its moments at finite $\theta$ differ structurally from those in the small $\beta$ region. Near $\theta = \pi$, however, we are not able to draw a definite conclusion about the phase structure due to the large errors. We will discuss this matter in detail.
In the following section we fix the notations and present a brief account of the algorithm of the simulations. In section 3 we give the results. In section 4, the partition function zeros are discussed. Conclusions and discussion are presented in section 5.
We consider the $CP^1$ model with $\theta$ term on a two space-time dimensional euclidean lattice defined by the action where $\hat{Q}$ is a topological charge, and $z_{\alpha,n}$ is a two component complex scalar field ($\alpha=1,2$) at site $n$ constrained by $${\overline z_n} z_n = \sum_\alpha {\overline z_{\alpha,n}} z_{\alpha,n} =1$$ and couples with the one $\overline z_{n+\mu}$ ($\overline z$ is complex conjugation of $z$) at the nearest neighbor sites $n+\mu (\mu=1,2)$.
The partition function as a function of the coupling constant $\beta$ and $\theta$ is defined by and the free energy density $F(\theta)$ is given by where $V$ is volume of the system. The topological charge $\hat{Q}$ is the number of times the fields cover the sphere $S^2$. The lattice counterpart we adopt is that of the geometrical definition in ref. \[\]; the charge density $\hat{Q}(n^*)$ at dual site $n^*$ is given by $$\eqalignno{
\hat{Q}(n^*)={1 \over 2\pi }{\rm Im} \Bigl\{ &\ln
\big[{\rm Tr} P(n)P(n+1)P(n+1+2)\big] \cr &+
\ln \big[{\rm Tr }P(n)P(n+1+2)P(n+2)\big]\Bigr\},&\tpc\cr}$$ where $P(n)_{\alpha\beta} = z_{\alpha n} {\overline z_{\beta n}}$ and $n$ is the left corner of the plaquette with center $n^*$. This amounts, in terms of $z$, to the topological charge where $\theta_{n,\mu}= {\rm arg} \{ {\overline z_n}z_{n+\mu} \}$.
In order to simulate the model with the complex Boltzmann factor, we follow Wiese’s idea . It, in principle, introduces the constrained updating of the fields, in which the topological charge, being a functional of the dynamical fields, is constrained to take a given value $Q$. So the phase factor $e^{i \theta Q}$ is factored out, so that the partition function is given by the summation of the probability distribution $P(Q)$ weighted by $e^{i \theta Q}$ in each $Q$ sector. This amounts, in practice, to first calculating the probability distribution $P(Q)$ at $\theta=0$ and then taking the Fourier transform of $P(Q)$ to get the partition function $Z(\theta)$, where $P(Q)$ is the topological charge distribution at $\theta=0$. Here $\prod_n dz_n
d{\overline z_n} \delta_{\hat{Q},Q}$, i.e. , the integration measure restricted to the configurations with given $Q$, where $\delta_{\hat{Q},Q}$ is the Kronecker’s delta. Note that $\sum_Q P(Q)=1$. Expectation value of an observable $O$ is given in terms of $P(Q)$ as where $\langle O \rangle_Q$ is the expectation value of $O$ at $\theta=0$ for a given $Q$ sector We measure the topological charge distribution $P(Q)$ by Monte Carlo simulation by the Boltzmann weight $\exp(-S)$, where $S$ is defined by . The standard Metropolis method is used to update configurations. To calculate $P(Q)$ effectively, we apply (i) the set method and (ii) the trial distribution method simultaneously. In the following, we explain briefly the algorithm to make the paper self-contained. All we have to calculate $P(Q)$ is to count how many times the configuration of $Q$ is visited by the histogram method. The distribution $P(Q)$ could damp very rapidly as $\vert Q \vert$ becomes large. We need to calculate the $P(Q)$ at large $\vert Q \vert$’s which would contribute to $F(\theta)$, $\langle Q \rangle_\theta$ and $\langle Q^2 \rangle_\theta$ because they are obtained by the Fourier transformation of $P(Q)$ and its derivatives. Further, the error of $P(Q)$ at large $\vert Q \vert$ must be suppressed as small as possible. These are reasons why we apply two techniques mentioned above. Since $P(Q)$ is analytically shown to be even function and is certified by simulation, we restrict to the range of $Q$ to $\geq 0$.
The range of $Q$ is grouped into sets $S_i$ ; $S_1 (Q=0 \sim 3)$, $S_2 (Q=3 \sim 6)$, $\cdots$ , $S_i (Q=3(i-1) \sim 3i)$, $\cdots$ (set method). Monte Carlo updatings are done as follows by starting from a configuration within a fixed set $S_i$. When $Q$ of a trial configuration ${C_t}$ stays in one of the bins within $S_i$, the configuration ${C_t}$ is accepted, and the count of the corresponding $Q$ value is increased by one, while when ${C_t}$ goes out of the set $S_i$, ${C_t}$ is rejected, and the count of $Q$ value of the old configuration is increased by one. This is done for all sets $S_i$ ; $i$ = 1, 2, $\cdots$.
The other technique is to modify the Boltzmann weight by introducing trial distributions $P_t(Q)$ for each set (trial distribution method). This remedies the cases in which $P(Q)$ falls too rapidly even within a set. We make the counts at $Q=3(i-1), 3(i-1)+1, 3(i-1)+2$ and $3i$ in each set $S_i$ almost the same. As the trial distributions $P_t(Q)$, we apply the form $$P_t(Q) = A_i \exp [ - \bigl( C_i(\beta) / V \bigr) Q^2 ],$$ where the value of $C_i(\beta)$ depends on the set $S_i$, and $A_i$ is a constant. That is, the action during the updates is modified to the effective one, $S_{\rm eff}=S + \log P_t(Q)$. We adjust $C_i(\beta)$ from short runs to get an almost flat distribution at every $Q$ in $S_i$.
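To make the combined use of the set restriction and the trial distribution concrete, here is a deliberately simplified Python sketch. It is not the actual $CP^1$ update: the “configuration” is just the charge $Q$ itself, performing a Metropolis walk restricted to the set $S_1=\{0,1,2,3\}$ with a made-up Gaussian target weight; the effective action $S_{\rm eff}=S+\log P_t(Q)$ flattens the counts, which are afterwards multiplied back by $P_t(Q)$ as described below.

```python
import math
import random

random.seed(1)

# Toy stand-in for the CP^1 update: the "configuration" is the charge Q itself.
kappa = 1.2            # plays the role of C(beta)/V in the made-up target exp(-kappa*Q^2)
kappa_t = 1.1          # trial-distribution constant, adjusted to nearly flatten the counts
S_set = [0, 1, 2, 3]   # the set S_1 of the set method

def target(Q):         # unnormalized "true" weight exp(-S)
    return math.exp(-kappa * Q * Q)

def trial(Q):          # trial distribution P_t(Q)
    return math.exp(-kappa_t * Q * Q)

counts = {Q: 0 for Q in S_set}
Q = 0
for _ in range(200000):
    Q_new = Q + random.choice([-1, 1])
    if Q_new in S_set:                         # moves leaving the set are rejected
        # Metropolis with the effective weight exp(-S_eff) = exp(-S) / P_t(Q)
        ratio = (target(Q_new) / trial(Q_new)) / (target(Q) / trial(Q))
        if random.random() < min(1.0, ratio):
            Q = Q_new
    counts[Q] += 1

# Counts are nearly flat; multiplying back by P_t(Q) recovers P(Q) within the set.
recovered = {Q: counts[Q] * trial(Q) for Q in S_set}
norm = sum(recovered.values())
exact_norm = sum(target(Q) for Q in S_set)
for Q in S_set:
    print(Q, "counts:", counts[Q],
          " recovered P(Q):", round(recovered[Q] / norm, 4),
          " exact:", round(target(Q) / exact_norm, 4))
```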
To obtain the normalized distribution $P(Q)$ in the whole range of $Q$ from the counts at each set, we make matchings as follows: At each set $S_i$ ( $i$ = 1, 2, $\cdots$ ), the number of counts is multiplied by $P_t(Q)$ at each $Q$. We call the multiplied value $N_i(Q)$, which is hopefully proportional to the desired topological charge distribution $P(Q)$. In order to match the values in two neighboring sets $S_i$ and $S_{i+1}$, we rescale $N_{i+1}(Q)$ so that $N_{i+1}(Q) \rightarrow N_{i+1}(Q) \times r$, where $r = N_i(Q=3i) / N_{i+1}(Q=3i)$, the ratio of the number of counts at the right edge of $S_i$ to that at the left edge of $S_{i+1}$. These manipulations are performed over all the sets. The rescaled $N_i(Q)$’s are normalized to obtain $P(Q)$ such that $$P(Q) = { N_i(Q) \over \sum_i\sum_Q N_i(Q) }.$$ We use square lattices with periodic boundary conditions. Lattice sizes are $V = L \times L$, and $L$ ranges from 24, 36, 48 to 72. The total number of counts in each set is $10^4$. The error analysis is discussed in the Appendix. To check the algorithm, we calculated the internal energy. It agrees with the analytical results of the strong and weak coupling expansions . Using the calculated $P(Q)$, we estimate the free energy $F(\theta)$ and its moment $\langle Q \rangle_\theta$. In this subsection we discuss the topological charge distribution $P(Q)$. The partition function can in principle be obtained from the measured $P(Q)$ as in $\four$, but we should be careful when estimating $Z(\theta)$ from $P(Q)$. Since $P(Q)$ is a very sharply decreasing function of $Q$, its Fourier series $Z(\theta)$ is drastically affected by statistical fluctuations of $P(Q)$. For example, consider two different $Q$ values, say, $Q_1$ and $Q_2$ ($Q_1 \ll Q_2$). A small error $\delta P(Q_1)$ at $Q_1$ could have a very large effect on $Z(\theta)$ because $P(Q_2)$ itself is sometimes much smaller than $\delta P(Q_1)$. So the effort to obtain $P(Q)$ at large $Q$ may be useless if we allow these fluctuations at small values of $Q$. In order to avoid this problem, we first fit the measured $P(Q)$ by appropriate functions $P_{\rm fit}(Q)$ and obtain $Z(\theta)$ by Fourier transforming $P_{\rm fit}(Q)$. We apply the chi-square-fitting to the logarithm of the measured $P(Q)$ in the form of polynomial functions of $Q$ $$P(Q) = \exp \Big[ \sum_n a_n Q^n \Big].$$ In the following, we present the $\beta$ and volume dependence of $P(Q)$.
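The fitting step can be mimicked with a few lines of NumPy. The data below are synthetic (an arbitrary smooth $P(Q)$ with multiplicative noise standing in for a measured distribution); the point is only to show the polynomial fit of $\log P(Q)$ and that the smooth $P_{\rm fit}(Q)$, rather than the noisy data, is what should be Fourier transformed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a measured topological charge distribution (values are made up).
Q = np.arange(0, 22)
P_true = np.exp(-0.02 * Q**2 - 0.05 * Q)
P_meas = P_true * np.exp(0.03 * rng.standard_normal(Q.size))   # multiplicative noise

# Fit of log P(Q) by a polynomial in Q (here up to quartic order), as for the weak-coupling data.
coeffs = np.polyfit(Q, np.log(P_meas), deg=4)
P_fit = np.exp(np.polyval(coeffs, Q))

# P_fit(Q), not the noisy P_meas(Q), is then Fourier transformed, so that statistical
# fluctuations at small Q do not contaminate Z(theta).
print("fit coefficients a_4 ... a_0:", np.round(coeffs, 5))
print("max relative deviation of the fit from the smooth input:",
      float(np.max(np.abs(P_fit / P_true - 1.0))))
```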
In Fig.1, we show the measured $P(Q)$ for various $\beta$’s ($\beta =$ 0.0, 0.5, $\cdots$, 3.5) for a fixed volume ($L=24$). As $\beta$ varies, $P(Q)$ smoothly changes from strong to weak coupling regions. In the strong coupling regions $(\beta \lsim
2.0)$, $P(Q)$ shows Gaussian behavior. In the weak coupling regions $(2.75 \lsim \beta)$, $P(Q)$ deviates gradually from the Gaussian form, being enhanced at large $Q$ compared to the Gaussian. In order to investigate the difference between the two regions in detail, we use the chi-square-fitting to $\log P(Q)$. Table I shows the results of the fittings, i.e., the coefficients $a_n$ of the polynomial $\sum_n a_n Q^n$ for various $\beta$’s with the resulting $\chi^2/d.o.f$. (i) For $\beta \lsim 2.0$, $P(Q)$’s are indeed fitted well by the Gaussian form. (ii) For $\beta \gsim 2.75$, terms up to the quartic one are needed for a sufficiently good fit. The linear term, in particular, is important for fitting the data at very small $Q$ values. The value $Q_{\rm Max}$, which is the largest $Q$ of the range under consideration, is also shown in the table. It is chosen so that the ratio $P(Q_{\rm Max}) / P(0) \approx 10^{-20}$ in the weak couplings. (iii) Between the strong and weak couplings ($2.0\lsim\beta\lsim 2.75$) the fittings by the polynomial turn out to be very poor ( $\chi^2/d.o.f. \approx 250 $ ). This may indicate the existence of a transition region between the Gaussian and non-Gaussian regions. (iv) Apart from this region, each of the coefficients changes smoothly from the strong to weak coupling regions, as shown in Table I.
Here we discuss the volume dependence. In the strong coupling regions, $P(Q)$ is fitted very well by Gaussian for all values of $V$ $$P(Q) \propto \exp \left( - \kappa_V(\beta) Q^2 \right).$$ where the coefficient $\kappa_V(\beta) (=a_2)$ depends on $\beta$ and $V$. Fig.2 shows $\log \kappa_V(\beta)$ vs. $\log V$ for a fixed $\beta(=0.5)$. We see that $\kappa_V(\beta)$ is clearly proportional to $1/V$ $$\kappa_V(\beta) = C(\beta)/V.$$ This $1/V$-dependence of the Gaussian behavior determines the phase structure of the strong coupling region. This will be discussed in detail numerically in §3.2 and analytically in §4.
The proportionality constant $C$ depends on $\beta$. As $\beta$ becomes large, $C(\beta)$ monotonically increases; $C(\beta=0.0)=10.6$, $C(\beta=0.5)=12.3$, $C(\beta=1.0)=15.5$.
Fig.3 shows the volume dependence of $P(Q)$ for $L= 24$, 36, 48, and 72 in the weak coupling regions ($\beta=3.0$). We do not find the $1/V$-law as in the strong coupling regions, but a clear volume dependence is observed. It causes the behavior of $F(\theta)$ to differ from that in the strong coupling regions.
The partition function $Z(\theta)$ as a function of $\theta$ is given by $\four$ from $P(Q)$. The free energy is given in $\f$. In general, the $n$-th order moment is given by the derivatives of $F(\theta)$, the first moment $\langle Q \rangle_\theta$ being given in $\q$.
In the strong coupling region, we have seen the Gaussian behavior of $P(Q)$, and the $1/V$-law appears to hold up to $L=72$. It is natural to expect that this behavior persists to $V \rightarrow
\infty$. Let us look at how the $1/V$-law affects $F(\theta)$ and $\langle Q \rangle_\theta$. By putting $C(\beta)=12.3$ in $P(Q) \propto \exp \left[ - \left( C(\beta)/V \right) Q^2 \right] $ for $\beta=0.5$, we calculate $F(\theta)$ and $\langle Q \rangle_\theta$ from $\f$ and $\q$. Figs. 4 and 5 show their volume dependence. As $V$ is increased, $F(\theta)$ very rapidly (already at $L=6$) approaches the quadratic form in $\theta$ from below. Its first moment $\langle Q \rangle_\theta$ develops a peak near $\theta=\pi$, and the position of the peak quickly approaches $\pi$ as $V$ increases. The jump in $\langle Q \rangle_\theta$ would arise at $\theta = \pi$ as $V \rightarrow \infty$. It indicates the first order phase transition at $\theta=\pi$.
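As a cross-check of this strong-coupling picture, the sums for $F(\theta)$ and its $\theta$-derivative can be evaluated directly for the Gaussian $P(Q)$ with the $1/V$ law. The NumPy sketch below uses the quoted value $C(\beta=0.5)=12.3$ and the small volumes of Figs. 4 and 5; it is an illustration of the formulas, not a reproduction of the measured data.

```python
import numpy as np

C = 12.3                                   # C(beta = 0.5) quoted above
theta = np.linspace(0.0, np.pi, 1501)

def free_energy(V, Qmax=200):
    Q = np.arange(-Qmax, Qmax + 1)
    P = np.exp(-(C / V) * Q**2)
    P /= P.sum()
    Z = np.array([np.sum(P * np.cos(t * Q)) for t in theta])   # P(Q) even, so Z is real
    return -np.log(Z) / V

for L in (4, 5, 6, 12):
    V = L * L
    F = free_energy(V)
    F -= F[0]
    dF = np.gradient(F, theta)             # proportional to the first moment <Q>_theta
    dev = float(np.max(np.abs(F - theta**2 / (4 * C))))
    print(f"L={L:2d}: max |F(theta) - theta^2/(4C)| = {dev:.4f}, "
          f"peak of dF/dtheta at theta/pi = {theta[np.argmax(dF)]/np.pi:.3f}")
# With growing V, F(theta) approaches the quadratic form from below and the peak of the
# first moment moves toward theta = pi, where the jump develops in the infinite volume limit.
```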
In the weak coupling regions, on the other hand, we see a different behavior. Fig.6 shows $F(\theta)$ at $\beta=3.0$. For $\theta \lsim \pi/2$, $F(\theta)$ is volume independent, while for $\theta \gsim \pi/2$, a clear volume dependence appears, where $F(\theta)$ decreases as $V$ becomes large, unlike in the strong coupling case. We have checked that the result for $L=20$ agrees with that in ref.$\BDSL$ within errors. The expectation value $\langle Q \rangle_\theta$ is shown in Fig.7. The singular behavior at $\theta=\pi$, which was seen in the small $\beta$ region, disappears. The peak becomes round and its position moves towards small $\theta$ as $V$ increases, which is opposite to Fig.5.
We should make a remark about the errors in the figures. As a general tendency, larger errors arise for larger volume and/or for $\theta \approx \pi$. It is associated with the algorithm to calculate $Z(\theta)$, $\four$, in which $e^{i \theta Q} \approx (-1)^Q $ for $\theta=\pi$ yields large cancellation for slowly falling $P(Q)$ (the behavior at large $V$) in the summation. It causes large errors of the observables due to the denominator in $\evq$. This is just the same as the so called sign problem \[\] which is notorious in the quantum Monte Carlo simulations applied to systems of strongly correlated electrons.
In the previous sections, we have seen that $P(Q)$ is Gaussian in the small $\beta$ region. In this section we look into the consequences in detail by paying attention to the partition function zeros in the complex $\zeta$ plane ($\zeta=e^{i \theta}$). The study of the partition function zeros is regarded as an alternative way to investigate critical phenomena. The zeros accumulate at the critical point in the infinite volume limit, and how fast they approach that point as $V$ increases tells the order of the phase transition \[\] \[\]. If the Gaussian behavior $P(Q) \propto \exp [- \left( C(\beta)/V \right) Q^2]$ persists in the infinite volume limit, the partition function is expressed by the third elliptic theta function as $$Z(\theta) \propto \vartheta_3(\nu,\tau),$$ where $p=\exp[-C(\beta)/V] \equiv \exp(i \pi \tau)$ and $\zeta=e^{i\theta} \equiv \exp(i 2 \pi \nu)$. In order to look for the partition function zeros in the complex $\zeta$ plane, it is convenient to use the infinite product expansion of $\vartheta_3$. The zeros of $Z(\theta)$ are then all found easily on the negative real axis of the complex $\zeta$ plane, for $n$ = 1, 2, $\cdots$, $\infty$. In the complex $\theta$ plane, equivalently, these zeros are located at $$\theta=\pi\pm i (2 n-1) C/V.$$ It thus follows that the $1/V$-law of the approach to the critical point $\theta_c=\pi$ indicates a first order phase transition $\IPZ$ $\FB$.
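The location of these zeros can also be checked numerically from the sum over topological sectors itself. The following sketch uses illustrative values of $C$ and $V$ (not fitted ones) and searches for the zero of the truncated sum closest to $\theta=\pi$ in the upper half of the complex $\theta$ plane.

```python
import numpy as np

# Illustrative values only: with a Gaussian P(Q), Z(theta) is a theta function whose
# zeros should sit at theta = pi +/- i(2n-1)C/V.
C, L = 12.3, 8
V = L * L
Qs = np.arange(-200, 201)

def Z(theta):                              # direct sum over Q sectors, complex theta allowed
    return np.sum(np.exp(-(C / V) * Qs**2 + 1j * theta * Qs))

# Grid search for the minimum of |Z| in a window around theta = pi + i*C/V.
re = np.linspace(np.pi - 0.3, np.pi + 0.3, 161)
im = np.linspace(0.02, 2.0 * C / V, 161)
absZ = np.array([[abs(Z(x + 1j * y)) for x in re] for y in im])
iy, ix = np.unravel_index(np.argmin(absZ), absZ.shape)

print("zero located at      theta = %.4f + %.4f i" % (re[ix], im[iy]))
print("predicted (n=1) zero theta = %.4f + %.4f i" % (np.pi, C / V))
```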
An alternative to the above way of looking at it is to apply the Poisson sum formula to the sum . For $V \gg 1$ and near $\theta=\pi$, the sum on the right is well approximated by two terms ($n=0$ and 1). It follows that the partition function has infinitely many zeros at $\theta=\pi + i (2 n+1)C/V$, where $n$ is an integer. Again the $1/V$-law means the existence of a first order phase transition. This result is in complete agreement with that from the $\vartheta_3$ function discussed above. To see to what extent the approximation $\pois$ is good, we compare the resulting $F(\theta)$ and $\langle Q \rangle_\theta$ from $\pois$ with those of Monte Carlo simulations. They agree with each other.

We have seen that $P(Q)$ is Gaussian in the small $\beta$ region. As shown in the last section, it leads to the first order phase transition. This behavior is very much like that of the $d=2$ U(1) gauge model with $\theta$ term , where $P(Q)$ is Gaussian for all values of the coupling constant \[\]. There the analytic form of $P(Q)$ is given. It may also be interesting to study the $CP^1$ model from the renormalization group point of view, which might show singular behaviors of the renormalization group flows similar to the U(1) case .
In the large $\beta$ region, on the other hand, $P(Q)$ differs from the Gaussian behavior. Consequently, the free energy $F(\theta)$ and the moment $\langle Q \rangle_\theta$ show quite different behaviors from those in the small $\beta$ region. The signal of the first order transition disappears. To understand those behaviors, it would be helpful to consider the dilute gas approximation, where instantons of charge $Q=\pm 1$ are randomly distributed. Let us assume that the probability $P_n$ ( $P_{\overline n}$ ) that $n$ instantons (${\overline n}$ anti-instantons) are generated obeys the Poisson distribution $P_n=\lambda^n e^{-\lambda}/n! ( P_{\overline n}=\lambda^{\overline n}
e^{-\lambda}/{\overline n}! ) $. The topological charge distribution function $P(Q)$ is then given by the modified Bessel function as $P(Q)=e^{-\lambda} I_Q(\lambda)$, where $\lambda$ is the average number of instantons (anti-instantons). For $\lambda \gg 1$, $I_Q(\lambda)$ is approximated by $\exp(-Q^2/2\lambda)$. The $\lambda$ can then be identified as $V/(2C)$, which is natural since the average number is proportional to the volume $V$. As $\beta$ increases, $C(\beta)$ increases (section 3), that is, the average number of instantons decreases; as $\beta \rightarrow \infty$ (zero temperature limit), the configurations vary slowly, so that configurations with large $Q$ are unlikely to contribute to the partition function. In the large $\beta$ region, the behavior of $I_Q(\lambda)$ as a function of $Q$ is qualitatively the same as the result of the simulations. Precisely speaking, however, they are different, and actually the difference is attributed to the asymptotic scaling of the topological susceptibility in ref. .
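The dilute-gas form and its Gaussian limit are easy to compare directly. The SciPy sketch below uses $C(\beta=1.0)=15.5$ from section 3 and a $24\times24$ lattice merely as example numbers; `ive` is the exponentially scaled modified Bessel function, so the factor $e^{-\lambda}$ is already included.

```python
import numpy as np
from scipy.special import ive     # ive(Q, lam) = exp(-lam) * I_Q(lam)

C, L = 15.5, 24                   # C(beta = 1.0) from section 3 and an example lattice size
lam = (L * L) / (2.0 * C)         # average number of instantons, lambda = V/(2C)
Q = np.arange(0, 25)

P_dilute = ive(Q, lam)                        # dilute gas: P(Q) = exp(-lam) I_Q(lam)
P_gauss = np.exp(-Q**2 / (2.0 * lam))         # large-lambda Gaussian limit
P_gauss *= P_dilute[0] / P_gauss[0]           # match the two at Q = 0

for q in (0, 5, 10, 20):
    print(q, " dilute gas:", f"{P_dilute[q]:.3e}", "  Gaussian limit:", f"{P_gauss[q]:.3e}")
```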
It is expected from the Haldane conjecture that a second order phase transition would occur at $\theta=\pi$. We, however, have not been able to confirm it. The first order phase transition in the small $\beta$ region would have to turn into a second order one at some $\beta$, if the latter occurred. In the large $\beta$ region, as discussed in section 3, the volume dependence of the results is large in the interesting region of $\theta$, and the statistical errors mask the true behavior. For $L=72$, the maximal lattice extension of our study, $F(\theta)$ still changes considerably and acquires very large errors for $\theta \gsim \pi/2$. Consequently, so do its moments for a wider range of $\theta$. This is due to the large correlation length in the large $\beta$ region, and the finite size effect is not negligible. The large fluctuations come from the same origin as the so called sign problem , which arises in strongly correlated electron systems in condensed matter physics. In order to circumvent the problem, we must address the issue of the lattice effect. It is worthwhile to pursue the issue treated in the present paper from the improved point of view, such as the perfect action $\ref\HN{P.~Hasenfratz and F.~Niedermayer, \NP{414}, 785 (1994).}$ $\ref\DFP{M.~D'Elia, F.~Farchioni and A.~Papa,
``Scaling and topology in the 2-$d$ O(3)-$\sigma$ model on the lattice
with the perfect action", preprint IFUP-TH 23/95, hep-lat/9505004.}$. Recently, the second order phase transition has been found numerically by formulating the model in terms of clusters with fractional topological charge $\pm 1/2$ .
Some numerical studies of the $CP^{N-1}$ model with $N > 2$ have been done both without the $\theta$ term $\Pisa$ and with it . In the latter case for $CP^3$, interestingly, a first order transition is observed at a finite $\theta$ smaller than $\pi$ .
**Acknowledgment**
We are grateful to the colleagues for useful discussion. We also wish to thank S. Tominaga for discussion on the algorithm. The numerical simulations were performed on the computer Facom M-1800/20 at RCNP, Osaka University. This work is supported in part by a Grant-in-Aid for Scientific Research from the Japanese Ministry of Education, Science and Culture (No.07640417). One of the authors (A. S. H.) is grateful for the scholarship from the Japanese Government.
[**Appendix** ]{}
In this appendix, we discuss briefly the error analysis when “set method" and “trial distribution method" are used.
We consider first the simple case where a single set is adopted and, as trial distribution, $P_t(Q) = 1$. It is known that the counts in the histogram method essentially obey a multinomial distribution and that the error of counts at $Q$ ($count(Q)$) is estimated by the variance of the distribution . For each $Q$, the variance is $$\sigma^2(Q) = N \cdot {count(Q) \over N} \left( 1 - {count(Q) \over N} \right),$$ where $N$ is the total number of counts. Therefore $P(Q)$ is estimated by $$P(Q) = {count(Q) \over N} \pm \delta P(Q),$$ where $\delta P(Q) = \sigma (Q) / N$. The relative error ($\delta P / P$) at large $Q$ is given by $${\delta P(Q) \over P(Q)} \approx {\sigma (Q) \over count(Q) }
= {1 \over \sqrt{ count(Q)} }. \eqno (A.1)$$ It could become very large at large $Q$ when $P(Q)$ is a rapidly decreasing function of $Q$.
When the above two methods are adopted, the relative error decreases as follows. The trial distribution method makes $count(Q)$ almost independent of $Q$. The variance $\sigma(Q)$ also becomes almost constant at each $Q$. Accordingly, $P(Q)$ is given by $$P(Q) = P_t(Q) \left( count(Q) \pm \sigma(Q) \right),$$ which leads to the relative errors at any $Q$ $${\delta P(Q) \over P(Q)} = {\sigma \over count } \approx {\rm constant}.$$ This is quite an improvement compared to (A.1). When the set method is further used, the constant errors do not propagate over different sets .
**Table caption**
[Table I.]{} The results of chi-square-fitting to $\log P(Q)$ in terms of the polynomial $\sum_n a_n Q^n $ for various $\beta$. Fittings are performed to the data in the range from $Q=0 $ to $Q_{\rm Max}$. The resulting $\chi^2/d.o.f.$’s are also listed. For the data $\beta=0.0$, 0.5 and 1.0, Gaussian fitting is performed.
1.5cm
**Figure captions**
[Figure 1.]{} The topological charge distribution $P(Q)$ vs. $Q^2$ for $\beta=0.0$ to 3.5. The lattice size is $L=24$. The data only for $Q \leq 21$ are plotted. The lines are shown for the guide of eyes.
[Figure 2.]{}$\log a_2 ( = \log \kappa_V )$ vs. $\log V$. $\beta = 0.5$. The $1/V$ behavior is clearly seen.
[Figure 3.]{}$P(Q)$ vs. $Q^2$ for $\beta=3.0$. The lattice size $L$ is taken to be 24, 36, 48 and 72.
[Figure 4.]{}Free energy $F(\theta)$ for $\beta=0.5$. Lines are shown for $V=$ 16, 25 and 36 in order from below.
[Figure 5.]{}The expectation value of the topological charge $\langle Q
\rangle_\theta $ for $\beta=0.5$. Lines are shown for $V=$ 16, 25 and 36 in order from below. The peak of the curve becomes sharper quickly as $\theta
\rightarrow \pi$.
[Figure 6.]{}$F(\theta)$ for $\beta=3.0$. $L$ is chosen to be 24 (square), 36 (triangle), and 48 (circle). Values of $F(\theta)$ are plotted based on the parameters $a_n$ obtained by the fittings explained in the text. The parameters $a_n$ for $L=24$ are shown in Table I. Those for $L=36$ and 48 are obtained in the same process as for $L=24$. The lines are shown for the guide of eyes. The volume dependence appears clearly at $\theta \gsim \pi/2$.
[Figure 7.]{}$\langle Q \rangle_\theta $ for $\beta=3.0$. $L$ is the same as those in Fig.6. Error bars for the data of $L=48$ are not drawn because they are too large.
[^1]: \*
[^2]: \*\*
[^3]: \*\*\*
---
abstract: 'The nonlinear interactions between flexural and torsional modes of a microcantilever are experimentally studied. The coupling is demonstrated by measuring the frequency response of one mode, which is sensitive to the motion of another resonance mode. The flexural-flexural, torsional-torsional and flexural-torsional modes are coupled due to nonlinearities, which affect the dynamics at high vibration amplitudes and cause the resonance frequency of one mode to depend on the amplitude of the other modes. We also investigate the nonlinear dynamics of torsional modes, which cause a frequency stiffening of the response. By simultaneously driving another torsional mode in the nonlinear regime, the nonlinear response is tuned from stiffening to weakening. By balancing the positive and negative cubic nonlinearities a linear response is obtained for the strongly driven system. The nonlinear modal interactions play an important role in the dynamics of multi-mode scanning probe microscopes.'
author:
- 'H. J. R. Westra'
- 'H. S. J. van der Zant'
- 'W. J. Venstra'
title: Modal interactions of flexural and torsional vibrations in a microcantilever
---
Introduction
============
The Atomic Force Microscope (AFM) [@Binnig1986:p930] is a crucial instrument in studying nanoscale objects. Various operation schemes are employed, which include the use of different cantilever geometries, higher modes or the torsional mode for imaging [@Huang2004277; @Neumeister1994:p2527; @Pires2010:p0705; @Dong2009]. The nonlinear tip-sample interactions determine the dynamics in tapping-mode AFM and have been studied in detail [@hillenbrand2000p3478; @Rutzel08082003]. Besides this extrinsic nonlinearity, the intrinsic mechanical nonlinearities determine the dynamics of ultra-flexible microcantilevers at high amplitudes, as shown in a recent study [@Venstra:2010p7325]. These nonlinearities result in an amplitude-dependent resonance frequency and couple the vibration modes. In clamped-clamped beams, the nonlinear coupling is provided by the displacement-induced tension [@Westra:2010p7074; @Gaidarzhy2011:p264106]. For cantilever beams it was shown that the coupling between the modes can be used to modify the resonance linewidth [@Venstra2011:p151904]. In a multi-mode AFM [@Garcia2012; @Raman2011:p809], these modal interactions are of importance, since the resonance frequency of one mode depends on the amplitude of the other modes.\
In this work, we experimentally demonstrate the intrinsic mechanical coupling between the flexural and torsional modes of a microcantilever. The resonance frequency of one mode depends on the amplitude of the other modes. The flexural modes are coupled via the geometric and inertial nonlinearities. The torsional modes exhibit frequency stiffening at high amplitudes, which originates from torsion warping [@Sapountzakis2010:p1853]. Interestingly, the nonlinearity constant of one torsional mode changes sign when another torsional mode is driven at high amplitudes. Finally, the coupling between the torsional and flexural modes is studied.\
Experiment
==========
Microcantilevers are fabricated by photolithographic patterning of a thin low-pressure chemical vapor deposited silicon nitride (SiN) film. Subsequent reactive ion etching transfers the pattern to the SiN layer, and the cantilevers are released using a wet potassium hydroxide etch, resulting in an undercut-free cantilever. The dimensions are length $\times$ width $\times$ height ($L \times w \times h) = 42 \times 8 \times 0.07$ $\mu\mathrm{m}^3$. These floppy cantilevers allow high amplitudes and thus facilitate the study of nonlinearities. The cantilever is mounted onto a piezo actuator, which is used to excite the cantilever. The cantilevers are placed in vacuum (pressure $< 10^{-5}$ mbar) to eliminate air-damping and to enable large vibration amplitudes, where nonlinear terms in the equation of motion dominate the dynamics. The cantilever motion is detected using a home-made optical deflection setup which resembles the detection scheme frequently used in scanning probe microscopes. The flexural and torsional vibration modes are detected with a sensitivity of $\pm$ 1 pm/$\sqrt{\mathrm{Hz}}$ [@BabaeiGavan:2009p4633]. A schematic of the measurement setup is shown in Fig. 1(a). The cantilever displacement signal is measured using either a network (NA) or spectrum analyzer (SA). To drive a second mode, a separate RF source is used.\
![Measurement setup. (a) Optical deflection setup showing the laser beam, which reflects from the cantilever surface. The spot of the reflected laser beam is modulated in time by a frequency corresponding to the cantilever motion. The cantilever is mounted onto a piezo actuator in vacuum. Network (NA) and spectrum analysis (SA) are performed on the signal from the two-segment photodiode. (b) Frequency responses of the first and second flexural (top panels) and torsional (bottom panels) modes. Insets show the calculated mode shapes from Euler-Bernoulli beam theory.[]{data-label="fig1"}](figure1){width="135mm"}
First, the flexural vibrations are characterized by measuring the cantilever frequency response at different resonance modes. The first flexural mode shown in Fig. 1(b) occurs at 54.8 kHz with a Q-factor of 3000. The resonance frequency of the second mode is 347 kHz ($Q=3900$), which is 6.33 times higher than the first resonance mode, in agreement with the calculated ratio $f_{R,F2} / f_{R,F1} = 6.27$, following from Euler-Bernoulli beam theory. Not shown is the third flexural mode at 974.9 kHz, with $f_{R,F3} / f_{R,F1} = 17.8$, near the expected ratio of 17.6. This indicates that in the linear regime the cantilever beam is described by the Euler-Bernoulli beam theory. Throughout the manuscript, the subscripts $\mathrm{F}i$ and $\mathrm{T}i$ indicate the frequency span around the $i^\mathrm{th}$ flexural ($\mathrm{F}$) or torsional ($\mathrm{T}$) resonance mode. The subscript $\mathrm{R}$ refers to the resonance frequency of that particular mode.\
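The quoted mode-frequency ratios follow from the clamped-free Euler-Bernoulli characteristic equation and can be reproduced with a few lines of SciPy. This is only a verification sketch; the bracketing intervals below are chosen by hand around the known first three roots.

```python
import numpy as np
from scipy.optimize import brentq

# Flexural eigenfrequencies of a clamped-free Euler-Bernoulli beam scale as k_n^2,
# where k_n are the roots of 1 + cos(k) cosh(k) = 0.
char_eq = lambda k: 1.0 + np.cos(k) * np.cosh(k)

# Bracketing intervals around the first three roots (approximately (2n-1)*pi/2).
brackets = [(1.5, 2.5), (4.0, 5.5), (7.0, 8.5)]
k = np.array([brentq(char_eq, a, b) for a, b in brackets])

ratios = (k / k[0]) ** 2
print("k_n        :", np.round(k, 4))
print("f_n / f_1  :", np.round(ratios, 2))   # expect approximately [1, 6.27, 17.55]
```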
The torsional modes are characterized by rotating the cantilever over 90 degrees in the setup; the two-segment photodiode is then sensitive to vibrations corresponding to torsional resonance modes [@BabaeiGavan:2009p4633]. The frequency response of the first two torsional modes is shown in Fig. 1(b). From theory, the ratio between the lowest two resonance frequencies of the torsional modes is 3, which is close to the measured ratio of $f_{R,T2} / f_{R,T1} = 1638\, \mathrm{kHz} / 535.4\, \mathrm{kHz} = 3.06$. The Q-factors of the first and second torsional mode are 4300 and 3200 respectively.\
At high drive amplitudes, the flexural and torsional modes become nonlinear. The nonlinearity of the flexural modes in a cantilever beam was theoretically studied by Crespo da Silva in 1978 [@CrespodaSilva:1978p30; @CrespodaSilva:1978p29]. To include the torsional nonlinearity, the equations of motion are extended (Appendix A). For the flexural and torsional modes, the nonlinearity causes a Duffing-like frequency stiffening when the mode is strongly driven [@Lifshitz:2008p7422; @Nayfeh:1995p42], leading to a bistable vibration amplitude. This bifurcation is observed in all modes studied in this paper. These nonlinearities are responsible for the coupling between the flexural-flexural, torsional-torsional and flexural-torsional modes.
Modal interactions in a microcantilever
=======================================
We now experimentally demonstrate the coupling between the modes of a microcantilever. We use a two-frequency drive signal to excite two resonance modes of the cantilever simultaneously while we measure the motion of one mode. First, we focus on the interactions between the flexural modes. Then we turn our attention to the torsional modes, starting with the amplitude-dependent resonance frequency of the torsional vibrations, followed by the demonstration of the coupling between the lowest two torsional modes. Finally, the interactions between flexural and torsional modes are discussed.
Flexural-flexural mode interaction
----------------------------------
To investigate the interactions between the two lowest flexural modes, the thermal motion of the first mode is measured with a spectrum analyzer, while the RF source strongly drives the second mode. The thermal noise spectra of the first mode as a function of the drive frequency of the second mode are shown in Fig. \[fig2\](a). The color scale represents the power spectral density of the displacement around the resonance frequency of the first mode. A shift of the resonance peak of the first mode is observed as the drive signal at $f_{\mathrm{F2}}$ approaches the nonlinear resonance of the second mode. The resonance frequency of the first mode for each drive frequency of the second mode is obtained by fitting the damped-driven harmonic oscillator (DDHO) response. In Fig. \[fig2\](b), this resonance frequency of the first mode is plotted versus the drive frequency of the second mode. The nonlinear response of the second mode is reflected in the resonance frequency of the first mode where the resonance frequency first increases and then jumps down after the second mode has reached its maximum amplitude, indicated by the arrow. At the maximum amplitude of the second mode, the resonance frequency of the first mode is shifted by several times its linewidth. This experiment shows that the coupling between the flexural modes can introduce significant resonance frequency shifts when multiple modes are excited simultaneously. Moreover, by measuring the shift in resonance frequency of the first mode the motion of the second mode can be detected. For comparison, the nonlinear response of the direct-driven second mode is shown in the inset of Fig. \[fig2\](b).\
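The DDHO fits mentioned above amount to a least-squares fit of a damped-oscillator power spectral density to each noise spectrum. The snippet below is an illustrative sketch of such a fit; the function form, parameter names and initial guesses are assumptions, not the analysis code used for the figures:

```python
import numpy as np
from scipy.optimize import curve_fit

def ddho_psd(f, f_r, df, a, bg):
    """Displacement PSD of a thermally driven damped harmonic oscillator
    with resonance frequency f_r and linewidth df, on a white background bg."""
    return a / ((f_r**2 - f**2)**2 + (df * f)**2) + bg

def fit_resonance(freq, psd, f_r_guess, q_guess):
    df_guess = f_r_guess / q_guess
    p0 = [f_r_guess, df_guess, psd.max() * (df_guess * f_r_guess)**2, psd.min()]
    popt, pcov = curve_fit(ddho_psd, freq, psd, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])  # fitted resonance frequency and its standard error

# Usage sketch: f_r, err = fit_resonance(freq_axis, measured_psd, 54.8e3, 3000)
```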
![Flexural-flexural mode interactions. (a) Frequency spectra of the thermal motion of the first flexural mode ($f_{\mathrm{F1}}$), when the second mode is driven through its resonance frequency. Colorscale represents the power spectral density of the displacement noise of mode 1. As the peak width remains constant, there is no significant change in the Q-factor. The motion of the second mode tunes the resonance frequency of the first mode. (b) The resonance frequency of the first mode $f_\mathrm{R,F1}$ versus the drive frequency of the second flexural mode. The nonlinear response of the second mode is reflected in the fitted resonance frequency of the first mode. Inset: the direct measurement of the nonlinear frequency response of the second mode [@UMfootnote] []{data-label="fig2"}](figure2){width="135mm"}
Torsional-torsional mode interaction
------------------------------------
Before turning to the interactions between the torsional vibration modes, we first measure the frequency response of a single torsional mode as a function of the drive strength. Although torsional modes are extensively used in AFMs [@Huang2004277; @Lohndorf2000p1176], their nonlinear behavior has not been investigated in detail. To investigate the nonlinearity, we strongly drive the torsional mode. In contrast to the flexural-flexural modes, where the nonlinearity arises from geometric and inertial effects, in torsional modes, the nonlinearity originates from torsion warping and inertial moments [@Sapountzakis2010:p1853]. In Appendix A we discuss the equations of motion including the nonlinearities involved. The amplitude of the first torsional mode with varying drive power is shown in Fig. \[fig3a\](a), with selected frequency responses of the first torsional mode plotted in Fig. \[fig3a\](b). At low driving power, the resonance line shape is a DDHO response, and the cantilever is oscillating in the linear regime. When the power is increased, the frequency response leans towards higher frequencies and the amplitude bifurcates. Close to this critical amplitude (0 dBm) the slope of the frequency response approaches infinity, which may be used to enhance the sensitivity in torsional mode AFM. A frequency stiffening is observed for the first and second torsional mode.\
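The observed stiffening and bistability follow the textbook Duffing response. Below is a minimal sketch, with purely illustrative, dimensionless parameters rather than values fitted to this cantilever, that finds the steady-state amplitudes and flags the drive frequencies where three solutions coexist:

```python
import numpy as np

# Steady-state amplitude a of a Duffing resonator
#   x'' + (w0/Q) x' + w0^2 x + alpha x^3 = F cos(W t)
# satisfies  [(w0^2 - W^2 + 0.75*alpha*a^2)^2 + (w0*W/Q)^2] * a^2 = F^2,
# a cubic equation in a^2; three positive roots signal the bistable regime.
def duffing_amplitudes(W, w0=1.0, Q=4300.0, alpha=1.0, F=1e-5):
    delta = w0**2 - W**2
    coeffs = [(0.75 * alpha)**2,
              2 * 0.75 * alpha * delta,
              delta**2 + (w0 * W / Q)**2,
              -F**2]
    u = np.roots(coeffs)
    u = u[(abs(u.imag) < 1e-9) & (u.real > 0)].real
    return np.sqrt(u)

for W in np.linspace(0.9995, 1.0010, 7):
    amps = duffing_amplitudes(W)
    print(f"W = {W:.5f}  amplitudes = {np.sort(amps)}")
```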
![Nonlinear torsional mode. (a) Frequency responses of the first torsional mode, when the drive amplitude is increased. Beyond a power of 5 dBm, the response is bistable. Color scale indicates the amplitude normalized to the drive voltage. (b) Resonator amplitude traces at 5 selected drive powers. The nonlinear frequency response is visible at high drive powers. The frequency is swept from low to high. []{data-label="fig3a"}](figure3a){width="135mm"}
Similar to the flexural-flexural modal interactions discussed in the previous section, the coupling between the first and second torsional modes is studied: we measure the thermal noise of the first mode while the drive power of the second torsional mode is varied. The resonance frequencies, obtained from DDHO fits to the thermal noise spectra of the first mode, are shown in Fig. \[fig3b\](a). The resonance frequency increases by 500 Hz as the drive strength of the second mode is increased to 10 dBm. We now perform an experiment similar to the one shown in Fig. \[fig2\](b): the first torsional mode is used to detect the nonlinear vibrations of the second torsional mode. Fig. \[fig3b\](b) shows the nonlinear response, resembling the behavior of the first mode shown in Fig. \[fig3a\](b).\
![Torsional-torsional mode interactions. (a) The resonance frequency of the first torsional mode, when the drive power at the second torsional mode on resonance is varied. The resonance frequency of mode 1, $f_{\mathrm{R,T1}}$, increases with the drive power of the second mode, $p_{\mathrm{T2}}$. (b) The nonlinear response of the second mode is measured by using the first mode as a detector.[]{data-label="fig3b"}](figure3b){width="135mm"}
Interesting behavior is observed when both torsional modes are driven in the nonlinear regime. In contrast to measurements in the previous section, the first mode is now also driven in the nonlinear regime. Fig. \[fig4\](a) shows the nonlinear frequency response of the first torsional mode, while stepping the drive frequency of the second torsional mode through its resonance. Fig. \[fig4\](b) shows individual traces, which reveal interesting behavior; in the lowest panel (i), there is no influence of the second mode and frequency stiffening of the first mode is observed, cf. Fig. \[fig3a\](b). When the amplitude of the second mode starts to increase as it approaches its resonance, the response of mode 1 becomes more linear (panel ii). Here, the frequency stiffening and weakening nonlinearities are balanced, yielding a linear response. At high amplitude of the second mode, frequency weakening of mode 1 (panel iii) is observed. When the amplitude of the second mode drops, frequency stiffening is restored (panel iv). This measurement not only demonstrates the coupling between the torsional modes, but also that the sign of the nonlinearity constant of a torsional mode depends on the amplitude of the motion of the other modes. By simultaneously driving another mode, the torsional frequency response can be tuned from a stiffening to a weakening characteristic.\
![Tuning the torsional nonlinearity via modal interactions. (a) Frequency responses of the nonlinear first torsional mode, while the frequency of the second mode is swept through its nonlinear resonance. The frequency stiffening of the torsional mode changes into weakening when the second mode oscillates with high amplitudes. When the amplitude of the second mode jumps down, again frequency stiffening is observed. (b) Traces from (a) taken at the indicated frequencies. In panel (ii) the response is close to a linear one, due to balancing of the stiffening and weakening nonlinearities.[]{data-label="fig4"}](figure4){width="135mm"}
Flexural-torsional mode interaction
-----------------------------------
The coupling between the first flexural and first torsional mode is now studied experimentally. This coupling is theoretically described in Eq. \[eq2\]. Fig. \[fig5\](a) shows the resonance frequency of the first torsional mode as a function of drive power of the first flexural mode. The resonance frequency increases by 100 Hz when the power in the flexural mode is increased to 10 dBm. Detection of the nonlinear flexural mode by measuring the resonance frequency of the torsional mode is shown in Fig. \[fig5\](b). The nonlinear interactions when both modes are driven in the nonlinear regime are shown in Fig. \[fig5\](c) and (d). The interaction is clearly visible in the frequency-frequency plots, where one frequency is swept and the frequency of the RF source is stepped across the nonlinear resonances of both modes.\
![Flexural-torsional interaction. (a) Resonance frequency shift of the first torsional mode, when the drive power of the first flexural mode is increased. The resonance frequencies are obtained from thermal noise spectra. (b) The nonlinear resonance response of the first flexural mode reflected in the resonance frequency of the torsional mode (from thermal noise spectra). (c) and (d) Nonlinear dynamics when both the flexural and the torsional modes are excited in the nonlinear regime.[]{data-label="fig5"}](figure5){width="135mm"}
Discussion and conclusion
=========================
In summary, we demonstrated the coupling between the flexural and torsional vibration modes in a microcantilever. This coupling is due to nonlinearities, which also give rise to an amplitude-dependent resonance frequency. The interactions between the different flexural modes, between different torsional modes and between the flexural and torsional modes are demonstrated in detailed experiments. We also demonstrate the nonlinear frequency stiffening of torsional modes driven at high amplitudes.\
Several applications are proposed for the modal interactions. A specific resonance mode can be shifted to a higher frequency by simultaneously driving another mode. For strongly driven modes, the cubic spring constant (nonlinearity) can be modified from positive to negative, tuning the response from stiffening to weakening. By balancing two excitation strengths, a nonlinear response can be tuned to a linear one. By modal interactions, one mode can be used to detect the motion of another mode of the same cantilever. Besides these applications, the modal interactions have consequences for multi-mode schemes, such as scanning probe microscopy and mass sensors based on microcantilevers [@Raman2011:p809; @Li:2007p25; @Raiteri2001115; @dohn2007p103303].
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors acknowledge financial support from the Dutch funding organization FOM (Program 10, Physics for Technology).
Theory of modal interactions {#app}
============================
In this appendix the nonlinear equations of motion of the modes in a cantilever are described. We start with the general equations of motion, which include the coupling between the torsional and flexural modes. Then, we consider flexural modes along one axis. We conclude with the equation of motion of two coupled flexural modes, relevant to the experiment in Section 3.1. We start with the equations derived by Crespo da Silva [@CrespodaSilva:1978p30; @CrespodaSilva:1978p29], who described the motion in two flexural directions $v$ and $w$ and the torsional angle $\theta$: $$\begin{aligned}
m v_{tt} &+ \gamma^v v_t + \frac{\partial^2}{\partial s^2}(D_{\zeta}v_{ss}) = \frac{\partial }{\partial s}\Bigg\{ -D_{\xi} w_{ss}(\theta_s + v_{ss}w_s) + w_s Q_{\theta} - v_s\frac{\partial}{\partial s}\Big[D_{\zeta}(v_s v_{ss} + w_{s}w_{ss})\Big] \nonumber \\
& + (D_\eta - D_\zeta)w_s v_{ss} w_{ss} + \Big[ \Big(D_\eta - D_\zeta\Big)\frac{\partial}{\partial s} \Big(\theta w_{ss} - \theta^2 v_{ss}\Big)\Big] \nonumber \\
&+ \jmath_\xi w_{st}(\theta_t + v_{st}w_s) + \jmath_\zeta v_{st}\frac{\partial}{\partial t}\Big(v_{s}v_{st} + w_{st}w_{s}\Big) - \frac{\partial}{\partial t}\Big[(\jmath_\eta - \jmath_\zeta)(\theta w_{st} \nonumber \\
&- \theta^2 v_{st} + w_s v_{st} w_{st}) - \jmath_\zeta v_{st}\Big] - \frac{v_s}{2}\int_L^s \mathrm{d}s' \, m \int_0^{s'} \mathrm{d}s'' \, \frac{\partial^2}{\partial t^2} \Big(v_s^2 + w_s^2\Big) \Bigg\} + q \cos(\Omega t) , \\
m w_{tt} &+ \gamma^q w_t + \frac{\partial^2}{\partial s^2}(D_{\eta}w_{ss}) = \frac{\partial }{\partial s} \Bigg\{ D_{\xi} v_{ss}(\theta_s + v_{ss}w_s) - w_s\frac{\partial}{\partial s}\Big[D_{\eta}(v_s v_{ss} + w_{s}w_{ss})\Big] \nonumber \\
&+ \Big[\Big(D_\eta - D_\zeta\Big)\frac{\partial}{\partial s}\Big(\theta v_{ss} + \theta^2 w_{ss}\Big)\Big] - \jmath_\xi v_{st}(\theta_t + v_{st} w_s) + w_s\frac{\partial}{\partial t}(\jmath_\eta w_s w_{st} + \jmath_\zeta v_s v_{st}) \nonumber \\
&- \frac{\partial}{\partial t}\Big[(\jmath_\eta - \jmath_\zeta)(\theta v_{st} + \theta^2 w_{st}) - \jmath_\eta w_{st}\Big] - \frac{w_s}{2}\int_L^s \mathrm{d}s' \, m \int_0^{s'} \mathrm{d}s'' \, \frac{\partial^2}{\partial t^2}\Big(v_s^2 + w_s^2\Big) \Bigg\}, \\
\jmath_\xi \frac{\partial }{\partial t} &\Big(\theta_{t} + v_{st} w_s\Big) - \frac{\partial }{\partial s}\Big[D_\xi(\theta_s + v_{ss}w_{s})\Big] + (D_\eta - D_\zeta)\Big[(v_{ss}^2 - w_{ss}^2) \theta - v_{ss}w_{ss}\Big]
\nonumber \\
&- (\jmath_\eta - \jmath_\zeta) \Big[(v_{st}^2 - w_{st}^2)\theta - v_{st} w_{st} \Big] = Q_\theta.\end{aligned}$$ Here, the subscripts $s$ and $t$ denote partial differentiation with respect to position and time, respectively. $\xi$, $\eta$ and $\zeta$ represent the principal axes of the beam’s cross section. $\gamma$ and $Q_\theta$ represent the damping, $D_{\eta,\zeta}$ are the flexural stiffnesses of the beam and $D_{\xi}$ is the torsional stiffness. The moments of inertia are given by $\jmath_{\eta,\zeta,\xi}$. The driving force is $q$ with driving frequency $\Omega$.\
Now, we consider only vibrations in one direction, so Eq. A.2 and all terms with $w$ in Eqs. A.1 and A.3 are disregarded. For the torsional mode, the nonlinear effect of torsion warping is taken into account [@Sapountzakis2010:p1853], where we assume that the ends of the beam are axially immovable. The coupled differential equations (in non-dimensional form, notation from Ref. [@CrespodaSilva:1994p9050]) are now given by: $$\begin{aligned}
v_{tt} &+ \gamma^v v_t + \beta_y v_{ssss} = \frac{\partial }{\partial s} \Bigg\{- v_s \frac{\partial }{\partial s} \Big(\beta_y v_s v_{ss}\Big) - \frac{\partial }{\partial s}\Big[(1 - \beta_y )\theta^2 v_{ss}\Big] + \jmath_\zeta v_{s} \frac{\partial }{\partial t}\Big( v_s v_{st}\Big) \nonumber \\
& - \frac{\partial }{\partial t} \Big[\Big(\jmath_\zeta - \jmath_\eta\Big)\theta^2 v_{st} + \jmath_\zeta v_{st}\Big] - \frac{v_s}{2}\int_1^s \mathrm{d}s' \, m \int_0^{s'} \mathrm{d}s'' \, \frac{\partial^2 }{\partial t^2} \Big(v_s^2 \Big) \Bigg\} + q \cos(\Omega t) , \\
\jmath_\xi \theta_{tt} &- \jmath_{\xi} \gamma^\theta \theta_t + \Big(\beta_\theta + \frac{I_t}{A D_\eta L^2} \tilde{N}\Big)\theta_{ss} - \frac{\rho C_s D_\eta}{\sqrt{m}L^4} \theta_{sstt} + \frac{E C_s}{L^4} \theta_{ssss} - \beta_z \theta_s^2 \theta_{ss} \nonumber \\
&- (1 - \beta_y) v_{ss}^2 \theta + (\jmath_\eta - \jmath_\zeta) v_{st}^2 \theta = 0.
\label{eq2}\end{aligned}$$ $\beta_y$ and $\beta_\theta$ are the ratios between two stiffnesses ($\beta_y = D_\zeta / D_\eta$ and $\beta_\theta = D_\xi / D_\eta$) and $A$ is the cross sectional area. The torsion nonlinearity is written as $\beta_z = \frac{3}{2 L^3}E I_n$. The torsion constant $I_t$, the warping constant $C_S$ and the time-dependent tensile axial load $\tilde{N}$ are given by: $$\begin{aligned}
I_t &= \int_S \Big( y^2 + z^2 + y \frac{\partial \phi_S^P}{\partial z} - z \frac{\partial \phi_S^P}{\partial y} \Big) \mathrm{d}S,
\nonumber \\
C_S &= \int_S (\phi_S^P)^2 \mathrm{d}S,
\nonumber \\
\tilde{N} &= \frac{1}{2}\frac{EI_p}{l}\int_0^l (\theta_x')^2 \mathrm{d}x.\end{aligned}$$ Here, the integrals run over the beam’s cross section $S$, and $\phi_S^P$ is the primary warping function. A more detailed description of the nonlinearity in the torsional mode is found in Ref. [@Sapountzakis2010:p1853].\
To demonstrate the origin of the nonlinear interactions observed in the main text, we now simplify the coupled equations Eqs. A.4 and A.5 by applying the Galerkin procedure. The solutions are written as a linear combination of the linear mode shapes of the cantilever with time-dependent coefficients, $v = \sum_i F^v_i (s) v_i(t)$ and $\theta = \sum_i F^\theta_i(s) \theta_i(t)$, where $i$ represents the mode number. The mode shapes of the flexural and torsional modes of the cantilever are discussed next. Introducing the operator $\mathcal{L}$, the linear part of Eq. A.4 is written as: $$\mathcal{L}[F^v] = \beta_y \frac{\partial^4 F^v}{\partial s^4} + \jmath_\zeta \omega^2 \frac{\partial^2 F^v}{\partial s^2} = \omega^2 F^v$$ The resonance frequency is denoted as $\omega$. The eigenfunctions can be calculated together with the boundary conditions for a single-clamped cantilever, $F^v(0) = F^v_s(0) = F^v_{ss}(1) = F^v_{sss}(1) = 0$, as: $$\begin{aligned}
F &= [ \cosh(k_1 s) - \cos(k_2 s) - K (\sinh(k_1 s) - k_1/k_2 \sin(k_2 s)) ] ,
\nonumber \\
K &= \frac{k_1^2 \cosh(k_1) + k_2^2 \cos(k_2)}{k_1^2 \sinh(k_1) + k_1 k_2 \sin(k_2)},\end{aligned}$$ The values of $k_{1,2}$ are given by: $$k_{1,2} = \sqrt{\mp \frac{\jmath_\zeta \omega_B^2}{2 \beta_y} + \sqrt{\Bigg(\frac{\jmath_\zeta \omega_B^2}{2 \beta_y}\Bigg)^2 + \frac{\omega_B^2}{\beta_y} }}.$$ The values of $k_1$ and $k_2$ depend on the mode number $i$ and can be calculated via the generating function: $$k_1^4 + k_2^4 + 2 k_1^2 k_2^2 \cosh(k_1)\cos(k_2) + k_1k_2(k_2 - k_1)^2 \sinh(k_1)\sin(k_2) = 0$$ The dimensional resonance frequency of the flexural mode is given by $\omega_{B,i} = \kappa_i (h/L^2) \sqrt{D_\zeta/\jmath_\zeta}$, where $\kappa_i$ is 1.875, 4.695 and 7.855 for $i=1$, 2 and 3. The beam shapes of the first two flexural modes are shown in the inset of Fig. \[fig1\] of the main text.\
The torsional mode shapes can be calculated by considering the operator $\mathcal{M}$ with eigenvalues $\omega_T^2$, $$\mathcal{M}[G] = -\frac{\beta_\xi}{\jmath_\xi} \frac{\partial^2 G}{\partial s^2} = \omega_T^2 G,
\label{torsion}$$ and the corresponding boundary conditions $G(0) = G_s(1) = 0$. Inserting the boundary conditions in Eq. \[torsion\] gives the torsional mode shapes $G = \sin[(2i-1)\pi s/2]$. The resonance frequency of the torsional mode is given by $\omega_T = (2i-1)(\pi/2)\sqrt{\beta_\xi/\jmath_\xi}$.\
The Galerkin procedure is applied to Eqs. A.4 and A.5: i.e., the solutions are written as a linear combination of the eigenmodes. We assume that the flexural mode is only excited around its resonance frequency, resulting in the equations: $$\begin{aligned}
v^i_{tt} &+ \gamma^v v^i_t + \omega_F^2 v^i = \sum_j \sum_k \sum_l \Big( v^j \Big[\alpha_{1,ijkl} \theta^k \theta^l + \alpha_{2,ijkl} v^k v^l +\alpha_{3,ijkl} (v^k v^l)_{tt} \Big] \Big) + f^i \cos(\Omega t),
\nonumber \\
\theta^i_{tt} &+ \gamma^\theta \theta^i_t + \omega_T^2 \theta^i = \sum_j \sum_k \sum_l \Big( \theta^j \Big[\alpha_{4,ijkl} v^k v^l + \alpha_{5,ijkl} \theta^k \theta^l \Big] \Big).\end{aligned}$$ The above equations show that the nonlinearity is the origin of the modal interactions. Note that for $j=k=l$, the nonlinear equation describing a single mode of the cantilever is recovered. A quadratic coupling between two different vibrational modes (for example, when $k=l$) also follows directly from the cubic nonlinearities. This quadratic coupling is clearly observed in the experiments. In Eq. A.12, the terms linear in $\theta$ are assumed to only modify the resonance frequency $\omega_T$. The coupling (Galerkin) coefficients $\alpha$ are given by the following equations: $$\begin{aligned}
\alpha_{1,ijkl} &= -(1-\beta_y) \int_0^1 F^i(F^j_{ss} G^k G^l)_{ss} \mathrm{d}s ,
\nonumber \\
\alpha_{2,ijkl} &= -\beta_y \int_0^1 F^i[F^j_s(F^k_s F^l_{ss})_{s}]_s \mathrm{d}s ,
\nonumber \\
\alpha_{3,ijkl} &= -\frac{1}{2} \int_0^1 F^i \Big( F^j_s \int_1^{s''} \int _0^{s'} F^k_s F^l_s \mathrm{d}s \mathrm{d}s' \Big)_{s''} \mathrm{d}s'' ,
\nonumber \\
\alpha_{4,ijkl} &= \frac{-(1-\beta_y)}{\jmath_\xi}\int_0^1 G^i G^j (F^k F^l)_{ss} \mathrm{d}s,
\nonumber \\
\alpha_{5,ijkl} &= \beta_z \int_0^1G^i (G^j_s G^k_s) G^l_{ss} \mathrm{d}s.\end{aligned}$$ Considering the interactions only between the lowest two modes of the torsional and the flexural mode, the values of the integrals in the coefficients $\alpha$ are given in Table A.1.
$ijkl$ $\alpha_1$ $\alpha_2$ $\alpha_3$ $\alpha_4$ $\alpha_5$
-------- -------------- -------------- -------------- -------------- ----------------
1111 3.213440553 40.44066328 4.596772482 14.71996258 -0.01963728194
2222 317.7598007 13418.09334 144.7254988 45.80067683 -12.21612177
1211 -25.80780977 -102.3196141 -3.595970428 -2.107211790 -0.05545673259
1121 5.199336697 65.86205943 -3.595970415 -45.97052884 -0.05545673259
1122 10.83931447 172.7377892 25.17415228 74.66573509 -1.468408253
1221 -20.39723182 228.0179031 6.117366163 22.42570986 -0.1567290224
1212 -20.39723182 2083.845719 6.117366163 22.42570986 -1.468408253
2111 -25.80780333 -102.3196141 -3.595970417 -2.107211790 -0.05776014140
2211 395.6571526 172.7376727 25.17415228 11.08062923 -0.1631564726
2121 -20.39724250 228.0179002 6.117366149 22.42570986 -0.1631564726
2112 -20.39724250 2083.845647 6.117366149 22.42570986 -1.529151963
: The values of the integrals in the coefficients $\alpha$ for the interactions between the first and second flexural and torsional modes.
\[tab1\]
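As a rough illustration of how the cubic terms in Eq. A.12 produce the quadratic frequency pulling seen in the experiments, the cross terms can be treated in a rotating-wave approximation. The sketch below uses dimensionless placeholder numbers, not the tabulated coefficients or device parameters:

```python
import numpy as np

# With the other mode oscillating as theta2(t) = B*cos(w2*t), a cubic cross
# term  alpha * theta2**2 * theta1  in Eq. A.12 averages (rotating-wave) to
# (alpha * B**2 / 2) * theta1, so the effective squared frequency of mode 1
# becomes w1**2 - alpha * B**2 / 2: the shift grows quadratically with the
# amplitude of the other mode and its sign follows the sign of -alpha.
def shifted_frequency(w1, alpha, B):
    return np.sqrt(w1**2 - alpha * B**2 / 2.0)

w1 = 1.0                          # dimensionless mode-1 frequency
for B in [0.0, 0.05, 0.1, 0.2]:   # dimensionless mode-2 amplitudes
    print(B, shifted_frequency(w1, alpha=-0.5, B=B))
```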
To give an example, we work out Eq. A.12 for the fundamental and second flexural mode of a cantilever. This case is studied in the experiment and described in Section 3.1. We denote the amplitudes by $v^1=a$ and $v^2=b$, and the coupled equations are given by: $$\begin{aligned}
a_{tt} &+ \gamma_1 a_t + \omega_F^2 a = \alpha_{2,1111} a^3 + \alpha_{2,1222} b^3 + (\alpha_{2,1112} + \alpha_{2,1121} + \alpha_{2,1211})a^2 b
\nonumber \\
&+ (\alpha_{2,1212} + \alpha_{2,1221} + \alpha_{2,1122}) b^2 a + \alpha_{3,1111} 2a(a^2)_{tt} + \alpha_{3,1222} 2b(b^2)_{tt} + (\alpha_{3,1112} + \alpha_{3,1121}) a(ab)_{tt}
\nonumber \\
&+ (\alpha_{3,1221} + \alpha_{3,1212}) b(ab)_{tt} + \alpha_{3,1211} b(a^2)_{tt} + \alpha_{3,1122} a(b^2)_{tt} + f_1 \cos(\Omega t), \\
b_{tt} &+ \gamma_2 b_t + \omega_F^2 b = \alpha_{2,2111} a^3 + \alpha_{2,2222} b^3 + (\alpha_{2,2112} + \alpha_{2,2121} + \alpha_{2,2211})a^2 b
\nonumber \\
&+ (\alpha_{2,2212} + \alpha_{2,2221} + \alpha_{2,2122}) b^2 a + \alpha_{3,2111} 2a(a^2)_{tt} + \alpha_{3,2222} 2b(b^2)_{tt} + (\alpha_{3,2112} + \alpha_{3,2121}) a(ab)_{tt}
\nonumber \\
&+ (\alpha_{3,2221} + \alpha_{3,2212}) b(ab)_{tt} + \alpha_{3,2211} b(a^2)_{tt} + \alpha_{3,2122} a(b^2)_{tt} + f_2 \cos(\Omega t).
\label{flexflex}\end{aligned}$$
---
abstract: 'As the realization of a fully operational quantum computer remains distant, *quantum simulation*, whereby one quantum system is engineered to simulate another, becomes a key goal of great practical importance. Here we report on a variational method exploiting the natural physics of cavity QED architectures to simulate strongly interacting quantum fields. Our scheme is broadly applicable to any architecture involving tunable and strongly nonlinear interactions with light; as an example, we demonstrate that existing cavity devices could simulate models of strongly interacting bosons. The scheme can be extended to simulate systems of entangled multicomponent fields, beyond the reach of existing classical simulation methods.'
author:
- Sean Barrett
- Klemens Hammerer
- Sarah Harrison
- 'Tracy E. Northup'
- 'Tobias J. Osborne'
title: Simulating Quantum Fields with Cavity QED
---
Modelling interacting many-particle systems classically is a challenging yet tractable problem. However, in the quantum regime, it becomes rapidly intractable, owing to the dramatic increase in the number of variables required to describe the system. Feynman [@Feynman1982] realized that an alternate approach would be to exploit quantum mechanics to carry out simulations beyond the reach of classical computers. This idea was the basis of Lloyd’s simulation algorithm [@Lloyd1996], a procedure for a *digital* quantum computer to simulate the dynamics of a strongly interacting quantum system. In contrast, there is also an *analogue* approach to quantum simulation, where the simulator’s Hamiltonian is tailored to match that of the simulated system [@Buluta2009]. The complementary aspects of the analogue and digital methods, reviewed in [@Buluta2009; @Aspuru-Guzik2012; @Bloch2012; @Johanning2009; @Lewenstein2007], have led to a host of recent experiments [@Friedenauer2008; @Gerritsma2010; @Haller2010; @Kim2010; @Lanyon2010; @Islam2011; @Simon2011; @Barreiro2011; @Lanyon2011].
To date, most experimental implementations of quantum simulation algorithms have been focussed on the task of simulating *quantum lattice systems*, with comparatively less attention paid to systems with continuous degrees of freedom. The archetypal example of a quantum system with a continuous degree of freedom is the *quantum field*. Currently, quantum simulations of quantum field theories have relied on discretization of the dynamical degrees of freedom. One body of recent theoretical work is focussed on the analogue simulation of discretized quantum fields, using cold atoms in optical lattices [@Lepori2010; @Bermudez2010; @Cirac2010; @Semiao2011; @Kapit2011] and coupled cavity arrays [@Hartmann2006; @Greentree2006; @Angelakis2007]. Complementing this are proposals for digital quantum simulation on a universal quantum computer of the zero-temperature [@Byrnes2006] and thermal [@Temme2011] dynamics of non-abelian gauge theories and, more recently, a digital quantum simulation [@Jordan2012; @Jordan2011] of scattering processes of a discretized $\lambda\phi^4$ quantum field.
In this paper we report on an *analogue* algorithm to simulate the ground-state physics of a one-dimensional strongly interacting quantum field using the *continuous* output of a cavity-QED apparatus [@Raimond2001; @Miller2005; @Walther2006; @Haroche2006; @Girvin2009]. Our method involves no discretization of the dynamical degrees of freedom; the simulation register is the continuous electromagnetic output mode of the cavity. The variational wave function generated in this way therefore belongs to an extremely expressive class, namely the class of continuous matrix product states, as we will show. We argue that our approach is already realizable with state-of-the-art cavity-QED technology.
![image](Detectors.pdf){width="1.5\columnwidth"}
We concentrate on simulating quantum fields modelling collections of strongly interacting bosons in one spatial dimension. These systems are compactly described in second quantization using the quantum field annihilation and creation operators $\hat{\psi}(x)$ and $\hat{\psi}^\dag(x)$, which obey the canonical commutation relations $[\hat{\psi}(x), \hat{\psi}^\dag(y)] = \delta(x-y)$. The task is to determine the ground state of a given field-theoretic Hamiltonian $\hat{\mathcal{H}}(\hat\psi,\hat\psi^\dagger)$. The prototypical form of such a Hamiltonian is $$\label{eq:Ham}
\hat{\mathcal{H}} = \int (\hat{T} + \hat{W} + \hat{N})\, dx$$ where $\hat{T} = \frac{d\hat{\psi}^\dag(x)}{dx}\frac{d\hat{\psi}(x)}{dx}$, $\hat{W} = \int w(x-y) \hat{\psi}^\dag(x)\hat{\psi}^\dag(y)\hat{\psi}(y)\hat{\psi}(x)\, dy$, and $\hat{N} = -\mu\hat{\psi}^\dag(x)\hat{\psi}(x)$ describe the kinetic energy, two-particle interactions with potential $w(x-y)$, and the chemical potential, respectively. Our approach provides a quantum variational algorithm for finding the ground states of an arbitrary Hamiltonian that is translation-invariant and consists of finite sums of polynomials of creation/annihilation operators and their derivatives.
The apparatus proposed to simulate the ground-state physics of $\hat{\mathcal{H}}$ is a single-mode cavity coupled to the quantum degrees of freedom of some intracavity medium (Fig. \[fig:dictionary\]); our proposal is not tied to the specific nature of the medium, so long as one or more tunable nonlinear interactions are present that are sufficiently strong at the single-photon level. Below we consider the example of a single trapped atom coupled to the cavity via electronic transitions. The system is described by a Hamiltonian $\hat H_{\mathrm{sys}}(\lambda)$ that depends on a set of controllable parameters $\lambda$, for example, externally applied fields. When the cavity is driven, either directly through one of its mirrors or indirectly through the medium, the intracavity field relaxes to a stationary state, and the cavity emits a steady-state beam of photons in a well-defined mode.
The crucial idea underlying our proposal is to regard the steady-state cavity output as a continuous register recording a *variational* quantum state $|\Psi(\lambda)\rangle$ of a one-dimensional quantum field with control parameters $\lambda$ *as* the variational parameters. This representation is chosen so that the spatial location $x$ of the simulated translation-invariant field is identified with the value of the time-stationary cavity output mode exiting the cavity at time $t = x/s$. The arbitrary scaling parameter $s$ is included in the set of variational parameters $\lambda$. We complete this identification by equating the annihilation operator $\hat{\psi}(x)$ of the simulated quantum field with the field operator $\hat{E}^{+}(t)$ for the positive-frequency electric field of the cavity output mode [^1], via $\hat{\psi}(x) =\hat E^{+}(t)/\sqrt{s}$.
Recall that the variational method proceeds by minimizing the average energy density of the variational state $f(\lambda) = \langle \Psi(\lambda)|\hat{T}+\hat{W}+\hat{N}|\Psi(\lambda)\rangle$ over the variational parameters $\lambda$. A key point in our scheme is that — with the identification of the field operators $\hat E^{+}(t)$ and $\hat{\psi}(x)$ in hand — the value of $f(\lambda)$ can be determined from standard optical measurements on the cavity output field, namely the measurement of *Glauber correlation functions* [@Glauber1963; @Mandel1995], see Fig. \[fig:dictionary\]. This result is easily seen for the Hamiltonian of Eq. . Thanks to the linearity of the expectation value, we can separately measure $\langle \hat{T} \rangle$, $\langle \hat{W}\rangle$, and $\langle \hat{N} \rangle$. The expectation value of the chemical potential term corresponds to a function of the *intensity* of the output beam via $\langle \hat{N} \rangle = -\frac{\mu}{s}\langle\hat E^{-}(t)\hat E^{+}(t)\rangle$. The kinetic energy term $\langle \hat{T} \rangle$ corresponds to the limit $$\begin{aligned}
\langle \hat{T} \rangle = \lim_{\epsilon_1,\epsilon_2\rightarrow 0} \frac{1}{s^3\epsilon_1\epsilon_2}&\left(g^{(1)}(t+\epsilon_1,t+\epsilon_2)-g^{(1)}(t+\epsilon_1,t)\right.\\
&\left.-g^{(1)}(t,t+\epsilon_2)+g^{(1)}(t,t)\right),\end{aligned}$$ where $g^{(1)}(t_1,t_2)=\langle\hat E^{-}(t_1)\hat E^{+}(t_2)\rangle$; this quantity can be estimated by choosing a finite but small value for $\epsilon_1$ and $\epsilon_2$. Note that this procedure does not amount to a simple space discretization because the output is a continuous quantum register. The final term $\langle \hat{W}\rangle$ depends on two-point spatial correlation functions $\langle \hat{\psi}^\dag(x)\hat{\psi}^\dag(y)\hat{\psi}(y)\hat{\psi}(x)\rangle$, which translate to measurements of $g^{(2)}(t_1,t_2)=\langle\hat E^{-}(t_1)\hat E^{-}(t_2)\hat E^{+}(t_2)\hat E^{+}(t_1)\rangle$. The detection schemes to estimate all terms in the showcase Hamiltonian are presented in Fig. \[fig:dictionary\]. From a wider perspective, *any* Glauber correlation function $g^{(n,m)}=\langle\hat E^{-}(t_1)\cdots\hat E^{-}(t_n)\hat E^{+}(t'_m)\cdots\hat E^{+}(t'_1)\rangle$, i.e., any $n+m$-point field correlation function composed of $n$ creation operators $E^{-}$ and $m$ annihilation operators $E^{+}$ [@Glauber1963; @Mandel1995], can be measured with similar, albeit more complex setups, such as in [@Gerber2009; @Koch2011]. Thus, upon identifying $E^{-}(t)$ and $E^+(t)$ with $\psi^\dagger(x)$ and $\psi(x)$, respectively, our scheme admits the measurement of any equivalent energy density $\propto \langle\hat\psi^\dagger(x_1)\cdots\hat\psi^\dagger(x_n)\hat\psi(x'_m)\cdots\hat\psi(x'_1)\rangle$, and therefore ultimately the simulation of arbitrary Hamiltonians $\hat{\mathcal{H}}(\hat\psi,\hat\psi^\dagger)$.
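To make the post-processing step concrete, the finite-difference estimate of the kinetic term can be written out directly; in the sketch below `g1_example` is a purely illustrative stand-in for measured (or simulated) $g^{(1)}$ data, not an actual cavity output:

```python
import numpy as np

def kinetic_energy_density(g1, t, eps1, eps2, s):
    """Finite-difference estimate of <T> from the first-order coherence
    g1(t1, t2) = <E^-(t1) E^+(t2)>, following the expression in the text."""
    num = (g1(t + eps1, t + eps2) - g1(t + eps1, t)
           - g1(t, t + eps2) + g1(t, t))
    return num / (s**3 * eps1 * eps2)

# Illustrative stand-in for measured data: a stationary, Gaussian-decaying coherence.
def g1_example(t1, t2, flux=1.0, tc=2.0):
    return flux * np.exp(-((t1 - t2) / tc) ** 2)

print(kinetic_energy_density(g1_example, t=0.0, eps1=1e-3, eps2=1e-3, s=1.0))
# ~ 2/tc^2 = 0.5 for this stand-in coherence function
```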
Once $f({\lambda})=\langle \Psi(\lambda)|\hat{\mathcal{H}}|\Psi(\lambda)\rangle$ has been experimentally estimated for a given $\lambda$, the next step is to apply the variational method to minimize $f(\lambda)$. Minimization is carried out by adaptively tuning the parameters $\lambda$ in the system Hamiltonian $\hat{H}_\mathrm{sys}(\lambda)$ and iteratively reducing $f(\lambda)$, for example, using a standard numerical gradient-descent method. Once the optimum choice of ${\lambda}$ is found, the resulting cavity output field is a variational approximation to the ground state of $\hat{\mathcal{H}}$, and relevant observables of the field theory can be directly measured using the detection schemes of Fig. \[fig:dictionary\]. We emphasize that our method applies also to cases where a numerical estimation of $f(\lambda)$ cannot be performed efficiently due to the size and complexity of the system to be simulated, and we suggest that this is exactly the strength of our approach. Moreover, the optimization may be performed experimentally without theoretically calculating the cavity-QED system dynamics; indeed, it is not necessary to accurately characterize $H_{\mathrm{sys}}$ or its relation to the adjustable parameters $\lambda$.
Why should the stationary output of a cavity-QED apparatus be an expressive class capable of capturing the ground-state physics of strongly interacting fields? It is possible to show that such states are of *continuous matrix product state* (cMPS) type, a variational class of quantum field states recently introduced for the classical simulation of both nonrelativistic and relativistic quantum fields [@Verstraete2010; @Osborne2010; @Haegeman2010a; @Haegeman2010b]. These states are a generalisation of *matrix product states* (MPS) [@Fannes1992; @Verstraete2008; @Cirac2009; @Schollwoeck2011], which have enjoyed unparalleled success in the study of strongly correlated phenomena in one dimension in conjunction with the *density matrix renormalization group* (DMRG) [@White1992; @Schollwoeck2005]. It turns out that *all* quantum field states admit a cMPS description, providing a compelling argument for their utility as a variational class [@Completeness]. Crucially, the cMPS formalism turns out to be *identical* to the input-output formalism of cavity QED [@Collett1984]. This identification was anticipated in [@Verstraete2010; @Osborne2010; @Schoen2005; @Schoen2007], and we elucidate it further in the supplemental material. It implies that *quantum field states emerging from a cavity are of cMPS type* and thus fulfill the necessary conditions for being a suitable and expressive class of variational quantum states.
![Two-particle correlations in the Lieb-Liniger model are reproduced in simulations of an ion-trap cavity experiment. This *critical* model exhibits a transition between the *superfluid* regime for $v \approx 0$ and the *Tonks-Girardeau* regime for $v\gg 0$, which is seen in the value of the correlation function at $t = 0$. (a) The Lieb-Liniger ground state is simulated for interaction strengths $v = \{ 0.07\, \text{(red)}, 3.95\, \text{(orange)}, 60.20\, \text{(yellow)}, 625.95\, \text{(green)}\}$, and correlation functions $\langle \hat{\psi}^{\dagger}(0)\hat{\psi}^{\dagger}(x)\hat{\psi}(x)\hat{\psi}(0) \rangle$ calculated as in [@Verstraete2010; @Osborne2010] using 338 variational parameters. (b) Two-photon correlation functions $g^{(2)}{(t)}$ for an experiment with the parameters of [@Stute2012]. Although there are visible differences, with just three variational parameters $\{ g, \Omega, s \}$ the transition in the correlation functions is approximately reproduced. It is worth emphasizing how unusual it is for a variational calculation with only a few parameters to reproduce anything more than the coarsest features of a correlation function, e.g., if mean-field theory is used one does not obtain nontrivial correlators. Strikingly, the transition in (a) is captured even in the presence of realistic decay channels (inset). Note that this transition is analogous to that observed in [@Dubin2010].[]{data-label="fig:pilot_study"}](g2.pdf){width="\columnwidth"}
Even though cavity-QED output states are of cMPS type, can a realistic system in the presence of decoherence reproduce the relevant physics of a strongly interacting quantum field? As a test case, we demonstrate that the paradigmatic cavity-QED system, comprising a single trapped atom coupled to a high-finesse cavity mode, is capable of simulating the ground-state physics of an equally paradigmatic field, namely, the Lieb-Liniger model [@Lieb1963]. This model describes hard-core bosons with a delta-function interaction and is given by Eq. with $w(x-y) = v\delta(x-y)$, where $v$ describes the interaction strength. Our simulator consists of a two-level atom interacting with one cavity mode, described (in a suitable rotating frame) by the on-resonance Jaynes-Cummings Hamiltonian $$\begin{aligned}
\hat H_{sys} = g(\hat\sigma^{+}\hat a + \hat\sigma^{-}\hat a^{\dagger}) + \Omega(\hat\sigma^{+} + \hat\sigma^{-}),
\label{HJC}\end{aligned}$$ where $\hat\sigma^{+}$ is the atomic raising operator and $\hat a$ is the cavity photon annihilation operator, $g$ the atom–cavity coupling, and $\Omega$ the laser drive. The cavity-QED Hamiltonian $\hat H_{sys}$ can be realized in various experimental architectures [@Raimond2001; @Miller2005; @Walther2006; @Haroche2006; @Girvin2009]. Here we choose the example of a trapped calcium ion in an optical cavity, with which tunable photon statistics have previously been demonstrated [@Dubin2010], and we show in the supplemental information (see below) how $g$ and $\Omega$ can function as variational parameters.
In an experiment, to measure the variational energy density $f(\lambda)$, the output beam would be allowed to relax to steady state, the intensity $I$, $g^{(1)}$, and $g^{(2)}$ functions estimated as depicted in Fig. \[fig:dictionary\], and $f(\lambda)$ finally estimated from postprocessing this data. Measurement schemes 2a and 2b of Fig. \[fig:dictionary\] are just the standard laboratory techniques of photon detection and Hanbury-Brown and Twiss interferometry. Measurement scheme 2c represents an interferometer with variable path length that is used to estimate the derivative of the quantum field in the kinetic energy term $\langle \hat{T} \rangle$. Length shifts on the mm scale correspond to ps values of $\epsilon_1$ and $\epsilon_2$ in the estimation of $\langle \hat{T} \rangle$; as these values are six orders of magnitude smaller than the relevant timescales of the experiment, they are considered sufficiently small.
Obviously the test model chosen here is simple enough to admit a *classical* simulation, which we carry out for the experimental parameters of [@Stute2012]. Exploiting a simple gradient-descent method, we find the values of $\lambda = \{ g, \Omega, s \}$ that minimize $f(\lambda)$ for a given value of $v$. This procedure is repeated over a range of $v$ values of interest, and the corresponding optimized values of $\lambda$ are then used to compute quantities of interest, e.g., spatial-correlation functions as shown in Fig. \[fig:pilot\_study\]. Remarkably, we find that just these three variational parameters $\lambda=(g,\Omega, s)$, when varied in the experimentally feasible parameter regime of [@Dubin2010; @Stute2012] in the presence of losses, allow for a quantum simulation of Lieb-Liniger ground-state physics. One expects that the ground-state approximation would improve by increasing the dimension of the auxiliary system and by allowing sufficiently general internal couplings and couplings to the field. In the context of atom-cavity systems, this can be done by making use of the rich level structure of atoms (i.e., Zeeman splittings) and making use of motional degrees of freedom. For sufficiently complex intracavity dynamics a classical simulation will become unfeasible, and at the same time it becomes conceivable that such a simulator will outperform the best classical methods.
The reliability of a quantum simulator can be compromised by decoherence effects, as was recently emphasized in [@Hauke2011]. Our simulation of the ion-cavity system includes both cavity decay (at rate $\kappa$) and decay of the ion due to spontaneous emission (at rate $\gamma$). The cavity decay rate rescales the parameter $s$ linking measurement time and simulated space, and thus it can be considered as a variational parameter itself. We emphasize that cavity decay does not function as a decoherence channel in our scheme but is rather an essential element of the cMPS formalism. In contrast, spontaneous emission sets the limit for the timescale over which coherent dynamics can be observed. For our present example, we can show that the regime of strong cooperativity $\mathcal{C} = g^2/\kappa\gamma \gtrsim 1$ is sufficient to allow a simulation of the Lieb-Liniger model despite detrimental losses. The Lieb-Liniger model exhibits nontrivial variations (Friedel oscillations) of the two-point correlation function on a length scale $\xi=\langle\hat\psi(x)\hat\psi^\dagger(x)\rangle^{-1}$. This length scale in the simulated model translates to a time scale over which the stationary output field (as the simulating register) should exhibit similar nontrivial features. From the previously established scaling rule one finds that the required time scale is $\tau=\xi/s=\langle \hat E^{-}(t)\hat E^{+}(t)\rangle^{-1}$. By means of the cavity input-output relations, we relate the output photon flux to the mean intracavity photon number $\langle\hat E^{-}(t)\hat E^{+}(t)\rangle=\kappa\langle\hat a^\dagger \hat a\rangle$. In the bad cavity limit $\kappa \gg g$ the cavity mode can be adiabatically eliminated. In this case $\langle a^\dagger a\rangle\simeq (g/\kappa)^2$, such that $\tau=\kappa/g^2$. On the other hand, the characteristic decoherence time of the ion is determined by the spontaneous emission rate $2\gamma$. Beyond a time $1/2\gamma$ the second order correlation function $g^{(2)}$ will be trivial. We therefore demand $\tau\lesssim 1/2\gamma$, which is equivalent to the requirement of a large cooperativity $\mathcal{C}\gtrsim 1$. For the exemplary case of the ion-cavity system considered above the cooperativity was indeed $\mathcal{C}\simeq 1.8$, see supplementary material. While equivalent conditions must be determined on a case-to-case basis, we expect that nontrivial quantum simulations in cavity QED will not be possible in the weak-coupling regime. Finally, there are overall losses associated with scattering and absorption in cavity mirrors, detection path optics, and photon-counter efficiency. However, while these losses reduce the efficiency with which photon correlations are detected, they do not otherwise affect the system dynamics.
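Collecting the estimates of the cooperativity argument above into a single chain of relations (a restatement of that argument, not an additional assumption), the requirement on the simulator reads $$\tau = \frac{\xi}{s} = \frac{1}{\langle\hat E^{-}(t)\hat E^{+}(t)\rangle} = \frac{1}{\kappa\langle\hat a^\dagger\hat a\rangle} \simeq \frac{\kappa}{g^2} \lesssim \frac{1}{2\gamma}
\quad\Longleftrightarrow\quad
\mathcal{C} = \frac{g^2}{\kappa\gamma} \gtrsim 1,$$ which is the strong-cooperativity condition quoted above.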
A natural question is when our scheme would provide a practical advantage over classical computers in the simulation of quantum fields. We expect this to be the case in particular for the simulation of fields with multiple components, or species of particles, with canonical field-annihilation operators $\hat{\psi}_\alpha(x)$, $\alpha = 1, 2, \ldots, N$. This situation arises in at least two settings: firstly, for vector bosons in gauge theories with gauge group $\textsl{SU}(N)$, and secondly, in the nonrelativistic setting of cold atomic gases with multiple species. Variational calculations using cMPS fail in these settings, as the number of variational parameters must scale as $D\sim 2^N$. On the other hand, in a cavity-QED quantum simulation multiple output fields are naturally accessible via polarization or higher order cavity modes, and at the same time large effective Hilbert space dimensions can be achieved, e.g., with trapped ions or atoms. With $N \gtrsim 10$, substantial practical speedups are already expected with respect to the classical cMPS algorithm, which requires a number of operations scaling as $2^{3\times N}$.
The realisation that ground-state cMPS and the field states emerging from a cavity are connected can be exploited to characterise the correlations of the emitted light. Indeed, we obtain a simple criterion to determine when the correlations in the light field are nonclassical: if the simulated Hamiltonian (a) is quadratic in the field operators and (b) contains only “ultralocal” terms, i.e., no derivatives of the field operators, then the ground state is a trivial (i.e., Gaussian) product state, and there are no nonclassical correlations in the emitted light.
The output of a cavity-QED apparatus admits a natural interpretation as a variational class of quantum-field states. We have demonstrated that this allows an *analogue* quantum simulation procedure for strongly correlated physics using current technology. This result opens up an entirely new perspective for all cavity-QED systems which exhibit sufficiently strong nonlinearities at the single-photon level. This includes not only optical cavities coupled to atoms, but also superconducting circuits with super-strong coupling to solid state quantum systems [@Bozyigit2010], as well as other nonlinear systems achieving a high optical depth without cavities, such as atomic ensembles exhibiting Rydberg blockade [@Pritchard], or coupled to nanophotonic waveguides [@Vetsch2010; @Goban2012]. Looking further afield, since the input-output and cMPS formalisms generalize in a natural way to fermionic settings [@Sun1999; @Search2002; @Gardiner2004], our simulation procedure might be applicable to cavity-like microelectronic settings involving fermionic degrees of freedom. We hope our work will inspire explorations of these promising directions.
We acknowledge helpful conversations and comments by Jens Eisert, Frank Verstraete, Ignacio Cirac, Bernhard Neukirchen, Jutho Haegeman, Konstantin Friebe, Jukka Kiukas and Reinhard Werner. This work was supported by the Centre for Quantum Engineering and Space-Time Research (QUEST), the ERC grant QFTCMPS, the Austrian Science Fund (FWF) through the SFB FoQuS (FWF Project No. F4003), the European Commission (AQUTE, COQUIT), the Engineering and Physical Sciences Research Council (EPSRC) and through the FET-Open grant MALICIA (265522).
Supplementary Material
======================
Continuous Matrix Product States and Cavity QED
-----------------------------------------------
The matrix product state formalism has recently been generalized to the setting of quantum fields in [@Verstraete2010; @Haegeman2010; @Haegeman2010b] giving rise to *continuous matrix product states* (cMPS). These states refer to one-dimensional bosonic fields with annihilation and creation operators $\hat\psi_\alpha(x)$ and $\hat\psi_\alpha^\dagger(x)$ (such that $[\hat\psi_\alpha(x),\hat\psi_\beta^\dagger(y)]=\delta_{\alpha\beta}\delta(x-y)$), and are defined as $$\begin{aligned}
\label{eq:Psi}
|\Psi\rangle&=\mathrm{tr}_{aux}\{\hat U\}|\Omega\rangle,\end{aligned}$$ where $|\Omega\rangle$ denotes the vacuum state of the quantum field and $$\begin{aligned}
\hat U&=\mathcal{P}\exp\left(\int_{-\infty}^\infty dx \hat H_{\text{cMPS}}(x)\right).\end{aligned}$$ Here $\mathcal{P}$ denotes path ordering, and the (non Hermitian) Hamiltonian is $$\begin{aligned}
\label{eq:HcMPS}
\hat H_{\text{cMPS}}(x)=\hat Q\otimes\mathds{1}+\sum_\alpha \hat R_\alpha\otimes \hat\psi^\dagger_\alpha(x)\end{aligned}$$ with $\hat Q$ and $\hat R_\alpha$ being $D\times D$ matrices acting on an auxiliary system of dimension $D$. $\mathrm{tr}_{aux}$ in Eq. is the trace over this auxiliary system.
As was shown in [@Verstraete2010; @Osborne2010] an equivalent representation of the state $|\Psi\rangle$ as defined in Eq. is given by $$\begin{aligned}
\label{eq:Psi2}
\hat\rho&\propto \lim_{x_0,x_1\rightarrow\infty}\mathrm{tr}_{aux}\left\{\hat U(x_0,x_1)|\Omega\rangle\langle\Omega|\otimes|\psi\rangle\langle\psi|\hat U^\dagger(x_0,x_1)\right\}\end{aligned}$$ where $$\begin{aligned}
\hat U(x_0,x_1)&=\mathcal{P}\exp\left(\int_{x_0}^{x_1} dx \hat H_{\text{cMPS}}(x)\right),\end{aligned}$$ and $|\psi\rangle$ is an arbitrary state of the auxiliary system, which plays the role of a boundary condition at $x_0$. The two states in and are equivalent in the sense that they give rise to identical expectation values for normal- and position-ordered expressions of field operators $\langle\hat\psi^\dagger(y_1)\ldots\hat\psi^\dagger(y_n)\hat\psi(y_{n+1})\ldots \hat\psi(y_m)\rangle$, see [@Verstraete2010; @Osborne2010]. Note that the trace in is a proper partial trace over the auxiliary system (in contrast to the trace in ).
Consider, on the other hand, a cavity with several relevant modes described by annihilation/creation operators $\hat a_\alpha,\,\hat a^\dagger_\beta$ (such that $[\hat a_\alpha,\hat a^\dagger_\beta]=\delta_{\alpha\beta}$) which are coupled to some intracavity medium via a system Hamiltonian $\hat H_{sys}$. Moreover, each cavity mode is coupled to a continuum of field modes ($[\hat a_\alpha(\omega),\hat a^\dagger_\beta(\bar\omega)]=\delta(\omega-\bar\omega)\delta_{\alpha\beta}$) through one of the end mirrors of the cavity. (The generalization to the case of double-sided cavities, or ring cavities with several outputs per mode is immediate.) The total Hamiltonian for the cavity modes, the intracavity medium, and the outside field is $$\begin{aligned}
\hat H_{cQED}(t)&=\hat H_{sys}\otimes\mathds{1}\\
&\quad+i\sum_\alpha \int d\omega\sqrt{\frac{\kappa_\alpha(\omega)}{2\pi}}\left(\hat a_\alpha \otimes \hat a_\alpha^\dagger(\omega)e^{-i\omega t}-\mathrm{h.c.}\right).\end{aligned}$$ This Hamiltonian is written in an interaction picture with respect to the free energy of the continuous fields, and it is taken in a frame rotating at the resonance frequencies of the cavities. In the interaction picture and rotating frame each integral extends over a bandwidth of frequencies around the respective cavity frequencies. In the optical domain the Born-Markov approximation, which assumes $\kappa_\alpha(\omega)=\mathrm{const.}$ in the relevant bandwidth, holds to an excellent degree. In this case it is common to define time-dependent operators $\hat E^+_\alpha(t)=\int d\omega/\sqrt{2\pi}\,\hat a_\alpha(\omega)\exp(-i\omega t)$, which fulfill $[\hat E^+_\alpha(t),\hat E^-_\beta(t')]=\delta_{\alpha\beta}\delta(t-t')$. As in the main text, these operators correspond to the electric field components at the cavity output, and are defined such that $\langle\hat E^{-}(t)\hat E^{+}(t) \rangle$ denotes the flux of photons per second. The Hamiltonian then is $$\begin{aligned}
\hat H_{cQED}(t)&=\hat H_{sys}\otimes\mathds{1}+i \sum_\alpha \sqrt{\kappa_\alpha}\left(\hat a_\alpha \otimes \hat E_\alpha^-(t)-\mathrm{h.c.}\right).\end{aligned}$$ If the outside field modes are in the vacuum, standard quantum optical calculations [@Gardiner2004a] show that this Hamiltonian is equivalent to an *effective* non-Hermitian Hamiltonian $$\begin{aligned}
\hat H_{cQED}(t)&=\hat H_{sys}\otimes\mathds{1}-i\sum_\alpha\frac{\kappa_\alpha}{2}\hat a^\dagger_\alpha \hat a_\alpha\otimes\mathds{1}+i \sum_\alpha \sqrt{\kappa_\alpha} \hat a_\alpha \otimes \hat E_\alpha^-(t).\end{aligned}$$ When this is compared to Eq. the identification of the formalism of cMPS and cavity QED is immediate. If we assume that the field and the cavity system is in a state $|\Omega\rangle\otimes|\psi\rangle$ at some initial time $t_0$ the final state of the field outside the cavity at time $t_1$ is given by expression , when we identify $x=s\,t$, and $\hat H_{\text{cMPS}}(x)=-i\hat H_{cQED}(x/s)$, that is $$\begin{aligned}
\label{eq:identification}
\hat \psi(x)&= \frac{1}{\sqrt{s}}\hat E^+_\alpha(t), &
\hat R_\alpha&=\sqrt{\frac{\kappa_\alpha}{s}}\hat a_\alpha, &
\hat Q&= -\frac{i}{s}\hat H_{sys}-\sum_\alpha\frac{\hat R^\dagger_\alpha \hat R_\alpha}{2},\end{aligned}$$ with an arbitrary scaling factor $s$. Therefore, the state of the output modes of a cavity is always a continuous matrix product state. Formally these states are cMPS with an infinite-dimensional auxiliary system $D\rightarrow\infty$. However, due to energy constraints, the dimensions of the cavity system are effectively finite. The relevant dimension of the cavity Hilbert space then sets the dimension $D$ of the auxiliary system in the cMPS formalism.
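For concreteness, the identification in Eq. \[eq:identification\] can be written out for the driven Jaynes-Cummings system used below. The sketch assumes a truncated photon space and placeholder parameter values; it builds the cMPS matrices $(\hat Q, \hat R)$ for a single output mode:

```python
import numpy as np

def jc_cmps_matrices(g, Omega, kappa, s, n_ph=5):
    """cMPS matrices (Q, R) of the cavity output for the driven Jaynes-Cummings
    system: R = sqrt(kappa/s) * a and Q = -i H_sys / s - R^dag R / 2.
    The photon Hilbert space is truncated at n_ph levels."""
    a_ph = np.diag(np.sqrt(np.arange(1, n_ph)), 1)      # photon annihilation operator
    sm = np.array([[0, 1], [0, 0]], dtype=complex)      # atomic lowering operator
    a = np.kron(np.eye(2), a_ph)
    sigma_m = np.kron(sm, np.eye(n_ph))
    H = g * (sigma_m.conj().T @ a + sigma_m @ a.conj().T) \
        + Omega * (sigma_m + sigma_m.conj().T)
    R = np.sqrt(kappa / s) * a
    Q = -1j * H / s - 0.5 * R.conj().T @ R
    return Q, R

# Placeholder values (not the parameters of the ion-cavity experiment):
Q, R = jc_cmps_matrices(g=1.0, Omega=0.3, kappa=2.0, s=1.0)
```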
Quantum Variational Algorithm
-----------------------------
As an exemplary test case we demonstrated that the cavity QED system comprising a single trapped atom strongly coupled to a single high-finesse cavity mode is capable of simulating the ground-state physics of the Lieb-Liniger model. The atom-cavity system is described (in a suitable rotating frame) by the on-resonance Jaynes-Cummings Hamiltonian $$\begin{aligned}
H_{sys} = g(\hat \sigma^{+}\hat a + \hat \sigma^{-}\hat a^{\dagger}) + \Omega(\hat\sigma^{+} + \hat\sigma^{-}),
\label{HJC1}\end{aligned}$$ where $\hat\sigma^{+}$ is the atomic raising operator and $\hat a$ is the cavity-photon annihilation operator, $g$ the atom cavity coupling, and $\Omega$ the laser drive. Photons leak out of the cavity with leakage rate $\kappa$, and it is assumed that, in a real experiment, this output light can be measured by various detection setups.
The Lieb-Liniger model, with Hamiltonian $$\begin{aligned}
{\cal \hat H} &= \int_{-\infty}^\infty (\hat{T}+\hat{W}+\hat{N})dx \\
&= \int_{-\infty}^\infty \Big[ \frac{d \hat \psi^\dagger(x)}{dx} \frac{d \hat \psi(x)}{dx} + v~ \hat \psi^\dagger(x)\hat \psi^\dagger(x) \hat\psi(x) \hat\psi(x)\\
&\hspace{5cm} - \mu \hat \psi^\dagger(x) \hat \psi(x) \Big]
dx,\end{aligned}$$ describes hard-core bosons with contact interaction of strength $v$. We performed variational optimizations for a range of values for $v$. We did this using a simple gradient-descent minimization of the average energy density $f(\lambda) = \langle \Psi(\lambda)|\hat{T}+\hat{W}+\hat{N}|\Psi(\lambda)\rangle$. Using the parameters that minimize $f( \lambda)$ we then calculate other quantities of interest, such as correlation functions for the simulated ground state field. We have focussed on a gradient-descent algorithm for clarity; in practice a more sophisticated optimization procedure using the time-dependent variational principle, or conjugate gradients, could be used.
Our variational parameters $ \lambda = \left( g, \Omega, s \right)$ enter $f(\lambda)$ as follows. The expectation value of the energy density in the Lieb-Liniger model is $$\begin{aligned}
f(\lambda; v,\mu) = \langle \hat{T} \rangle +\langle \hat{W} \rangle + \langle \hat{N} \rangle\end{aligned}$$ where each term may be written in terms of the experimentally observed correlation functions via the correspondence $\hat\psi(x) = \hat E^+(t)/\sqrt{s}$, giving $$\begin{aligned}
\langle \hat{T} \rangle &= \lim_{\epsilon_1,\epsilon_2\rightarrow 0} \frac{1}{s^3\epsilon_1\epsilon_2}\left(g^{(1)}(t+\epsilon_1,t+\epsilon_2)-g^{(1)}(t+\epsilon_1,t)\right.\\
&\left.\quad-g^{(1)}(t,t+\epsilon_2)+g^{(1)}(t,t)\right) \,, \\
\langle \hat{W}\rangle & = \frac{v}{s^2} g^{(2)}(t,t) \,, \\
\langle \hat{N} \rangle & = -\frac{\mu}{s} g^{(1)}(t,t) \,,\end{aligned}$$ where $g^{(1)}(t_1,t_2)=\langle \hat E^{-}(t_1)\hat E^{+}(t_2)\rangle$ and $g^{(2)}(t_1,t_2)=\langle \hat E^{-}(t_1)\hat E^{-}(t_2)\hat E^{+}(t_2)\hat E^{+}(t_1)\rangle$. In an experimental simulation, these correlation functions would be measured directly in the laboratory, and the results would be fed back into a classical computer performing the optimization algorithm. However, for the purposes of our proof-of-principle simulation, we calculate the correlation functions directly by means of the input-output formalism. Making use of the cavity input-output relation $\hat E^{+}(t) = \hat E^+_{\mathrm{(in)}}(t) + \sqrt{\kappa}\hat a(t)$, where $\hat E^+_{\mathrm{(in)}}(t)$ denotes the field impinging on the cavity at time $t$ (which is assumed to be in the vacuum state), the expectation value may be written [@Verstraete2010; @Osborne2010]
$$\begin{aligned}
f(\lambda; v,\mu) = \mathrm{tr} \left\{ \left(\left[\hat Q,\hat R\right]\right)^\dagger \left[\hat Q,\hat R\right] \hat \rho_{\tiny \mbox{ss}} \right\} &+ v~ \mathrm{tr} \left\{ \left(\hat R^\dagger\right)^2\hat R^2 ~\hat\rho_{\tiny \mbox{ss}}\right\}\\
&- \mu~\mathrm{tr} \left\{ \hat R^\dagger \hat R \hat\rho_{\tiny \mbox{ss}}\right\}
\end{aligned}$$
where $\hat R$ and $\hat Q$ are defined in Eq. , and $\hat\rho_{\tiny \mbox{ss}}$ is the unique steady state of the atom-cavity system, satisfying $$\begin{aligned}
\frac{d \hat\rho_{\tiny \mbox{ss}} }{dt} = -i \left[ \hat H_{\tiny\mbox{sys}}, \hat\rho_{\tiny \mbox{ss}}\right] + \kappa \hat a \hat\rho_{\tiny \mbox{ss}} \hat a^\dagger -\frac{\kappa}{2} \left\{ \hat a^\dagger \hat a, \hat\rho_{\tiny \mbox{ss}}\right\}=0.\end{aligned}$$ Note that above, we write $f(\lambda; v,\mu)$ to highlight the dependence of $f$ on $v$ and $\mu$. Hereafter we set $\mu=1$ and minimize $f(\lambda; v,\mu)$ for a range of different values of $v$. Solutions for other values of $\mu$ can be obtained by means of a scaling transformation, as described in [@Verstraete2010; @Osborne2010].
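As a concrete illustration, the following minimal sketch evaluates this expression numerically for the driven Jaynes-Cummings system of Eq. (\[HJC1\]): it builds the truncated atom-cavity operators, forms $\hat R$ and $\hat Q$ from the identification given earlier, obtains $\hat\rho_{\tiny \mbox{ss}}$ as the null eigenvector of the vectorised Liouvillian, and returns $f(\lambda;v,\mu)$. The function name, the photon-number cutoff `n_max` and the choice $\kappa=1$ are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def steady_state_energy_density(g, Omega, s, kappa=1.0, v=1.0, mu=1.0, n_max=10):
    """Evaluate f(lambda; v, mu) from the cavity operators R, Q and the
    steady state rho_ss of the driven, damped Jaynes-Cummings system.
    n_max is a numerical photon-number cutoff (an illustrative choice)."""
    # Truncated cavity and two-level-atom operators
    a_c = np.diag(np.sqrt(np.arange(1, n_max)), k=1)      # cavity annihilation
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])               # atomic raising operator
    a = np.kron(np.eye(2), a_c)
    sig_p = np.kron(sp, np.eye(n_max))
    sig_m = sig_p.conj().T

    # On-resonance Jaynes-Cummings Hamiltonian with coherent drive Omega
    H = g * (sig_p @ a + sig_m @ a.conj().T) + Omega * (sig_p + sig_m)

    # cMPS matrices from the identification: R = sqrt(kappa/s) a,
    # Q = -i H / s - R^dag R / 2
    R = np.sqrt(kappa / s) * a
    Q = -1j * H / s - 0.5 * R.conj().T @ R

    # Vectorised Liouvillian (column stacking): vec(A rho B) = (B^T kron A) vec(rho)
    d = H.shape[0]
    I = np.eye(d)
    lk = lambda A, B: np.kron(B.T, A)
    n_op = a.conj().T @ a
    L = (-1j * (lk(H, I) - lk(I, H))
         + kappa * lk(a, a.conj().T)
         - 0.5 * kappa * (lk(n_op, I) + lk(I, n_op)))

    # Steady state = eigenvector of L with eigenvalue closest to zero
    w, V = np.linalg.eig(L)
    rho = V[:, np.argmin(np.abs(w))].reshape(d, d, order='F')
    rho = rho / np.trace(rho)
    rho = 0.5 * (rho + rho.conj().T)   # remove small anti-Hermitian noise

    comm = Q @ R - R @ Q
    kinetic = np.trace(comm.conj().T @ comm @ rho)
    interaction = v * np.trace(R.conj().T @ R.conj().T @ R @ R @ rho)
    chemical = -mu * np.trace(R.conj().T @ R @ rho)
    return float((kinetic + interaction + chemical).real)
```

For the small Hilbert spaces relevant here a dense eigendecomposition of the Liouvillian is adequate; larger cutoffs would call for sparse or iterative solvers.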
The general outline of the algorithm to determine our optimum values of the variational parameters $\lambda$, for a given choice of $v$ and $\mu$, is thus:
1.  Initialize the tolerance $\textit{tol}$ and the step size $\epsilon$, and initialize $\lambda$ to an arbitrary value.
2.  Set $\lambda^{\prime} = \lambda$.
3.  Calculate $\nabla f(\lambda; v,\mu)$.
4.  Update the experimental parameters as $\lambda \leftarrow \lambda - \epsilon\nabla f(\lambda; v,\mu)$.
5.  Calculate $f(\lambda^{\prime}; v,\mu)$ and $f(\lambda; v,\mu)$.
6.  Repeat from step 2 until $|f(\lambda; v,\mu) - f(\lambda^{\prime}; v,\mu)| < \textit{tol}$.
Note that in our simulation, $\nabla f(\lambda; v,\mu)$ is estimated numerically by evaluating $f(\lambda+\Delta_i; v,\mu)$ for small values of $\Delta_i$, while in an experiment, $\nabla f(\lambda; v,\mu)$ is found with the aid of measurements of $\langle \hat{T} \rangle$, $\langle \hat{W}\rangle$, and $\langle \hat{N} \rangle$.
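The loop itself can then be sketched as below, reusing the `steady_state_energy_density` helper defined above. The step size, finite-difference increment, starting point and iteration cap are arbitrary illustrative values, not settings taken from the text, and no safeguards (for instance keeping $s>0$) are included.

```python
def minimise_energy(v, mu=1.0, lam0=(0.5, 0.5, 1.0), eps=0.05, delta=1e-4,
                    tol=1e-8, max_iter=500):
    """Finite-difference gradient descent over lam = (g, Omega, s)."""
    lam = np.array(lam0, dtype=float)
    f = lambda p: steady_state_energy_density(p[0], p[1], p[2], v=v, mu=mu)
    f_old = f(lam)
    for _ in range(max_iter):
        # One-sided finite-difference estimate of grad f(lam; v, mu)
        grad = np.array([(f(lam + delta * e) - f_old) / delta for e in np.eye(3)])
        lam = lam - eps * grad                      # lam <- lam - eps * grad f
        f_new = f(lam)
        if abs(f_new - f_old) < tol:                # stop when f has converged
            break
        f_old = f_new
    return lam, f_new

# Example call for one interaction strength:
# params, energy = minimise_energy(v=2.0)
```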
[^1]: We define $\hat E^{+}(t)$ in units such that $\hat E^{-}(t)\hat E^{+}(t)$ corresponds to the mean number of photons exiting the cavity per unit time.
---
abstract: 'We derive the nested Bethe Ansatz solution of the fully packed O($n$) loop model on the honeycomb lattice. From this solution we derive the bulk free energy per site along with the central charge and geometric scaling dimensions describing the critical behaviour. In the $n=0$ limit we obtain the exact compact exponents $\gamma=1$ and $\nu=1/2$ for Hamiltonian walks, along with the exact value $\kappa^2 = 3 \sqrt 3 /4$ for the connective constant (entropy). Although having sets of scaling dimensions in common, our results indicate that Hamiltonian walks on the honeycomb and Manhattan lattices lie in different universality classes.'
address:
- '$^a$Department of Mathematics, School of Mathematical Sciences, Australian National University, Canberra ACT 0200, Australia'
- '$^b$Institute of Physics, College of Arts and Sciences, University of Tokyo, Komaba, Meguroku, Tokyo 153, Japan'
author:
- 'M. T. Batchelor$^a$, J. Suzuki$^b$ and C. M. Yung$^a$'
date: 1 June 1994
title: Exact Results for Hamiltonian Walks from the Solution of the Fully Packed Loop Model on the Honeycomb Lattice
---
The configurational statistics of polymer chains have long been modelled by self-avoiding walks. In the low-temperature limit the enumeration of a single self-attracting polymer in dilute solution reduces to that of compact self-avoiding walks. A closely related problem is that of Hamiltonian walks in which the self-avoiding walk visits each site of a given lattice and thus completely fills the available space. Hamiltonian walks are directly related to the Gibbs-DiMarzio theory for the glass transition of polymer melts[@gkm]. More than thirty years ago now Kasteleyn obtained the exact number of Hamiltonian walks on the Manhattan oriented square lattice[@k]. More recently this work has been significantly extended to yield exactly solved models of polymer melts[@dd]. The critical behaviour of Hamiltonian walks on the Manhattan lattice has also been obtained from the $Q=0$ limit of the $Q$-state Potts model [@d]. In particular this Hamiltonian walk problem has been shown[@dd; @d] to lie in the same universality class as dense self-avoiding walks, which follow from the $n=0$ limit in the low-temperature or densely packed phase of the honeycomb O($n$) model[@n; @dense].
As exact results for Hamiltonian walks are confined to the Manhattan lattice, the behaviour of Hamiltonian walks on non-oriented lattices, and the precise scaling of compact two-dimensional polymers, remain unclear[@opb; @c1; @c2]. The exact value of the (Hamiltonian) geometric exponent $\gamma^H$ was conjectured to be $\gamma^H=\gamma^D$, where $\gamma^D=\frac{19}{16}$ was extracted via the Coulomb gas method for dense self-avoiding walks[@dense; @c1]. However, recent numerical investigations of the collapsed and compact problems are more suggestive of the value $\gamma^H=1$[@c2; @ct; @bbgop].
More recently, Blöte and Nienhuis[@bn2] have argued that a universality class different from that of dense walks governs the O($n$) model in the zero-temperature limit (the fully packed loop model). Based on numerical evidence obtained via finite-size scaling and transfer matrix techniques, along with a graphical mapping at $n=1$, they argued that the model lies in a new universality class characterized by the superposition of a low-temperature O($n$) phase and a solid-on-solid model at a temperature independent of $n$. This model is identical to the Hamiltonian walk problem in the limit $n=0$. In this Letter we present exact results for Hamiltonian walks on the honeycomb lattice from an exact solution of this fully packed loop model. We derive the physical quantities which characterize Hamiltonian walks on the honeycomb lattice. These include a closed-form expression for the connective constant, or entropy, and an exact infinite set of geometric scaling dimensions which include a value conjectured by Blöte and Nienhuis[@bn2]. Our results settle the above-mentioned controversy in favour of the universal value $\gamma^H=1$.
In general the partition function of the O($n$) loop model can be written as $${\cal Z}_{{\rm O}(n)} = \sum t^{{\cal N}-{\cal N}_b} n^{{\cal N}_L},$$ where the sum is over all configurations of closed and nonintersecting loops covering ${\cal N}_b$ bonds of the honeycomb lattice and ${\cal N}$ is the total number of lattice sites (vertices). Here the variable $t$ plays the role of the O($n$) temperature, $n$ is the fugacity of a closed loop and ${\cal N}_L$ is the total number of loops in a given configuration.
For the particular choice $t = t_c$, where[@n] $$t_c^2 = 2 \pm \sqrt{2-n},$$ the related vertex model is exactly solvable with a Bethe Ansatz type solution for both periodic[@b1; @bb; @s1] and open[@bs] boundary conditions. This critical line is depicted as a function of $n$ in Fig. 1. Here we extend the exact solution curve along the line $t=0$, where the only nonzero contributions in the partition sum (1) are for configurations in which each lattice site is visited by a loop, i.e. with ${\cal N}={\cal N}_b$[@noteb]. This is the fully packed model recently investigated by Blöte and Nienhuis[@bn2].
We consider a lattice of ${\cal N} = 2 M N$ sites as depicted in Fig. 2, i.e. with periodic boundaries across a finite strip of width $N$. The allowed arrow configurations and the corresponding weights of the related vertex model are shown in Fig. 3. Here the parameter $n = s + s^{-1} = 2 \cos \lambda$. In Fig. 2 we also show a seam to ensure that loops which wrap around the strip pick up the correct weight $n$ in the partition function. The corresponding weights along the seam are also given in Fig. 3[@note1]. We find that the eigenvalues of the row-to-row transfer matrix of the vertex model are given by $$\Lambda = \prod_{\alpha=1}^{r_1} -
{\sinh (\theta_{\alpha} - {\rm i} \frac{\lambda}{2}) \over
\sinh (\theta_{\alpha} + {\rm i} \frac{\lambda}{2})}
\prod_{\mu=1}^{r_2} - {\sinh (\phi_{\mu} + {\rm i} \lambda) \over
\sinh \phi_{\mu}}
+
{\rm e}^{{\rm i} \epsilon}
\prod_{\mu=1}^{r_2} - {\sinh (\phi_{\mu} - {\rm i} \lambda) \over
\sinh \phi_{\mu}}$$ where the roots $\theta_{\alpha}$ and $\phi_{\mu}$ follow from $${\rm e}^{{\rm i} \epsilon} \left[ -
{\sinh (\theta_{\alpha} - {\rm i} \frac{\lambda}{2}) \over
\sinh (\theta_{\alpha} + {\rm i} \frac{\lambda}{2})} \right]^N = -
\prod_{\mu=1}^{r_2} -
{\sinh (\theta_{\alpha} - \phi_{\mu}+{\rm i} \frac{\lambda}{2}) \over
\sinh (\theta_{\alpha} - \phi_{\mu} - {\rm i} \frac{\lambda}{2})}
\prod_{\beta=1}^{r_1}
{\sinh (\theta_{\alpha} - \theta_{\beta}-{\rm i} \lambda) \over
\sinh (\theta_{\alpha} - \theta_{\beta}+{\rm i} \lambda)},
\quad \alpha=1,\ldots,r_1.$$ $${\rm e}^{{\rm i} \epsilon} \prod_{\alpha=1}^{r_1} -
{\sinh (\phi_{\mu}-\theta_{\alpha} - {\rm i} \frac{\lambda}{2}) \over
\sinh (\phi_{\mu}-\theta_{\alpha} + {\rm i} \frac{\lambda}{2})} = -
\prod_{\nu=1}^{r_2}
{\sinh (\phi_{\mu} - \phi_{\nu}-{\rm i} \lambda) \over
\sinh (\phi_{\mu} - \phi_{\nu}+{\rm i} \lambda)},
\quad \mu=1,\ldots,r_2.$$ Here the seam parameter $\epsilon=\lambda$ for the largest sector and $\epsilon=0$ otherwise. Apart from the seam, this exact solution on the honeycomb lattice follows from earlier work by Baxter on the colourings of the hexagonal lattice [@b2]. Baxter derived the Bethe Ansatz solution and evaluated the bulk partition function per site in the region $n\ge2$. The corresponding vertex model was later considered in the region $n<2$ with regard to the polymer melting transition at $n=0$[@si].
More generally, the fully packed loop model can be seen to follow from the honeycomb limit of the solvable square lattice $A_2^{(1)}$ loop model [@wn; @r]. Equivalently, the related vertex model on the honeycomb lattice is obtained in the appropriate limit of the $A_2^{(1)}$ vertex model on the square lattice in the ferromagnetic regime. This latter model is the $su(3)$ vertex model[@su3]. One can verify that the above results follow from the honeycomb limit of the Algebraic Bethe Ansatz solution of the $su(3)$ model[@bvv] with appropriate seam. It should be noted that Reshetikhin[@r] has performed similar calculations to those presented here, although in the absence of the seam, which plays a crucial role in the underlying critical behaviour.
Defining the finite-size free energy as $f_N = N^{-1} \log \Lambda_0$, we derive the bulk value to be $$f_\infty = \int_{-\infty}^{\infty}
{\sinh^2\! \lambda x \, \sinh (\pi -\lambda)x
\over x\, \sinh \pi x \, \sinh 3 \lambda x } dx .
\label{fbulk}$$ This result is valid in the region $0 < \lambda \le \pi/2$, where the Bethe Ansatz roots defining the largest eigenvalue $\Lambda_0$ are all real. We note that the most natural system size $N$ is a multiple of 3, for which the largest eigenvalue occurs with $r_1=2N/3$ and $r_2=N/3$ roots. In the limit $\lambda \rightarrow 0$ $f_\infty$ reduces to the known $n=2$ value[@b2; @b1], $$f_\infty = \log \left[ \frac{3 \Gamma^2(1/3)}{4 \pi^2}\right].
\label{flim}$$ There is, however, a cusp in the free energy at $\lambda = \pi/2$. For $\lambda > \pi/2$ the largest eigenvalue has roots $\theta_{\alpha}$ shifted by ${\rm i}\pi/2$. The result for $f_\infty$ is that obtained from (\[fbulk\]) under the interchange $\lambda \leftrightarrow \pi - \lambda$, reflecting a symmetry between the regions $-2 \le n \le 0$ and $0 \le n \le 2$. Thus the value (\[flim\]) holds also at $n = -2$, in agreement with the $t_c \rightarrow 0$ limiting value[@b1].
As our interest here lies primarily in the point $n = 0$ ($\lambda = \pi/2$), we confine our attention to the region $0 \le n \le 2$. At $n = 0$, we find that the above result for $f_\infty$ can be evaluated exactly to give the partition sum per site, $\kappa$, as $$\kappa^2 = 3 \sqrt 3/4,$$ and thus $\kappa = 1.13975 \ldots$ follows as the exact value for the entropy or connective constant of Hamiltonian walks on the honeycomb lattice. This numerical value has been obtained previously via the same route in terms of an infinite sum [@s2]. Our exact result (8) is to be compared with the open self-avoiding walk, for which $\mu^2 = 2 + \sqrt 2$[@n], and so $\mu = 1.84775 \ldots$ It follows that for self-avoiding walks on the honeycomb lattice the entropy loss per step due to compactness, relative to the freedom of open configurations, is exactly given by $$\frac{1}{2} \log \left[ \frac{3 \sqrt 3}{4 (2+\sqrt 2)} \right]
= - 0.483161 \ldots$$
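These closed-form values can be checked directly against the integral (\[fbulk\]); the short script below does so at $\lambda=\pi/2$, assuming the identification $\kappa^2=\exp(f_\infty)$ implied by the text. The finite upper cutoff stands in for the infinite limit, which is harmless since the integrand decays like ${\rm e}^{-\pi x}$.

```python
import numpy as np
from scipy.integrate import quad

lam = np.pi / 2                       # n = 0 corresponds to lambda = pi / 2
integrand = lambda x: (np.sinh(lam * x) ** 2 * np.sinh((np.pi - lam) * x)
                       / (x * np.sinh(np.pi * x) * np.sinh(3 * lam * x)))

# Even integrand: integrate over (0, 50] and double; the tail beyond x = 50
# is of order exp(-pi * 50), and the tiny lower limit avoids the removable
# singularity at x = 0.
f_inf = 2 * quad(integrand, 1e-9, 50.0, limit=200)[0]

kappa = np.exp(f_inf / 2)
print(f_inf, np.log(3 * np.sqrt(3) / 4))                        # both ~ 0.26162
print(kappa, np.sqrt(3 * np.sqrt(3) / 4))                       # both ~ 1.13975
print(0.5 * np.log(3 * np.sqrt(3) / (4 * (2 + np.sqrt(2)))))    # ~ -0.483161
```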
The central charge $c$ and scaling dimensions $X_i$ defining the critical behaviour of the model follow from the dominant finite-size corrections to the transfer matrix eigenvalues[@cx]. For the central charge, $$f_N \simeq f_{\infty} + \frac{\pi \zeta c}{6 N^2}.$$ The scaling dimensions are related to the inverse correlation lengths via $$\xi_i^{-1} = \log ( \Lambda_0/\Lambda_i) \simeq 2 \pi \zeta X_i/N.$$ Here $\zeta=\sqrt 3 /2$ is a lattice-dependent scale factor.
The derivation of the dominant finite-size corrections via the Bethe Ansatz solution of the vertex model follows that given for the $su(3)$ model in the antiferromagnetic regime[@devega] (see, also [@am]). The derivation is straightforward though tedious and we omit the details. In the absence of the seam, we find that the central charge is $c=2$ with scaling dimensions $X = \Delta^{(+)} + \Delta^{(-)}$, where $$\Delta^{(\pm)} = \frac{1}{8}\, g \,
\mbox{\boldmath $n$}^T C\, \mbox{\boldmath $n$}
+ \frac{1}{8\, g}\,
(\mbox{\boldmath $h$}^{\pm})^{T} C^{-1} \mbox{\boldmath $h$}^{\pm} -
\frac{1}{4} \mbox{\boldmath $n$}\cdot\mbox{\boldmath $h$}^{\pm},$$ $C$ is the $su(3)$ Cartan matrix and $\mbox{\boldmath $n$}=(n_1,n_2)$ with $n_1$ and $n_2$ related to the number of Bethe Ansatz roots via $r_1 = 2N/3 - n_1$ and $r_2 = N/3 - n_2$[@note2]. The remaining parameters $\mbox{\boldmath $h$}^{\pm} = (h_1^{\pm},h_2^{\pm})$ define the number of holes in the root distribution in the usual way[@devega]. We have further defined the variable $g = 1 - \lambda/\pi$.
With the introduction of the seam $\epsilon = \lambda$, we find that the central charge of the fully packed O($n$) model is exactly given by $$c = 2 - 6(1-g)^2/g.$$ This is the identification made by Blöte and Nienhuis[@bn2]. At $n=0$ we have $g=1/2$, and thus $c = -1$. On the other hand, both Hamiltonian walks on the Manhattan lattice[@dd; @d] and dense self-avoiding walks[@dense] lie in a different universality class with $c=-2$. However, as we shall see below, they do share common sets of scaling dimensions and thus critical exponents. This sharing of exponents between the fully packed and densely packed loop models has already been anticipated by Blöte and Nienhuis in their identification of the leading thermal and magnetic exponents[@bn2]. Here we derive an exact infinite set of scaling dimensions.
Of most interest is the so-called watermelon correlator, which measures the geometric correlation between $L$ nonintersecting self-avoiding walks tied together at their extremities $\mbox{\boldmath $x$}$ and $\mbox{\boldmath $y$}$. It has a critical algebraic decay, $$\langle
\phi_L(\mbox{\boldmath $x$}) \phi_L (\mbox{\boldmath $y$}) \rangle_c \sim
|\mbox{\boldmath $x$}-\mbox{\boldmath $y$}|^{-2 X_L},$$ where $X_L$ is the scaling dimension of the conformal source operator $\phi_L(\mbox{\boldmath $x$})$[@dense]. As along the line $t=t_c$, these scaling dimensions are associated with the largest eigenvalue in each sector of the transfer matrix. The pertinent scaling dimensions follow from the more general result $$X = \frac{1}{2}\,g\left(n_1^2+n_2^2-n_1 \, n_2 \right) -
\frac{(1-g)^2}{2\,g}.$$ The sectors of the transfer matrix are labelled by the Bethe Ansatz roots via $L=n_1+n_2$. The minimum scaling dimension in a given sector are given by $n_1=n_2=k$ for $L=2k$ and $n_1=k-1, n_2=k$ or $n_1=k, n_2=k-1$ for $L=2k-1$. Thus we have the set of geometric scaling dimensions $X_L$ corresponding to the operators $\phi_L$ for the loop model, $$\begin{aligned}
X_{2 k-1} &=& \frac{1}{2}\,g \left(k^2-k+1\right) - \frac{(1-g)^2}{2\,g},\\
X_{2 k} &=& \frac{1}{2}\,g\, k^2 - \frac{(1-g)^2}{2\,g},\end{aligned}$$ where $k=1,2,\ldots$ The magnetic scaling dimension is given by $X_{\sigma}=X_1$ which agrees with the identification made in [@bn2]. The eigenvalue related to $X_2$ appears in the $n_d=2$ sector of the loop model, i.e. with two dangling bonds[@bn2]. At $n=0$ this more general set of dimensions reduces to $$\begin{aligned}
X_{2 k-1} &= \frac{1}{4} \left(k^2-k\right), \\
X_{2 k} &= \frac{1}{4} \left(k^2-1\right).\end{aligned}$$ In comparison, the scaling dimensions for dense self-avoiding walks are[@dense] $$X^{\rm DSAW}_L = {\mbox{\small $\frac{1}{16}$}}\left(L^2 - 4 \right).$$ Thus we have the relations $$\begin{aligned}
X_{2 k-1} &=& X^{\rm DSAW}_{2 k -1} + {\mbox{\small $\frac{3}{16}$}},\\
X_{2 k} &=& X^{\rm DSAW}_{2 k}.\end{aligned}$$ Note that $X_1=X_2=0$ and $X_L > 0$ for $L>2$. Identifying $X_{\epsilon}=X_2$ as in [@dense], the exponents $\gamma=1$ and $\nu = 1/2$ follow in the usual way[@note3]. These are indeed the exponents to be expected for compact or space-filling two-dimensional polymers.
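The internal consistency of these dimensions and exponents can be verified with elementary arithmetic; the snippet below simply re-evaluates the formulas above at $g=1/2$ and applies the exponent relations of [@note3], so it restates results already quoted rather than adding new ones.

```python
g = 0.5                                       # n = 0 corresponds to g = 1/2
X_odd  = lambda k: 0.5 * g * (k**2 - k + 1) - (1 - g)**2 / (2 * g)    # X_{2k-1}
X_even = lambda k: 0.5 * g * k**2           - (1 - g)**2 / (2 * g)    # X_{2k}
X_dsaw = lambda L: (L**2 - 4) / 16                                    # dense SAW

for k in range(1, 6):
    assert abs(X_odd(k)  - (k**2 - k) / 4) < 1e-12        # reduced n = 0 form
    assert abs(X_even(k) - (k**2 - 1) / 4) < 1e-12
    assert abs(X_odd(k)  - (X_dsaw(2 * k - 1) + 3 / 16)) < 1e-12
    assert abs(X_even(k) -  X_dsaw(2 * k)) < 1e-12

eta = 2 * X_odd(1)                # X_sigma = X_1 = 0
nu = 1 / (2 - X_even(1))          # X_epsilon = X_2 = 0
gamma = (2 - eta) * nu
print(eta, nu, gamma)             # 0.0 0.5 1.0
```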
The corresponding scaling dimensions for Hamiltonian walks on the Manhattan lattice are as given in (19)[@d]. Exact Bethe Ansatz results on this model indicate that the scaling dimensions $X_{\sigma}=X_{\epsilon}=0$, from which one can also deduce that $\gamma=1$ and $\nu = 1/2$[@bosy]. We also expect these results to hold for Hamiltonian walks on the square lattice. Extending the finite-size scaling analysis of the correlation lengths for self-avoiding walks on the square lattice [@dense] down to the zero-temperature limit $t=0$, we see a clear convergence of the central charge and leading scaling dimension to the values $c=-1$ and $X_1=0$ for even system sizes, with $X_2=0$ exactly. These results are the analog of the present study on the honeycomb lattice where $N = 3k$ is most natural in terms of the Bethe Ansatz solution.
Our results indicate that fully packed self-avoiding walks on the honeycomb lattice have the same degree of “solvability" as self-avoiding walks on the honeycomb lattice. The fully packed loop model with open boundaries is also exactly solvable[@yb]. The derivation of the surface critical behaviour of Hamiltonian walks is currently in progress.
It is a pleasure to thank A. L. Owczarek, R. J. Baxter, H. W. J. Blöte, B. Nienhuis and K. A. Seaton for helpful discussions and correspondence. This work has been supported by the Australian Research Council.
See, e.g., M. Gordon, P. Kapadia and A. Malakis, J. Phys. A [**9**]{}, 751 (1976); J. F. Nagle, P. D. Gujrati and M. Goldstein, J. Phys. Chem. [**74**]{}, 2596 (1984); T. G. Schmalz, G. E. Hite and D. J. Klein, J. Phys. A [**17**]{}, 445 (1984); H. S. Chan and K. A. Dill, Macromolecules [**22**]{}, 4559 (1989) and references therein.
P. W. Kasteleyn, Physica [**29**]{}, 1329 (1963).
B. Duplantier and F. David, J. Stat. Phys. [**51**]{}, 327 (1988).
B. Duplantier, J. Stat. Phys. [**49**]{}, 411 (1987).
B. Nienhuis, Phys. Rev. Lett. [**49**]{}, 1062 (1982).
B. Duplantier, J. Phys. A [**19**]{}, L1009 (1986); H. Saleur, Phys. Rev. B [**35**]{}, 3657 (1987); B. Duplantier and H. Saleur, Nucl. Phys. B [**290**]{}, 291 (1987).
A. L. Owczarek, T. Prellberg and R. Brak, Phys. Rev. Lett. [**70**]{} 951 (1993).
B. Duplantier, Phys. Rev. Lett. [**71**]{} 4274 (1993).
A. L. Owczarek, T. Prellberg and R. Brak, Phys. Rev. Lett. [**71**]{} 4275 (1993).
C. J. Camacho and D. Thirumalai, Phys. Rev. Lett. [**71**]{} 2505 (1993).
D. Bennett-Wood, R. Brak, A. J. Guttmann, A. L. Owczarek and T. Prellberg, J. Phys. A [**27**]{}, L1 (1994).
H. W. J. Blöte and B. Nienhuis, Phys. Rev. Lett. [**72**]{}, 1372 (1994).
R. J. Baxter, J. Phys. A [**19**]{}, 2821 (1986).
M. T. Batchelor and H. W. J. Blöte, Phys. Rev. Lett. [**61**]{}, 138 (1988); Phys. Rev. B. [**39**]{}, 2391 (1989).
J. Suzuki, J. Phys. Soc. Jpn. [**57**]{}, 2966 (1988).
M. T. Batchelor and J. Suzuki, J. Phys. A [**26**]{}, L729 (1993).
Exact information can also be obtained along the lines $n=0$ and $n=1$. We are indebted to R. J. Baxter for this remark.
There are several ways to define the seam. This particular choice is consistent with the vertex weight gauge factors and the seam used in the corresponding solution of the vertex model along the line $t=t_c$: see Refs. [@b1; @bb; @s1].
R. J. Baxter, J. Math. Phys. [**11**]{}, 784 (1970).
J. Suzuki and T. Izuyama, J. Phys. Soc. Jpn. [**57**]{}, 818 (1988).
S. O. Warnaar and B. Nienhuis, J. Phys. A [**26**]{}, 2301 (1993).
N. Yu. Reshetikhin, J. Phys. A [**24**]{}, 2387 (1991).
I. V. Cherednik, Theor. Math. Phys. [**47**]{}, 225 (1981); O. Babelon, H. J. de Vega and C. M. Viallet, Nucl. Phys. B [**190**]{} 542 (1981).
O. Babelon, H. J. de Vega and C. M. Viallet, Nucl. Phys. B [**200**]{} 266 (1982).
J. Suzuki, J. Phys. Soc. Jpn. [**57**]{}, 687 (1988).
H. W. J. Blöte, J. L. Cardy and M. P. Nightingale, Phys. Rev. Lett. [**56**]{}, 742 (1986); I. Affleck, Phys. Rev. Lett. [**56**]{}, 746 (1986); J. L. Cardy, Nucl. Phys. B [**270**]{}, 186 (1986).
H. J. de Vega, J. Phys. A [**21**]{}, L1089 (1988).
F. C. Alcaraz and M. J. Martins, J. Phys. A [**23**]{}, L1079 (1990).
The parameters $n_1$ and $n_2$ are related to de Vega’s parameters $S_1$ and $S_2$ via the action of the Cartan matrix: $S_1=2n_1-n_2$ and $S_2=2n_2-n_1$.
Using $\eta=2 X_{\sigma}$, $1/\nu = 2 - X_{\epsilon}$ and $\gamma = (2-\eta)\nu$.
M. T. Batchelor, A. L. Owczarek, K. A. Seaton and C. M. Yung, “Surface critical behaviour of an O($n$) loop model related to two Manhattan lattice walk problems”, A.N.U preprint MRR051-94.
C. M. Yung and M. T. Batchelor, “Integrable vertex and loop models on the square lattice with open boundaries via reflection matrices”, A.N.U preprint MRR042-94.
---
author:
- The ATLAS Collaboration
bibliography:
- 'paper.bib'
title: 'Search for $R$-parity-violating supersymmetry in events with four or more leptons in $\sqrt{s}\,=\,$7$\,$TeV $pp$ collisions with the ATLAS detector'
---
Introduction
============
Events with four or more leptons are rarely produced in Standard Model (SM) processes while being predicted by a variety of theories for physics beyond the SM. These include supersymmetry (SUSY) [@Miyazawa:1966; @Ramond:1971gb; @Golfand:1971iw; @Neveu:1971rx; @Neveu:1971iv; @Gervais:1971ji; @Volkov:1973ix; @Wess:1973kz; @Wess:1974tw], technicolour [@technicolor], and models with extra bosons [@extraBoson] or heavy neutrinos [@heavyNeutrino]. This paper presents a search with the ATLAS detector for anomalous production of events with four or more leptons. “Leptons” refers to electrons or muons, including those from $\tau$ decays, but does not include $\tau$ leptons that decay hadronically. The analysis is based on 4.7$\,$fb$^{-1}$ of proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy $\sqrt{s}\,$$=\,$7$\,$TeV between March and October 2011. The results are interpreted in the context of $R$-parity-violating (RPV) SUSY. Similar searches have been conducted at the Tevatron Collider [@D02006; @CDF2007] and by CMS [@CMS4L].
$R$-parity-violating supersymmetry
==================================
SUSY postulates the existence of SUSY particles, or “sparticles”, each with spin ($S$) differing by one-half unit from that of its SM partner. Gauge-invariant and renormalisable interactions introduced in SUSY models can violate the conservation of baryon ($B$) and lepton ($L$) number and lead to a proton lifetime shorter than current experimental limits [@protonDecay]. This is usually solved by assuming that $R$-parity, defined by $P_R=(-1)^{2S+3B+L}$, is conserved [@Fayet:1976et; @Fayet:1977yc; @Farrar:1978xj; @Fayet:1979sa; @Dimopoulos:1981zb], which makes the lightest supersymmetric particle (LSP) stable. In $P_R$-conserving models where the LSP is neutral and weakly interacting, sparticle production is characterised by large missing transverse momentum ([$E_{\mathrm{T}}^{\mathrm{miss}}$]{}) due to LSPs escaping detection. Many SUSY searches at hadron colliders rely on this large ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ signature.
Alternatively, proton decay can be prevented by imposing other symmetries [@herbiMSSMsymm] that require the conservation of either lepton or baryon number, while allowing $R$-parity violation. Such models can accommodate non-zero neutrino masses and neutrino-mixing angles consistent with the observation of neutrino oscillations [@neutrinomass_benall]. If $R$-parity is violated, the LSP decays into SM particles and the signature of large [$E_{\mathrm{T}}^{\mathrm{miss}}$]{}may be lost. Trilinear lepton-number-violating RPV interactions can generate both charged leptons and neutrinos during the LSP decay, and therefore lead to a characteristic signature with high lepton multiplicity and moderate values of [$E_{\mathrm{T}}^{\mathrm{miss}}$]{}compared to $R$-parity-conserving models.
$R$-parity-violating models
===========================
The results of this analysis are interpreted in two Minimal Supersymmetric Standard Model (MSSM) scenarios with an RPV superpotential term given by $W_{RPV} = \lambda_{ijk}L_iL_j\bar{E_k}$. The $i, j, k$ indices of the $\lambda_{ijk}$ Yukawa couplings refer to the lepton generations. The lepton SU(2) doublet superfields are denoted by $L_i$, while the corresponding singlet superfields are given by $E_k$. Single coupling dominance is assumed with $\lambda_{121}$ as the only non-zero coupling. The $\lambda_{121}$ coupling is chosen as a representative model with multiple electrons and muons in the final state. Comparable signal yields are expected with a choice of $\lambda_{122}$ single coupling dominance.
The first scenario is a simplified model [@simplModels1] where the lightest chargino and neutralino are the only sparticles with masses below the TeV scale. Charginos ($\chinojpm$, $i=1,2$) and neutralinos ($\ninoi$, $j=1,2,3,4$) are the mass eigenstates formed from the linear superposition of the SUSY partners of the Higgs and electroweak gauge bosons. These are the Higgsinos, winos and bino. Wino-like charginos are pair-produced and each decays into a $W$ boson and a bino-like ${\ensuremath{\tilde{\chi}_1^0}\xspace}$, which is the LSP. The LSP then undergoes the three-body decay ${\ensuremath{\tilde{\chi}_1^0}\xspace}\rightarrow e \mu \nu_{e}$ or ${\ensuremath{\tilde{\chi}_1^0}\xspace}\rightarrow e e \nu_{\mu}$ through a virtual slepton or sneutrino, with a branching fraction of 50% each, as shown in figure \[fig:staufeynman\](a–c). The width of the ${\ensuremath{\tilde{\chi}_1^0}\xspace}$ is fixed at a value of 100$\,$MeV resulting in prompt decays.
The second scenario is taken from ref. [@Desch:2011] and is the ($m_{1/2}$, $\tan\beta$) slice of the minimal SUper GRAvity/Constrained MSSM (MSUGRA/CMSSM) containing the BC1 point of ref. [@bintosgamma]. The unification parameters $m_0$ and $A_0$ are zero and $\mu$ is positive[^1]. The RPV coupling $\lambda_{121}$ is set to 0.032 at the unification scale [@Kao:2009fg]. Both strong and weak processes contribute to SUSY pair production, where weak processes are dominant above ${m_{1/2}\,\sim\,600\,}$GeV. The lighter stau, ${\ensuremath{\tilde{\tau}_1}\xspace}$, is the LSP over most of the parameter-space, and decays with equal probability through the four-body decays ${\ensuremath{\tilde{\tau}_1}\xspace}\rightarrow \tau e \mu \nu_{e}$ or ${\ensuremath{\tilde{\tau}_1}\xspace}\rightarrow \tau e e \nu_{\mu}$, via a virtual neutralino and sneutrino or slepton, as shown in figure \[fig:stauDecay\]. While the Higgs mass values in this model are lower than those of the recently observed Higgs-like resonance [@HiggsATLAS; @HiggsCMS], the MSUGRA/CMSSM scenario considered nevertheless remains an instructive benchmark model.
Detector description
====================
ATLAS [@atlas-det] is a multipurpose particle detector with forward-backward symmetric cylindrical geometry. It includes an inner tracker (ID) immersed in a 2$\,$T axial magnetic field providing precision tracking of charged particles for pseudorapidities[^2] $|\eta|\,$$<\,$2.5. Sampling calorimeter systems with either liquid argon or scintillator tiles as the active media provide energy measurements over the range $|\eta|\,$$<\,$4.9. The muon detectors are positioned outside the calorimeters and are contained in an air-core toroidal magnetic field produced by superconducting magnets with field integrals varying from 1$\,$T$\cdot$m to 8$\,$T$\cdot$m. They provide trigger and high-precision tracking capabilities for $|\eta|\,$$<\,$2.4 and $|\eta|\,$$<\,$2.7, respectively.
Monte Carlo simulation
======================
Several Monte Carlo (MC) generators are used to simulate SM processes and new physics signals relevant for this analysis. [[SHERPA]{}]{} [@SherpaMC] is used to simulate the diboson processes $WW$, $WZ$ and $ZZ$, where $Z$ also includes virtual photons. These diboson samples correspond to all SM diagrams leading to the $\ell\nu\ell'\nu'$, $\ell\ell\ell'\nu'$, and $\ell\ell\ell'\ell'$ final states, where $\ell,\ell'=e,\mu,\tau$ and $\nu,\nu'=\nu_e,\nu_{\mu},\nu_{\tau}$. Interference between the diagrams is taken into account. [[MadGraph]{}]{} [@madgraph] is used for the $t\bar{t}W$, $t\bar{t}WW$, $t\bar{t}Z$, $W\gamma$ and $Z\gamma$ processes. [[MC@NLO]{}]{} [@mcatnlo] is chosen for the simulation of single and pair production of top quarks, and [[ALPGEN]{}]{} [@alpgen] is used to simulate $W$+jets and $Z$+jets processes. Expected diboson yields are normalised using next-to-leading-order (NLO) QCD predictions obtained with [[MCFM]{}]{} [@mcfm1; @mcfm2]. The top-quark pair-production contribution is normalised to approximate next-to-next-to-leading-order calculations (NNLO) [@hathor] and the $t\bar{t}W$, $t\bar{t}WW$, $t\bar{t}Z$ contributions are normalised to NLO predictions [@ttZ; @ttW]. The $W\gamma$ and $Z\gamma$ yields are normalised to be consistent with the ATLAS cross-section measurements [@wgamma]. The QCD NNLO [[FEWZ]{}]{} [@fewz1; @fewz2] cross-sections are used for normalisation of the inclusive $W$+jets and $Z$+jets processes.
The choice of the parton distribution functions (PDFs) depends on the generator. The [[CTEQ6L1]{}]{} [@CTEQ6L1] PDFs are used with [[MadGraph]{}]{} and [[ALPGEN]{}]{}, and the [[CT10]{}]{} [@CT10pdf] PDFs with [[MC@NLO]{}]{}and [[SHERPA]{}]{}.
The simplified model samples are produced with [[Herwig++]{}]{} [@herwigplusplus] and the MSUGRA/ CMSSM BC1-like samples are produced with [[HERWIG]{}]{} [@herwig]. The yields of the SUSY samples are normalised to the NLO cross-sections obtained from [[PROSPINO]{}]{} [@Beenakker:1996ch] for weak processes, and to next-to-leading-logarithmic accuracy (NLL) for strong processes.
Fragmentation and hadronisation for the [[ALPGEN]{}]{}and [[MC@NLO]{}]{}samples are performed with [[HERWIG]{}]{}, while for [[MadGraph]{}]{}, [[PYTHIA]{}]{} [@pythia] is used, and for [[SHERPA]{}]{}these are performed internally. [[JIMMY]{}]{} [@jimmy] is interfaced to [[HERWIG]{}]{}for simulation of the underlying event. For all MC samples, the propagation of particles through the ATLAS detector is modelled using [[GEANT4]{}]{} [@Agostinelli:2002hh; @simulation]. The effect of multiple proton-proton collisions from the same or different bunch crossings is incorporated into the simulation by overlaying additional minimum-bias events generated by [[PYTHIA]{}]{}onto hard-scatter events. Simulated events are weighted to match the distribution of the number of interactions per bunch crossing observed in data. Simulated data are reconstructed in the same manner as the data.
Event reconstruction and preselection \[sec:presel\]
====================================================
The data sample was collected with an inclusive selection of single-lepton and double-lepton triggers. For single-lepton triggers, at least one reconstructed muon (electron) is required to have transverse momentum $p_{\rm T}^{\mu}$ (transverse energy $E_{\rm T}^e$) above 20$\,$GeV (25$\,$GeV). For dilepton triggers, at least two reconstructed leptons are required to have triggered the event, with transverse energy or momentum above threshold. The two muons are each required to have $p_{\rm T}^{\mu}\,$$>\,$12$\,$GeV for dimuon triggers, and the two electrons to have $E_{\rm T}^{e}\,$$>\,$17$\,$GeV for dielectron triggers, while the thresholds for electron-muon triggers are $E_{\rm T}^e\,$$>\,$15$\,$GeV and $p_{\rm T}^{\mu}\,$$>\,$10$\,$GeV. These thresholds are chosen such that the overall trigger efficiency is high, typically in excess of 90%, and independent of the transverse momentum of the triggerable objects within uncertainties.
Events recorded during normal running conditions are analysed if the primary vertex has five or more tracks associated to it. The primary vertex of an event is identified as the vertex with the highest $\Sigma p_{\rm T}^2$ of associated tracks.
Electrons must satisfy “medium” identification criteria [@electronPerf] and fulfil $|\eta|\,$$<\,$2.47 and $E_{\rm T}\,$$>\,$10$\,$GeV, where $E_{\rm T}$ and $|\eta|$ are determined from the calibrated clustered energy deposits in the electromagnetic calorimeter and the matched ID track, respectively. Muons are reconstructed by combining tracks in the ID and tracks in the muon spectrometer [@muon]. Reconstructed muons are considered as candidates if they have transverse momentum $p_{\rm T}\,$$>\,$10$\,$GeV and $|\eta|\,$$<\,$2.4.
Jets are reconstructed with the anti-$k_t$ algorithm [@Cacciari:2008gp] with a radius parameter of ${R=0.4}$ using clustered energy deposits calibrated at the electromagnetic scale[^3]. The jet energy is corrected to account for the non-compensating nature of the calorimeter using correction factors parameterised as a function of the jet $E_{\rm T}$ and $\eta$ [@fakeCleanUp]. The correction factors were obtained from simulation and have been refined and validated using data. Jets considered in this analysis have $E_{\rm T}\,$$>\,$20$\,$GeV and $|\eta|\,$$<\,$2.5. The $p_{\rm T}$-weighted fraction of the tracks in the jet that are associated with the primary vertex is required to be larger than 0.75.
Events containing jets failing the quality criteria described in ref. [@fakeCleanUp] are rejected to suppress both SM and beam-induced background. Jets are identified as containing $b$-hadron decays, and thus called “$b$-tagged”, using a multivariate technique based on quantities such as the impact parameters of the tracks associated to a reconstructed secondary vertex. The chosen working point of the $b$-tagging algorithm [@btag] correctly identifies $b$-quark jets in simulated top-quark decays with an efficiency of 60% and misidentifies the jets initiated by light-flavour quarks or gluons with a rate of $<\,$1%, for jets with $E_{\rm T}\,$$>\,$20$\,$GeV and $|\eta|\,$$<\,$2.5.
The missing transverse momentum, ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$, is the magnitude of the vector sum of the transverse momentum or transverse energy of all $p_{\rm T}\,$$>\,$10$\,$GeV muons, $E_{\rm T}\,$$>\,$20$\,$GeV electrons, $E_{\rm T}\,$$>\,$20$\,$GeV jets, and calibrated calorimeter energy clusters with $|\eta|\,$$<\,$4.9 not associated to these objects [@metPerf].
In this analysis, “tagged” leptons are leptons separated from each other and from candidate jets as described below. If two candidate electrons are reconstructed with $\Delta R \equiv \sqrt{(\Delta\phi)^2+(\Delta\eta)^2}\,$$<\,$0.1, the lower energy one is discarded. Candidate jets within $\Delta R\,$$=\,$0.2 of an electron candidate are rejected. To suppress leptons originating from semi-leptonic decays of $c$- and $b$-quarks, all lepton candidates within $\Delta R\,$$=\,$0.4 of any remaining jet candidates are removed. Muons undergoing bremsstrahlung can be reconstructed with an overlapping electron candidate. To reject these, tagged electrons and muons separated from jets and reconstructed within $\Delta R\,$$=\,$0.1 of each other are both discarded. Events containing one or more tagged muons that have transverse impact parameter with respect to the primary vertex $|d_0|\,$$>\,$0.2$\,$mm or longitudinal impact parameter with respect to the primary vertex $|z_0|\,$$>\,$1$\,$mm are rejected to suppress cosmic muon background.
“Signal” leptons are tagged leptons for which the scalar sum of the transverse momenta of tracks within a cone of $\Delta R = 0.2$ around the lepton candidate, and excluding the lepton candidate track itself, is less than 10% of the lepton $E_{\rm T}$ for electrons and less than 1.8$\,$GeV for muons. Tracks selected for the electron and muon isolation requirement defined above are those which have $p_{\rm T}\,$$>\,$1$\,$GeV and are associated to the primary vertex of the event. Signal electrons must also pass “tight” identification criteria [@electronPerf].
Signal region selection \[sec:SRsel\]
=====================================
Selected events must contain four or more signal leptons. The invariant mass of any same-flavour opposite-sign (SFOS) lepton pair, $m_{\rm SFOS}$, must be above 20$\,$GeV, otherwise the lepton pair is discarded to suppress background from low-mass resonances. Events which contain a SFOS lepton pair inside the \[81.2, 101.2\]$\,$GeV interval are vetoed to reject $Z$-boson candidates ($Z$-veto).
Two signal regions are then defined: a signal region with ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}\,$$>\,$50$\,$GeV (SR1) and one with effective mass $m_{\rm eff}\,$$>\,$300$\,$GeV (SR2). The effective mass is defined by the scalar sum shown in (eq. \[eq:meff\]), where $p^\mu_{\rm T}$ ($E^e_{\rm T}$) is the transverse momentum of the signal muons (electrons) and $E^j_{\rm T}$ is the transverse energy of jets with $E_{\rm T}\,$$>\,$40$\,$GeV:
$$m_{\rm eff} = {\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}+ \sum_{\mu} p^\mu_{\rm T} + \sum_{e} E^e_{\rm T} + \sum_{j} E^j_{\rm T}.
\label{eq:meff}$$
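For illustration, the per-event computation implied by this definition can be written as the small helper below; the function name and argument structure are placeholders rather than part of the ATLAS analysis software, and the 40$\,$GeV jet threshold is the one stated above.

```python
def effective_mass(met, muon_pts, electron_ets, jet_ets, jet_threshold=40.0):
    """m_eff = E_T^miss + sum of muon p_T, electron E_T and the E_T of jets
    above jet_threshold, all in GeV."""
    return (met
            + sum(muon_pts)
            + sum(electron_ets)
            + sum(et for et in jet_ets if et > jet_threshold))

# e.g. effective_mass(met=55.0, muon_pts=[32.0, 14.0],
#                     electron_ets=[41.0, 12.0], jet_ets=[64.0, 35.0])
# -> 55 + 46 + 53 + 64 = 218.0 GeV (the 35 GeV jet is below threshold)
```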
SR1 is optimised for regions of the parameter space with values of $m_{1/2}$ below $\sim$700$\,$GeV or $m_{{\ensuremath{\tilde{\chi}_1^\pm}\xspace}}$ below $\sim$300$\,$GeV. SR2 targets the production of heavier sparticles. In general, SR1 is sensitive to models with ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ originating from neutrinos and SR2 to scenarios with a large multiplicity of high-$p_{\rm T}$ objects.
Three further regions are defined to validate the expected background against data. The validation regions are described in section \[sec:Bvalidation\]. Table \[tab:SRdef\] summarises the signal and validation region definitions.
Standard Model background estimation
====================================
Several SM processes, which are classified into irreducible and reducible components below, contribute to the background in the signal regions. The dominant sources are $ZZ$, $WZ$, and $t\bar{t}$ production in both SR1 and SR2.
Irreducible background processes
--------------------------------
A background process is considered “irreducible” if it leads to events with four real, isolated leptons, referred to as “real” leptons below. These include $ZZ$, $t\bar{t}Z$ and $t\bar{t}WW$ production, where a gauge boson may be produced off-mass-shell. These contributions are determined using the corresponding MC samples, for which lepton and jet selection efficiencies [@JESuncert1; @JERuncert1; @LeptonScaleUncer1; @LeptonScaleUncer2], and trigger efficiencies are corrected to account for differences with respect to data.
Reducible background processes
------------------------------
A “reducible” process has at least one “fake” lepton, that is either a lepton from a semi-leptonic decay of a $b$- or $c$-quark, referred to as heavy-flavour, or an electron from an isolated single-track photon conversion. The contribution from misidentified light-flavour quark or gluon jets is found to be negligible based on studies in simulation. The reducible background includes $WZ$, $t\bar{t}$, $t\bar{t}W$, $WW$, single $t$-quark, or single $Z$-boson production, in all cases produced in association with jets or photons. The yield of $W$ bosons with three fake leptons is negligible. The $t\bar{t}$ and the $WZ$ backgrounds correspond respectively to 59% and 35% of the reducible background in SR1, and to 46% each in SR2. In both SR1 and SR2, fake leptons are predominantly fake electrons (99%), originating either from $b$-quarks in $t\bar{t}$ candidate events or from conversions in $WZ$ candidate events. Fake muons from $b$-quark decays in $t\bar{t}$ events are suppressed by the object separation scheme described in section \[sec:presel\]. Regardless of the origin, the misidentification probability decreases as the lepton $p_{\rm T}$ increases.
The reducible background is estimated using a weighting method applied to events containing signal leptons ($\ell_S$) and loose leptons ($\ell_L$), which are tagged leptons failing the signal lepton requirements. Since the reducible background is dominated by events with at most two fake leptons, it is estimated as: $$\begin{aligned}
&&\left[N_{\rm data}(3\ell_S+\ell_L)-N_{\rm MCirr}(3\ell_S+\ell_L)\right]\times F(\ell_L) \nonumber\\
&&-\left[N_{\rm data}(2\ell_S+\ell_{L_1}+\ell_{L_2})-N_{\rm MCirr}(2\ell_S+\ell_{L_1}+\ell_{L_2})\right]\times F(\ell_{L_1}) \times F(\ell_{L_2}),\end{aligned}$$ where the second term corrects for the double counting of reducible-background events with two fake leptons in the first term. The term $N_{\rm data}(3\ell_S+\ell_L)$ is the total number of events with three signal and one loose lepton, while $N_{\rm MCirr}(3\ell_S+\ell_L)$ is the irreducible contribution of events obtained from simulation. The definitions of $N_{\rm data}(2\ell_S+\ell_{L_1}+\ell_{L_2})$ and $N_{\rm MCirr}(2\ell_S+\ell_{L_1}+\ell_{L_2})$ are analogous. As a conservative approach, the potential signal contamination in the $3\ell_S+\ell_L$ and $2\ell_S+\ell_{L_1}+\ell_{L_2}$ loose lepton data samples is not taken into account. The average “fake ratio” $F$ depends on the flavour and kinematics of the loose lepton $\ell_L$ and it is defined as:
$$F=\sum_{i, j} \left( \alpha^{i}\times R^{ij} \times f^{ij} \right),$$
where $i\,$ is the type of fake (heavy-flavour leptons or conversion electrons) and $j\,$ is the process category the fake originates from (top quark or $W/Z$ boson). The fake ratios $f^{ij}$ are defined as the ratios of the probabilities that fake tagged leptons are identified as signal leptons to the probabilities that they are identified as loose leptons. The $f^{ij}$ are determined for each relevant fake type and for each reducible-background process, and they are parameterised in muon (electron) $p_{\rm T}$ ($E_{\rm T}$) and $\eta$. The fake ratios are weighted according to the fractional contribution of the process they originate from through $R^{ij}$ fractions. Both $f^{ij}$ and $R^{ij}$ are determined in simulation. Each correction factor $\alpha^{i}$ is the fake ratio measured in data divided by that in simulation, in control samples described below. The fake ratios $F$ are estimated to vary from 0.8 to 0.05 for muons when the $p_{\rm T}$ increases from 10 to 100$\,$GeV. For electrons, there is little $p_{\rm T}$ dependence, and the $F$ values are $\sim$0.3.
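To make the bookkeeping of this estimate explicit, a schematic sketch is given below. It collapses the per-lepton kinematic dependence of $F$ into single effective numbers and uses placeholder yields and ratios, so it illustrates only the structure of the method, not the actual ATLAS measurement.

```python
def fake_ratio(alpha, R, f):
    """Average fake ratio F = sum_{i,j} alpha_i * R_ij * f_ij,
    with i the fake type and j the source process."""
    return sum(alpha[i] * R[i][j] * f[i][j] for i in R for j in R[i])

def reducible_yield(n_data_3s1l, n_mc_3s1l, F_loose,
                    n_data_2s2l, n_mc_2s2l, F_loose1, F_loose2):
    """Reducible background: one-fake term minus the double-counting correction."""
    return ((n_data_3s1l - n_mc_3s1l) * F_loose
            - (n_data_2s2l - n_mc_2s2l) * F_loose1 * F_loose2)

# Example with made-up numbers (R fractions sum to one over all i, j):
# F = fake_ratio(alpha={'hf': 0.9, 'conv': 1.2},
#                R={'hf': {'top': 0.5, 'Vjets': 0.1},
#                   'conv': {'top': 0.1, 'Vjets': 0.3}},
#                f={'hf': {'top': 0.2, 'Vjets': 0.2},
#                   'conv': {'top': 0.3, 'Vjets': 0.3}})
# n_red = reducible_yield(12, 3.1, F, 4, 1.2, F, F)
```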
The correction factor for heavy-flavour fakes is measured in a $b\bar{b}$-dominated control sample. This is defined by selecting events with only one $b$-tagged jet (containing a muon) and a tagged lepton. The non-$b\bar{b}$ contributions from the single and pair production of top quarks and $W$ bosons produced in association with $b$-quarks are suppressed with ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}\,$$<\,$40$\,$GeV and transverse mass $m_{\rm T}\,$$=\,$$\sqrt{2 \cdot {\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}\cdot p^\ell_{\rm T} \cdot (1 - \cos\Delta\phi_{\ell, {\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}})}\,$$<\,$40$\,$GeV requirements, where $\Delta\phi_{\ell,{\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}}$ is the azimuthal angle between the tagged lepton $\ell$ and the ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$. The remaining non-$b\bar{b}$ background ($\sim$1% level) is subtracted from the control sample in data using MC predictions. In this control sample, the fake ratio from heavy-flavour decays is calculated using the ratio of tagged leptons passing signal lepton requirements to those that fail. The correction factors are found to be $0.97\pm0.13$ and $0.86\pm0.13$ for electrons and muons respectively.
The correction factor for electron candidates originating from photon conversions is determined in a sample of photons radiated from a muon in $Z\rightarrow\mu\mu$ decays. Events with two opposite-sign muons and one tagged electron are selected and the invariant mass of the $\mu\mu e$ triplet is required to lie within 10$\,$GeV of the nominal $Z$-boson mass value. In this control sample, the fake ratio for conversion electrons is calculated using the ratio of tagged electrons identified as signal electrons to those identified as loose electrons. The correction factor is found to be $1.24\pm0.13$.
Systematic uncertainties
========================
Several sources of systematic uncertainty are considered in the signal, control and validation regions. Correlations of systematic uncertainties between processes and regions are accounted for.
MC-based sources of systematic uncertainty affect the irreducible background, the $R^{ij}$ fractions of the average fake ratios, and the signal yields. The MC-based systematic sources include the acceptance uncertainty due to the PDFs and the theoretical cross-section uncertainties due to the renormalisation/factorisation scale and PDFs. The uncertainty due to the PDF set is determined using the error set of the original PDF and the scale uncertainties are calculated by varying the factorisation and renormalisation scales. Additional systematic uncertainties are those resulting from the jet energy scale [@JESuncert1] and resolution [@JERuncert1], the lepton efficiencies [@LeptonScaleUncer1; @LeptonScaleUncer2], energy scales and resolutions, and uncertainties in $b$-tagging rates [@btagUncer1; @btagUncer2; @btagUncer3; @btagUncer4]. The choice of MC generator is also included for the irreducible background. The systematic uncertainty on the luminosity (3.9%) [@lumiconf; @lumipaper] affects only the yields of the irreducible background and the signal yield.
In SR1, the total uncertainty on the irreducible background is 70%. This is dominated by the uncertainty on the efficiency times acceptance of the signal region selection for the $ZZ$ MC event generator, determined by comparing the [[SHERPA]{}]{}and [[POWHEG]{}]{} [@powheg1; @powheg2; @powheg3; @powheg4] generators and found to be 65% of the [[SHERPA]{}]{}$ZZ$ yield. The next largest uncertainties on the irreducible background are due to the jet energy scale (53%) and that due to the limited number of MC events generated (34%). All the remaining uncertainties in this signal region lie in the range of 0.1–5%. In SR2, the uncertainties are similar, except for smaller uncertainties due to the jet energy scale (3%) and the limited number of generated events (17%).
For the average fake ratio $F$, the $R^{ij}$ fractions are varied between 0% and 100% to account for the uncertainty on the $R^{ij}$ fractions from the sources listed above. The fake ratios $f^{ij}$ are generally found to be similar across processes and types of fakes in most of the kinematic regions. Also included in the uncertainty on the reducible background is the uncertainty from the dependence of the fake ratio on ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ (10%) and the uncertainty on the fake-ratio correction factors (15–34%). In SR1, the dominant uncertainty on the reducible background is that due to the $R^{ij}$ fractions (75%). This relatively large uncertainty is due to the limited number of events in the loose lepton data sample, and the fact that those leptons fall in kinematic regions in which the fake ratio has larger variations between processes and types of fakes. The next to leading uncertainties are those from the statistical uncertainty on the data (60%) and the dependence of the fake ratio on ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ (25%). In SR2, the uncertainty on the reducible background is dominated by that from the limited number of data events (140%), followed by that due to the $R^{ij}$ fractions (6%) and the dependence of the fake ratio on ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ (4%).
The total uncertainties on the signal yields are 10–20% and are calculated using the method described in ref. [@SUSYXS:2012]. Signal cross-sections are calculated to NLO in the strong coupling constant using [[PROSPINO]{}]{}[^4]. An envelope of cross-section predictions is defined using the 68% CL intervals of the [[CTEQ6.6]{}]{} [@Nadolsky:2008zw] (including the $\alpha_S$ uncertainty) and [[MSTW]{}]{} [@Martin:2009iq] PDF sets, together with variations of the factorisation and renormalisation scales by factors of two or one half. The nominal cross-section value is taken to be the midpoint of the envelope and the uncertainty assigned is half the full width of the envelope, closely following the PDF4LHC recommendations [@Botje:2011sn].
Background model validation \[sec:Bvalidation\]
===============================================
The background predictions have been verified in three validation regions (VR). A region (VR1) selects events with three signal leptons, ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}\,$$>\,$50$\,$GeV, and vetos events with $Z$-boson candidates described in section \[sec:SRsel\]. In VR1 the reducible background is dominated by $t\bar{t}$ production. The contribution from $ZZ$ production is validated in a region (VR2) defined by events with four leptons containing a $Z$-boson candidate. A region (VR3) containing events with ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}\,$$<\,$50$\,$GeV, $m_{\rm eff}\,$$<\,$300$\,$GeV, four signal leptons, and no $Z$-boson candidate is used to validate the sum of the reducible and irreducible backgrounds. The data and predictions are in agreement within the quoted statistical and systematic uncertainties, as shown in table \[tab:VRs\]. Reported are the probabilities that the background fluctuates to the observed number of events or higher ($p_0$-value) and the corresponding number of standard deviations ($\sigma$).
Results and interpretation
==========================
The numbers of observed and predicted events in SR1 and SR2 are reported in table \[tab:SRs\]. Distributions of ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ and $m_{\rm eff}$ in events that have at least four leptons and no $Z$-boson candidates (before either the ${\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace}$ or $m_{\rm eff}$ requirements) are presented in fig. \[fig:met\].
No significant excess of events is found in the signal regions. Upper limits on the visible cross-section of new physics processes are calculated, defined by the product of production cross-section, acceptance and efficiency, and placed at 95% CL with the modified frequentist CL$_s$ prescription [@CLs]. All systematic uncertainties and their correlations are taken into account via nuisance parameters in a profile likelihood fit [@cowan]. Observed 95% CL limits on the visible cross-section of new physics processes are placed at 1.3$\,$fb in SR1 and 1.1$\,$fb in SR2. The corresponding expected limits are 0.8$\,$fb and 0.7$\,$fb, respectively.
The results of the analysis are interpreted in two RPV SUSY scenarios and shown in fig. \[fig:limits\], where the limits are calculated choosing the signal region with the best expected limit for each of the model points. The uncertainties on the signal cross-section are not included in the limit calculation but their impact on the observed limit is shown by the red dotted lines.
The main features of the exclusion limits can be explained in broad terms as follows. In the simplified model the sensitivity is governed by the production cross-section, which decreases at large $m_{{\ensuremath{\tilde{\chi}_1^\pm}\xspace}}$ values, and by the efficiency of the signal region selection cuts. The requirements on the minimum value of the SFOS dilepton mass, and on the minimum lepton-lepton separation reduce the acceptance and selection efficiency for leptons from light ${\ensuremath{\tilde{\chi}_1^0}\xspace}$, hence only values of 10$\,$GeV and above are considered. As the mass of the ${\ensuremath{\tilde{\chi}_1^0}\xspace}$ approaches zero, the phase space for leptonic decays is greatly reduced, while values of $m_{{\ensuremath{\tilde{\chi}_1^0}\xspace}}\,$=$\,$0$\,$GeV are not possible without the LSP becoming stable. In the MSUGRA/CMSSM model, the production cross-section decreases as $m_{1/2}$ increases, leading to smaller sensitivities in the high-mass region. Due to lower values of the four-body decay branching ratio and an increased ${\ensuremath{\tilde{\tau}_1}\xspace}$ lifetime, the sensitivity drops for $\tan\beta$ values above 40. In the region of the parameter space with $m_{1/2}\sim\,$800 GeV, weak gaugino production contributes 50% to the total SUSY production cross-section, while $\tilde{e}/\tilde{\mu}/\tilde{\nu}$ (${\ensuremath{\tilde{\tau}_1}\xspace}$) production contributes 10–20% (10–30%), and strong production 20%. Similar fractional contributions are also seen in the two signal regions.
Summary
=======
Results from a search for new phenomena in the final state with four or more leptons (electrons or muons) and either moderate values of missing transverse momentum or large effective mass are reported. The analysis is based on 4.7$\,$fb$^{-1}$ of proton-proton collision data delivered by the LHC at $\sqrt{s}\,$$=\,$7$\,$TeV. No significant excess of events is found in data. Observed 95% CL limits on the visible cross-section are placed at 1.3$\,$fb and 1.1$\,$fb in the two signal regions, respectively. The null result is interpreted in a simplified model of chargino pair-production in which each chargino cascades to the lightest neutralino that decays into two charged leptons ($ee$ or $e\mu$) and a neutrino via an RPV coupling. In the simplified model of RPV supersymmetry, chargino masses up to 540$\,$GeV are excluded for LSP masses above 300$\,$GeV. Limits are also set in an RPV MSUGRA model with a $\tilde{\tau_{1}}$ LSP that promptly decays into a $\tau$ lepton, two charged leptons and a neutrino, where values of $m_{1/2}$ below 820$\,$GeV are excluded when $10<\tan\beta<40$.
Acknowledgements
================
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET and ERC, European Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT, Greece; ISF, MINERVA, GIF, DIP and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW, Poland; GRICES and FCT, Portugal; MERYS (MECTS), Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MVZT, Slovenia; DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide.
[^1]: The parameter $m_0$ is the universal scalar mass, $m_{1/2}$ is the universal gaugino mass, $\tan\beta$ is the ratio of the two Higgs vacuum expectation values in the MSSM, $A_0$ is the trilinear coupling and $\mu$ is the Higgs mixing parameter, all defined at the unification scale.
[^2]: ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$.
[^3]: The electromagnetic scale is the basic calorimeter signal scale for the ATLAS calorimeters. It has been established using test-beam measurements for electrons and muons to give the correct response for the energy deposited in electromagnetic showers, although it does not correct for the lower response of the calorimeter to hadrons.
[^4]: The addition of the resummation of soft gluon emission at NLL [@Beenakker:1996ch; @Kulesza:2008jb; @Kulesza:2009kq; @Beenakker:2009ha; @Beenakker:2011fu] is performed in the case of strong SUSY pair-production.
---
abstract: |
We have measured the field-dependent heat capacity in the tetragonal antiferromagnets CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$, both of which have an enhanced value of the electronic specific heat coefficient $\gamma \sim 400$ mJ/mol-Ce K$^{2}$ above $T_{N}$. For $T<T_{N}$, the specific heat data at zero applied magnetic field are consistent with the existence of an anisotropic spin-density wave opening a gap in the Fermi surface for CeRhIn$_{5}$, while Ce$_{2}$RhIn$_{8}$ shows behavior consistent with a simple antiferromagnetic magnon. From these results, the magnetic structure, in a manner similar to the crystal structure, appears more two-dimensional in CeRhIn$_{5}$, where only about 12% of the Fermi surface remains ungapped, than in Ce$_{2}$RhIn$_{8}$, where about 92% remains ungapped. When $B||c$, both compounds behave in a manner expected for heavy fermion systems as both $T_{N}$ and the electronic heat capacity decrease as field is applied. When the field is applied in the tetragonal basal plane ($B||a$), CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$ have very similar phase diagrams which contain both first- and second-order field-induced magnetic transitions.
address:
- |
Department of Physics, University of Nevada, Las Vegas, Nevada,\
89154-4002
- |
Materials Science and Technology Division, Los Alamos National\
Laboratory,\
Los Alamos, NM 87545
author:
- 'A.L. Cornelius'
- 'P.G. Pagliuso, M.F. Hundley, and J.L. Sarrao'
title: 'Field-induced magnetic transitions in the quasi-two-dimensional heavy-fermion antiferromagnets Ce$_{n}$RhIn$_{3n+2}$ ($n=1$ or $2)$'
---
Introduction
============
Ce$_{n}$RhIn$_{3n+2}$ ($n=1$ or 2) crystallize in the quasi-two-dimensional (quasi-2D) tetragonal Ho$_{n}$CoGa$_{3n+2}$-type structures, and both are moderately heavy-fermion antiferromagnets ($\gamma \sim 400$ mJ/mol-Ce K$^{2}$ for both systems above $T_{N}=3.8$ K for $n=1$ and 2.8 K for $n=2$). The evolution of the ground states of CeRhIn$_{5}$ as a function of applied pressure, including a pressure-induced first-order superconducting transition at 2.1 K, is unlike that of any previously studied heavy-fermion system and is attributed to the quasi-2D crystal structure [@Hegger00]. In a similar manner to CeRhIn$_{5}$, $T_{N}$ is seen to change only slightly with pressure and to disappear abruptly in Ce$_{2}$RhIn$_{8}$, though superconductivity has not yet been observed [@Thompson01].
A previous zero field heat capacity study on CeRhIn$_{5}$ revealed that the anisotropic crystal structure leads to a quasi-2D electronic and magnetic structure [@Cornelius00]. We have performed measurements of the heat capacity in applied magnetic fields for CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$ in an attempt to further understand the electronic and magnetic properties of these compounds. We find that for magnetic fields applied along the tetragonal $c$-axis, both systems behave like typical heavy-fermion compounds [@Stewart84] as both $T_{N}$ and $\gamma _{0}$ decrease as field is increased. Very different behavior is seen when the field is directed along the $a$-axis as $T_{N}$ is found to increase and numerous field-induced transitions, both of first- and second-order, are observed. These transitions correspond to magnetic field-induced changes in the magnetic structure. In agreement with what one might expect, the magnetic properties seem less 2D as the crystal structure becomes less 2D going from single layer CeRhIn$_{5}$ to double layer Ce$_{2}$RhIn$_{8}$ (note that as $n\rightarrow \infty$, one gets the 3D cubic system CeIn$_{3}$).
Results
=======
Single crystals of Ce$_{n}$RhIn$_{3n+2}$ were grown using a flux technique described elsewhere [@Canfield92]. The residual resistivity ratio between 2 K and 300 K was determined using a standard 4-probe measurement and was found to be greater than 100 for all measured crystals, indicative of high-quality samples. A clear kink was observed in the resistivity at $T_{N}=3.8$ K for CeRhIn$_{5}$ and $T_{N}=2.8$ K in Ce$_{2}$RhIn$_{8}$. Magnetization measurements have shown that the effective magnetic moment is slightly reduced from the value expected for Ce$^{3+}$ due to crystal fields [@Thompson01]. The specific heat was measured on a small ($\sim 10$ mg) sample employing a standard thermal relaxation method.
The single crystals were typically rods with dimensions from 0.1 to 10 mm, with the long axis of the rod found to be along the $\langle 100\rangle$ axis of the tetragonal crystal. The samples were found to crystallize in the primitive tetragonal Ho$_{n}$CoGa$_{3n+2}$-type structure [@Grin79; @Grin86] with lattice parameters of $a=0.4652(1)$ nm and $c=0.7542(1)$ nm for $n=1$ and $a=0.4665(1)$ nm and $c=1.2244(5)$ nm for $n=2$ [@Hegger00; @Thompson01]. The crystal structure of Ce$_{n}$RhIn$_{3n+2}$ can be viewed as (CeIn$_{3}$)$_{n}$(RhIn$_{2}$) with alternating $n$ cubic (CeIn$_{3}$) and one (RhIn$_{2}$) layers stacked along the $c$-axis. By looking at the crystal structure, we would expect that AF correlations will develop in the (CeIn$_{3}$) layers in a manner similar to bulk CeIn$_{3}$ [@Lawrence80]. The AF (CeIn$_{3}$) layers will then be weakly coupled by an interlayer exchange interaction through the (RhIn$_{2}$) layers, which leads to a quasi-2D magnetic structure. This has been shown to be true as the moments are AF ordered within the tetragonal basal $a$-plane but display a modulation along the $c$-axis which is incommensurate with the lattice for CeRhIn$_{5}$ [@Bao00]. As $n$ is increased, the crystal structure should become more 3D ($n=\infty$ being the 3D cubic system CeIn$_{3}$) and the effects of the interlayer coupling should become less important, causing the magnetic and electronic structure to be more 3D. Indeed, the magnetic structure in Ce$_{2}$RhIn$_{8}$ does not display an incommensurate spin density wave (SDW) [@Bao01].
The zero field data from specific heat measurements are shown in Fig. \[cp\]. A peak at $T_{N}$ is clearly seen for both samples, indicating the onset of magnetic order. The entropy associated with the magnetic transition is $\sim 0.3R\ln 2$, with the remaining $0.7R\ln 2$ recovered by 20 K for both $n=1$ and 2. For $T>T_{N}$ the data could not be fit by simply using $C/T=\gamma +\beta _{l}T^{2}$, where $\gamma$ is the electronic specific heat coefficient and $\beta _{l}$ is the lattice Debye term. As found previously, one needs to use isostructural, nonmagnetic La$_{n}$RhIn$_{3n+2}$ to subtract the lattice contribution to $C$ [@Hegger00]. After subtracting the lattice contribution, it is still difficult to extract a value of $\gamma$ from the data. However, by performing a simple entropy balance construction, a value of $\gamma \approx 400$ mJ/mol-Ce K$^{2}$ is found for both $n=1$ and 2.
For temperatures below $T_{N}$, as found before for CeRhIn$_{5}$ [@Cornelius00], the magnetic heat capacity data, where the corresponding La compound is used to subtract the lattice contribution, can be fit using the equation $$C_{m}/T=\gamma _{0}+\beta _{M}T^{2}+\beta _{M}^{\prime }\left( e^{-E_{g}/k_{B}T}\right) T^{2} \label{magnon}$$ where $\gamma _{0}$ is the zero temperature electronic term, $\beta _{M}T^{2}$ is the standard AF magnon term, and the last term is an activated AF magnon term. The need for an activated term to describe heat capacity data has been seen before in other Ce and U compounds [@Cornelius00; @Bredl87; @Dijk97; @Murayama97], and the term arises from an AF SDW with a gap in the excitation spectrum due to anisotropy. As discussed previously, the CeRhIn$_{5}$ magnetic structure indeed displays an anisotropic SDW with modulation vector (1/2,1/2,0.297) [@Bao00], which is consistent with this picture. The inset to Fig. \[cp\] shows the data for $T<T_{N}$ and the lines are fits to Eq. \[magnon\] for $T^{2}<(0.85T_{N})^{2}$. We find that the activated term is [*not*]{} necessary to fit the Ce$_{2}$RhIn$_{8}$ data. Rather, there appears to be a small feature in the heat capacity centered around 2 K whose origin is unknown. This feature is also observed in transport measurements and is known to persist as a function of pressure [@Sidorov01]. A summary of the fit parameters, along with data on CeIn$_{3}$ [@Berton79], is given in Table I. These results lead us to the conclusion that the magnetically ordered state in CeRhIn$_{5}$ consists of an anisotropic SDW that opens up a gap on the order of 8 K in the Fermi surface, while no such gap is seen in Ce$_{2}$RhIn$_{8}$, consistent with its commensurate structure. Note that the values for CeRhIn$_{5}$ are slightly different from a previous report where the lattice contribution from LaRhIn$_{5}$ was not subtracted from the raw data [@Cornelius00]. From the ratio of the electronic contribution for temperatures above and below $T_{N}$, we estimate that approximately $\gamma _{0}/\gamma \sim 0.12$ (12%) of the Fermi surface remains ungapped below $T_{N}$ for CeRhIn$_{5}$ while for
Ce$_{2}$RhIn$_{8}$ 92% of the Fermi surface remains ungapped. The results on CeIn$_{3}$ of Berton [*et al.*]{} are also shown in Table I for comparison, where $\gamma _{0}/\gamma \sim 0.94$ (94%). Clearly, the electronic structure, as evidenced by the ratio $\gamma _{0}/\gamma$, becomes more 3D in the Ce$_{n}$RhIn$_{3n+2}$ series as $n$ is increased [@Berton79].
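To make the fitting procedure behind Table I concrete, the sketch below (an illustration of the method only, not the analysis code used for the measurements) fits Eq. \[magnon\] to synthetic $C_{m}/T$ data generated from the CeRhIn$_{5}$ parameters of Table I:

```python
import numpy as np
from scipy.optimize import curve_fit

def magnon_fit(T, gamma0, betaM, betaMp, Eg):
    """C_m/T = gamma0 + betaM*T^2 + betaMp*exp(-Eg/T)*T^2, with Eg = E_g/k_B in kelvin."""
    return gamma0 + betaM * T**2 + betaMp * np.exp(-Eg / T) * T**2

# Synthetic C_m/T data generated from the CeRhIn5 values of Table I
# (gamma0 = 56 mJ/mol K^2, beta_M = 24.1, beta_M' = 706 mJ/mol K^4, E_g/k_B = 8.2 K);
# in practice one would load the measured magnetic heat capacity instead.
T = np.linspace(0.4, 0.85 * 3.72, 40)            # fit range T < 0.85 T_N
true_params = (56.0, 24.1, 706.0, 8.2)
CoverT = magnon_fit(T, *true_params) * (1 + 0.02 * np.random.randn(T.size))

popt, pcov = curve_fit(magnon_fit, T, CoverT, p0=(50, 20, 500, 5))
print("gamma0, beta_M, beta_M', E_g/k_B =", popt)
```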
Fig. \[c115\] shows the heat capacity in applied magnetic fields for $B||c$ for both compounds (CeRhIn$_{5}$ on the top and Ce$_{2}$RhIn$_{8}$ on the bottom). As the magnetic moments are known to lie within the $a$-plane in CeRhIn$_{5}$, the magnetic field is perpendicular to the magnetic moments in the ordered state in this orientation. The applied field is not sufficient to cause a field-induced magnetic transition in either compound. Rather, it is found that $T_{N}$ and $\gamma _{0}$ decrease as $B$ is increased, as is usually observed in heavy fermion systems [@Stewart84].
Fig. \[a115\] shows the heat capacity in applied magnetic fields for $B||a$. For both samples, the Neel point (the onset of antiferromagnetic order) and magnetic field-induced transitions of both first- ($T_{1}$) and second-order ($T_{2}$) are clearly observed. The complete phase diagrams for both CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$ showing the various observed transitions are plotted in Fig. \[phasediag\]. The open symbols correspond to second order transitions ($T_{N}$ and $T_{2}$) and the filled symbols represent the first order transition ($T_{1}$) [@firstord]. The dashed lines are merely guides for the eyes. Remarkably, the phase diagrams for both CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$ are extremely similar. Region I corresponds to the standard modulated spin density wave that is incommensurate with the lattice, as reported previously in CeRhIn$_{5}$ [@Bao00]. The nature of the first- and second-order transitions going from Region I to Regions II and III is not known, and work is underway to determine the magnetic structures in Regions II and III. For Ce$_{2}$RhIn$_{8}$, Region II terminates at 70 kOe while for CeRhIn$_{5}$ it extends beyond the highest measured field of 90 kOe. The first order transition going from Region I or Region II to Region III is the same hysteretic field-induced transition observed in magnetization measurements on CeRhIn$_{5}$, where an increase in the magnetization of $\lesssim 0.006$ $\mu _{B}/$Ce is observed [@Cornelius00]. These results clearly show that Ce$_{2}$RhIn$_{8}$ has some 2D electronic and magnetic character.
Conclusion
==========
In summary, we have measured the anisotropic heat capacity in applied magnetic fields in the quasi-2D heavy-fermion antiferromagnets CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$. The magnetic and electronic properties of CeRhIn$_{5}$ can be well explained by the formation of an anisotropic SDW, leading to a 2D electronic and magnetic structure. The phase diagram of magnetic field-induced magnetic transitions is remarkably similar for both systems, as both a first- and a second-order transition are observed in both compounds when the magnetic field is along the tetragonal $a$-axis. From the heat capacity measurements, we estimate that 12% (92%) of the Fermi surface remains ungapped below the magnetic ordering temperature for CeRhIn$_{5}$ (Ce$_{2}$RhIn$_{8}$). The 2D nature of the electronic properties is a result of the tetragonal crystal structure of Ce$_{n}$RhIn$_{3n+2}$, which consists of $n$ cubic (CeIn$_{3}$) blocks which are weakly interacting along the $c$-axis through a (RhIn$_{2}$) layer. Not surprisingly, it appears that the case of $n=1$ (CeRhIn$_{5}$) is more anisotropic than the double layered Ce$_{2}$RhIn$_{8}$, where the amount of ungapped Fermi surface below $T_{N}$ is greater than 90%, as is found for the 3D system $(n=\infty)$ CeIn$_{3}$. The similarity of the field-induced transitions in both CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$ clearly shows that there is still some 2D character to the magnetic properties of Ce$_{2}$RhIn$_{8}$. The results here shed light on the unusual magnetic and electronic structure of CeRhIn$_{5}$ and Ce$_{2}$RhIn$_{8}$.
Work at UNLV is supported by DOE/EPSCoR Contract No. ER45835. Work at LANL is performed under the auspices of the U.S. Department of Energy.
H. Hegger, C. Petrovic, E. B. Moshopoulou, M. F. Hundley, J. L. Sarrao, Z. Fisk, and J. D. Thompson, Phys. Rev. Lett. [**84**]{}, 4986 (2000).
J. D. Thompson, R. Movshovich, Z. Fisk, F. Bouquet, N. J. Curro, R. A. Fisher, P. C. Hammel, H. Hegger, M. F. Hundley, M. Jaime, P. G. Pagliuso, C. Petrovic, N. E. Phillips, and J. L. Sarrao, cond-mat/0012260, to appear in J. Magn. Magn. Mater. (2001).
A. L. Cornelius, A. J. Arko, J. L. Sarrao, M. F. Hundley, and Z. Fisk, Phys. Rev. B [**62**]{}, 14181 (2000).
G. R. Stewart, Rev. Mod. Phys. [**56**]{}, 755 (1984).
P. C. Canfield and Z. Fisk, Phil. Mag. B [**65**]{}, 1117 (1992).
Y. N. Grin, Y. P. Yarmolyuk, and E. I. Gladyshevskii, Sov. Phys. Crystallogr. [**24**]{}, 137 (1979).
Y. N. Grin, P. Rogl, and K. Hiebl, J. Less-Common Met. [**121**]{}, 497 (1986).
J. M. Lawrence and S. M. Shapiro, Phys. Rev. B [**22**]{}, 4379 (1980).
W. Bao, P. G. Pagliuso, J. L. Sarrao, J. D. Thompson, Z. Fisk, J. W. Lynn, and R. W. Erwin, Phys. Rev. B [**62**]{}, R14621 (2000).
W. Bao, P. G. Pagliuso, J. L. Sarrao, J. D. Thompson, Z. Fisk, and J. W. Lynn, cond-mat/0102221 (unpublished).
C. D. Bredl, J. Magn. Magn. Mat. [**63-64**]{}, 355 (1987).
N. H. [v]{}an Dijk, F. Bourdarot, J. P. Klaasse, I. H. Hagmusa, E. Bruck, and A. A. Menovsky, Phys. Rev. B [**56**]{}, 14493 (1997).
S. Murayama, C. Sekine, A. Yokoyanagi, and Y. Onuki, Phys. Rev. B [**56**]{}, 11092 (1997).
V. I. Sidorov (unpublished).
A. Berton, J. Chaussy, G. Chouteau, B. Cornut, J. Flouquet, J. Odin, J. Palleau, J. Peyrard, and R. Tournier, J. Phys. (Paris) [**40**]{}, C5 (1979).
The software used to measure the heat capacity uses average values to fit the thermal relaxation data. From the raw traces, the transitions which we have assigned as first order are clearly first order in nature. Experience tells us the averaging method underestimates the value of the heat capacity near the peak of the first order transition by a factor of four or five.
  ------------------------------------------------------------------------------------------------------------------------------
                     $n$        $T_{N}$ (K)   $\gamma_{0}$   $\gamma$   $\beta_{M}$   $\beta_{M}^{\prime}$   $E_{g}/k_{B}$ (K)
  ----------------- ---------- ------------- -------------- ---------- ------------- ---------------------- -------------------
  CeRhIn$_5$         1          3.72          56             400        24.1          706                    8.2

  Ce$_2$RhIn$_8$     2          2.77          370            400        93.2          -                      -

  $^*$CeIn$_3$       $\infty$   10            136            144        15            -                      -
  ------------------------------------------------------------------------------------------------------------------------------

  : Calculated zero field heat capacity fit parameters for Ce$_{n}$RhIn$_{3n+2}$. $^*$Data for CeIn$_3$ is taken from Ref. 15. The various parameters are defined in the text. Units for $\gamma_{0}$ and $\gamma$ are mJ/mol Ce-K$^{2}$ and for $\beta_{M}$ and $\beta_{M}^{\prime}$ are mJ/mol Ce-K$^{4}$. []{data-label="table"}
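The ungapped Fermi-surface fractions quoted in the text follow directly from the $\gamma_{0}/\gamma$ ratios listed above; as a quick check (ours, for illustration only):

```python
# gamma_0 and gamma from Table I, in mJ/mol Ce-K^2
gammas = {"CeRhIn5": (56, 400), "Ce2RhIn8": (370, 400), "CeIn3": (136, 144)}
for name, (g0, g) in gammas.items():
    print(f"{name}: gamma_0/gamma = {g0 / g:.2f}")   # roughly 0.1, 0.9 and 0.9
```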
---
abstract: 'Recent experiments demonstrate that a ballistic version of spin resonance, mediated by spin-orbit interaction, can be induced in narrow channels of a high-mobility GaAs two-dimensional electron gas by matching the spin precession frequency with the frequency of bouncing trajectories in the channel. Contrary to the typical suppression of Dyakonov-Perel’ spin relaxation in confined geometries, the spin relaxation rate increases by orders of magnitude on resonance. Here, we present Monte Carlo simulations of this effect to explore the roles of varying degrees of disorder and strength of spin-orbit interaction. These simulations help to extract quantitative spin-orbit parameters from experimental measurements of ballistic spin resonance, and may guide the development of future spintronic devices.'
author:
- 'S. Lüscher'
- 'S. M. Frolov'
- 'J. A. Folk'
title: 'Numerical study of resonant spin relaxation in quasi-1D channels'
---
Introduction
============
Spin-orbit interaction is the primary source of spin relaxation for free carriers in semiconductors.[@Pikus:1984; @ZuticRMP] At the same time, it offers the potential to control the spin orientation of those carriers without the need for conventional high frequency resonance techniques.[@BardarsonPRL07; @Datta:1990; @frolov_nature] Controlling carrier spins using spin-orbit interaction requires the ability to tune its effect with external parameters such as a magnetic field or voltages on electrostatic gates. In the Datta-Das spin transistor concept, for example, the spins of carriers in a 2D quantum well rotate in response to a spin-orbit interaction whose strength can be tuned by a gate.[@Datta:1990; @Koo:2009]
Another way that electrostatic gates can tune the effects of spin-orbit interaction in a quantum well is by defining the lateral confinement geometry of a spintronic device. Recent experiments have shown that bouncing trajectories in gate-defined channels of high mobility GaAs two-dimensional electron gas (2DEG), in an external magnetic field, lead to rapid spin relaxation through a process we refer to as ballistic spin resonance (BSR).[@frolov_nature] On resonance the effect of spin-orbit interaction is amplified by matching the bouncing frequency to the Larmor precession frequency, and the bouncing frequency depends on the gate-defined channel width. Although the mechanism of BSR is straightforward, it is not obvious that the effect should be visible for realistic parameters in a practical device.
In this report, semi-classical Monte Carlo simulations of spin dynamics are used to test the resilience of BSR over a wide range of device parameters. The simulation models varying degrees and types of disorder, confinement potential from the electrostatic gates, and lack of perfect specularity on scattering off the channel walls. We restrict our attention to electron-doped GaAs 2DEGs at low temperature, where the Dyakonov-Perel’ mechanism has been shown to be the dominant source of spin relaxation.[@Pikus:1984; @ZuticRMP] A range of spin-orbit interaction strengths are explored in the simulation, including linear Rashba and Dresselhaus terms as well as the cubic Dresselhaus term. BSR is found to be robust over a wide range of experimentally-accessible parameters, and not to depend sensitively on specific model of disorder.
Dyakonov-Perel’ mechanism
=========================
At the most general level, spin-orbit interaction couples an electron’s spin degree of freedom to its momentum. Spin-orbit interaction in III-V semiconductor quantum wells is characterized by Rashba ($\alpha$) and Dresselhaus ($\beta$ and $\gamma$) terms, due respectively to structural and bulk crystal inversion asymmetry.[@ZuticRMP] Including both types of spin-orbit interaction, the spin-orbit Hamiltonian is: $$\begin{aligned}
\label{eq:hso}
H_{so} & = & \alpha(k_{010}\sigma_{100}-k_{100}\sigma_{010})+\beta(k_{100}\sigma_{100}-k_{010}\sigma_{010}) \nonumber \\ & & { } + \gamma(k_{010}^2 k_{100}\sigma_{100}-k_{100}^2 k_{010}\sigma_{010})\end{aligned}$$ where $k_{100}$ is the component of the Fermi wavevector along the \[100\] crystal axis, $\sigma_{100}$ is the Pauli spin operator along the \[100\] axis. We note that the sign convention adopted for this Hamiltonian gives $\beta\approx-\gamma<k_{001}^2>$. [@Winkler; @Krich2007]
The linear-in-$k$ terms in the Hamiltonian become simpler when described along \[110\] and $[\overline{1}10]$ axes. For the rest of this paper, the \[110\] axis is referred to as the $x$-axis; the $[\overline{1}10]$ axis is referred to as the $y$-axis. $H_{so}$ can be interpreted in terms of a momentum-dependent effective Zeeman field, $\vec{B}_{so}$, which takes the following form when expressed along $x$- and $y$-axes: $$\label{eq:bso}
\vec{B}_{so}=\frac{2}{g\mu_B}((\alpha-\beta)k_y\hat{x}-(\alpha+\beta)k_x\hat{y}) + O(k^3),$$ This effective field corresponds to a spin-orbit precession time $\tau_{so}=\hbar/g\mu_B B_{so}$, where $g=0.44$ is the Landé g-factor in GaAs.
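For a sense of scale (our own estimate, taking $v_F=10^5$ m/s and $|\alpha|\approx3\,\text{meV\AA}$ with $\beta=0$, values quoted later in the text), Eq. (\[eq:bso\]) gives an effective field of order 1 T and a precession time of order 20 ps:

```python
import numpy as np

hbar, muB, m_e = 1.055e-34, 9.274e-24, 9.109e-31    # SI units
g, m_eff = 0.44, 0.067 * m_e                        # GaAs parameters used in the text
vF = 1e5                                            # m/s, as in the simulations
alpha = 3e-3 * 1.602e-19 * 1e-10                    # assumed 3 meV*Angstrom -> J*m, beta = 0

kF = m_eff * vF / hbar                              # Fermi wavevector
B_so = 2 * alpha * kF / (g * muB)                   # Eq. (2) evaluated at |k| = kF
tau_so = hbar / (g * muB * B_so)                    # spin-orbit precession time
print(f"B_so ~ {B_so:.2f} T, tau_so ~ {tau_so * 1e12:.0f} ps")
# -> roughly 1.4 T and 20 ps for these numbers
```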
Spin precession according to $H_{so}$ is coherent from a microscopic point of view. At a practical level, however, this term gives rise to spin relaxation in any real conductor due to momentum scattering. Electron spins precess around an effective magnetic field that changes, as the momentum changes, at each scattering event. An ensemble of polarized spins, initially oriented in the same direction but following different random trajectories, will be distributed randomly around the Bloch sphere after a relaxation time, $\tau_{sr}$. This relaxation process is known as the Dyakonov-Perel’ (DP) mechanism.[@DyakPerel]
External magnetic fields have a strong effect on DP relaxation. These effects can be quite complicated when both orbital and spin effects are included. In this paper we discuss only the case of in-plane magnetic fields, which give rise to Zeeman splitting but not to Landau quantization or cyclotron motion. When $\vec{B}_{so}$ is added to an in-plane magnetic field $\vec{B}_{ext}$, it is the total effective field $\vec{B}_{tot} =\vec{B}_{so}+ \vec{B}_{ext}$ that sets the spin precession axis and precession time for DP spin dynamics. The relaxation time, $\tau_{sr}$, due to the DP mechanism has been calculated[@Kiselev:2000] for disordered 2D systems with momentum scattering time $\tau_p$, giving $$\label{dp2d}
\tau_{sr}(B_{ext})\sim\frac{\tau_{so}^2}{\tau_p}(1+(\tau_p\frac{g\mu_B B_{ext}}{\hbar})^2).$$ Here $\tau_p$ corresponds to a mean free path $\lambda=\tau_p v_F$.
The monotonic dependence $\tau_{sr}(B_{ext})$ described by Eq. (\[dp2d\]) does not hold in confined geometries, such as the channels studied here, where the mean free path and spin-orbit length are on the order of or greater than the channel width. [@HolleitnerPRL06] It is the goal of this work to study $\tau_{sr}(B_{ext})$ numerically in these cases.
Model
=====
Semi-classical Monte Carlo simulations of DP spin dynamics in a 2DEG channel were performed by calculating momentum-dependent spin precession along an ensemble of randomly-generated classical trajectories, $\vec{r}_i(t)$, analogous to the calculations described in Refs. . Instantaneous velocity was determined from $\vec{v}_i(t)=d\vec{r}_i/dt$, giving momentum $\hbar\vec{k}_i(t)=m^*\vec{v}_i(t)$ for effective mass $m^*$. Throughout this paper the magnitude of the velocity was $|\vec{v}_i|=v_F=10^5$m/s, corresponding to electron sheet density $n_s\approx5\times 10^{10} cm^{-2}$.[@frolov_prl; @frolov_nature]
Each spin $\vec{s}_i$ was initialized to lie along the external field, $\vec{s}_i(t=0)\parallel\vec{B}_{ext}$. The spins evolved in time by precessing around the trajectory-dependent $\vec{B}_{tot}(t)$, calculated using Eq. (\[eq:bso\]) and $\vec{k}_i(t)$. The ensemble-averaged projection of the spin on the initial axis was then calculated as a function of time, $P(t)=\langle\vec{s}_i(t)\cdot\vec{s}_i(0)\rangle_i$, and fit to an exponential decay model, $P(t)=P_0 e^{-t/\tau_{sr}}$, to extract the spin relaxation time $\tau_{sr}$. Although the DP mechanism can give rise to an oscillatory behavior for $P(t)$ when the external magnetic field is zero (data not shown), the oscillatory component in $P(t)$ disappeared above $B_{ext}\approx 1T$ for the device parameters studied here.
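A minimal sketch of this precession update (ours, for illustration; the ensemble average over trajectories and the exponential fit of $P(t)$ are omitted) is:

```python
import numpy as np

def rotate(s, B, dt, g=0.44, muB=9.274e-24, hbar=1.055e-34):
    """Precess spin vector s about field B (tesla) for time dt, via Rodrigues' formula."""
    Bnorm = np.linalg.norm(B)
    if Bnorm == 0.0:
        return s
    omega = g * muB * Bnorm / hbar                  # Larmor angular frequency
    n = B / Bnorm                                   # precession axis
    phi = omega * dt
    return (s * np.cos(phi) + np.cross(n, s) * np.sin(phi)
            + n * np.dot(n, s) * (1 - np.cos(phi)))

def B_so(k, alpha, beta, g=0.44, muB=9.274e-24):
    """Linear-in-k effective field of Eq. (2); k = (kx, ky) along the [110]/[-110] axes."""
    return (2 / (g * muB)) * np.array([(alpha - beta) * k[1],
                                       -(alpha + beta) * k[0], 0.0])

# One precession step for a spin initialized along an external field B_ext || y
B_ext = np.array([0.0, 7.0, 0.0])                   # tesla
s = np.array([0.0, 1.0, 0.0])
k = 5.8e7 * np.array([np.cos(0.3), np.sin(0.3)])    # |k| ~ k_F, arbitrary direction
s = rotate(s, B_ext + B_so(k, alpha=4.8e-32, beta=0.0), dt=1e-13)   # alpha ~ 3 meV*Angstrom
```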
The trajectories, $\vec{r}_i(t)$, were confined to 1 $\mu$m wide channels (Fig. 1(a)), reflecting the devices used in Refs. . The range of spin-orbit parameters and mean free paths explored in this work represent a broader range than would commonly be encountered in a transport experiment. The channels were assumed to be infinitely long, and each trajectory started from the middle of the wire with a random initial velocity direction. (It was confirmed that initial conditions had no effect on the calculated spin relaxation times after averaging over an ensemble of trajectories.)
Disorder was taken into account primarily through small-angle scattering, although [Fig. \[disorder\]]{} compares the effect of various scattering mechanisms and of soft vs. hard-wall confinement. With the exception of [Fig. \[disorder\]]{}, scattering from channel walls was assumed to be specular. The semiclassical approximation used in this simulation–classical trajectories with coherent spin precession–is valid when both the orbital phase coherence time and the momentum scattering time are shorter than the spin relaxation time, and when electron trajectories can be assumed to be independent of spin direction. The latter criterion implies that the Fermi energy is much larger than the spin-orbit energy $H_{so}$. This is a valid approximation in *n*-type GaAs quantum wells but not in *p*-type samples or narrow-gap semiconductors.[@Winkler]
Results
=======
Figure \[traj\] shows a short segment of a trajectory defined by small-angle scattering with mean free path of $\lambda=10\mu$m; this level of disorder is experimentally accessible in high mobility electron gases. The qualitatively different characteristics of the $x$- and $y$-components of momentum (Figs. \[traj\](b) and \[traj\](c)) highlight the importance of device geometry in the low-disorder regime studied here. Electrons bounce off the channel walls many times before their momentum is randomized, because the mean free path is much larger than the channel width and scattering from the walls is specular.
The trajectories in such a system are characterized by rapid, nearly-periodic changes in the sign of the momentum transverse to the channel, $k_y$, while the magnitude of the longitudinal momentum, $k_x$, changes only diffusively over a longer timescale. The square-wave character of $k_y(t)$ can be seen in [Fig. \[traj\]]{}(c), and in its power spectrum $S_{ky}(f)$ ([Fig. \[traj\]]{}(d)). Notice that the $k_y$ frequency spectrum is strongly peaked despite the random angles of electron motion in a typical trajectory ([Fig. \[traj\]]{}(a)). The peak frequency in $S_{ky}(f)$ reflects an average over the random distribution of trajectory angles, where the bouncing frequency for angle $\theta$ is $\frac{v_F\cos(\theta)}{2w}$ for channel width $w$.
Relaxation of spins that are aligned initially along $\vec{B}_{ext}$ results from fluctuating fields transverse to $\vec{B}_{ext}$. In the DP mechanism, those transverse fields are the momentum-dependent effective fields arising from spin-orbit interaction. The first-order component of the effective magnetic field due to $k_y$ is always in the $x$-direction (Eq. (\[eq:bso\])), independent of the relative strength of Rashba and Dresselhaus terms in the spin-orbit interaction. Similarly, the effective field due to $k_x$ is always in the $y$-direction. Hence spins in an external field along $\hat{x}$ relax due to fluctuations in the motion along $\hat{x}$; spins in a field along $\hat{y}$ relax due to fluctuations in the motion along $\hat{y}$.
The qualitatively different power spectral densities of the two momentum components, $S_{kx}(f)$ and $S_{ky}(f)$ (Fig. 1(d)), give rise to qualitatively different relaxation behaviors for spins in $x$- and $y$-oriented fields respectively (Fig. 2(a)). $\tau_{sr}(B_x)$ increases smoothly with $B_x$ (spins initialized along $\hat{x}$), matching the $\tau_{sr}\propto B_{ext}^2$ behavior expected at high field in 2D disordered systems (Eq. (\[dp2d\])) despite the confinement to a micron-wide channel in the simulation. The sharp periodic dips in $\tau_{sr}(B_y)$ are the BSR features that are the subject of this paper: the short relaxation time at these dips is spin resonance due to the peak frequencies in $S_y(f)$. This resonance occurs when peaks in $S_y(f)$ occur at the Larmor frequency of the external field, $f_L=g\mu_B B_{ext}/h$.
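A quick estimate of where this resonance should occur (ours; the channel width and Fermi velocity are those quoted in the text) follows from equating the bouncing frequency to $f_L$:

```python
import numpy as np

h, muB, g = 6.626e-34, 9.274e-24, 0.44
vF, w = 1e5, 1e-6                       # Fermi velocity (m/s) and channel width (m)

f_bounce = vF / (2 * w)                 # bouncing frequency for a perpendicular trajectory
B_res = h * f_bounce / (g * muB)        # field at which f_L = g*muB*B/h matches f_bounce
print(f"f_bounce ~ {f_bounce:.1e} Hz, B_res ~ {B_res:.1f} T")
# -> ~5e10 Hz and ~8 T; averaging over trajectory angles pulls the peak frequency,
#    and hence the first dip, somewhat lower, consistent with the dips near 7 T in Fig. 2.
```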
Figure 2(a) also compares the inverse of the spectral densities of the two momentum components to the spin relaxation times, $\tau_{sr}(B_{ext})$, extracted from the simulations. Clearly, ${S_{k}}^{-1}(f)$ is directly proportional to $\tau_{sr}(B_{ext})$ when $S_{k}(f)$ is evaluated at the Larmor frequency, $f_L$. This is reminiscent of the nuclear spin relaxation time, $T_1$, for nuclear magnetic resonance, where it has been shown that ${T_1}^{-1}=(g\mu_B/\hbar)^2 {S_{B_\perp}}(f_L)$, with ${S_{B_\perp}}(f_L)$ representing the spectral density of fluctuations in the transverse magnetic field, $B_{\perp}$, at the Larmor frequency of the static NMR field.[@Slichter] Fluctuations in $B_{so}$ are proportional to $S_{k}(f)$ by Eq. (2), and it is these fluctuations that lead to relaxation in the present case. As seen in Fig. 2(a), the approximation $\tau_{sr}(B)\propto {S_{k}}^{-1}(f)$ becomes significantly worse when the mean free path is longer than 10$\mu$m, perhaps because the approximation of exponentially-correlated noise in the NMR result breaks down.
Because $\tau_{sr}(B_y)$ depends on the $x$-component of $\vec{B}_{so}$, it is controlled by $(\alpha-\beta)$ and is nearly independent of $(\alpha+\beta)$ (Eq. ). The accuracy of this approximation can be tested in the simulation by varying $\alpha$ and $\beta$ independently. As seen in Fig. 2(b), curves with identical $(\alpha-\beta)$ but different $(\alpha+\beta)$ fall on top of each other for $B_{y}\gtrsim 3$T. (The stronger dependence on $(\alpha+\beta)$ at low field comes about because the direction of $\vec{B}_{tot}$ fluctuates significantly when $B_{ext}\lesssim B_{so}$.) The dependence of $\tau_{sr}$ on $\alpha$ is shown in Fig. 2(c), holding $\beta=0$ for all curves: when examined for particular values of magnetic field, the relation $\tau_{sr}\propto\tau_{so}^2\propto\alpha^2$ expected for 2D (Eq. (\[dp2d\])) carries over to the channel data (Fig. 2(c)).
The discussion thus far has ignored the $O(k^3)$ term in Eqs. (1) and (2). Averaged over the Fermi circle, the strength of this term is $\frac{2}{\pi}\gamma k_F^3$. Using values for $\gamma$ reported in the literature ($9-34\text{eV\AA} ^3$)[@Krich2007] and 2DEG parameters $v_F=1\times 10^5m/s$ and $|\alpha|+|\beta|\approx3\text{meV\AA}$ reported in Ref. , the third-order spin-orbit field, $B_{so}^{(3)}$, is an order of magnitude smaller than the first-order field, $B_{so}^{(1)}$. For this reason, the simulations presented in most of this paper set $\gamma$ explicitly to zero for ease of calculation.
For significantly larger values of $v_F$ or $\gamma$, on the other hand, $B_{so}^{(3)}$ is of the same order or larger than $B_{so}^{(1)}$. Because of the more complicated symmetry of $B_{so}^{(3)}$, its effect on BSR is not monotonic in $\gamma$. Figure \[gamma\] explores the role of $B_{so}^{(3)}$ by raising $\gamma$ while holding $v_F=1\times10^5 m/s$, $\alpha=3\text{meV\AA}$, and $\beta=0$. The BSR dips disappear around $\gamma\approx300 \text{eV\AA} ^3$, where $B_{so}^{(3)}\approx2\times B_{so}^{(1)}$, but then revive for larger values of $\gamma$.
Significant changes in spin relaxation were observed when the overall magnitude of disorder (set by $\lambda$) was changed ([Fig. \[mfp\]]{}). When $\lambda$ was much smaller than the channel width, resonant dips were absent. In that case, the bouncing frequency ceases to be a relevant parameter, as electrons seldom make it across the channel without scattering, and the 2D limit of Eq. (\[dp2d\]) is approached. The dips become deeper as $\lambda$ is increased, but reach a minimum value around $10\mu$m before rising again for even longer mean free paths.
In order to understand this non-monotonic dependence, we study the $\lambda$-dependence of $\tau_{sr}$ at the first resonant dip, around $B_y=7T$ (Fig. 4(b)). Starting from very short mean free paths, $\tau_{sr}$ reaches a minimum at $\lambda_{min}= 2\pi v_F \tau_{so}$, then rises again for very long mean free path. The length-scale, $\lambda_{min}$, corresponds to the distance an electron would have to travel in order for the spin to rotate by $2\pi$ due to the spin-orbit effective field.
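Using the $\tau_{so}$ estimated earlier for $\alpha\approx3\,\text{meV\AA}$, this length scale comes out close to the observed minimum (a rough check, not a fit):

```python
import numpy as np
vF, tau_so = 1e5, 1.9e-11     # m/s and s; tau_so from the earlier order-of-magnitude estimate
lam_min = 2 * np.pi * vF * tau_so
print(f"lambda_min ~ {lam_min * 1e6:.0f} micrometres")   # ~12 um, near the ~10 um minimum in Fig. 4(b)
```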
This behavior can be explained at a qualitative level by considering spin relaxation in a reference frame that rotates at the Larmor frequency. Working in this frame effectively removes precession due to the external field, [*and*]{} it removes flips in $\vec{B}_{so}$ that occur at frequency $f_L$ due to bouncing between the channel walls. In other words, spin relaxation in the ballistic channel at the BSR condition is approximately mapped onto spin relaxation in a disordered 2D system at zero external magnetic field. In 2D at zero external field, one expects $\tau_{sr}\sim\tau_{so}^2/\tau_p\equiv \tau_{so}^2v_F/\lambda$ (Eq. (3)) to decrease with increasing $\lambda$ in the motional narrowing regime, i.e. for fast momentum relaxation, $\tau_p < \tau_{so}$. In the ballistic limit, $\tau_p > \tau_{so}$, on the other hand, one expects $\tau_{sr}$ to increase with $\lambda$ as $\tau_{sr}\sim\tau_p=\lambda/v_F$ because spins precess coherently between scattering events.[@Kiselev:2000; @Koop] It is the crossover from motional narrowing to ballistic regimes that gives rise to the non-monotonic behavior of $\tau_{sr}$ in Fig. 4(b).
Finally, we show that the particular type of disorder used to generate trajectories, and the type of scattering off channel walls, has only a small effect on the simulated spin relaxation curves. Figure \[disorder\] shows spin relaxation for three different types of disorder:
1. [*small-angle scattering*]{}. The direction of motion changed from timestep to timestep by a small angle that was Gaussian-distributed around zero, with standard deviation calculated to give the desired mean free path. This is believed to be the dominant scattering mechanism in high-mobility GaAs 2DEGs.[@Coleridge:1991; @juranphys]
2. [*large-angle scattering*]{} was implemented as a probability for complete randomization of momentum angle at each timestep. The probability was calculated to give the desired mean free path.
3. [*rough potential walls*]{}. Upon reflection off channel walls, the angle of reflection was randomly distributed around the angle of incidence with a spread of $\phi_{spec}$. $\phi_{spec}=0$ corresponds to specular scattering from channel walls. This effect is believed to be weak in electrostatically-defined GaAs 2DEG nanostructures, as shown by clear transverse focusing signals even up to high order, which require many specular bounces.[@VanHouten:1989]
Each curve shown in Fig. 5(b) corresponds to disorder from only one of the three mechanisms. The mean free path is $\lambda=10\mu$m in each case, confirmed by monitoring the autocorrelation of $k_x(t)$. As seen in the figure, the simulated spin relaxation time depends only slightly on the precise model of disorder, despite the importance of ballistic transport to the resonant dips in $\tau_{sr}(B_y)$. Figure 5(b) also compares BSR for the case of simple reflections from hard-wall channel boundaries to the more realistic case of soft walls, with a 150nm depletion length as might be expected in nanostructures defined by electrostatic surface gates. Small angle scattering is implemented to give $\lambda=10\mu$m in both hard-and soft-wall simulations. The difference between the hard- and soft-wall data is nearly indistinguishable, except for a small shift in the field at which the resonance dips occur.
[ The authors thank J.C. Egues, M. Lundeberg, and G. Usaj for valuable discussions. Work at UBC supported by NSERC, CFI, and CIFAR.]{}
---
abstract: 'For a vehicle moving in an $n$-dimensional Euclidean space, we present a construction of a hybrid feedback that guarantees both global asymptotic stabilization of a reference position and avoidance of an obstacle corresponding to a bounded spherical region. The proposed hybrid control algorithm switches between two modes of operation: stabilization (motion-to-goal) and avoidance (boundary-following). The geometric construction of the flow and jump sets of the hybrid controller, exploiting a hysteresis region, guarantees robust switching (chattering-free) between the stabilization and avoidance modes. Simulation results illustrate the performance of the proposed hybrid control approach for a 3-dimensional scenario.'
author:
- 'Soulaimane Berkane, Andrea Bisoffi, Dimos V. Dimarogonas [^1]'
bibliography:
- 'IEEEabrv.bib'
- 'Bibliography\_multiagent.bib'
date: September 2018
title: |
A Hybrid Controller for Obstacle Avoidance\
in an $n$-dimensional Euclidean Space
---
Introduction
============
The obstacle avoidance problem is a long-standing problem that has attracted the attention of the robotics and control communities for decades. In a typical robot navigation scenario, the robot is required to reach a given goal (destination) while avoiding collisions with a set of obstacle regions in the workspace. Since the pioneering work by Khatib [@khatib1986real] and the seminal work by Koditschek and Rimon [@koditschek1990robot], artificial potential fields or navigation functions have been widely used in the literature, see, [*e.g.,*]{} [@khatib1986real; @koditschek1990robot; @dimarogonas2006feedback; @filippidis2013navigation], to deal with the obstacle avoidance problem. The idea is to generate an artificial potential field that renders the goal attractive and the obstacles repulsive. Then, by considering trajectories that navigate along the negative gradient of the potential field, one can ensure that the system will reach the desired target from all initial conditions except for a set of measure zero. This is a well known topological obstruction to global asymptotic stabilization by continuous time-invariant feedback when the free state space is not diffeomorphic to a Euclidean space, see, e.g., [@wilson1967structure Thm. 2.2]. This topological obstruction then also occurs in the navigation transform [@loizou2017navigation] and (control)-barrier-function approaches [@prajna2007framework; @wieland2007constructive; @romdlony2016stabilization; @ames2017control].
To deal with such a limitation, the authors in [@sanfelice2006robust] have proposed a hybrid state feedback controller to achieve robust global asymptotic regulation, in $\mathbb{R}^2$, to a target while avoiding an obstacle. This approach has been exploited in [@poveda2018hybrid] to steer a planar vehicle to the source of an unknown but measurable signal while avoiding an obstacle. In [@braun2018unsafe], a hybrid control law has been proposed to globally asymptotically stabilize a class of linear systems while avoiding an unsafe single point in $\mathbb{R}^n$.
In this work, we propose a hybrid control algorithm for the global asymptotic stabilization of a single-integrator system that is guaranteed to avoid a non-point spherical obstacle. Our approach considers trajectories in an $n-$dimensional Euclidean space and we resort to tools from higher-dimensional geometry [@meyer2000matrix] to provide a construction of the flow and jump sets where the different modes of operation of the hybrid controller are activated.
Our proposed hybrid algorithm employs a hysteresis-based switching between the avoidance controller and the stabilizing controller in order to guarantee forward invariance of the obstacle-free region (related to safety) and global asymptotic stability of the reference position. The parameters of the hybrid controller can be tuned so that the hybrid control law matches the stabilizing controller in arbitrarily large subsets of the obstacle-free region.
Preliminaries are in Section \[section:preliminaries\], the problem is formulated in Section \[section:problem\], and our solution is in Sections \[section:controller\]-\[section:main\], with a numerical exemplification in Section \[section:example\]. All the proofs of the intermediate lemmas are in the appendix.
Preliminaries {#section:preliminaries}
=============
Throughout the paper, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^n$ is the $n$-dimensional Euclidean space and $\mathbb{S}^n$ is the $n$-dimensional unit sphere embedded in $\mathbb{R}^{n+1}$. The Euclidean norm of $x\in\mathbb{R}^n$ is defined as $\|x\|:=\sqrt{x^\top x}$ and the geodesic distance between two points $x$ and $y$ on the sphere $\mathbb{S}^n$ is defined by $\mathbf{d}_{\mathbb{S}^n}(x,y):=\arccos(x^\top y)$ for all $x,y\in\mathbb{S}^n$. The closure, interior and boundary of a set $\mathcal{A}\subset\mathbb{R}^n$ are denoted as $\overline{\mathcal{A}}, \mathcal{A}^\circ$ and $\partial\mathcal{A}$, respectively. The relative complement of a set $\mathcal{B}\subset\mathbb{R}^n$ with respect to a set $\mathcal{A}$ is denoted by $\mathcal{A}\setminus\mathcal{B}$ and contains the elements of $\mathcal{A}$ which are not in $\mathcal{B}$. Given a nonzero vector $z\in\mathbb{R}^n\setminus\{0\}$, we define the maps: $$\label{eq:proj-refl-maps}
\pi^\parallel(z):=\tfrac{zz^\top}{\|z\|^2},\, \pi^\perp(z):=\!I_n\!-\tfrac{zz^\top}{\|z\|^2},\, \rho^\perp(z)=\!I_n\!-2\tfrac{zz^\top}{\|z\|^2}$$ where $I_n$ is the $n\times n$ identity matrix. The map $\pi^\parallel(\cdot)$ is the parallel projection map, $\pi^\perp(\cdot)$ is the orthogonal projection map [@meyer2000matrix], and $\rho^\perp(\cdot)$ is the reflector map (also called Householder transformation). Consequently, for any $x\in\mathbb{R}^n$, the vector $\pi^\parallel(z)x$ corresponds to the projection of $x$ onto the line generated by $z$, $\pi^\perp(z)x$ corresponds to the projection of $x$ onto the hyperplane orthogonal to $z$ and $\rho^\perp(z)x$ corresponds to the reflection of $x$ about the hyperplane orthogonal to $z$. For each $z\in{\ensuremath{\mathbb{R}}}^n \setminus\{ 0\}$, some useful properties of these maps follow: $$\begin{aligned}
\label{eq:propLine1}
\pi^\parallel(z)z&=z,&\pi^\perp(z)\pi^\perp(z)&=\pi^\perp(z),\\
\label{eq:propLine2}
\pi^\perp(z)z&=0,&\pi^\parallel(z)\pi^\parallel(z)&=\pi^\parallel(z), \\
\label{eq:propLine3}
\rho^\perp(z)z&=-z,&\rho^\perp(z)\rho^\perp(z)&=I_n,\\
\label{eq:propLine4}
\pi^\perp(z)\pi^\parallel(z)&=0,&\pi^\perp(z)+\pi^\parallel(z)&=I_n, \\
\label{eq:propLine5}
\pi^\parallel(z)\rho^\perp(z)&=-\pi^\parallel(z),& 2\pi^\perp(z)-\rho^\perp(z)&=I_n,\\
\label{eq:propLine6}
\pi^\perp(z)\rho^\perp(z)&=\pi^\perp(z),& 2\pi^\parallel(z)+\rho^\perp(z)&=I_n.\end{aligned}$$ We define for $z\in{\ensuremath{\mathbb{R}}}^n\setminus\{ 0\}$ and $\theta\in{\ensuremath{\mathbb{R}}}$ the parametric map $$\begin{aligned}
\label{eq:def:piTheta}
\pi^\theta(z):=\cos^2(\theta)\pi^\perp(z)-\sin^2(\theta)\pi^\parallel(z).\end{aligned}$$
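These maps are straightforward to realize numerically; the following sketch (ours, for illustration only) implements them and spot-checks a few of the identities above:

```python
import numpy as np

def pi_par(z):
    """Parallel projection onto the line spanned by z."""
    z = np.asarray(z, dtype=float)
    return np.outer(z, z) / (z @ z)

def pi_perp(z):
    """Orthogonal projection onto the hyperplane orthogonal to z."""
    return np.eye(len(z)) - pi_par(z)

def rho_perp(z):
    """Reflection about that hyperplane (Householder transformation)."""
    return np.eye(len(z)) - 2 * pi_par(z)

def pi_theta(z, theta):
    """Parametric map of Eq. (def:piTheta)."""
    return np.cos(theta)**2 * pi_perp(z) - np.sin(theta)**2 * pi_par(z)

z = np.array([1.0, -2.0, 0.5])
assert np.allclose(pi_perp(z) @ z, 0)                     # pi_perp(z) z = 0
assert np.allclose(rho_perp(z) @ rho_perp(z), np.eye(3))  # reflections are involutions
assert np.allclose(pi_perp(z) + pi_par(z), np.eye(3))     # orthogonal decomposition of I_n
```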
![The helmet region (dark grey) defined in .[]{data-label="fig:helmet"}](helmet)
In –, we define for $v\in{\ensuremath{\mathbb{R}}}^n\setminus\{ 0\}$ some geometric subsets of $\mathbb{R}^n$, which are described after : $$\begin{aligned}
\label{eq:def:ball}
\mathcal{B}_\epsilon(c)&:=\{x\in\mathbb{R}^n: \|x-c\|\leq\epsilon\}, \\
\label{eq:def:line}
\mathcal{L}(c,v)&:=\{x\in\mathbb{R}^n: x=c+\lambda v, \lambda\in\mathbb{R}\}, \\
\label{eq:def:plane}
\mathcal{P}^{\bigtriangleup}(c,v)&:=\{x\in\mathbb{R}^n: v^\top(x-c)\bigtriangleup 0\},\\
\label{eq:def:cone}
\mathcal{C}^{\bigtriangleup}(c,v,\theta)&:=\{x\in\mathbb{R}^n:\!(x\!-c)^\top\!\pi^\theta\!(v)(x\!-c)\!\bigtriangleup\!0\}\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\{x\in\mathbb{R}^n:\cos^2(\theta)\|v\|^2\|x-c\|^2\!\bigtriangleup\! (v^\top(x-c))^2\} \nonumber\\
\label{eq:def:half_cone}
\mathcal{C}_{\bigtriangledown}^{\bigtriangleup}(c,v,\theta)&:=\mathcal{C}^{\bigtriangleup}(c,v,\theta)\cap\mathcal{P}^{\bigtriangledown}(c,v),\\
\label{eq:helmet}
\mathcal{H}(c,\epsilon,\epsilon^\prime,\mu)&:=\overline{\mathcal{B}_{\epsilon^\prime}(c)\setminus\mathcal{B}_{\epsilon}(c)\setminus\mathcal{B}_{\|\mu c\|}(\mu c)},\end{aligned}$$ where the symbols $\bigtriangleup$ and $\bigtriangledown$ can be selected as $\bigtriangleup\in\{=,<,>,\leq,\geq\}$ and $\bigtriangledown\in\{<,>,\leq,\geq\}$. The set $\mathcal{B}_\epsilon(c)$ in is the *ball* centered at $c\in\mathbb{R}^n$ with radius $\epsilon$. The set $\mathcal{L}(c,v)$ in is the $1-$dimensional *line* passing by the point $c\in\mathbb{R}^n$ and with direction parallel to $v$. The set $\mathcal{P}^=(c,v)$ in is the $(n-1)-$dimensional *hyperplane* that passes through a point $c\in\mathbb{R}^n$ and has normal vector $v$. The hyperplane $\mathcal{P}^=(c,v)$ divides the Euclidean space $\mathbb{R}^n$ into two closed sets $\mathcal{P}^{\geq}(c,v)$ and $\mathcal{P}^\leq(c,v)$. The set $\mathcal{C}^=(c,v,\theta)$ in is the right circular *cone* with vertex at $c\in\mathbb{R}^n$, axis parallel to $v$ and aperture $2\theta$. The set $\mathcal{C}^{\bigtriangleup}(c,v,\theta)$ in with $\leq$ as $\bigtriangleup$ (or $\geq$ as $\bigtriangleup$, respectively) is the region inside (or outside, respectively) the cone $\mathcal{C}^=(c,v,\theta)$. The plane $\mathcal{P}^=(c,v)$ divides the conic region $\mathcal{C}^{\bigtriangleup}(c,v,\theta)$ into two regions $\mathcal{C}^{\bigtriangleup}_{\leq}(c,v,\theta)$ and $\mathcal{C}^{\bigtriangleup}_{\geq}(c,v,\theta)$ in . The set $\mathcal{H}(c,\epsilon,\epsilon^\prime,\mu)$ in is called a [*helmet*]{} and is obtained by removing from the spherical shell (annulus) $\mathcal{B}_{\epsilon^\prime}(c)\setminus\mathcal{B}_{\epsilon}(c)$ the portion contained in the ball $\mathcal{B}_{\|\mu c\|}(\mu c)$, see Fig. \[fig:helmet\]. The following geometric fact will be used.
\[lemma:cones\] Let $c\in\mathbb{R}^n$ and $v_1,v_2\in\mathbb{S}^{n-1}$ be some arbitrary unit vectors such that $\mathbf{d}_{\mathbb{S}^{n-1}}(v_1,v_2)=\theta$ for some $\theta\in(0,\pi]$. Let $\psi_1,\psi_2\in[0,\pi]$ such that $\psi_1+\psi_2<\theta<\pi-(\psi_1+\psi_2)$. Then $$\begin{aligned}
\mathcal{C}^{\leq}(c,v_1,\psi_1)\cap\mathcal{C}^{\leq}(c,v_2,\psi_2)=\{c\}.\end{aligned}$$
Finally, we consider in this paper hybrid dynamical systems [@goebel2012hybrid], described through constrained differential and difference inclusions for state $X \in {\ensuremath{\mathbb{R}}}^n$: $$\label{Hybrid:general}
\begin{cases}
\dot X\in\mathbf{F}(X), &X\in\mathcal{F},\\
X^+\in \mathbf{J}(X), & X\in\mathcal{J}.
\end{cases}$$ The data of the hybrid system (i.e., the *flow set* $\mathcal{F}\subset\mathbb{R}^n$, the *flow map* $\mathbf{F}:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$, the *jump set* $\mathcal{J}\subset\mathbb{R}^n$, the *jump map* $\mathbf{J}:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$) is denoted as $\mathscr{H}=(\mathbf{\mathcal{F}},\mathbf{F},\mathcal{J},\mathbf{J})$.
Problem Formulation {#section:problem}
===================
We consider a vehicle moving in the $n$-dimensional Euclidean space according to the following single integrator dynamics: $$\begin{aligned}
\dot x=u\end{aligned}$$ where $x\in\mathbb{R}^n$ is the state of the vehicle and $u\in\mathbb{R}^n$ is the control input. We assume that in the workspace there exists an obstacle considered as a spherical region $\mathcal{B}_\epsilon(c)$ centered at $c\in\mathbb{R}^n$ and with radius $\epsilon>0$. The vehicle needs to avoid the obstacle while stabilizing its position to a given reference. Without loss of generality we consider $n\geq 2$ and take our reference position at $x=0$ (the origin)[^2].
\[assumption:obstacle\] $\|c\|>\epsilon>0$.
Assumption \[assumption:obstacle\] requires that the reference position $x=0$ is not inside the obstacle region, otherwise the following control objective would not be feasible. Our objective is indeed to design a control strategy for the input $u$ such that:
- the obstacle-free region $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon}(c)$ is forward invariant;
- the origin $x=0$ is globally asymptotically stable;
- for each $\epsilon^\prime>\epsilon$, there exist controller parameters such that the control law matches, in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, the law $u=-k_0x$ ($k_0>0$) used in the absence of the obstacle.
Objective i) guarantees that all trajectories of the closed-loop system are safely avoiding the obstacle by remaining outside the obstacle region. Objectives i) and ii), together, can not be achieved using a continuous feedback due to the topological obstruction discussed in the introduction. Objective iii) is the so-called [*semiglobal preservation*]{} property [@braun2018unsafe]. This property is desirable when the original controller parameters are optimally tuned and the controller modifications imposed by the presence of the obstacle should be as minimal as possible. Such a property is also accounted for in the quadratic programming formulation of [@wang2017safety III.A.]. The obstacle avoidance problem described above is solved via a hybrid feedback strategy in Sections \[section:controller\]-\[section:main\].
Proposed Hybrid Control Algorithm for Obstacle Avoidance {#section:controller}
========================================================
In this section, we propose a hybrid controller that switches suitably between an [*avoidance*]{} controller and a [*stabilizing*]{} controller. Let $m\in\{-1,0,1\}$ be a discrete variable dictating the control mode where $m=0$ corresponds to the activation of the stabilizing controller and $|m|=1$ corresponds to the activation of the avoidance controller, which has two configurations $m\in\{-1,1\}$. The proposed control input, depending on both the state $x\in\mathbb{R}^n$ and the control mode $m\in\{-1,0,1\}$, is given by the feedback law $$\label{eq:u}
\begin{aligned}
u& =\kappa(x,m):=\begin{cases}
-k_0 x, & m=0\\
- k_m \pi^\perp(x-c)(x-p_m),&m \in\{-1,\, 1\}
\end{cases}
\end{aligned}$$ where $k_m>0$ (with $m\in\{-1,0,1\}$) and $p_m\in\mathbb{R}^n$ (with $m\in\{-1,1\}$) are design parameters. During the stabilization mode ($m=0$), the control input above steers $x$ towards $x=0$. During the avoidance mode ($|m|=1$), the control input above minimizes the distance to the *auxiliary* attractive point $p_m$ [*while*]{} maintaining a constant distance to the center of the ball $\mathcal{B}_{\epsilon}(c)$, thereby avoiding to hit the obstacle. This is done by projecting the feedback $-k_m(x-p_m)$ on the hyperplane orthogonal to $(x-c)$. This control strategy resembles the well-known path planning Bug algorithms (see, [*e.g.,*]{} [@lumelsky1990incorporating]) where the motion planner switches between motion-to-goal objective and boundary-following objective. We refer the reader to Fig. \[fig:flowAndJumpSets\] from now onward for all of this section. For $\theta>0$ (further bounded in ), the points $p_1, p_{-1}$ are selected to lie on the cone[^3] $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$: $$\begin{aligned}
\label{eq:p-1}
p_1\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\} \text{ and } p_{-1}:=-\rho^\perp(c)p_1.\end{aligned}$$ Note that, by , $p_{-1}$ opposes $p_1$ diametrically with respect to the axis of the cone $\mathcal{C}^=_\leq(c,c,\theta)$ and also belongs to $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$ as shown in the following lemma.
\[lemma:p-1\] $p_{-1}\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}.$
The logic variable $m$ is selected according to a hybrid mechanism that exploits a suitable construction of the flow and jump sets. This hybrid selection is obtained through the hybrid dynamical system
\[eq:hs\_1obs\] $$\begin{aligned}
&\left\{\begin{aligned}
\dot x&=\kappa(x,m)\\
\dot m&=0
\end{aligned}\right.&&(x,m)\in\bigcup_{m \in \{-1,0,1\}} \!\!\!\! \mathcal{F}_m \times \{m\}\label{eq:hs_1obs:flowMap}\\
&\left\{\begin{aligned}
x^+ &=x\\
m^+ &\in\mathbf{M}(x,m)
\end{aligned}\right.&&(x,m)\in\bigcup_{m \in \{-1,0,1\}} \!\!\!\! \mathcal{J}_m \times \{m\}. \label{eq:hs_1obs:JumpMap}\end{aligned}$$ The flow and jump sets for each mode $m\in\{-1,0,1\}$ are defined as (see for the definition of the helmet $\mathcal{H}$): $$\begin{aligned}
\label{eq:F0}
& \mathcal{F}_0:=\overline{\mathbb{R}^n\setminus(\mathcal{J}_0\cup\mathcal{B}_{\epsilon}(c))}, & & \\
\label{eq:J0}
&\mathcal{J}_0:=\mathcal{H}(c,\epsilon,\epsilon_s,1/2), & &\\
\label{eq:Fm}
&\mathcal{F}_m:=\mathcal{H}(c,\epsilon,\epsilon_h,\mu)\cap\mathcal{C}_\leq^\geq(c,p_m-c,\psi), & & |m|=1,\\
\label{eq:Jm}
&\mathcal{J}_m:=\overline{\mathbb{R}^n\setminus(\mathcal{F}_m\cup\mathcal{B}_{\epsilon}(c))},& & |m|=1,\end{aligned}$$ see their depiction in Fig. \[fig:flowAndJumpSets\], and the (set-valued) jump map is defined as $$\begin{aligned}
\mathbf{M}(x,0)&\!:=\left\{m^\prime\!\in\!\{-1,1\}\colon x\in\mathcal{C}^{\geq}(c,p_{m^\prime}\!-c,\bar\psi)\right\} \label{eq:hs_1obs:M(x,0)}\\
\mathbf{M}(x,m)&\!:= 0, \quad \text{ for } m\in \{-1,1\},\end{aligned}$$
![2D illustration of flow and jump sets considered in Sections \[section:controller\]-\[section:main\].[]{data-label="fig:flowAndJumpSets"}](newFlowJumpSets-v3.pdf){width="\columnwidth"}
where $\epsilon_s$, $\epsilon_h$, $\mu$, $\theta$, $\psi$, $\bar \psi$ are design parameters selected later as in Assumption \[assumption:parameters\]. Before we state our main result, a discussion motivating the above construction of flow and jump sets is in order.
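To fix ideas before that discussion, the mode-switching rule defined by the jump map above admits a compact implementation. The sketch below is a hypothetical Python rendering of ours; it tests cone membership through the quadratic form $x\in\mathcal{C}^{\geq}(c,v,\psi)\Leftrightarrow(x-c)^\top\big(\pi^\perp(v)-\sin^2(\psi)I_n\big)(x-c)\geq 0$, consistent with the characterization used in the Appendix proofs.

```python
import numpy as np

def proj_orth(v):
    """Orthogonal projector onto the hyperplane orthogonal to v."""
    v = np.asarray(v, dtype=float)
    return np.eye(v.size) - np.outer(v, v) / np.dot(v, v)

def in_cone_geq(x, c, v, psi):
    """Test x in C^>=(c, v, psi): angle between (x - c) and the line spanned by v is >= psi."""
    d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
    pi_psi = proj_orth(v) - np.sin(psi) ** 2 * np.eye(d.size)
    return float(d @ pi_psi @ d) >= 0.0

def jump_map(x, m, c, p, psi_bar):
    """Set-valued jump map M(x, m): any admissible avoidance mode from m = 0, back to 0 otherwise."""
    if m in (-1, 1):
        return {0}
    return {mp for mp in (-1, 1)
            if in_cone_geq(x, c, np.asarray(p[mp]) - np.asarray(c), psi_bar)}
```

For points in $\mathcal{J}_0$ at least one of the two cones contains $x$ (as shown in the proof of Lemma \[lemma:hbc\]), so the returned set is nonempty there.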
During the stabilization mode $m=0$, the closed-loop system should not flow when $x$ is close enough to the surface of the obstacle region $\mathcal{B}_{\epsilon}(c)$ and the vector field $-k_0x$ points inside $\mathcal{B}_{\epsilon}(c)$. Indeed, by computing the time derivative of $\|x-c\|^2$, we can obtain the set where the stabilizing vector field $-k_0x$ causes a decrease in the distance $\|x-c\|^2$ to the centre of the obstacle region $\mathcal{B}_\epsilon(c)$. This set is characterized by the inequality $$\begin{aligned}
\label{eq:dist_to_c_decreases}
-k_0 x^\top(x-c)\leq 0 \Longleftrightarrow
\left\|x-{c}/{2}\right\|^2\geq\left\|{c}/{2}\right\|^2.\end{aligned}$$ The closed set in corresponds to the region outside the ball $\mathcal{B}_{\|c/2\|}(c/2)$. Therefore, to keep the vehicle safe during the stabilization mode, we define around the obstacle a helmet region $\mathcal{H}(c,\epsilon,\epsilon_s,1/2)$, which is used as the jump set $\mathcal{J}_0$ in . In other words, if during the stabilization mode the vehicle hits this [*safety helmet*]{}, then the controller jumps to avoidance mode. The amount $\epsilon_s-\epsilon$ represents the thickness of the safety helmet that defines the jump set $\mathcal{J}_0$.
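For completeness, the equivalence above follows from $k_0>0$ by completing the square: $$-k_0\,x^\top(x-c)\leq 0
\;\Longleftrightarrow\;
x^\top x - x^\top c\geq 0
\;\Longleftrightarrow\;
\left\|x-{c}/{2}\right\|^2
= x^\top x - x^\top c + \left\|{c}/{2}\right\|^2
\geq \left\|{c}/{2}\right\|^2.$$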
During the avoidance mode $|m|=1$, we want our controller to slide on the helmet $\mathcal{H}(c,\epsilon,\epsilon_h,\mu)$ while maintaining a constant distance to the center $c$. Note that, with $\epsilon_h>\epsilon_s$ and $\mu<1/2$, the helmet $\mathcal{H}(c,\epsilon,\epsilon_h,\mu)$ (see also Fig. \[fig:helmet\]) is an [*inflated*]{} version of the helmet $\mathcal{H}(c,\epsilon,\epsilon_s,1/2)$ and creates a hysteresis region useful to prevent infinitely many consecutive jumps (Zeno behavior). Let us then characterize in the following lemma the equilibria of the avoidance vector field $\kappa(x,m)= - k_m \pi^\perp(x-c)(x-p_m)$ ($|m|=1$).
\[lemma:equilibria\] For each $x\in{\ensuremath{\mathbb{R}}}^n \setminus \{c\}$ and $m \in\{-1, 1\}$, $\pi^\perp(x-c)(x-p_m)=0$ if and only if $x\in\mathcal{L}(c,p_m-c)$.
Since we want the trajectories to leave the set $\mathcal{F}_m$ during the avoidance mode, it is necessary to select the point $p_m$ and the flow set $\mathcal{F}_m$ such that $\mathcal{L}(c,p_m-c)\cap\mathcal{F}_m = \emptyset$ for each $m\in\{-1,1\}$, otherwise trajectories can stay in the avoidance mode indefinitely. This motivates the intersection with the conic region in and Lemma \[lemma:empty1\], in view of which we pose the following assumption.
\[assumption:parameters\] The parameters in are selected as: $$\begin{aligned}
& \epsilon_h \in \big(\epsilon, \sqrt{\epsilon\|c\|}\big) &&\epsilon_s\in(\epsilon, \epsilon_h)
&&\mu \in (\mu_{\min},1/2) \label{ineq:eps_h,eps_s}\\
&\theta\in(0,\theta_{\max})
&& \psi\in(0,\psi_{\max}) \label{ineq:parameters}
&& \bar\psi \in(\psi,\psi_{\max})\end{aligned}$$ where $\mu_{\min}$, $\theta_{\max}$ and $\psi_{\max}$ are defined as $$\begin{aligned}
&\mu_{\min}:=\frac{1}{2} \frac{\epsilon_h^2+ \| c \|^2 - 2 \epsilon \| c \|}{\| c \|^2 - \epsilon \| c \|} \in (0,{1}/{2}),\\
&\theta_{\max}:=\arccos\left(\frac{\epsilon_h^2+\| c \|^2(1-2 \mu)}{2\epsilon \| c\|(1-\mu)}\right) \in (0,{\pi}/{2}) \label{eq:theta_max},\\
& \psi_{\max} := \min(\theta,\pi/2-\theta) \in (0,\pi/4).
\label{eq:psi_max}\end{aligned}$$
The intervals in – are well defined and can be checked in the following order. The intervals of $\epsilon_h$ and $\epsilon_s$ are well defined by Assumption \[assumption:obstacle\]. Then, those of $\mu_{\min}$, $\mu$, $\theta_{\max}$ (where $\theta_{\max}>0$ follows directly from $\mu > \mu_{\min}$), $\theta$, $\psi_{\max}$ and, finally, those of $\psi$ and $\bar \psi$ (corresponding to $0< \psi < \bar \psi < \psi_{\max}$) are also well defined.
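This ordered feasibility check is straightforward to automate. The following Python sketch (a helper of ours, not part of the paper) computes $\mu_{\min}$, $\theta_{\max}$ and $\psi_{\max}$ from the formulas above and verifies that a candidate parameter tuple satisfies Assumption \[assumption:parameters\]:

```python
import numpy as np

def check_parameters(c, eps, eps_h, eps_s, mu, theta, psi, psi_bar):
    """Verify the intervals of Assumption [assumption:parameters] for the ball B_eps(c)."""
    nc = float(np.linalg.norm(np.asarray(c, dtype=float)))
    if not (eps < eps_h < np.sqrt(eps * nc) and eps < eps_s < eps_h):
        return False
    mu_min = 0.5 * (eps_h**2 + nc**2 - 2.0 * eps * nc) / (nc**2 - eps * nc)
    if not (mu_min < mu < 0.5):
        return False
    # the arccos argument is < 1 exactly when mu > mu_min, so theta_max is well defined here
    theta_max = np.arccos((eps_h**2 + nc**2 * (1.0 - 2.0 * mu)) / (2.0 * eps * nc * (1.0 - mu)))
    if not (0.0 < theta < theta_max):
        return False
    psi_max = min(theta, np.pi / 2.0 - theta)
    return 0.0 < psi < psi_bar < psi_max
```

For instance, with the parameter values of the numerical example in Section \[section:example\] (where $c=(1,1,1)$ and $\epsilon=0.7$), the check returns `True`.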
\[lemma:empty1\] Under Assumption \[assumption:parameters\], $\mathcal{F}_m\cap\mathcal{L}(c,p_m-c)=\emptyset$, for $m\in\{-1,1\}$.
Main Result {#section:main}
===========
In this section, we state and prove our main result, which corresponds to the objectives discussed in Section \[section:problem\]. Let us first write the flow/jump sets and maps more compactly: $$\begin{aligned}
\label{eq:hs_1obs:flowJumpSets}
\mathcal{F}:=\!\!\!\! \bigcup_{m \in \{-1,0,1\}} \!\!\!\! & \mathcal{F}_m \times \{m\},\, \mathcal{J}:=\!\!\!\! \bigcup_{m \in \{-1,0,1\}} \!\!\!\!\mathcal{J}_m \times \{m\}\\
(x,m) & \mapsto \mathbf{F}(x,m) := (\kappa(x,m),0),\\
(x,m) & \mapsto \mathbf{J}(x,m) := (x,\mathbf{M}(x,m)).\end{aligned}$$ The mild regularity conditions satisfied by the hybrid system , as in the next lemma, guarantee the applicability of many results in the proof of our main result.
\[lemma:hbc\] The hybrid system with data $(\mathcal{F},\mathbf{F},\mathcal{J},\mathbf{J})$ satisfies the hybrid basic conditions in [@goebel2012hybrid Ass. 6.5].
Let us define the obstacle-free set $\mathcal{K}$ and the attractor $\mathcal{A}$ as: $$\label{eq:KandA}
\mathcal{K}:=\overline{\mathbb{R}^n\setminus\mathcal{B}_\epsilon(c)}\times\{-1,0,1\},\quad {\ensuremath{\mathcal{A}}}:=\{0\}\times\{0\}.$$ Our main result is given in the following theorem.
\[theorem:invariance\] Consider the hybrid system under Assumptions \[assumption:obstacle\]-\[assumption:parameters\]. Then,
- all maximal solutions do not have finite escape times, are complete in the ordinary time direction, and the obstacle-free set $\mathcal{K}$ in is forward invariant;
- the set ${\ensuremath{\mathcal{A}}}$ in is globally asymptotically stable;
- for each $\epsilon^\prime>\epsilon$, it is possible to tune the hybrid controller parameters so that the resulting hybrid feedback law matches, in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, the law $u=-k_0x$.
Theorem \[theorem:invariance\] shows that the three objectives discussed in Section \[section:problem\] are fulfilled.
Proof of Theorem \[theorem:invariance\]
---------------------------------------
$$\begin{array}{ll}
\toprule
\text{Set to which $x$ belongs}& \mathbf{T}_{\mathcal{F}_0}(x)\\
\midrule
\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}^\circ_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\\
\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)& \mathcal{P}^\geq(0,x-c)\\
(\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}^\circ_{\epsilon_s}(c))\setminus\mathcal{B}_\epsilon(c) & \mathcal{P}^\leq(0,x-c/2)\\
\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\cap\mathcal{P}^\leq(0,x-c/2)\\
\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\partial\mathcal{B}_{\epsilon_s}(c) & \mathcal{P}^\geq(0,x-c)\cup\mathcal{P}^\leq(0,x-c/2)\\
\bottomrule
\end{array}$$ $$\begin{array}{ll}
\toprule
\text{Set to which $x$ belongs}& \mathbf{T}_{\mathcal{F}_{\bar m}}(x) \\
\midrule
\partial\mathcal{B}_\epsilon(c)\!\setminus\!\mathcal{B}_{\|\mu c\|}(\mu c)\!\setminus\!\mathcal{C}^\leq(c,p_{\bar m}\!-\!c,\psi) & \mathcal{P}^\geq(0,x-c)\\
\partial\mathcal{B}_{\epsilon_h}\!(c)\!\setminus\!\mathcal{B}_{\|\mu c\|}\!(\mu c)\!\setminus\!\mathcal{C}^\leq(c,p_{\bar m}\!-\!c,\psi)
& \mathcal{P}^\leq(0,x-c)\\
\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}^\circ_{\epsilon_h}(c)\setminus\mathcal{B}_{\epsilon}(c) & \mathcal{P}^\geq(0,x-\mu c)\\
\mathcal{C}^=_{\le}(c,p_{\bar m}-c,\psi)\cap\mathcal{B}^\circ_{\epsilon_h}(c)\setminus\mathcal{B}_{\epsilon}(c) & \mathcal{P}^\geq(0,n_{\bar m}(x))\\
\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|\mu c\|}(\mu c) & \mathcal{P}^\geq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,x-\mu c)\\
\partial\mathcal{B}_{\epsilon_h}(c)\cap\partial\mathcal{B}_{\|\mu c\|}(\mu c) & \mathcal{P}^\leq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,x-\mu c)\\
\partial\mathcal{B}_{\epsilon}(c)\cap\mathcal{C}^=(c,p_{\bar m}-c,\psi) & \mathcal{P}^\geq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,n_{\bar m}(x))\\
\partial\mathcal{B}_{\epsilon_h}(c)\cap\mathcal{C}^=(c,p_{\bar m}-c,\psi) & \mathcal{P}^\leq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,n_{\bar m}(x))\\
\bottomrule
\end{array}$$
To prove item i), we resort to [@chai2018forward Thm. 4.3]. We first establish for $\mathscr{H}$ in the relationships invoked in [@chai2018forward Thm. 4.3], and we refer the reader to Fig. \[fig:flowAndJumpSets\] for a two-dimensional visualization. In particular, the boundary of the flow set $\mathcal{F}$ is given by $\partial\mathcal{F}=\{(x,m):x\in\partial\mathcal{F}_m\}$, where the sets $\partial\mathcal{F}_0$ and $\partial\mathcal{F}_m, m\in\{-1,1\}$, are $$\begin{aligned}
\nonumber
\partial\mathcal{F}_0&=\big(\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}_{\|c/2\|}(c/2)\big)
\cup\big(\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)\big)\\
&\quad\cup\big((\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}_{\epsilon_s}(c))\setminus\mathcal{B}_\epsilon(c)\big),\\
\nonumber
\partial\mathcal{F}_m&=\big((\partial\mathcal{B}_\epsilon(c)\cup\partial\mathcal{B}_{\epsilon_h}(c))\setminus\mathcal{B}_{\|\mu c\|}(\mu c)\setminus\mathcal{C}^\leq_\le(c,p_m-c,\psi)\big)\\
&\cup\big((\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cup\mathcal{C}^=_\leq(c,p_m-c,\psi))\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)\big).\end{aligned}$$ The tangent cone[^4], evaluated at the boundary of $\mathcal{F}$, is given in Table \[eq:tangent\_cone\]. Consider $m=0$ and let $z:=\kappa(x,0)=-k_0x$. If $x\in\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}^\circ_{\|c/2\|}(c/2)$ then one has $(x-c)^\top z=-k_0x^\top(x-c)>0$ (since $x \in \mathcal{B}^\circ_{\| c/2 \|}(c/2)$, see ), i.e., $z\in\mathcal{P}^>(0,x-c)$. If $x\in(\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}^\circ_{\epsilon_s}(c))\setminus\mathcal{B}_{\epsilon}(c)$ then one has $(x-c/2)^\top z=-k_0x^\top(x-c/2)=-k_0x^\top c/2=-k_0\|x\|^2/2 \le 0$ since $x^\top(x-c)=0$ from $\|x-c/2\|=\|c/2\|$. Then, $z\in\mathcal{P}^\le (0,x-c/2)$. If $x\in\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|c/2\|}(c/2)$ or $x\in\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\partial\mathcal{B}_{\epsilon_s}(c)$ then $z^\top(x-c)=0$ and $z^\top(x-c/2)=-k_0\|x\|^2/2\leq 0$ showing, respectively, that $z\in\mathcal{P}^\geq(0,x-c)\cap\mathcal{P}^\leq(0,x-c/2)$. Finally, if $x\in\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)$, then one has $(x-c)^\top z=-k_0x^\top(x-c)< 0$ (since $x\notin \mathcal{B}_{\| c / 2\|}(c/2)$), i.e., $z\in\mathcal{P}^<(0,x-c)$. Let $\mathcal{L}_0:=\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)$. Therefore, by all the previous arguments, $$\label{eq:tgConeBndF0}
\begin{aligned}
x\in\mathcal{L}_0 & \implies \kappa(x,0)\cap\mathbf{T}_{\mathcal{F}_0}(x)=\emptyset\\
x\in\partial\mathcal{F}_0\setminus\mathcal{L}_0 & \implies \kappa(x,0)\subset\mathbf{T}_{\mathcal{F}_0}(x).
\end{aligned}$$ Consider then $m\in\{-1,1\}$ and let now $z:=\kappa(x,m)=-k_m\pi^\perp(x-c)(x-p_m)$. If $x\in\partial\mathcal{B}_{\epsilon}(c)$ or $x\in\partial\mathcal{B}_{\epsilon_h}(c)$ then one has $(x-c)^\top z=-k_m(x-c)^\top\pi^\perp(x-c)(x-p_m)=0$, which implies that both $z\in\mathcal{P}^\geq(0,x-c)$ and $z\in\mathcal{P}^\leq(0,x-c)$. Define $n_m(x):=\pi^{\psi}(p_m-c)(x-c)$, which is a normal vector to the cone $\mathcal{C}^=(c,p_m-c,\psi)$ at $x$. If $x\in\mathcal{C}^=_\leq(c,p_m-c,\psi)$, then[^5] $$\begin{aligned}
&n_m(x)^\top z=-k_m n_m(x)^\top\pi^\perp(x-c)(x-p_m)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(x-c)^\top\pi^{\psi}(p_m-c) \pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:def:piTheta},\eqref{eq:propLine4}}{=}\!k_m(x-c)^\top\!(\pi^\perp\!(p_m-c)\! -\!\sin^2(\psi)I_n )\pi^\perp\!(x-c)(p_m-c)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(x-c)^\top\pi^\perp(p_m-c)\pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:propLine4}}{=} k_m(x-c)^\top\pi^\perp(p_m-c)\big(I_n - \pi^\parallel(x-c)\big)(p_m-c)\\
&\overset{\eqref{eq:propLine2}}{=}-k_m(x-c)^\top\pi^\perp(p_m-c)\pi^\parallel(x-c)(p_m-c)\\
&\overset{\eqref{eq:proj-refl-maps}}{=}-k_m\frac{(x-c)^\top\pi^\perp(p_m-c)(x-c)}{\|x-c\|^2}(x-c)^\top(p_m-c)\geq 0\end{aligned}$$ where the last bound follows from $\pi^\perp(p_m-c)$ positive semidefinite and $(x-c)^\top(p_m-c)\leq 0$ (since $x\in\mathcal{C}^=_\leq(c,p_m-c,\psi)\subset\mathcal{P}^{\leq}(c,p_m-c)$). Hence, $z\in\mathcal{P}^\geq(0,n_m(x))$. Finally, let $x\in\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)$. With $\theta_{\max}$ in , we have
\[eq:proofRel\] $$\begin{aligned}
& 0 \le c^\top (c- x) \le \cos(\theta_{\max}) \| c \| \| x- c\| \label{eq:proofRel1} \\
& |(x-c)^\top (p_m -c)| \le \| x-c \| \| p_m - c\| \label{eq:proofRel2}\\
& c^\top (p_m - c) = - \cos(\theta) \| c \| \| p_m - c \| \label{eq:proofRel3}\end{aligned}$$
where the bounds in follow from in the proof of the previous Lemma \[lemma:empty1\], $\mu<1/2$, and $x\in\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c) \subset \mathcal{H} (c,\epsilon,\epsilon_h, \mu)$; follows from $p_m \in \mathcal{C}^=_\leq(c,c,\theta)$ (by and Lemma \[lemma:p-1\]). So $$\begin{aligned}
&(x-\mu c)^\top z=-k_m(x-\mu c)^\top\pi^\perp(x-c)(x-p_m)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(c-\mu c)^\top\pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:proj-refl-maps}}{=}k_m(1-\mu)(c^\top(p_m-c)+\\
&\qquad c^\top(c-x)(x-c)^\top(p_m-c)/\|x-c\|^2)\\
&\overset{\eqref{eq:proofRel}}{\leq} k_m(1-\mu)(-\cos(\theta)+\cos(\theta_{\max}))\|c\|\|p_m-c\|<0
\end{aligned}$$ since $k_m>0$, $1-\mu >0$ (from ) and $\theta < \theta_{\max}$ (from ). $(x-\mu c)^\top z < 0$ implies then $z\in\mathcal{P}^<(0,x-\mu c)$. Let $\mathcal{L}_m:=\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)$. Therefore, by all the previous arguments, $$\label{eq:tgConeBndFm}
\begin{aligned}
x\in\mathcal{L}_m & \implies \kappa(x,m)\cap\mathbf{T}_{\mathcal{F}_m}(x)=\emptyset\\
x\in\partial\mathcal{F}_m\setminus\mathcal{L}_m & \implies \kappa(x,m)\subset\mathbf{T}_{\mathcal{F}_m}(x).
\end{aligned}$$ We can now apply [@chai2018forward Thm. 4.3]. With $\mathcal{K}$ in , let $\hat{\mathcal{F}}:=\partial(\mathcal{K}\cap\mathcal{F})\setminus\mathcal{L}$ with $\mathcal{L}:=\{(x,m)\in\partial\mathcal{F}: \mathbf{F}(x,m)\cap\mathbf{T}_{\mathcal{F}}(x,m)=\emptyset\}$. By and and $\mathcal{K}\cap\mathcal{F}=\mathcal{F}$, we have $\hat{\mathcal{F}}=\cup_{m=-1,0,1}(\partial\mathcal{F}_m\setminus\mathcal{L}_m)\times\{m\}$ and $\mathcal{L}=\cup_{m=-1,0,1}\mathcal{L}_m\times\{m\}$. It follows from and that for every $(x,m)\in\hat{\mathcal{F}}$, $\mathbf{F}(x,m)\subset\mathbf{T}_{\mathcal{F}}(x,m)$. Also, $\mathbf{J}(\mathcal{K}\cap\mathcal{J})\subset\mathcal{K}$, $\mathcal{F}$ is closed, the map $\mathbf{F}$ satisfies the hybrid basic conditions as proven in Lemma \[lemma:hbc\] and it is, moreover, locally Lipschitz since it is continuously differentiable. We conclude then that the set $\mathcal{K}$ is forward pre-invariant [@chai2018forward Def. 3.3]. In addition, since $\mathcal{L}_0\subset\mathcal{J}_0$ and $\mathcal{L}_m\subset\mathcal{J}_m$ with $m\in\{-1,1\}$, one has $\mathcal{L}\subset\mathcal{J}$. Besides, finite escape times can only occur through flow, and since the sets $\mathcal{F}_{-1}$ and $\mathcal{F}_1$ are bounded by their definitions in , finite escape times cannot occur for $x \in \mathcal{F}_{-1} \cup \mathcal{F}_1$. They can neither occur for $x \in \mathcal{F}_{0}$ because they would make $x^\top x$ grow unbounded, and this would contradict that $\tfrac{d}{dt}(x^\top x) \le 0$ by the definition of $\kappa(x,0)$ and by . Therefore, all maximal solutions do not have finite escape times. By [@chai2018forward Thm. 4.3] again, the set $\mathcal{K}$ is actually forward invariant [@chai2018forward Def. 3.3], and solutions are complete. Finally, we anticipate here a straightforward corollary of completeness and Lemma \[lemma:finiteJumps\] below: since the number of jumps is finite by Lemma \[lemma:finiteJumps\], all maximal solutions to are actually complete in the ordinary time direction.
Now, we will prove item ii) in two steps. First, we prove in the following Lemma \[lemma:GAS\_jumpless\] that the set ${\ensuremath{\mathcal{A}}}$ is globally asymptotically stable for the system without jumps. To this end, the *jumpless system* has data $
\mathscr{H}^0 =(\mathbf{F}, \mathcal F, \emptyset, \emptyset )
$ with flow map $\mathbf{F}$ and flow set $\mathcal F$ defined in . We emphasize that $\mathscr{H}^0$ is obtained in accordance to [@goebel2009hybrid Eqq. (38)-(39)] by identifying *all* jumps with events.
\[lemma:GAS\_jumpless\] ${\ensuremath{\mathcal{A}}}$ in is globally asymptotically stable for the jumpless hybrid system $\mathscr{H}^0$.
Second, we prove in the following Lemma \[lemma:finiteJumps\] that the number of jumps is finite for the given hybrid dynamics in .
\[lemma:finiteJumps\] For $\mathscr{H}$ in , each solution starting in $\mathcal{K}$ experiences no more than $3$ jumps.
Based on Lemmas \[lemma:GAS\_jumpless\]-\[lemma:finiteJumps\], global asymptotic stability of ${\ensuremath{\mathcal{A}}}$ follows straightforwardly from [@goebel2009hybrid Thm. 31] since the hybrid system in satisfies the Basic Assumptions [@goebel2009hybrid p. 43], as proven in Lemma \[lemma:hbc\], the set ${\ensuremath{\mathcal{A}}}$ is compact and has empty intersection with the jump set. Lastly, to prove item iii), let $\epsilon^\prime>\epsilon$. Select the parameter $\epsilon_h\in(\epsilon,\min(\epsilon^\prime,\sqrt{\epsilon\|c\|}))$ while all other hybrid controller parameters are selected as in Assumption \[assumption:parameters\]. Then this implies that the flow sets $\mathcal{F}_m, m\in\{-1,1\},$ of the avoidance mode are entirely contained in $\mathcal{B}_{\epsilon^\prime}(c)$. Therefore, as long as the state $x$ remains in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, solutions are enforced to flow only with the stabilizing mode $m=0$, which corresponds to the feedback law $u=-k_0x$.
Numerical example {#section:example}
=================
We illustrate our results through a three-dimensional example. The hybrid system in is fully specified by the following parameters. The obstacle has center $c=(1,1,1)$ and radius $\epsilon=0.700$. The controller gains are $k_m= 1$ for $m \in \{-1,0,1\}$. The parameters used in the construction of the flow and jump sets are $\epsilon_h = 0.901$, $\epsilon_s = 0.800$, $\mu=0.444$, $\theta= 0.276$, which satisfy Assumption \[assumption:parameters\]. To select a point $p_1\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$, we proceed as follows. Select $v\in\mathbb{S}^{n-1}$ such that $v^\top c=0$ and consider $\mathbf{R}(v,\theta)\in\mathbb{SO}(3)$, i.e., an orthogonal rotation matrix specified by axis $v$ and angle $\theta$. Then, one can verify that $p_1=(I_3-\mathbf{R}(v,\theta))c$ lies on the cone $\mathcal{C}^=_\leq(c,c,\theta)$. By letting $v=(0,1,-1)/\sqrt{2}$, we determine $p_1=(0.424,-0.155,-0.155)$ and $p_{-1}=(-0.348,0.231,0.231)$ as in . We also select $\psi= 0.249$ and $\bar\psi = 0.266$, which satisfy Assumption \[assumption:parameters\]. Fig. \[figure:construction\] shows that the objectives posed in Section \[section:problem\] and proven in Theorem \[theorem:invariance\] are fulfilled. The top part of the figure illustrates the relevant sets. The middle part shows that the origin is globally asymptotically stable, and the control law matches the stabilizing one sufficiently away from the obstacle. The bottom part shows that the solutions are safe since they all stay away from the obstacle set $\mathcal{B}_\epsilon(c)$.
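The construction of the auxiliary points can be reproduced numerically. The sketch below (our own code, not from the paper) builds $\mathbf{R}(v,\theta)$ from the Rodrigues formula and checks that $p=(I_3-\mathbf{R}(v,\theta))c$ indeed lies on $\mathcal{C}^=_\leq(c,c,\theta)$, i.e., $(p-c)^\top\pi^\theta(c)(p-c)=0$ and $c^\top(p-c)\leq 0$. Flipping the sign of the rotation axis produces the diametrically opposed point, so which of the two resulting points is labelled $p_1$ and which $p_{-1}$ depends on the orientation convention adopted for $\mathbf{R}(v,\theta)$.

```python
import numpy as np

def rodrigues(v, theta):
    """Rotation matrix about the unit axis v by angle theta (Rodrigues formula)."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

c = np.array([1.0, 1.0, 1.0])
theta = 0.276
v = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)       # unit axis orthogonal to c
p = (np.eye(3) - rodrigues(v, theta)) @ c

# membership in C^=_<=(c, c, theta): vanishing quadratic form and non-positive inner product
pi_theta = np.eye(3) - np.outer(c, c) / (c @ c) - np.sin(theta) ** 2 * np.eye(3)
print(p)                                            # one of the two auxiliary points
print((p - c) @ pi_theta @ (p - c), c @ (p - c))    # ~ 0 (up to round-off) and <= 0
```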
![Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey). Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.[]{data-label="figure:construction"}](sim_sets3.png "fig:"){width="0.30\columnwidth"} ![Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey). Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.[]{data-label="figure:construction"}](sim_sets2.png "fig:"){width="0.25\columnwidth"} ![Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey). Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.[]{data-label="figure:construction"}](sim_sets1.png "fig:"){width="0.30\columnwidth"}\
\
![Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey). Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.[]{data-label="figure:construction"}](sim_sol.png "fig:"){width=".97\columnwidth"}\
![Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey). Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.[]{data-label="figure:construction"}](obstDist.png "fig:"){width=".97\columnwidth"}
Appendix {#section:appendix}
========

All the lemmas are proven in this appendix.
### Proof of Lemma \[lemma:cones\]
Let $x_i\in\mathcal{C}^{\leq}(c,v_i,\psi_i)\setminus\{c\}, i=1,2,$ and be otherwise arbitrary. Define then $z_i:=(x_i-c)/\|x_i-c\|\in\mathbb{S}^{n-1}$ for $i=1,2$. Hence, $z_i\in\mathcal{S}_i$ with $\mathcal{S}_i:=\mathcal{C}^{\leq}(0,v_i,\psi_i)\cap\mathbb{S}^{n-1}$, $i=1,2$. Since $z_i\in\mathcal{C}^{\leq}(0,v_i,\psi_i)$, either $\mathbf{d}_{\mathbb{S}^{n-1}}(v_i, z_i) \le \psi_i$ (upper half cone) or $\mathbf{d}_{\mathbb{S}^{n-1}}(-v_i, z_i) \le \psi_i$ (lower half cone), for $i=1,2$. Consider all possible cases.
If $\mathbf{d}_{\mathbb{S}^{n-1}}(v_i, z_i) \le \psi_i$ for both $i=1,2$, then it follows from the triangle inequality that $
\theta = \mathbf{d}_{\mathbb{S}^{n-1}} (v_1,v_2) \leq \mathbf{d}_{\mathbb{S}^{n-1}}(v_1,z_1)+\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)+\mathbf{d}_{\mathbb{S}^{n-1}}(v_2,z_2)\leq\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)+\psi_1+\psi_2.
$ Hence, in view of the condition $\psi_1+\psi_2<\theta$, $\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)>0$.
If, on the other hand, $\mathbf{d}_{\mathbb{S}^{n-1}}(-v_1, z_1) \le \psi_1$ and $\mathbf{d}_{\mathbb{S}^{n-1}}(v_2, z_2) \le \psi_2$, we have $
\pi-\theta=\mathbf{d}_{\mathbb{S}^{n-1}}(-v_1,v_2)\leq\mathbf{d}_{\mathbb{S}^{n-1}}(-v_1,z_1)+\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)+\mathbf{d}_{\mathbb{S}^{n-1}}(v_2,z_2) \leq\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)+\psi_1+\psi_2.
$ Hence, in view of the condition $\theta<\pi-(\psi_1+\psi_2)$, $\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)>0$.
The two cases of $\mathbf{d}_{\mathbb{S}^{n-1}}(-v_1, z_1) \le \psi_1$ and $\mathbf{d}_{\mathbb{S}^{n-1}}(-v_2, z_2) \le \psi_2$, and of $\mathbf{d}_{\mathbb{S}^{n-1}}(v_1, z_1) \le \psi_1$ and $\mathbf{d}_{\mathbb{S}^{n-1}}(-v_2, z_2) \le \psi_2$ lead analogously to the same conclusion. $\mathbf{d}_{\mathbb{S}^{n-1}}(z_1,z_2)>0$ implies that the sets $\mathcal{S}_1$ and $\mathcal{S}_2$ (and in turn $\mathcal{C}^{\leq}(c,v_i,\psi_i)\setminus\{c\}, i=1,2$) are disjoint.
### Proof of Lemma \[lemma:p-1\]
By and , $p_{-1}-c=-\rho^\perp(c)(p_1-c)$. We can then show the claim. First, since $\rho^\perp(c)\pi^\theta(c)\rho^\perp(c)=\pi^\theta(c)$ (by , , ) and $p_1\in\mathcal{C}^=(c,c,\theta)\setminus\{c\}$, $$\begin{aligned}
&(p_{-1}\!-\!c)^\top\!\pi^\theta(c)(p_{-1}\!-\!c)=(p_1\!-\!c)^\top\!\rho^\perp(c)\pi^\theta(c)\rho^\perp(c)(p_1\!-\!c)\\
&=(p_1-c)^\top\pi^\theta(c)(p_1-c)=0,\end{aligned}$$ i.e., $p_{-1}\in\mathcal{C}^=(c,c,\theta)\setminus\{c\}$. Second, by $p_1\in\mathcal{P}^\leq(c,c)$, $$\begin{aligned}
c^\top(p_{-1}-c)=-c^\top\rho^\perp(c)(p_1-c)=c^\top(p_1-c)\leq 0,\end{aligned}$$ i.e., $p_{-1}\in\mathcal{P}^\leq(c,c)$. Therefore, $p_{-1}\in(\mathcal{C}^=(c,c,\theta)\setminus\{c\})\cap\mathcal{P}^\leq(c,c):=\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$.
### Proof of Lemma \[lemma:equilibria\]
Let $m$ be either $-1$ or $1$. The $\Longleftarrow$ implication is straightforward. As for the $\Longrightarrow$ implication, let $x\in\mathbb{R}^n\setminus\{c\}$ be such that $\pi^\perp(x-c)(x-p_m)=0$, which is equivalent to $\pi^\perp(x-c)(p_m-c)=0$. By the definition of the map $\pi^\perp(\cdot)$, one obtains $\|x-c\|^2(p_m-c)=(p_m-c)^\top(x-c)(x-c)$. However, $(p_m-c)^\top(x - c) \neq 0$, otherwise we would have $p_m = c$ (not true by and Lemma \[lemma:p-1\]). Therefore, by letting $\lambda = \| x - c\|^2 /\big( (p_m-c)^\top (x-c) \big)$ in , one deduces that $x\in\mathcal{L}(c,p_m-c)$.
### Proof of Lemma \[lemma:empty1\]
Let $m$ be either $-1$ or $1$. To deduce the claim, we prove first the relations:
\[eq:facts\] $$\begin{aligned}
\label{eq:fact1}
&\mathcal{L}(c,p_m-c)\subset\mathcal{C}^=(c,c,\theta),\\
\label{eq:fact2}
&\mathcal{L}(c,p_m-c)\setminus\{c\}\subset\mathcal{C}^<(c,p_m-c,\psi),\\
\label{eq:fact3}
&\big(\mathcal{L}(c,p_m\!-\!c)\!\cap\!\mathcal{P}^\geq(c,p_m\!-\!c)\big)\subset\big(\mathcal{L}(c,p_m\!-\!c)\!\cap\!\mathcal{P}^\leq(c,c)\big),\\
\label{eq:fact4}
&\mathcal{H}(c,\epsilon,\epsilon_h,\mu)\cap\mathcal{C}^=_\leq(c,c,\theta)=\emptyset.\end{aligned}$$
As for , let $x\in\mathcal{L}(c,p_m-c)$. Then there exists $\lambda$ such that $x-c=\lambda(p_m-c)$ and, hence, $$\begin{aligned}
(x-c)^\top\pi^{\theta}(c)(x-c)=\lambda^2(p_m-c)^\top\pi^{\theta}(c)(p_m-c)=0 \end{aligned}$$ since $p_m\in\mathcal{C}^=(c,c,\theta)$ by and Lemma \[lemma:p-1\], so is proven. As for , let $x\in\mathcal{L}(c,p_m-c)\setminus\{c\}$. Then there exists $\lambda\neq 0$ such that $x-c=\lambda(p_m-c)$ and, hence, $$\begin{aligned}
&(x-c)\!^\top\!\pi^{\psi}(p_m\!-c)(x-c)\!=\!\lambda^2(p_m\!-c)\!^\top\!\pi^{\psi}\!(p_m\!-c)(p_m-c)\\
&=-\lambda^2\sin^2(\psi)\|p_m-c\|^2<0 \end{aligned}$$ by , , , so is proven. As for , let $x\in\mathcal{L}(c,p_m-c)\cap\mathcal{P}^\geq(c,p_m-c)$. Then there exists $\lambda\geq 0$ such that $x-c=\lambda(p_m-c)$ and, hence, $$\begin{aligned}
c^\top(x-c)=\lambda c^\top(p_m-c)=-\lambda\cos(\theta)\|c\|\|p_m-c\|\leq 0\end{aligned}$$ where we used $p_m\in\mathcal{C}^=_\leq(c,c,\theta)$ and $0<\theta<\theta_{\max}<\pi/2$ by Assumption \[assumption:parameters\]. Hence, one has $x\in\mathcal{P}^\leq(c,c)$, so is proven. As for , let $x\in\mathcal{H}(c,\epsilon,\epsilon_h,\mu)$, then $x\in\mathcal{B}_{\epsilon_h}(c)$, $x\in\overline{\mathbb{R}^n\setminus\mathcal{B}_{\|\mu c\|}(\mu c)}$, and $x\in\overline{\mathbb{R}^n\setminus\mathcal{B}_{\epsilon}(c)}$ by . So, $$\label{eq:whenInH(epsilon_h,mu)}
\begin{aligned}
&c^\top(c-x)=\frac{\|x-c\|^2+(1-\mu)^2\|c\|^2-\|x-\mu c\|^2}{2(1-\mu)}\\
&\leq\frac{\epsilon_h^2+(1-\mu)^2\|c\|^2-\mu^2\|c\|^2}{2(1-\mu)}\!\!=\!\!\frac{\epsilon_h^2+\|c\|^2(1-2\mu)}{2(1-\mu)} \\
&=\cos(\theta_{\max})\epsilon\|c\|\leq\cos(\theta_{\max})\|x-c\| \|c\|.
\end{aligned}$$ However, for all $z$, $z\in\mathcal{C}^=_\leq(c,c,\theta)$ is equivalent to $(z-c)^\top\pi^{\theta}(c)(z-c)=0$ and $c^\top(z-c)\leq 0$, i.e., $c^\top(z-c)=-\cos(\theta)\|z-c\|\|c\|<-\cos(\theta_{\max})\|z-c\| \|c\|$ by $\theta\in(0,\theta_{\max})$ in Assumption \[assumption:parameters\]. Then, by comparing with , $x\notin\mathcal{C}^=_\leq(c,c,\theta)$, so is proven. Thanks to , the claim of the lemma is deduced as follows: $$\begin{aligned}
&\mathcal{F}_m\cap\mathcal{L}(c,p_m-c)\\
&=(\mathcal{F}_m\cap\mathcal{L}(c,p_m-c)\cap\mathcal{P}^\geq(c,p_m-c))\\
&\qquad\qquad \cup(\mathcal{F}_m\cap\mathcal{L}(c,p_m-c)\cap\mathcal{P}^<(c,p_m-c))\\
&\overset{\eqref{eq:facts},\eqref{eq:def:half_cone}}{\subset}(\mathcal{F}_m\cap\mathcal{C}^=_\leq(c,c,\theta))\cup(\mathcal{F}_m\cap\mathcal{C}^<_<(c,p_m-c,\psi))\\
&\overset{\eqref{eq:Fm}}{=}\mathcal{F}_m\cap\mathcal{C}^=_\leq(c,c,\theta)\\
&\overset{\eqref{eq:Fm}}{=}\mathcal{H}(c,\epsilon,\epsilon_h,\mu)\cap\mathcal{C}_\leq^\geq(c,p_m-c,\psi)\cap\mathcal{C}^=_\leq(c,c,\theta)=\emptyset.
\end{aligned}$$
### Proof of Lemma \[lemma:hbc\]
$\mathcal{F}$ and $\mathcal{J}$ are closed subsets of ${\ensuremath{\mathbb{R}}}^{n}\times\{-1,0,1\}$. $\mathbf{F}$ is a continuous function in $\mathcal F$ (hence, it is outer semicontinuous and locally bounded relative to $\mathcal{F}$, $\mathcal{F} \subset \operatorname{dom}\mathbf{F}$, and $\mathbf{F}(x,m)$ is convex for every $(x,m)\in\mathcal{F}$). $\mathbf{J}$ has a closed graph in $\mathcal{J}$, is locally bounded relative to $\mathcal{J}$ and is nonempty on $\mathcal{J}$. In particular, let us show that $\mathbf{M}(x,0) \neq \emptyset$ for all $x \in \mathcal{J}_0$.
We preliminarily show that $\cap_{m=-1,1}\mathcal{C}^{\leq}(c,p_m-c,\bar\psi)=\{ c \}$. Let $v_m=(p_m-c)/\|p_m-c\|$, and substitute in $$\begin{aligned}
v_1^\top v_{-1}&=\frac{(p_1-c)^\top(p_{-1}-c)}{\|p_1-c\|\|p_{-1}-c\|}=\frac{-(p_1-c)^\top\rho^\perp(c)(p_1-c)}{\|p_1-c\|\|\rho^\perp(c)(p_1-c)\|}\\
&=-\frac{(p_1-c)^\top(2\pi^\theta(c)-\cos(2\theta)I_n)(p_1-c)}{\|p_1-c\|\|p_1-c\|}\\
&=\cos(2\theta)\frac{(p_1-c)^\top(p_1-c)}{\|p_1-c\|\|p_1-c\|}=\cos(2\theta)\end{aligned}$$ where we have used, in this order, the facts that $\rho^\perp(c)=2\pi^\theta(c)-\cos(2\theta)I_n$, $\rho^\perp(c)\rho^\perp(c)=I_n$ and $(p_1-c)^\top\pi^\theta(c)(p_1-c)=0$ (since $p_1\in\mathcal{C}^=(c,c,\theta)$ is implied by ). Then, by Lemma \[lemma:cones\] and $2\bar\psi<2\theta<\pi-2\bar\psi$ (from and , $\bar \psi < \min(\theta,\pi/2-\theta)$), $\cap_{m=-1,1}\mathcal{C}^{\leq}(c,p_m-c,\bar\psi)=\{ c \}$. Hence, it can be shown by a contradiction argument that $\cup_{m=-1,1}\mathcal{C}^{\geq}(c,p_m-c,\bar\psi)=\mathbb{R}^n$. Therefore, in view of , the set $\mathbf{M}(x,0)$ is nonempty.
Finally, $\mathbf{M}(x,0)$ has a closed graph since the construction in allows $\mathbf{M}$ to be set-valued whenever $x\in\cap_{m=-1,1}\mathcal{C}^{\geq}(c,p_m-c,\bar\psi)\cap \mathcal{J}_0$.
### Proof of Lemma \[lemma:GAS\_jumpless\]
Consider the Lyapunov function $$\label{eq:V}
\mathbf{V}(x,m):= m^2/2 + \| x - p_m \|^2/2,$$ with $p_0:=0$ and $p_{m}$ ($m\in\{-1,1\}$) defined in . One has $\mathbf{V}(x,m)=0$ for all $(x,m) \in {\ensuremath{\mathcal{A}}}$ in , $\mathbf{V}(x,m)>0$ for all $(x,m) \notin {\ensuremath{\mathcal{A}}}$, and is radially unbounded relative to $\mathcal{F} \cup \mathcal{J}$. Straightforward computations show that $$\begin{aligned}
& \langle \nabla \mathbf{V} (x,0), \mathbf{F}(x,0) \rangle =-k_0 x^\top x < 0 \quad \forall x \in \mathcal{F}_0 \setminus \{0 \}
\\
& \begin{aligned}
&\langle \nabla \mathbf{V} (x,m), \mathbf{F}(x,m) \rangle = -k_m (x-p_m)^\top \pi^\perp(x-c) (x-p_m) \\
& =\! -k_m \| \pi^\perp\! (x-c) (x-p_m) \|^2 \! < \! 0\quad \forall m \in\{-1,1\}, x \in \mathcal{F}_m.
\end{aligned} $$ The last inequality follows from projection matrices being positive semidefinite and Lemma \[lemma:equilibria\], which implies that it cannot be $\langle \nabla \mathbf{V} (x,m), \mathbf{F}(x,m) \rangle = 0$ for $m \in\{-1,1\}$ and all $x \in \mathcal{F}_m$ since $\mathcal{L}(c,p_m - c)$ is excluded from $\mathcal{F}_m$ by Lemma \[lemma:empty1\]. All the above conditions satisfied by $\mathbf{V}$ suffice to conclude global asymptotic stability of ${\ensuremath{\mathcal{A}}}$ for $\mathscr{H}^0$ since ${\ensuremath{\mathcal{A}}}$ is compact and $\mathscr{H}^0$ satisfies [@goebel2012hybrid Ass. 6.5].
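For completeness, the second equality in the display above only uses that the projector $\pi^\perp(x-c)$ is symmetric and idempotent: $$(x-p_m)^\top\pi^\perp(x-c)(x-p_m)=(x-p_m)^\top\pi^\perp(x-c)^\top\pi^\perp(x-c)(x-p_m)=\|\pi^\perp(x-c)(x-p_m)\|^2.$$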
### Proof of Lemma \[lemma:finiteJumps\]
We prove, case by case, that the number of jumps, denoted $N$, does not exceed $3$.
*(i)* [**Case $m(0,0)=0$.**]{} Let us define the disjoint sets $$\begin{aligned}
&\mathcal{R}_a:=\mathcal{C}_\geq^{\leq}(0,c,\gamma)\setminus\mathcal{B}_{\|c/2\|}(c/2)\setminus\mathcal{B}_{\epsilon_s}(c), \label{eq:Ra}\\
&\mathcal{R}_b:=\mathcal{F}_0 \setminus(\mathcal{R}_a\cup\mathcal{J}_0)\end{aligned}$$ with $\cos(\gamma):=\sqrt{1-{\epsilon_s^2}/{\|c\|^2}}$ (well-defined by Assumption \[assumption:obstacle\] and ). Note that $\mathcal{R}_a\cup\mathcal{R}_b\cup\mathcal{J}_0=\overline{\mathbb{R}^n\setminus\mathcal{B}_\epsilon(c)}$. *(i.1)* $x(0,0)\in\mathcal{R}_b$: Solutions can only flow. Consider then the jumpless hybrid system in ${\ensuremath{\mathbb{R}}}^n$ with data $(-k_0x,\mathcal{R}_b,\emptyset,\emptyset)$ and let us show that maximal solutions are complete. Since finite escape times are excluded, it is sufficient (by, e.g., [@goebel2012hybrid Prop. 2.10]) to show that the viability condition $\{-k_0 x\}\subset\mathbf{T}_{\overline{\mathcal{R}_b}}(x)$ holds for all $x\in\partial\mathcal{R}_b$, with $$\begin{aligned}
\partial\mathcal{R}_b&=\big( \partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}_{\|c/2\|}(c/2)\big)
\cup
\big(\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}_{\epsilon_s}(c)\\
&\setminus\mathcal{B}_\epsilon(c)\big)
\cup
\big(\mathcal{C}^=_\geq(0,c,\gamma)\setminus\mathcal{B}_{\|c/2\|}(c/2)\big)\end{aligned}$$ and $\mathbf{T}_{\overline{\mathcal{R}_b}}(x)$ in the following table.
$$\begin{array}{ll}
\toprule
\text{Set to which $x$ belongs}& \mathbf{T}_{\overline{\mathcal{R}_b}}(x)\\
\midrule
\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}^\circ_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\\
(\partial\mathcal{B}_{\|c/2\|}\!(c/2) \cap \mathcal{B}^\circ_{\epsilon_s}\!(c))\!\setminus\!\mathcal{B}_\epsilon(c) & \mathcal{P}^\leq(0,x-c/2)\\
\mathcal{C}^=_\geq(0,c,\gamma)\setminus\mathcal{B}_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,\pi^\gamma(c)x)\\
\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\cap\mathcal{P}^\leq(0,x-c/2)\\
\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{C}^=_\geq(0,c,\gamma) & \mathcal{P}^\geq(0,\pi^\gamma(c)x)\cup\mathcal{P}^\leq(0,x-c/2)\\
\bottomrule
\end{array}$$
Let $z:=-k_0x$ and let us show that $z\in\mathbf{T}_{\overline{\mathcal{R}_b}}(x)$ for all $x \in \partial \mathcal{R}_b$. If $x\in\mathcal{B}_{\|c/2\|}(c/2)$, then $z^\top(x-c)=-k_0 x^\top(x-c)\geq 0$, hence $z\in\mathcal{P}^\geq(0,x-c)$. If $x\in\partial\mathcal{B}_{\|c/2\|}(c/2)$, then $z^\top (x-c/2)=-k_0x^\top(x-c/2)=-k_0x^\top c/2=-k_0\|x\|^2/2 \le 0$, hence $z\in\mathcal{P}^\leq(0,x-c/2)$. Finally, if $x\in\mathcal{C}^=_\geq(0,c,\gamma)$, then $z^\top\pi^\gamma(c)x=-k_0 x^\top\pi^\gamma(c)x=0$ implying that $z\in\mathcal{P}^\ge(0,\pi^\gamma(c)x)$, where $\pi ^\gamma(c) x$ is a normal vector to $\mathcal{C}^=_\geq(0,c,\gamma)$ at $x$. By combining these cases and inspecting the previous table, the above viability condition holds for all $x \in \partial \mathcal{R}_b$, hence solutions are complete. Therefore, $N=0$ for each solution with this initial condition. *(i.2)* $x(0,0)\in\mathcal{R}_a$: We argue that $\mathcal{J}_0$ is reached in finite time. Let us preliminarily show that $$\label{eq:RaBndry}
\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{C}_\geq^{<}(0,c,\gamma)\subset \mathcal{B}^\circ_{\epsilon_s}(c).$$ Let $x\in\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{C}_{\geq}^<(0,c,\gamma)$. Since $x\in\partial\mathcal{B}_{\|c/2\|}(c/2)$, one has $\|x-c/2\|^2=\|c/2\|^2$, i.e., $\|x\|^2=c^\top x$. Besides, since $x\in\mathcal{C}_{\geq}^<(0,c,\gamma)$, one has $c^\top x=\|x\|^2>\cos(\gamma)\|x\|\|c\|$, i.e., $-\|x\|^2<\epsilon_s^2 -\|c\|^2$ by the definition of $\cos(\gamma)$ in *(i)*. By $\| x \|^2 = c^\top x$ and the last bound, we have $\|x-c\|^2=\|c\|^2-\|x\|^2<\epsilon_s^2$, i.e., $x\in\mathcal{B}^\circ_{\epsilon_s}(c)$. Therefore, by and , it can only be $\partial\mathcal{R}_a = \big( \mathcal{C}^=_\geq (0,c,\gamma ) \setminus \mathcal{B}_{\| c/2 \|} (c/2)\big) \cup\big(\partial \mathcal{B}_{\epsilon_s} (c) \setminus \mathcal{B}^\circ_{\| c/2 \|} (c/2)\big)$.
Then, note that maximal solutions to with the current initial condition are complete by item i), previously proven. Since $\mathbf{V}(x,0):=\| x \|^2/2$ in is strictly decreasing along the flow in $\overline{\mathcal{R}_a}$ and is bounded from below, such complete solutions cannot flow indefinitely in $\mathcal{R}_a \times \{ 0 \}$ and must leave this set in finite time. On the other hand, they cannot leave through $\mathcal{C}^=_\geq(0,c,\gamma)\setminus \mathcal{B}_{\| c/2 \|}(c/2)$. Indeed, for all $x\in\mathcal{C}^=_\geq(0,c,\gamma)$, $(-k_0 x)^\top\pi^\gamma(c)x=0$ and thus $\{-k_0x\}\in\mathcal{P}^=(0,\pi^\gamma(c)x)\subset\mathcal{P}^\leq(0,\pi^\gamma(c)x)$ which is the tangent cone of $\mathcal{R}_a$ at $x$ ($\pi^\gamma(c)x$ is defined in item *(i.1)*). It follows that solutions must leave $\mathcal{R}_a$ through $\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}^\circ_{\|c/2\|}(c/2)\subset\mathcal{J}_0$, that is, they reach $\mathcal{J}_0$ in finite time. From there, the analysis boils down to that in item *(i.3)*. Therefore, $N = 2$ for each solution with this initial condition. *(i.3)* $x(0,0)\in\mathcal{J}_0$: According to the jump map, $m(0,1)= m^\prime$ for some $ m^\prime\in \{ -1, 1\}$ and the jump map in ensures $x(0,0)=x(0,1) \in \mathcal{C}^\ge (c,p_{m^\prime}-c, \bar \psi)$. Therefore, since we selected $\bar \psi>\psi$ in , one has $x(0,1) \in \mathcal{C}^\ge (c,p_{m^\prime}-c, \bar \psi)\cap \mathcal{J}_0 \subset \mathcal{C}^>(c,p_{m^\prime}-c,\psi)\cap\mathcal{J}_0\subset\mathcal{F}_{ m^\prime} \setminus \mathcal{J}_{ m^\prime}$. Hence, $x(0,1) \in \mathcal{F}_{ m^\prime} \setminus \mathcal{J}_{ m^\prime}$, thereby excluding a further consecutive jump. We show in item *(ii.2)* that after a flow, one jump is experienced. Therefore, $N = 2$ for each solution with this initial condition. *(ii)* [**Case $m(0,0)=\bar m\in\{-1,1\}$.**]{} *(ii.1)* $x(0,0)\in\mathcal{J}_{\bar m}$: According to the jump map, one has $m(0,1)=0$ and the cases *(i.1)*, *(i.2)*, or *(i.3)* can occur. Therefore $N\le 3$ for each solution with this initial condition. *(ii.2)* $x(0,0)\in \mathcal{F}_{\bar m}\setminus\mathcal{J}_{\bar m}$. An argument similar to that in *(i.2)* concludes that solutions to with this initial condition must leave $\mathcal{F}_{\bar m}\setminus\mathcal{J}_{\bar m}$ in finite time. Indeed, solutions are complete by Theorem \[theorem:invariance\] and $\mathbf{V}(x,\bar m)$ in is strictly decreasing along the flow in $\mathcal{F}_{\bar m}$ by the proof in Lemma \[lemma:GAS\_jumpless\] and bounded from below, so solutions cannot flow indefinitely in $\mathcal{F}_{\bar m}$. Then, by similar arguments as in the previously proven item i) of Theorem \[theorem:invariance\], solutions can reach in finite time only the set $\mathcal{L}_{\bar m}$ (defined there, above ). However, $\mathcal{L}_{\bar m} \subset \mathcal{R}_b$, and we have shown in item *(i.1)* that no jumps are experienced in $\mathcal{R}_b$. Therefore, $N=1$ for each solution with this initial condition. Because all the possible cases for $x$ and $m$ are covered without circularity, we conclude then that each solution starting in $\mathcal{K}$ experiences no more than 3 jumps.
[^1]: This research was supported in part by the Swedish Research Council (VR), the European Research Council (ERC) through ERC StG BUCOPHSYS, the Swedish Foundation for Strategic Research (SSF), the EU H2020 Co4Robots project, and the Knut and Alice Wallenberg Foundation (KAW). The authors are with the Department of Automatic Control, KTH Royal Institute of Technology, Sweden. `berkane@kth.se` (S. Berkane), `bisoffi@kth.se` (A. Bisoffi), `dimos@kth.se` (D. V. Dimarogonas).
[^2]: \[footnote:n=1\] For $n=1$ (i.e., the state space is a line), global asymptotic stabilization with obstacle avoidance is physically impossible to solve via any feedback.
[^3]: Following the remark in Footnote \[footnote:n=1\], note that the set $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$ is nonempty for all $n\geq 2$.
[^4]: For the definition of tangent cone, see [@goebel2012hybrid Def. 5.12 and Fig. 5.4].
[^5]: Each (in)equality is obtained thanks to the relationship reported over it. \[note:overset\]
---
abstract: 'Using fundamental-measure density functional theory we investigate entropic wetting in an asymmetric binary mixture of hard spheres with positive non-additivity. We consider a general planar hard wall, where preferential adsorption is induced by a difference in closest approach of the different species and the wall. Close to bulk fluid-fluid coexistence the phase rich in the minority component adsorbs either through a series of first-order layering transitions, where an increasing number of liquid layers adsorbs sequentially, or via a critical wetting transition, where a thick film grows continuously.'
author:
- Paul Hopkins
- Matthias Schmidt
date: '17 January 2011, submitted to Phys. Rev. E, Rapid Communication'
title: 'First-order layering and critical wetting transitions in non-additive hard sphere mixtures'
---
Studying the interfacial properties of liquid mixtures is of significant fundamental and technological relevance [@bonn2009wetting]. Bulk liquid-liquid phase separation, which can arise at or close to room temperature, is usually associated with rich phenomenology of interfacial behaviour at a substrate. Gaining a systematic understanding of how the different types of intermolecular and of substrate-molecule interactions induce phenomena such as wetting, layering, and drying at substrates constitutes a major theoretical challenge. Relevant for surface adsorption of liquids are Coulombic and dispersion forces, but also solvent-mediated and depletion interactions which occur in complex liquids. Arguably the most important source for the emergence of structure in dense liquids is the short-ranged repulsion between the constituent particles; this may stem from the overlap of the outer electron shells in molecular systems or from screened charges or steric stabilization in colloidal dispersions.
Hard sphere fluids form invaluable reference models for investigating the behaviour of liquids at substrates. Both the pure [@hansen2006tsl; @roth2010fundamental] and binary [@roth2000binary] hard sphere fluids are relevant, the latter playing an important role when adding e.g. electrostatic interactions in order to study wetting of ionic liquids at a substrate [@oleksyhansen]. The most general binary mixture is characterized by independent hard core distances between all different pairs of species, and is referred to as the non-additive hard sphere (NAHS) model. Here the cross-species interaction distance can be smaller or larger than the arithmetic mean of the like-species diameters. The NAHS model gives a simplified representation of more realistic pair potentials; e.g., charge renormalisation effects in ionic mixtures in an explicit solvent induce non-additive effective interactions between the ions [@kalcher]. It is also a reference model to which attractive or repulsive tails can be added [@referenceNAHS]. The Asakura-Oosawa-Vrij (AOV) model of colloids and non-adsorbing polymers [@asakuraoosawavrij] is a special case where one of the diameters (that of the polymers) vanishes.
It is surprising that the wetting behaviour of the general NAHS model is largely unknown, given the fundamental status of the model. In this Letter we address this problem and consider the NAHS fluid at a general, non-additive hard wall. We find a rich phenomenology of interfacial phase transitions, including two distinct types of surface transitions: one is layering, where the adsorption of one of the phases occurs through a number of abrupt jumps, and the other is critical wetting, where the thickness of the adsorbed film grows continuously when varying the statepoint along the bulk fluid-fluid binodal. By changing the wall properties, a crossover between these transitions occurs.
![\[fig:diagram\] (a) Illustration of the asymmetric NAHS model with positive non-additivity. The solid boundaries represent the hard cores of the small and big species. The dotted line represents the non-additive hard core between unlike species, which here is attributed only to smaller particles. (b) Three examples of general planar hard walls. The additive wall treats the two species equally, while the $b$-type and $s$-type walls have properties similar to the big and small particles, respectively.](fig1.pdf){width="8cm"}
The binary NAHS model is defined by the pair potentials $v_{ij}(r)=\infty$ for $r<\sigma_{ij}$ and $0$ otherwise, where $i,j$ = $s,b$ refers to the small and big species, respectively, $\sigma_{ss}$ and $\sigma_{bb}$ are the diameters of the small and big particles, respectively, and $r$ is the center-to-center distance. The cross-species diameter is $\sigma_{sb}=\tfrac{1}{2}(1+\Delta)(\sigma_{ss}+\sigma_{bb})$, where $\Delta\geq-1$ measures the degree of non-additivity, see Fig. \[fig:diagram\](a) for an illustration of the length scales. The model is characterised by the size ratio, $q=\sigma_{ss}/\sigma_{bb}\leq1$, and by $\Delta$. In this Letter we restrict ourselves to the asymmetric size ratio $q=0.5$, and to positive non-additivity $\Delta=0.2$, as a representative case. We relate $\Delta$ to a length scale via $d=\tfrac{1}{2}(\sigma_{ss}+\sigma_{bb})\Delta\equiv\sigma_{sb}-
\tfrac{1}{2}(\sigma_{ss}+\sigma_{bb})$, where here $d=0.3\sigma_{ss}$. The statepoint is characterised by two partial bulk packing fractions, $\eta_i=\pi\sigma_{ii}^3\rho_i/6$, where $\rho_i$ is the number density of species $i$. We define a general planar hard wall via the external potentials $u_i(z)=\infty$ if $z<l_i$, and 0 otherwise; here $z$ is the distance between the wall and the particle center, and $l_i$ is the minimal distance of approach of species $i=s,b$. Clearly the origin in $z$ is irrelevant, so the only further control parameter is the wall offset, $\delta
l=l_b-l_s$. For additive hard sphere mixtures it is common to set $l_i=\sigma_{ii}/2$; for our model parameters this results in $\delta
l=0.5\sigma_{ss}$. Besides this ‘additive wall’, two further special cases are shown in Fig. \[fig:diagram\](b). The $b$-type wall has properties similar to the big particles so that it sees these with their ‘intrinsic’ size $l_b=\sigma_{bb}/2$, but sees the small particles with their ‘non-additive’ size $l_s=\sigma_{ss}/2+d$, such that $\delta l=0.2\sigma_{ss}$. We expect that the bigger particles adsorb more strongly to this wall. Conversely, the $s$-type wall has properties similar to the small particles, so that it sees these with their ‘intrinsic’ size $l_s=\sigma_{ss}/2$, and sees the big particles with their ‘non-additive’ size $l_b=\sigma_{bb}/2+d$, so that $\delta
l=0.8\sigma_{ss}$. Thus, one expects the small particles to adsorb more strongly.
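To make the wall parametrization concrete, the short Python sketch below (our own, in units of $\sigma_{ss}$) computes the cross-species diameter $\sigma_{sb}$, the non-additivity length $d$, and the wall offsets $\delta l$ for the three wall types introduced above:

```python
q, Delta = 0.5, 0.2                                    # size ratio and non-additivity
sig_ss = 1.0                                           # unit of length
sig_bb = sig_ss / q
sig_sb = 0.5 * (1.0 + Delta) * (sig_ss + sig_bb)       # cross-species diameter, 1.8 sigma_ss
d = 0.5 * (sig_ss + sig_bb) * Delta                    # non-additivity length, 0.3 sigma_ss

walls = {                                              # (l_s, l_b) for each wall type
    "b-type":   (sig_ss / 2 + d, sig_bb / 2),
    "additive": (sig_ss / 2,     sig_bb / 2),
    "s-type":   (sig_ss / 2,     sig_bb / 2 + d),
}
for name, (l_s, l_b) in walls.items():
    print(name, "delta l / sigma_ss =", l_b - l_s)     # 0.2, 0.5 and 0.8, respectively
```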
We investigate the inhomogeneous NAHS fluid using a fundamental measure density functional theory [@schmidt2004rfn; @hopkins2010binary]. Comparison of theoretical results to Monte Carlo simulation data for bulk fluid-fluid phase diagrams [@schmidt2004rfn; @hopkins2010binary], partial radial distribution functions [@schmidt2004rfn; @ayadim2010generalization] and density profiles in planar slits [@hopkins2010all] indicates very good quantitative agreement. We obtain equilibrium density distributions $\rho_i(z)$ from the grand potential functional, $\Omega[\rho_s,\rho_b]$, by numerical solution of $\delta\Omega/\delta\rho_i(z)=0$, $i=s,b$. To calculate coexisting (bulk or surface) states we use the equality of the chemical potentials $\mu_s$, $\mu_b$, and $\Omega$ in the two phases. The NAHS functional [@schmidt2004rfn] features both a large number of terms and a large number of convolutions that take account of the non-locality. Therefore the accurate calculation of density profiles close to phase coexistence, and close to interfacial transitions, is a challenging task.
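The numerical solution of $\delta\Omega/\delta\rho_i(z)=0$ is typically carried out by a damped Picard (fixed-point) iteration of the Euler-Lagrange equations. The sketch below is a schematic illustration of ours and is not the production code used for the results reported here; in particular, the callable `dc1`, which returns the deviation of the one-body direct correlation functions from their bulk values, stands in for the weighted-density convolutions of the fundamental-measure functional [@schmidt2004rfn] and is left unspecified.

```python
import numpy as np

def picard_dft(z, u, rho_bulk, dc1, beta=1.0, mix=0.05, tol=1e-8, max_iter=100000):
    """Damped Picard iteration for the planar Euler-Lagrange equations of the grand potential.

    u:        dict of external potentials, u[i] is an array over z for i in ('s', 'b')
    rho_bulk: dict of bulk number densities fixing the chemical potentials
    dc1:      callable (rho_s, rho_b) -> dict of arrays c_i^(1)(z) - c_i^(1),bulk
              (placeholder for the fundamental-measure weighted-density expressions)
    """
    rho = {i: rho_bulk[i] * np.exp(-beta * u[i]) for i in ('s', 'b')}   # ideal-gas start
    for _ in range(max_iter):
        dc = dc1(rho['s'], rho['b'])
        new = {i: rho_bulk[i] * np.exp(-beta * u[i] + dc[i]) for i in ('s', 'b')}
        err = max(np.max(np.abs(new[i] - rho[i])) for i in ('s', 'b'))
        rho = {i: (1.0 - mix) * rho[i] + mix * new[i] for i in ('s', 'b')}
        if err < tol:
            break
    return rho
```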
![\[fig:1w\_profiles\] (a) Pairs of density profiles, $\rho_s(z)$ (left panel) and $\rho_b(z)$ (right panel), of the NAHS fluid with $q=0.5$ and $\Delta=0.2$ at a $b$-type wall and at bulk coexistence on the $s$-rich side of the phase diagram. The shaded regions represent the range of profiles possessing $n=0$ or $1,2,3,4$ or $5$ adsorbed $b$-rich layers, and the region where the adsorbed film becomes infinitely thick. The solid lines represent specific examples from the middle of each range. The inset shows the adsorption of each species, $\Gamma_i$, as a function of $\eta_s$. (b) The corresponding phase diagram in the ($\eta_s,\eta_b$) plane. There is a series of layering transitions that intersect the bulk binodal ($\ast$) and descend into the $s$-rich one phase region, ending in a surface critical point ($\circ$). For clarity only the first two transitions are shown in full, while the remaining transitions are represented only by their intersection with the bulk binodal. The inset shows the location of the first layering and the ‘wetting’ transitions in relation to the bulk critical point ($\bullet$).](fig2a.pdf "fig:"){width="8.0cm"} ![\[fig:1w\_profiles\] (a) Pairs of density profiles, $\rho_s(z)$ (left panel) and $\rho_b(z)$ (right panel), of the NAHS fluid with $q=0.5$ and $\Delta=0.2$ at a $b$-type wall and at bulk coexistence on the $s$-rich side of the phase diagram. The shaded regions represent the range of profiles possessing $n=0$ or $1,2,3,4$ or $5$ adsorbed $b$-rich layers, and the region where the adsorbed film becomes infinitely thick. The solid lines represent specific examples from the middle of each range. The inset shows the adsorption of each species, $\Gamma_i$, as a function of $\eta_s$. (b) The corresponding phase diagram in the ($\eta_s,\eta_b$) plane. There is a series of layering transitions that intersect the bulk binodal ($\ast$) and descend into the $s$-rich one phase region, ending in a surface critical point ($\circ$). For clarity only the first two transitions are shown in full, while the remaining transitions are represented only by their intersection with the bulk binodal. The inset shows the location of the first layering and the ‘wetting’ transitions in relation to the bulk critical point ($\bullet$).](fig2b.pdf "fig:"){width="8.0cm"}
![\[fig:1w\_profiles\_2\] Same as Fig. \[fig:1w\_profiles\](a), but for the $s$-type wall. The coexistence curve is traced on the $b$-rich side of the phase diagram. As the wetting critical point is approached the smaller particles strongly adsorb at the wall, replacing the bigger particles and growing a thick film. Below the wetting critical point the film is infinitely thick. The inset shows the adsorptions, $\Gamma_i$, against the difference in the packing fraction of the small species from its value at the wetting critical point, $\eta_s^{*}-\eta_s$, on a logarithmic scale, where $\eta^*_s=0.0043$.](fig3.pdf){width="8.0cm"}
![\[fig:pd\_deltal\_eta1\] Value of $\eta_s$ at (i) the intercept of the layering transitions with the bulk binodal (solid lines) and (ii) the location of the critical wetting transition critical point (dashed line), as a function of the scaled wall offset $\delta l/\sigma_{ss}$. As $\delta l/\sigma_{ss}$ is increased from 0.2 ($b$-type wall) the layering transitions move along the binodal towards the bulk critical point, located at $\eta^{\rm crit}_s\simeq0.05$. At $\delta l /\sigma_{ss}\simeq 0.27$ the layering transitions coalesce and the surface transition becomes critical wetting. As $\delta l$ increases towards the additive case, the critical wetting transition approaches the bulk critical point. Increasing $\delta l$ further, the wetting critical point moves to the other side of the binodal. The inset shows the $s$-type wall wetting critical point ($\blacktriangle$) in relation to the bulk critical point ($\bullet$).](fig4.pdf){width="8.0cm"}
For $q=0.5$ and $\Delta=0.2$ the DFT predicts fluid-fluid phase separation with a critical point at $\eta_s=0.049$, $\eta_b=0.151$ – see Fig. 2(b). We start with the $b$-type wall, which we find does indeed preferentially adsorb the bigger particles. For $b$-rich statepoints the preferred species is already at the wall and no surface transitions occur. For $s$-rich statepoints at bulk coexistence, but far from the bulk critical point, we find that the small particles dominate the region close to the wall, but that there is a small amount of adsorption of the bigger particles. To illustrate this, see the pair of density profiles, $\rho_s(z)$ and $\rho_b(z)$, furthest from the bulk critical point in Fig. \[fig:1w\_profiles\](a). Reducing $\eta_s$ along the binodal in the direction towards the bulk critical point, there occurs a series of discontinuous jumps of the density profiles. The first jump corresponds to the big particles displacing the small particles from the wall and forming a layer at a distance $\sigma_{bb}$ away from the wall, see Fig. \[fig:1w\_profiles\](a). Each subsequent jump corresponds to the adsorption of an extra $b$-rich liquid layer at the wall. Using the coexistence criteria we have located five distinct layering transitions. Beyond the fifth transition we find that the layer rich in the big particles becomes macroscopically thick. We discuss the possible nature of this transition below. The inset to Fig. \[fig:1w\_profiles\](a) shows the adsorption, $\Gamma_i=\int{{\mathrm d}}z\,\left[\rho_i(z)-\rho_i(\infty)\right]$, of each species $i=s,b$ as a function of $\eta_s$. Each plateau represents the range of statepoints along the binodal which have a particular number of adsorbed layers. The formation of the infinitely thick layer corresponds to $\Gamma_b$ jumping to $+\infty$, and $\Gamma_s$ to $-\infty$. The layering transitions are first-order surface phase transitions, characterised by a range of coexisting states. In Fig. \[fig:1w\_profiles\](b) we plot the coexistence lines of the first two transitions in the ($\eta_s,\eta_b$) plane. We find that the layering transitions intersect the bulk binodal [@footnote] and that they lie very close to the binodal on the $s$-rich side of the phase diagram. Each transition terminates at a surface critical point, where the jump in $\Gamma_i$ vanishes. The first layering transition, where the big particles strongly adsorb at the wall and form the first layer, is the largest both in terms of the change in the adsorptions and its size on the phase diagram. Each subsequent transition is smaller than the previous one.
We next turn to the $s$-type wall. As this preferentially adsorbs the smaller particles, tracing the bulk coexistence curve on its $b$-rich side is interesting. For statepoints far from the bulk critical point, we find that there is some adsorption of the small particles, but that big particles dominate the region close to the wall, see the pair of density profiles furthest from the bulk critical point in Fig. 3, where $\rho_b(z)$ exhibits oscillatory decay that indicates high-density packing effects. Increasing $\eta_s$ along the binodal in the direction of the bulk critical point, we find that the small particles start to adsorb more strongly at the wall, replacing the big particles. On moving further towards the bulk critical point, a thick film rich in the small particles grows. No jumps are observed and the thickness increases continuously (and reversibly) with the state point up to a wetting critical point, beyond which the film is infinitely thick, see Fig. \[fig:1w\_profiles\_2\]. Hence we conclude that this wetting transition is critical. In such a case the adsorption can be shown [@cahn1977critical; @dietrich1988phase] to diverge as $\Gamma_i\propto\log(|\eta^*_s-\eta_s|)$ on the mean-field level, where $\eta_s^*$ is the value of $\eta_s$ at the wetting critical point. We find the value of $\eta_s^*$ by fitting $\Gamma_i$ to its asymptotic form. The inset to Fig. \[fig:1w\_profiles\_2\] compares the adsorptions to the asymptotic logarithmic form. The location of the wetting critical point, $\eta_s^*=0.0043$ is shown in relation to the bulk binodal in the inset to Fig. \[fig:pd\_deltal\_eta1\].
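The asymptotic fit used to locate the wetting critical point can be illustrated schematically. Given adsorption values $\Gamma$ computed at packing fractions $\eta_s$ along the binodal, one seeks the value $\eta_s^*$ for which $\Gamma\simeq a+b\log(\eta_s^*-\eta_s)$ describes the data best. The following sketch (our own minimal implementation, not the code used for the results shown) performs a grid search over trial values of $\eta_s^*$ with a linear least-squares fit at each trial:

```python
import numpy as np

def fit_eta_star(eta_s, gamma, eta_star_grid):
    """Return the trial eta_s* minimizing the residual of gamma ~ a + b * log(eta_s* - eta_s)."""
    eta_s = np.asarray(eta_s, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    best_ssr, best_eta = np.inf, None
    for eta_star in eta_star_grid:
        if eta_star <= eta_s.max():
            continue                                   # keep the logarithm's argument positive
        X = np.column_stack([np.ones_like(eta_s), np.log(eta_star - eta_s)])
        coef, *_ = np.linalg.lstsq(X, gamma, rcond=None)
        ssr = float(np.sum((gamma - X @ coef) ** 2))
        if ssr < best_ssr:
            best_ssr, best_eta = ssr, eta_star
    return best_eta
```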
We next vary the wall offset parameter, $\delta l$, between the two cases discussed above. Starting with the $b$-type wall, $\delta
l/\sigma_{ss}=0.2$, and increasing $\delta l$ we find that the location of the layering transitions moves towards the bulk critical point. In Fig. \[fig:pd\_deltal\_eta1\] we show the value of $\eta_s$ at each of the intersections of a layering transition and the bulk binodal as a function of $\delta l$. The jump in adsorption at each layering transition becomes smaller and the extent of the line in the phase diagram becomes shorter (not shown). Increasing $\delta l$ further, we find that at $\delta l/\sigma_{ss}\simeq0.27$ the individual layering transitions bunch up and become indistinguishable from each other. For larger $\delta l$ there is a single continuous wetting transition, where the thickness of the adsorbed $b$-rich layer grows logarithmically, in a similar manner to the behaviour at the $s$-type wall described above. We establish the location of the surface critical point by fitting $\Gamma_i$ to its asymptotic form and plot the value of $\eta^*_s$ at the wetting critical point in Fig. \[fig:pd\_deltal\_eta1\]. Increasing $\delta l$ further results in the location of the wetting critical point moving further along the bulk binodal towards the bulk critical point so that at $\delta
l/\sigma_{ss}\simeq0.43$ the wetting transition critical point coincides with the bulk critical point, and the wall is neutral such that neither species is preferentially adsorbed at the wall. As $\delta l$ is increased beyond $0.43$ we find that the wetting transition moves to the $b$-rich side of the phase diagram. The additive wall, $\delta l/\sigma_{ss}=0.5$, has a critical wetting transition, but located very close to the bulk critical point. As $\delta l$ is increased, the wetting critical point moves further along the bulk binodal, so that we return back to the $s$-type wall, $\delta l/\sigma_{ss}=0.8$.
In order to ascertain the generality of our findings, we have investigated the trends upon changing the model parameters. For size ratio $q=0.5$ and vanishing wall offset, $\delta l= 0$, we find layering transitions far from the bulk critical point for a range of non-additivity parameters $\Delta = 0.1, 0.2, 0.5$. Adjusting $\delta
l$ towards the case of the additive wall, the layering transitions move towards the bulk critical point. We also investigated symmetric mixtures with $q=1$ and $\Delta=0.1$. Clearly, for the additive wall, $\delta l = 0$, there is no preferential adsorption at the wall and hence no layering transitions. Introducing preferential adsorption via a non-vanishing wall offset, $\delta l= 0.1, 0.2, 0.3$, layering transitions occur, and these move away from the bulk critical point upon increasing $\delta l$.
In summary, we have shown that the NAHS model exhibits both layering and critical wetting transitions depending on the hard wall offset parameter. We expect this wetting scenario to be general and to occur in a large variety of systems where steric exclusion is relevant. A set of layering transitions had been previously found in the AOV model at a hard wall [@dft_layering]. In those studies the wall corresponds to our $b$-type wall. As in these previous papers, the existence of an infinite number of layering transitions is a possibility within our mean-field DFT treatment. The effects of fluctuations would be to smear out the higher-order layering transitions to produce a final ‘wetting’ transition as found here. A change from a first-order to a critical wetting transition is not uncommon [@dietrich1988phase]. What is remarkable here is that tricritical behaviour can be induced in a purely entropic system by merely changing a non-additive wall parameter, $\delta l$. Moreover, the NAHS model is much less special than the AO model, as here both species (not only the AO colloids) display short-ranged repulsion and hence packing effects. In future work, it would be interesting to see the effects of non-additivity on wetting in charged systems where first-order and critical wetting transitions occur [@oleksyhansen].
We acknowledge funding by EPSRC under EP/E06519/1 and by DFG under SFB840/A3.
D. Bonn, J. Eggers, J. Indekeu, J. Meunier, and E. Rolley, Rev. Mod. Phys. [ **81**]{}, 739 (2009).
J. P. Hansen and I. R. McDonald, [*[Theory of Simple Liquids]{}*]{}, 3rd ed. (Academic Press, London, 2006).
R. Roth, J. Phys.: Condens. Matter [**22**]{}, 063102 (2010).
R. Roth and S. Dietrich, Phys. Rev. E [**62**]{}, 6926 (2000).
A. Oleksy and J. P. Hansen, Mol. Phys. [**104**]{}, 2871 (2006); Mol. Phys. [ **107**]{}, 2609 (2009).
I. Kalcher, D. Horinek, R. Netz, and J. Dzubiella, J. Phys.: Condens. Matter [**21**]{}, 424108 (2009); I. Kalcher, J. C. F. Schulz, and J. Dzubiella, Phys. Rev. Lett. [**104**]{}, 097802 (2010).
A. Harvey and J. Prausnitz, Fluid Phase Equil. [**48**]{}, 197 (1989); G. Kahl, J. Chem. Phys. [**93**]{}, 5105 (1990); L. Woodcock, Ind. Eng. Chem. Res. 2290 (2010).
S. Asakura and F. Oosawa, J. Chem. Phys. [**22**]{}, 1255 (1954); A. Vrij, Pure Appl. Chem. [**48**]{}, 471 (1976).
M. Schmidt, J. Phys.: Condens. Matter [**16**]{}, 351 (2004).
P. Hopkins and M. Schmidt, J. Phys.: Condens. Matter [**22**]{}, 325108 (2010).
A. Ayadim and S. Amokrane, J. Phys.: Condens. Matter [**22**]{}, 035103 (2010).
P. Hopkins and M. Schmidt, to be published.
J. Cahn, J. Chem. Phys. [**66**]{}, 3667 (1977).
S. Dietrich, in [*[Phase Transitions and Critical Phenomena]{}*]{}, Vol. XII, edited by C. Domb and J. L. Lebowitz (Academic Press, London, 1988).
J. M. Brader et. al., J. Phys.: Condens. Matter [**14**]{}, L1 (2002); M. Dijkstra and R. van Roij, Phys. Rev. Lett. [**89**]{}, 208303 (2002); P. P. F. Wessels, M. Schmidt, and H. Löwen, J. Phys.: Condens. Matter [**16**]{}, S4169 (2004).
For a quantitative comparison of the bulk fluid-fluid demixing phase diagram from DFT to simulation results for $q=0.1$ and 1, see Fig. 3 of Ref. [@schmidt2004rfn] (location of the critical point) and Fig. 4 of Ref. [@hopkins2010binary] (binodals).
---
abstract: |
We investigate the localization of two interacting particles in one-dimensional random potential. Our definition of the two-particle localization length, $\xi$, is the same as that of v. Oppen [*et al.*]{} \[Phys. Rev. Lett. [**76**]{}, 491 (1996)\] and $\xi$’s for chains of finite lengths are calculated numerically using the recursive Green’s function method for several values of the strength of the disorder, $W$, and the strength of interaction, $U$. When $U=0$, $\xi$ approaches a value larger than half the single-particle localization length as the system size tends to infinity and behaves as $\xi \sim W^{-\nu_0}$ for small $W$ with $\nu_0 = 2.1 \pm 0.1$. When $U\neq 0$, we use the finite size scaling ansatz and find the relation $\xi \sim W^{-\nu}$ with $\nu =
2.9 \pm 0.2$. Moreover, data show the scaling behavior $\xi \sim
W^{-\nu_0} g(b |U|/W^\Delta)$ with $\Delta = 4.0 \pm 0.5$.
address: 'Department of Physics and Center for Theoretical Physics, Seoul National University, Seoul 151–742, Korea'
author:
- 'P. H. Song and Doochul Kim'
title: 'Localization of Two Interacting Particles in One-Dimensional Random Potential'
---
Recently, intensive attention [@she; @imr; @fra1; @wei1; @opp; @wei2; @rom; @fra2; @fra3; @voj] has been focused on the problem of the localization of two interacting particles in one-dimensional (1D) random potential. With a few assumptions on the statistical nature of single-particle localized states, Shepelyansky[@she] mapped the problem approximately to a random band matrix model and obtained an expression for the two-particle localization length, $\xi$, as $$\xi \simeq U^2 \frac{\xi_1^2}{32},$$ where $U$ is the on-site interaction in units of the hopping energy between nearest-neighbor sites, and $\xi_1$ the single-particle localization length. This expression is surprising because it implies that $\xi$ can exceed $\xi_1$ at sufficiently small disorder, i.e. sufficiently large $\xi_1$. Later, Imry[@imr] provided support for Eq. (1) by invoking the Thouless scaling argument. However, the methods employed in Refs. \[1\] and \[2\] are partly approximate and the strict validity of the expression of Eq. (1) is questionable, as discussed in, e.g., Refs. \[3-8\] and \[10\].
Many authors[@fra1; @wei1; @opp; @wei2] have tried to find more refined expressions than Eq. (1) by improving the assumptions of Shepelyansky. However, at this stage, controversies remain as to the quantitative expression for $\xi$, such as Eq. (1). Frahm [*et al.*]{}[@fra1] obtained the relation $\xi \sim \xi_1^{1.65}$ by the transfer matrix method, while an approximate calculation of the Green function by v. Oppen [*et al.*]{}[@opp] leads to the hypothesis $\xi = \xi_1/2 + c |U| \xi_1^2$, where $c$ is a constant depending on the statistics of the particles. With the assumption that the level statistics of two interacting particles is described by a Gaussian matrix ensemble, Weinmann and Pichard[@wei1] argued that $\xi$ increases initially as $|U|$ before eventually behaving as $U^2$. Moreover, very recently, Römer and Schreiber have claimed that the enhancement disappears as the system size grows (see Refs. \[7\] and \[8\]).
Some of these discrepancies, especially between numerical studies, are due to different definitions of the two-particle localization length used by different authors and also to the lack of a careful analysis of finite-size effects. The system under study is a “quantum mechanical two-body problem” in a sense. Motion of the two particles can be decomposed into the motion of the center of mass (CM) and that of the relative coordinate. We are interested in the CM motion since the wavefunction describing the relative motion would not be different from that arising from the single-particle localization problem in the thermodynamic limit if the interaction is short-ranged. Therefore, in this paper, we use the same definition for $\xi$ as introduced by v. Oppen [*et al.*]{}[@opp] as the measure of the localization length of the CM: $$\frac{1}{\xi} = -\lim_{|n-m| \rightarrow \infty} \frac{1}{|n-m|} \ln
|\langle n,n|G|m,m\rangle |.$$ Here, $G$ is the Green function and $|i,j\rangle$ is a two-particle state in which the particle 1 (2) is localized at a site $i$ $(j)$. The above definition is reasonable for a description of the CM motion as long as $U$ is smaller than or of the order of the hopping energy between sites[@opp]. In practice, we calculate $\xi_N$ defined below in Eq. (4) for chains of finite lengths [*without any approximation*]{} for several values of $W$ and $U$. We then estimate $\xi$ by extrapolating $\xi_N$ using the finite size scaling ansatz. When $U=0$, we find $\xi \sim
W^{-\nu_0}$ with $\nu_0 = 2.1 \pm 0.1$. Data for $U \neq 0$ lead to the relation $\xi \sim W^{-\nu}$ with $\nu = 2.9 \pm 0.2$. Also the data lead us to propose a scaling form $\xi \sim W^{-\nu_0} g(b|U|/W^\Delta)$, where $g(y)$ is a scaling function with the property $g(y \rightarrow 0)
=$ constant and $g(y \rightarrow \infty) \sim y^{(\nu-\nu_0)/\Delta}$. $\Delta$ is given as $4.0 \pm 0.5$.
We work within the tight-binding equation given by $$\begin{aligned}
\psi_{m+1,n} + \psi_{m-1,n} + \psi_{m,n+1} + \psi_{m,n-1} \nonumber \\
= (E-\epsilon_m-\epsilon_n-U \delta_{m,n})\psi_{m,n},\end{aligned}$$ where $\psi_{i,j} = \langle i,j|\psi \rangle$, $E$ is the energy of the two particles and $\delta_{m,n}$ the Kronecker delta. $m$ and $n$ are the site indices of a chain of length $N$ and range from 1 to $N$, $\epsilon_m$ is the random site energy chosen from a box distribution with interval $[-W/2,W/2]$[@com1] and the hard wall boundary condition, i.e. $\psi_{0,n} = 0$ and so on, is used. As was previously noted[@fra1; @rom], if one interprets $(m,n)$ as Cartesian coordinates of a square lattice of size $N \times N$, the Hamiltonian describes a single particle in a two-dimensional random potential. In Eq. (2), the thermodynamic limit is first taken and then the limit $|n-m| \rightarrow \infty$. To estimate this quantity, we define a sequence $\xi_N$ as $$\frac{1}{\xi_N} = - \ll\frac{1}{N-1} \ln |\langle 1,1|G_N|N,N\rangle |\gg,$$ where $G_N$ represents the Green function for a chain of length $N$ and the double brackets represent the configurational average. To be specific, calculation of $G_N$ amounts to evaluation of the inverse of the matrix, $(E-\cal{H})$, the size of which is $N^2 \times N^2$. One can calculate several elements of $G_N$, i.e. the elements involving the sites of two opposite edges of the square lattice, very efficiently using the recursive algorithm of MacKinnon and Kramer[@mac]. We assume that $\xi_N$ approaches $\xi$ as $N \rightarrow \infty$.
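To make the procedure concrete, the following minimal Python sketch implements the column-by-column recursion for $\langle 1,1|G_N|N,N\rangle$ and the estimator $\xi_N$ of Eq. (4) on the equivalent $N\times N$ lattice. All parameter values, variable names, and the small system sizes are illustrative choices of ours, not those behind the published data; a small imaginary part could be added to $E$ if a realization happens to be nearly singular.

```python
import numpy as np

def xi_N(N, W, U, E=0.0, n_real=100, seed=0):
    """Estimate xi_N of Eq. (4) by the recursive Green's function method.

    Columns of the equivalent N x N lattice (fixed first index m) are added
    one at a time; the inter-column coupling matrix is the identity.
    """
    rng = np.random.default_rng(seed)
    hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # intra-column hopping
    n_idx = np.arange(N)
    inv_len = np.empty(n_real)
    for r in range(n_real):
        eps = rng.uniform(-W / 2, W / 2, N)          # box-distributed site energies
        # first column (m = 1): on-site energies eps_1 + eps_n + U*delta_{1,n}
        H = hop + np.diag(eps[0] + eps + U * (n_idx == 0))
        g_mm = np.linalg.inv(E * np.eye(N) - H)      # G^(1)_{1,1}
        g_1m = g_mm                                  # propagator from column 1 to the current column
        for m in range(1, N):                        # attach columns m = 2, ..., N
            H = hop + np.diag(eps[m] + eps + U * (n_idx == m))
            g_mm = np.linalg.inv(E * np.eye(N) - H - g_mm)  # self-energy of all columns to the left
            g_1m = g_1m @ g_mm
        # <1,1|G_N|N,N> is the (n=1, n'=N) element of the (column 1, column N) block
        inv_len[r] = -np.log(np.abs(g_1m[0, N - 1])) / (N - 1)
    return 1.0 / inv_len.mean()

# illustrative call with a small chain so that it runs quickly
print(xi_N(N=30, W=3.0, U=1.0, n_real=20))
```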
The on-site interaction of the Hamiltonian given by Eq. (3) is relevant only to the spatially symmetric states, which would be realized, say, for a pair of electrons with total spin zero. One can easily see that the contributions to Eq. (2) are only from the spatially symmetric states from the following consideration. The Green’s function represents the transition amplitude from an initial state to a final state, and since the Hamiltonian, Eq. (3), is invariant under the exchange operation of two particles, the parity of the wavefunction is conserved during the time evolution. Since the initial state of Eq. (2) is a doubly occupied state, i.e. a spatially symmetric state, we are treating only the contributions from symmetric states.
Numerical calculations of $\xi_N$ for various values of $W, U$ and $N$ are performed for $E=0$ without approximation. $N$ is varied within the range $10 \leq N \leq 200$ and for a given parameter set, configurational average is performed over sufficiently many different realizations to control the uncertainties of $\xi_N$ within 1%.
We first examine the case of $U=0$, i.e. the noninteracting two particles. In this case, when the total energy of the system is fixed to $E$, the two-particle wavefunction is a superposition of the products of two single-particle states of energy $E'$ and $E-E'$, and the Green function is given by the convolution of two single-particle Green functions as $$\langle i,i|G(E)|j,j\rangle \sim \int dE' \langle
i|G_0(E')|j\rangle\langle i|G_0(E-E')|j\rangle.$$ It is a nontrivial problem to calculate $\xi(U=0)$ since there exist contributions from various energies. Some authors[@wei1; @opp] have assumed the relation $\xi(U=0) = \xi_1/2$, i.e. half the single-particle localization length, which should, however, be seriously examined. Our numerical data presented in Fig. 1 show that the assumption is not strictly valid. The filled symbols on the $N=\infty$ axis represent $\xi_1/2$ calculated from the expression $\xi_1 \simeq 105/W^2$[@pic] while the empty symbols are our numerical results for $\xi_N$. Taking into account the fact that the uncertainty of each data point is smaller than the symbol size, $\xi_N$ does not seem to extrapolate to $\xi_1/2$ as $N$ tends to infinity. Moreover, the discrepancy between the two quantities becomes larger as $W$ gets smaller. Therefore, we conclude that within the definition of Eq. (2), the single-particle localization length, even if it is qualitatively relevant, is not an adequate parameter for a quantitative description of the two-particle localization problem. From the data of $N=200$, we get $\xi(U=0) \simeq
70/W^{\nu_0}$ with $\nu_0 = 2.1 \pm 0.1$.
Next, we discuss the case of $U \neq 0$. Figure 2(a) shows the results for $U=1.0$ and $W$ ranging from 0.5 to 10.0. The $y$ axis label represents the renormalized localization length, i.e. $\xi_N$ divided by the system size. For larger values of $W$ and $N$, $\xi_N/N$ behaves as $\sim 1/N$, which implies the convergence of $\xi_N$’s to their constant limiting values. This means that the condition $N \gg \xi$ is well satisfied for these data. However, for smaller values of $W$, i.e. for $W$ ranging from 0.5 to 1.5, it is not easy to deduce the value of $\xi$ since $\xi_N$’s increase steadily within the range of $N$ presented. Therefore we rely on the scaling idea[@mac], which states that $\xi_N/N$ is given by a function of a single parameter, i.e. $N/\xi$: $$\xi_N/N = f(N/\xi).$$ The implication of Eq. (6) is that on a log-log plot all data points of Fig. 2(a) fall on a single curve when translated by $\ln \xi(W)$ along the $x$ axis. As a result, $\xi(W)$’s can be obtained as fitting parameters. The result of data collapsing is shown in Fig. 2(b) for the data set $N \geq 50$. $\xi(W=5.0)$ has been obtained to be $2.87 \pm 0.01$ by fitting the data set for $W=5.0$ and $N \geq 50$ to the formula $\xi_N
= \xi -A/N$[@com2], where $A$ is a constant. Other remaining values of $\xi(W)$’s are obtained by examining the amount of relative translations with respect to the data set of $W=5.0$. The scaling plot is quite good and one can see that the scaling function $f(x)$ behaves as $$f(x) \sim \left\{ \begin{array}{ll}
1/\sqrt{x} & \mbox{\ \ \ \ if $x \ll 1$}, \\
1/x & \mbox{\ \ \ \ if $x \gg 1$}.
\end{array}
\right.$$ As was previously mentioned, the asymptotic behavior for $x \gg 1$ represents the convergence of $\xi_N$’s to $\xi$. On the other hand, the behavior for $x \ll 1$ is very interesting since the same asymptotic behavior has been found for noninteracting disordered 1D systems[@pic]. For the noninteracting case, the resistance $\rho^0_N$ of a chain of length $N$ is related to the single-particle localization length $\xi^0_N$ as[@pic] $$\rho^0_N = [\cosh(2N/\xi^0_N)-1]/2.$$ For $N/\xi^0_N \ll 1$, the right hand side of Eq. (8) reduces to $\sim
(N/\xi^0_N)^2 \sim N/\xi_1$. Therefore, for the noninteracting case, the asymptotic behavior for $N \ll \xi_1$ represents the metallic behavior of the resistance, i.e. the linear increase of the resistance with the chain length in the metallic regime. Though no explicit expression like Eq. (8) is available for the system under study in this paper, we believe that the same asymptotic behavior for the two cases found here is a strong indication that the definition in Eq. (2) is a physically reasonable one.
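The extrapolation $\xi_N = \xi - A/N$ used above for $W=5.0$ amounts to a linear fit in $1/N$; a brief sketch of such a fit is given below, with placeholder $\xi_N$ values rather than the actual data.

```python
import numpy as np

N_vals = np.array([50.0, 75.0, 100.0, 150.0, 200.0])
xi_vals = np.array([2.71, 2.76, 2.79, 2.82, 2.83])   # placeholders, not the published xi_N

# xi_N = xi - A/N is linear in 1/N; polyfit returns [slope, intercept]
slope, intercept = np.polyfit(1.0 / N_vals, xi_vals, 1)
xi_inf, A = intercept, -slope
print(f"extrapolated xi = {xi_inf:.3f}, finite-size amplitude A = {A:.1f}")
```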
The $\xi$’s thus obtained as a function of $W$ are plotted in Fig. 2(c)[@com3]. For $0.5 \leq W \leq 5.0$ they are reasonably well fitted to the form $\sim W^{-\nu}$, where $\nu$ is given by $2.9 \pm 0.1$. Even taking the error into account, this value for $\nu$ is different from $\nu_0 = 2.1 \pm 0.1$, i.e. the critical exponent for $U=0$, and from 4.0, which is the value expected from Eq. (1).
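The exponent $\nu$ then follows from a linear fit of $\ln\xi$ against $\ln W$; schematically (with placeholder $\xi(W)$ values chosen here to lie on a power law with exponent $2.9$, not the actual data):

```python
import numpy as np

W = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 5.0])
xi = np.array([2280.0, 306.0, 94.0, 41.0, 12.6, 2.87])  # placeholder xi(W) values

nu = -np.polyfit(np.log(W), np.log(xi), 1)[0]            # xi ~ W**(-nu)
print("nu =", nu)                                        # close to 2.9 for these inputs
```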
Further calculations and similar scaling analyses have been performed for other values of $U$, i.e. 0.2, 0.5, 0.7 and 1.5 up to system size $N=200$. It is difficult to determine $\xi$ for $W < 1.5$ and $U <
1.0$ since the corresponding data of $\xi_N$’s do not show scaling behaviors due to the fact that sufficiently large system sizes have not been reached for these parameters. The resulting $\xi$’s (for 1.5 $\leq
W \leq 5.0$ if $U < 1.0$ and for 0.7 $\leq W \leq 5.0$ if $U>1.0$) give $\nu = 2.7, 3.0, 2.9$ and 3.1 for $U = 0.2, 0.5, 0.7$ and 1.5, respectively. Since we do not expect that $\nu$ depends on $U$, we interpret the variations of the values for $\nu$ as resulting from numerical uncertainties. Therefore our final result for the critical exponent of $\xi$ is $2.9 \pm 0.2$.
Our result for $\nu$ implies that introduction of nonzero $U$ changes the critical behavior of $\xi$ and, in analogy with thermal critical phenomena, the point $W=U=0$ may be regarded as a multicritical point and the line $W=0$ as a critical line in the $W-U$ plane. Then, one may assume a scaling form for $\xi$ as follows; $$\xi = W^{-\nu_0} g(b|U|/W^\Delta),$$ where $g(y)$ is a scaling function, $\Delta$ a crossover exponent and $b$ a constant. Here, we used the fact that Eq. (3) is symmetric for $E=0$ so that $\xi$ depends only on the absolute value of $U$. The scaling function should satisfy $g(y \rightarrow 0) =$ constant and $g(y \rightarrow
\infty) \sim y^{(\nu-\nu_0)/\Delta}$ for consistency. We obtain reasonably good scaling plots within the range $\Delta = 4.0 \pm 0.5$. The scaling plot for $\Delta = 4.0$ is shown in Fig. 3, where $\xi
W^{2.1}$ is plotted against $U/W^4$ for various values of $W$ and $U$. Although the data for $W < 1.5$ may appear to deviate from the scaling curve, taking into account rather large numerical uncertainties of these data, one can expect that they are consistent with the scaling behavior of other data points. We expect that the crossover between the two asymptotic behaviors occurs at $y
\sim 1$ so that the constant $b$ is estimated to be of the order of 100 from Fig. 3. Data within the range $0.01 <
U/W^\Delta < 1.0$ ($1 < y < 100$) approximately obey the form $\simeq y^{0.23}$, which is shown as a straight line. Since we expect the asymptotic behavior $\sim y^{(\nu-\nu_0)/\Delta}$ for this regime, we obtain $(\nu-\nu_0)/\Delta \simeq 0.23$, i.e. $\nu \simeq 3.0$, which is in good agreement with our previous estimate, i.e. $\nu = 2.9
\pm 0.2$. At this stage, we have not found a physical mechanism regarding the scaling parameter, $U/W^\Delta$ with $\Delta = 4.0
\pm 0.5$. The quantity $\xi(U)-\xi(U=0)$ might be also of interest. Our data show that this is consistent with a form $\xi(U)-\xi(U=0)
\sim W^{-2.1} (U/W^4)^{1/2}$ in the region $bU/W^\Delta < 1.0$. This behavior shows that the first correction to $g(y)$ for $y \ll 1$ is given as $\sim y^{1/2}$. On the other hand, our result confirms the enhancement of the two-particle localization length due to the interaction. The arrow in Fig. 3 represents the value of $\xi W^{\nu_0}$ for $U = 0$, so it is clearly seen that $\xi$ increases monotonically with $U$.
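One crude way to make the choice of $\Delta$ quantitative is to scan trial values and measure how well the rescaled data collapse onto a single curve. The sketch below is entirely our own construction: it uses synthetic $(W,U,\xi)$ triples generated to obey the scaling form with $\Delta=4$ and a toy scaling function, and quantifies the collapse by the residual of a common polynomial fit.

```python
import numpy as np

nu0 = 2.1
rng = np.random.default_rng(1)

# synthetic (W, U, xi) triples obeying the scaling form with Delta = 4 (placeholders)
W = rng.uniform(1.5, 5.0, 60)
U = rng.uniform(0.2, 1.5, 60)
g = lambda y: (1.0 + 100.0 * y) ** 0.2                # toy scaling function g(y)
xi = W ** (-nu0) * g(U / W ** 4) * rng.normal(1.0, 0.02, 60)

def collapse_residual(delta):
    x = np.log(U / W ** delta)
    y = np.log(xi * W ** nu0)
    resid = y - np.polyval(np.polyfit(x, y, 3), x)    # deviation from a common smooth curve
    return np.sum(resid ** 2)

deltas = np.linspace(2.0, 6.0, 41)
best = deltas[np.argmin([collapse_residual(d) for d in deltas])]
print("best Delta ~", best)                            # should recover a value close to 4
```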
Finally, we point out differences between our work and some of those previously reported. References \[3\] and \[7\] deal with exactly the same system as ours but use a different definition for the two-particle localization length. As mentioned before, the problem can be considered as that of a noninteracting single particle in a two-dimensional potential. These authors study the evolution of a state along one of the edges of the square lattice. However, in this paper, we are concerned with the “pair propagation”, more generally, the propagation of the CM of the two particles. The definitions for $\xi$ given by Eqs. (2) and (4) describe the propagation along one of the diagonals of the square lattice, instead of that along the edge. Our definition of $\xi$ is exactly the same as that of v. Oppen [*et al.*]{}[@opp]. However, it should be noted that in their work, calculation of $\xi$ involves an approximation; the approximation scheme used in Ref. \[5\] fails for small values of $U$ while our results are valid for all values of $U$.
In summary, we have investigated numerically the localization of two interacting particles in 1D random potential using the definition introduced previously for the two-particle localization length. While we find the enhancement of $\xi$ by the interaction, the critical properties of $\xi$ are different from those reported in previous studies. We ascribe the differences to the approximation used in one case and, in the other cases, to a different definition of $\xi$. Further work is needed to connect the resistance and the two-particle localization length and to elucidate the relation between $\xi$ and $\xi_1$.
This work has been supported by the KOSEF through the CTP and by the Ministry of Education through BSRI both at Seoul National University. We also thank SNU Computer Center for the computing times on SP2.
D. L. Shepelyansky, Phys. Rev. Lett. [**73**]{}, 2607 (1994).
Y. Imry, Europhys. Lett. [**30**]{}, 405 (1995).
K. Frahm, A. Müller-Groelling, J.-L. Pichard and D. Weinmann, Europhys. Lett. [**31**]{}, 169 (1995).
D. Weinmann, A. Müller-Groelling, J.-L. Pichard and K. Frahm, Phys. Rev. Lett. [**75**]{}, 1598 (1995).
F. von Oppen, T. Wettig and J. Müller, Phys. Rev. Lett. [**76**]{}, 491 (1996).
D. Weinmann and J.-L. Pichard, Phys. Rev. Lett. [**77**]{}, 1556 (1996).
R. A. Römer and M. Schreiber, Phys. Rev. Lett. [**78**]{}, 515 (1997).
K. Frahm, A. Müller-Groelling, J.-L. Pichard and D. Weinmann, Phys. Rev. Lett. [**78**]{}, 4889 (1997); R. A. Römer and M. Schreiber, Phys. Rev. Lett. [**78**]{}, 4890 (1997).
K. Frahm, A. Müller-Groelling and J.-L. Pichard, Phys. Rev. Lett. [**76**]{}, 1509 (1996).
T. Vojta, R. A. Römer and M. Schreiber, preprint (1997, cond-mat 9702241).
Note that the definition of $W$ in some previous studies, i.e. Refs. \[1\], \[3\], \[4\] and \[6\], is different from ours by a factor of 2.
A. MacKinnon and B. Kramer, Phys. Rev. Lett. [**47**]{}, 1546 (1981); Z. Phys. B [**53**]{}, 1 (1983).
J.-L. Pichard, J. Phys. C [**19**]{}, 1519 (1986).
Although this functional form is not theoretically based, the data fit quite well to this form. Therefore no significant error in our results is expected by using this form.
The numerical uncertainty in $\xi$ is somewhat larger for $W < 1.5$ ($\delta(\ln\xi) \sim \delta\xi/\xi \sim 0.2$) than that for $W \geq 1.5$ $(\delta(\ln\xi) \sim 0.05)$. This is because the data for $W < 1.5$ lie in a rather flat range, as shown in Fig. 2(b), so that the scaling plot obtained by translations remains reasonably good within a comparatively broad range of $\xi$. Nevertheless, for Fig. 2(c), uncertainty of each data point is less than the symbol size.
---
address:
- 'Mathematical Institute, University of Cologne, Weyertal 86-90, D–50931 Cologne, Germany'
- 'Department of Mathematics, University of Hong Kong, Pokfulam, Hong Kong'
author:
- Kathrin Bringmann
- Ben Kane
title: 'An extension of Rohrlich’s Theorem to the $j$-function'
---
[^1]
Introduction and statement of results {#sec:introduction}
=====================================
We start by recalling the following theorem of Rohrlich [@Rohrlich]. To state it, let $\omega_{{\mathfrak{z}}}$ denote half of the size of the stabilizer $\Gamma_{{\mathfrak{z}}}$ of ${\mathfrak{z}}\in{\mathbb{H}}$ in ${{\text {\rm SL}}}_2({\mathbb{Z}})$ and for a meromorphic function $f:\mathbb H\to{\mathbb{C}}$ let ${{\text {\rm ord}}}_{{\mathfrak{z}}}(f)$ be the order of vanishing of $f$ at ${\mathfrak{z}}$. Moreover define $\Delta(z):=q\prod_{n\geq 1} (1-q^n)^{24}$, where $q:=e^{2\pi iz}$, and set ${\mathbbm{j}}(z):=\frac{1}{6}\log(y^6|\Delta(z)|)+1$, where $z=x+iy$. Rohrlich’s Theorem may be stated in terms of the Petersson inner product, denoted by $\langle\ ,\, \rangle$.
\[thm:Rohrlich\] Suppose that $f$ is a meromorphic modular function with respect to ${{\text {\rm SL}}}_2({\mathbb{Z}})$ that does not have a pole at $i\infty$ and has constant term one in its Fourier expansion. Then $$\left\langle 1,\log|f|\right\rangle = -2\pi\sum_{{\mathfrak{z}}\in{{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}} \frac{{{\text {\rm ord}}}_{\mathfrak{z}}(f)}{\omega_{\mathfrak{z}}}{\mathbbm{j}}({\mathfrak{z}}).$$
In [@Rohrlich], Theorem \[thm:Rohrlich\] was stated for ${\mathbbm{j}}-1$ instead. However, by the valence formula, these two statements are equivalent.
The function ${\mathbbm{j}}$ is a weight zero sesquiharmonic Maass form i.e., it is invariant under the action of ${{\text {\rm SL}}}_2({\mathbb{Z}})$ and it is annihilated by $\xi_0\circ \xi_2 \circ \xi_0$, where $\xi_{\kappa}:=2i y^{\kappa} \overline{\frac{\partial}{\partial \overline{z}}}$ (see Section \[subspolar\] for a full definition). More precisely, $\Delta_{0}({\mathbbm{j}})=1$, where $\Delta_{\kappa}:=-y^2(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}) +i\kappa y (\frac{\partial}{\partial x} + i\frac{\partial}{\partial y})$ satisfies $\Delta_{\kappa}=-\xi_{2-\kappa}\circ\xi_{\kappa}$.
To extend this, let $j_1:=j-744$, with $j$ the usual $j$-invariant, and set $j_n:=j_1|T_n$, where for a function $f$ transforming with weight $\kappa$, we define the $n$-th [*Hecke operator*]{}
by $$f|T_n (z):=\sum_{\substack{ad=n\\ d>0}} \ \ \sum_{b{\ \, \left( \operatorname{mod} \, d \right)}} d^{-\kappa} f\left(\tfrac{az+b}{d}\right).$$ There are functions ${\mathbbm{j}}_n$ defined in below whose properties are analogous to those of ${\mathbbm{j}}_0:={\mathbbm{j}}$ if we define $j_0:=1$. Namely, these functions are weight zero sesquiharmonic Maass forms that satisfy $\Delta_{0}\left({\mathbbm{j}}_n\right) = j_n$ and are furthermore chosen uniquely so that the principal parts of their Fourier and elliptic expansions essentially only contain a single term which maps to the principal part of $j_n$ under $\Delta_0$. More precisely, they have a purely sesquiharmonic principal part, up to a possible constant multiple of $y$, vanishing constant terms in their Fourier expansion, and a trivial principal part in their elliptic expansions around every point in ${\mathbb{H}}$; see Lemmas \[lem:Fourierexps\] and \[lem:ellexps\] below for the shape of their Fourier and elliptic expansions, respectively. In addition, they also satisfy the following extension of Theorem \[thm:Rohrlich\]. Here we use a regularized version of the inner product (see below), which we again denote by $\langle\ ,\, \rangle$. This regularization was first introduced by Petersson in [@Pe2] and then later independently rediscovered and generalized by Borcherds [@Bo1] and Harvey–Moore [@HM].
\[thm:jninner\] Suppose that $f$ is a meromorphic modular function with respect to ${{\text {\rm SL}}}_2({\mathbb{Z}})$ which has constant term one in its Fourier expansion. Then $$\left\langle j_n,\log|f|\right\rangle=-2\pi\sum_{{\mathfrak{z}}\in {{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}}\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}}{\mathbbm{j}}_n({\mathfrak{z}}).$$
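For concreteness, the $q$-expansions of the functions $j_n$ entering the theorem can be generated directly from that of $j_1$ via the weight zero Hecke action on Fourier coefficients. The following short Python sketch (our own illustration, with an arbitrary truncation order and helper names of our choosing) computes $j=E_4^3/\Delta$, forms $j_1=j-744$, and applies $T_n$:

```python
# Sketch (not from the paper): q-expansion of j_n = j_1 | T_n from that of j_1.
M = 16  # truncation: coefficients of q^0, ..., q^(M-1) are kept for auxiliary series

def mul(f, g):
    """Product of two power series (coefficient lists), truncated at order M."""
    h = [0] * M
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if i + j < M:
                    h[i + j] += fi * gj
    return h

def inv(f):
    """Inverse of a power series with constant term 1, truncated at order M."""
    g = [1] + [0] * (M - 1)
    for k in range(1, M):
        g[k] = -sum(f[i] * g[k - i] for i in range(1, k + 1))
    return g

sigma3 = lambda n: sum(d ** 3 for d in range(1, n + 1) if n % d == 0)
E4 = [1] + [240 * sigma3(n) for n in range(1, M)]

Dq = [1] + [0] * (M - 1)                    # Delta / q = prod_{n>=1} (1 - q^n)^24
for n in range(1, M):
    fac = [0] * M
    fac[0], fac[n] = 1, -1
    for _ in range(24):
        Dq = mul(Dq, fac)

jq = mul(mul(mul(E4, E4), E4), inv(Dq))     # j = q^{-1} * E4^3 * (Delta/q)^{-1}
c = {k - 1: jq[k] for k in range(M)}        # c[m] = coefficient of q^m in j_1
c[0] -= 744

def jn_coeff(n, N):
    """Coefficient of q^N in j_n = j_1 | T_n (valid while the needed index stays below the truncation)."""
    return sum((n // a) * c.get(N * n // (a * a), 0)
               for a in range(1, n + 1) if n % a == 0 and N % a == 0)

print([jn_coeff(2, N) for N in range(-2, 2)])
# principal part q^{-2} and vanishing constant term: [1, 0, 0, 42987520]
```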
This research was motivated by generalizations of Rohrlich’s Theorem in other directions, such as the recent work of Herrero, Imamo$\overline{\text{g}}$lu, von Pippich, and Tóth [@HIvPT].
Theorem \[thm:Rohrlich\] was also generalized by Rohrlich [@Rohrlich] by replacing the meromorphic function $f$ in Theorem \[thm:Rohrlich\] with a meromorphic modular form of weight $k$ times $y^{\frac{k}{2}}$, yielding again a weight zero object. We similarly extend Theorem \[thm:jninner\] in such a direction.
\[thm:jninnergen\] There exists a constant $c_n$ such that for every weight $k$ meromorphic modular form $f$ with respect to ${{\text {\rm SL}}}_2({\mathbb{Z}})$ that does not have a pole at $i\infty$ and has constant term one in its Fourier expansion, we have $$\left<j_n,\log\!\left(y^{\frac{k}{2}}|f|\right)\right>=-2\pi\sum_{{\mathfrak{z}}\in {{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}}\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}}{\mathbbm{j}}_n({\mathfrak{z}})+\frac{k}{12}c_n.$$
Plugging $k=0$ into Theorem \[thm:jninnergen\], we see that Theorem \[thm:jninner\] is an immediate corollary.
An interesting special case of Theorem \[thm:jninner\] arises if one takes $f$ to be a so-called
[*prime form*]{}
, which is a modular form which vanishes at precisely one point ${\mathfrak{z}}\in{\mathbb{H}}$ and has a simple zero at ${\mathfrak{z}}$ (see [@Pe7 Section 1.c] for a full treatment of these functions). By the valence formula, the prime forms necessarily have weight $k=12 \omega_{{\mathfrak{z}}}^{-1}$ and may directly be computed as $$\left(\Delta(z)\Big(j(z)-j({\mathfrak{z}})\Big)\right)^{\frac{1}{\omega_{{\mathfrak{z}}}}}.$$ Multiplying by $y^{\frac{k}{2}}$ and taking the logarithm of the absolute value, it is hence natural to consider the functions $$\label{eqn:primeformgdef}
{\mathbbm{g}}_{{\mathfrak{z}}}(z):=\log\left(y^6\left|\Delta(z)\Big(j(z)-j({\mathfrak{z}})\Big)\right|\right),$$ and Theorem \[thm:jninner\] states that $$\label{eqn:jnbggz}
\left<j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\right> = -2\pi{\mathbbm{j}}_n({\mathfrak{z}}) + c_{n}.$$ When characterizing modular forms via their divisors, the prime forms are natural building blocks because they vanish at precisely one point in ${\mathbb{H}}$, allowing one to easily construct a function with a given order of vanishing at each point. In the same way, since each function ${\mathbbm{g}}_{{\mathfrak{z}}}$ appearing on the left-hand side of has a singularity at only one point and the single term ${\mathbbm{j}}_n({\mathfrak{z}})$ is isolated on the right-hand side of , it is natural to use the functions ${\mathbbm{g}}_{{\mathfrak{z}}}$ as building blocks for the logarithms of weight $k$ meromorphic modular forms.
If one were only interested in proving Theorem \[thm:jninner\], then one could choose the building blocks $z\mapsto\log|j(z)-j({\mathfrak{z}})|$ instead of ${\mathbbm{g}}_{{\mathfrak{z}}}$. However, as noted above, the functions ${\mathbbm{g}}_{{\mathfrak{z}}}$ are more natural when considering divisors of modular forms because they only have a singularity precisely at the point ${\mathfrak{z}}$, while the functions $z\mapsto \log|j(z)-j({\mathfrak{z}})|$ have a singularity both at ${\mathfrak{z}}$ and $i\infty$.
Generating functions of traces of ${{\text {\rm SL}}}_2({\mathbb{Z}})$-invariant objects such as $j_n$ have a long history going back to the paper of Zagier on traces of singular moduli [@ZagierSingular]. To give a related example, let $\mathcal{Q}_D$ denote the set of integral binary quadratic forms of discriminant $D$. The generating function, with $\tau_Q\in{\mathbb{H}}$ the unique root of $Q(z,1)$, $$\label{eqn:tracejj}
\sum_{\substack{D<0\\ D\equiv 0,1{\ \, \left( \operatorname{mod} \, 4 \right)}}} \sum_{Q\in \mathcal{Q}_D/{{\text {\rm SL}}}_2({\mathbb{Z}})} {\mathbbm{j}}\!\left(\tau_Q\right) e^{2\pi i |D|\tau}$$ was shown by Bruinier and Funke [@BruinierFunkeTraces Theorem 1.2] to be the holomorphic part of a weight $\frac32$ modular object. Instead of taking the generating function in $D$, one may also sum in $n$ to obtain, for $y$ sufficiently large, $$\label{Hz}
H_{{\mathfrak{z}}}(z):=\sum_{n\geq 0} j_n({\mathfrak{z}}) q^n.$$ This function was shown by Asai, Kaneko, and Ninomiya [@AKN] to satisfy the identity $$H_{{\mathfrak{z}}}(z)=-\frac{1}{2\pi i} \frac{j_1'(z)}{j_1(z)-j_1({\mathfrak{z}})}.$$ This identity is equivalent to the denominator formula $$j_1({\mathfrak{z}})-j_1(z)=e^{-2\pi i{\mathfrak{z}}} \prod_{m\in{\mathbb{N}},\,n\in {\mathbb{Z}}}\left(1-e^{2\pi i m {\mathfrak{z}}}e^{2\pi i n z}\right)^{c(mn)},$$ for the Monster Lie algebra, where $c(m)$ denotes the $m$-th Fourier coefficient of $j_1$. The function $H_{{\mathfrak{z}}}$ is a weight two meromorphic modular form with a simple pole at $z={\mathfrak{z}}$. For a meromorphic modular form $f$ which does not vanish at $i\infty$, it is then natural to define the
[*divisor modular form*]{}
$$f^{\operatorname{div}}(z):=\sum_{{\mathfrak{z}}\in{{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}}\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}} H_{{\mathfrak{z}}}(z).$$ Bruinier, Kohnen, and Ono [@BKO Theorem 1] showed that if $f$ satisfies weight $\kappa$ modularity then $f^{\operatorname{div}}$ is related to the logarithmic derivative of $f$ via $$f^{\operatorname{div}}=-\frac{1}{2\pi i} \frac{f'}{f} +\frac{\kappa}{12}E_2,$$ where $E_2$ denotes the quasimodular weight two Eisenstein series. Analogously to , we define the generating function, for $y$ sufficiently large, $${\mathbb{H}}_{{\mathfrak{z}}}(z):=\sum_{n\geq 0}{\mathbbm{j}}_n({\mathfrak{z}}) q^n,$$ and its related divisor modular form $${\mathbbm{f}}^{\operatorname{div}}(z):=\sum_{{\mathfrak{z}}\in{{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}}\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}}{\mathbb{H}}_{{\mathfrak{z}}}(z).$$
The function ${\mathbb{H}}_{{\mathfrak{z}}}$ turns out to be the holomorphic part of a weight two sesquiharmonic Maass form, while the generating function $$\begin{aligned}
{\mathbb{I}}_{\mathfrak{z}}(z):=\sum_{n\geq 0}\langle {\mathbbm{g}}_{{\mathfrak{z}}},j_n\rangle q^n,\end{aligned}$$ which is closely related by Theorem \[thm:jninnergen\], is the holomorphic part of a biharmonic Maass form. A weight $\kappa$ biharmonic Maass form satisfies weight $\kappa$ modularity and is annihilated by $\Delta_{\kappa}^2=\left(\xi_{2-\kappa}\circ \xi_{\kappa}\right)^2$ (see Section \[sec:construction\] for a full definition).
\[thm:Fdivgen\] The function ${\mathbb{H}}_{\mathfrak{z}}$ is the holomorphic part of a weight two sesquiharmonic Maass form $\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}$ and ${\mathbb{I}}_{{\mathfrak{z}}}$ is the holomorphic part of a weight two biharmonic Maass form $\widehat{{\mathbb{I}}}_{{\mathfrak{z}}}$.
Consider $$\Theta(z,\tau):=\sum_{n\geq 0} \sum_{\substack{D<0\\ D\equiv 0,1{\ \, \left( \operatorname{mod} \, 4 \right)}}} \sum_{Q\in \mathcal{Q}_D/{{\text {\rm SL}}}_2({\mathbb{Z}})} {\mathbbm{j}}_n\!\left(\tau_Q\right) e^{2\pi i |D|\tau} e^{2 \pi i nz}.$$ The modularity of hints that $\tau\mapsto\Theta(z,\tau)$ may have a relation to a function satisfying weight $\frac32$ modularity, while we see in Theorem \[thm:Fdivgen\] that it is the holomorphic part of a weight two object as a function of $z$ (assuming convergence). It should be possible to prove this modularity using the methods from [@BES] (and which in particular requires generalizing Proposition 1.3 of [@BES] to include functions which have poles in points of the upper half plane).
As a corollary to Theorem \[thm:Fdivgen\], one obtains that ${\mathbbm{f}}^{\operatorname{div}}$ has the modular completion $$\widehat{{\mathbbm{f}}}^{\operatorname{div}}(z):=\sum_{{\mathfrak{z}}\in{{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}}\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}}\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}(z).$$
\[cor:Fdiv\] If $f$ is a meromorphic modular function with constant term one in its Fourier expansion, then the function ${\mathbbm{f}}^{\operatorname{div}}$ is the holomorphic part of the Fourier expansion of a weight two sesquiharmonic Maass form $\widehat{{\mathbbm{f}}}^{\operatorname{div}}$. Moreover we have $$\xi_{2}\left(\widehat{{\mathbbm{f}}}^{\operatorname{div}}\right)=-\frac{1}{2\pi}\log|f|.$$
The paper is organized as follows. In Section \[sec:specialfunctions\], we introduce some special functions. In Section \[sec:construction\], we construct a number of functions and discuss their properties. In Section \[sec:FourierElliptic\], we determine the shapes of the Fourier and elliptic expansions of the functions defined in Section \[sec:construction\]. In Section \[sec:jninner\], we compute inner products in order to prove Theorem \[thm:jninnergen\]. In Section \[sec:Fdiv\], we consider the generating functions of these inner products and prove Theorem \[thm:Fdivgen\] and Corollary \[cor:Fdiv\].
Acknowledgements {#acknowledgements .unnumbered}
================
The authors thank Claudia Alfes-Neumann, Stephan Ehlen, and Markus Schwagenscheidt for helpful comments on an earlier version of this paper and the referee for carefully reading our paper.
Special functions, Poincaré series, and polyharmonic Maass forms {#sec:specialfunctions}
================================================================
The incomplete gamma function and related functions
---------------------------------------------------
We use the principal branch of the complex logarithm, denoted by ${\operatorname{Log}}$, with the convention that, for $w>0$, $${\operatorname{Log}}(-w)=\log(w)+\pi i,$$ where $\log: {\mathbb{R}}^+ \to {\mathbb{R}}$ is the natural logarithm.
For $s,w \in\mathbb{C}$ with ${\operatorname{Re}}(w)>0$, define the [*generalized exponential integral*]{} $E_s$ (see [@NIST 8.19.3]) by $$E_s(w) := \int_1^{\infty} e^{-wt}t^{-s}\, dt.$$ This function is related to the incomplete Gamma function, defined for ${\operatorname{Re}}(s)>0$ and $w\in{\mathbb{C}}$ by $$\Gamma(s,w) := \int_{w}^\infty e^{-t}t^{s}\, \frac{dt}{t},$$ via (see [@NIST 8.19.1]) $$\label{eqn:GammaEr}
\Gamma(s, w) = w^s E_{1-s}(w).$$ Up to a branch cut along the negative real line, the function $E_1$ may be analytically continued via $$E_1(w) = {\operatorname{Ein}}(w) - {\operatorname{Log}}(w) - \gamma$$ (see [@NIST 6.2.4]), where $\gamma$ is the Euler-Mascheroni constant and ${\operatorname{Ein}}$ is the entire function given by $${\operatorname{Ein}}(w) :=\int_0^{w} \left(1-e^{-t}\right)\frac{dt}{t}=\sum_{n\geq 1}\frac{(-1)^{n+1}}{n!\, n} w^n.$$ The function ${\operatorname{Ein}}$ also appears in the definition of the exponential integral, namely $${\operatorname{Ei}}(w):=-{\operatorname{Ein}}(-w)+{\operatorname{Log}}(w)+\gamma.$$ In order to obtain a function which is real-valued for $w\in{\mathbb{R}}\setminus\{0\}$, for $\kappa\in{\mathbb{Z}}$ it is natural to define $$W_\kappa(w) := (-2w)^{1-\kappa} {\operatorname{Re}}(E_\kappa(-2w)).$$ By [@BDE Lemma 2.4], , and the fact that $E_\kappa(w)\in{\mathbb{R}}$ for $w>0$, we obtain the following.
\[lem:relationsofnonhol\] For $w>0$, we have $$W_\kappa(w) = (-1)^{1-\kappa}\left((2w)^{1-\kappa}E_\kappa(-2w) + \frac{\pi i}{\Gamma(\kappa)}\right)
= \Gamma(1-\kappa, -2w)+\frac{(-1)^{1-\kappa}\pi i}{(\kappa-1)!}.$$ For $w<0$, we have $$W_\kappa(w) = \Gamma(1-\kappa,-2w).$$
We next define $$\bm{{\mathcal{W}}}_s(w):=\int\limits_{{\operatorname{sgn}}(w)\infty}^w W_{2-s}(-t) t^{-s}e^{2t}dt.$$
A direct calculation shows the following lemma.
\[lem:xiFourier\] For $m\in {\mathbb{Z}}\backslash \{0\}$ we have $$\begin{aligned}
\xi_{\kappa}\!\left(W_\kappa(2\pi my)q^m\right)&=
-(-4\pi m)^{1-\kappa}
q^{-m},\qquad \xi_\kappa\!\left(\bm{{\mathcal{W}}}_{\kappa}(2\pi my)q^m\right)=(2\pi m)^{1-\kappa} W_{2-\kappa}(-2\pi my) q^{-m}.\end{aligned}$$ In particular, $W_\kappa(2\pi my)q^m$ is annihilated by $\Delta_{\kappa}$, and $\bm{{\mathcal{W}}}_{\kappa}(2\pi my)q^m$ is annihilated by $\xi_{\kappa}\circ\Delta_{\kappa}$.
We next determine the asymptotic behavior for $W_\kappa$ and $\bm{{\mathcal{W}}}_\kappa$.
\[lem:Wgrowth\] As $w\to \pm \infty$, we have $$\begin{aligned}
W_\kappa(w)&=
(-2w)^{-\kappa}
e^{2w}+O\!\left(w^{-\kappa-1}e^{2w}\right),\qquad \bm{{\mathcal{W}}}_\kappa(w)=
-2^{\kappa-2}
w^{-1}+O\!\left(w^{-2}\right).\end{aligned}$$
It is not hard to conclude the claims from (see [@NIST 8.11.2]) $$\Gamma(1-\kappa,-2w)=
(-2w)^{-\kappa}
e^{2w}\!\left(1+O\!\left(w^{-1}\right)\right). \qedhere$$
Polar polyharmonic Maass forms {#subspolar}
------------------------------
In this section, we introduce polar polyharmonic Maass forms. Letting $\xi_{\kappa}^{\ell}$ denote the $\xi$-operator repeated $\ell$ times, a
[*polar polyharmonic Maass form on ${{\text {\rm SL}}}_2({\mathbb{Z}})$ of weight $\kappa\in 2{\mathbb{Z}}$ and depth $\ell\in{\mathbb{N}}_0$*]{}
is a function $F:{\mathbb{H}}\to{\mathbb{C}}$ satisfying the following:
1. For every $\gamma=\left(\begin{smallmatrix} a&b\\c&d\end{smallmatrix}\right)\in{{\text {\rm SL}}}_2({\mathbb{Z}})$, we have $$F|_{\kappa}\gamma(z):=(cz+d)^{-\kappa} F\left(\tfrac{az+b}{cz+d}\right) = F(z).$$
2. The function $F$ is annihilated by $\xi_{\kappa}^{\ell}$.
3. For each ${\mathfrak{z}}\in{\mathbb{H}}$, there exists an $m_{{\mathfrak{z}}}\in{\mathbb{N}}_0$ such that $\lim_{z\to{\mathfrak{z}}}(r_{{\mathfrak{z}}}^{m_{{\mathfrak{z}}}}(z)F(z))$ exists, where $$\label{eqn:rXdef}
r_{{\mathfrak{z}}}(z):=|X_{{\mathfrak{z}}}(z)|\quad \text{ with }X_{{\mathfrak{z}}}(z):=\tfrac{z-{\mathfrak{z}}}{z-\overline{{\mathfrak{z}}}}.$$
4. The function $F$ grows at most linear exponentially as $z\to i\infty$.
Note that in (2), $\xi_\kappa^\ell$ can be written in terms of $\Delta_\kappa$ if $\ell$ is even. If a function satisfies (2) (but is not necessarily modular), then we simply call it
[*depth $\ell$ with respect to the weight $\kappa$*]{}.
Note that our notation differs from that in [@LR]. In particular, if the depth is $\ell$ in this paper, then it is depth $\frac{\ell}{2}$ in [@LR].
We omit the adjective “polar” whenever the only possible singularity occurs at $i\infty$. Note that polar polyharmonic Maass forms of depth one are meromorphic modular forms. We call a polar polyharmonic Maass form $F$ of depth two a [*polar harmonic Maass form*]{} and those of depth three are [*polar sesquiharmonic Maass forms*]{}. We call those forms of depth four [*biharmonic*]{}.
Niebur Poincaré series {#sec:seeds}
----------------------
We next recall the Niebur Poincaré series [@Niebur] $$F_{m}(z,s):=\sum_{\gamma\in\Gamma_{\infty}\backslash{{\text {\rm SL}}}_2({\mathbb{Z}})} \varphi_{m,s}(\gamma z), \quad ({\operatorname{Re}}(s)>1, m \in {\mathbb{Z}})$$ where $\Gamma_\infty:=\{\pm \left(\begin{smallmatrix} 1 & n \\ 0 & 1\end{smallmatrix}\right)\,:\, n\in{\mathbb{Z}}\}$ and $$\varphi_{m,s}(z):= y^{\frac{1}{2}} I_{s-\frac{1}{2}}(2\pi|m|y)e^{2\pi i mx}.$$ Here $I_{\kappa}$ is the $I$-Bessel function of order $\kappa$. The functions $s\mapsto F_{m}(z,s)$ have meromorphic continuations to ${\mathbb{C}}$ and do not have poles at $s=1$ [@Niebur Theorem 5]. The functions $\varphi_{m,s}$ are eigenfunctions under $\Delta_0$ with eigenvalue $s(1-s)$. Hence for any $s$ with ${\operatorname{Re}}(s)$ sufficiently large so that $z\mapsto F_{m}(z,s)$ converges absolutely and locally uniformly, we conclude that $F_{m}(z,s)$ is also an eigenfunction under $\Delta_0$ with eigenvalue $s(1-s)$. Arguing by meromorphic continuation, we obtain that $$\Delta_{0}\!\left(F_{m}(z,s)\right)=s(1-s)F_m(z,s)$$ for any $s$ for which $F_{m}(z,s)$ does not have a pole. In particular, one may use this to construct harmonic Maass forms by taking $s=1$. Indeed, by [@Niebur Theorem 6] (note that there is a missing $2\pi \sqrt{n}$), there exists a constant $\mathcal C_n$ such that $$\label{eqn:Fnjn}
2\pi\sqrt{n}F_{-n}(z,1)=j_n(z)+\mathcal C_n.$$
Real-analytic Eisenstein series
-------------------------------
Throughout the paper, we use various properties of the real-analytic Eisenstein series, defined for ${\operatorname{Re}}(s)>1$ by $$E(z,s):=\sum_{\gamma\in\Gamma_{\infty}\backslash{{\text {\rm SL}}}_2({\mathbb{Z}})} {\operatorname{Im}}(\gamma z)^s.$$ Via the Hecke trick, $E(z,s)$ is closely related to the weight two completed Eisenstein series $$\widehat{E}_2(z):=1-24\sum_{m\geq 1} \sigma_1(m) q^m -\frac{3}{\pi y},$$ where $\sigma_\ell(n):=\sum_{d|n}d^\ell$. The following properties of $E(z,s)$ and $\widehat{E}_2(z)$ are well-known.
\[lem:Eprop\]
1. The function $s\mapsto E(z,s)$ has a meromorphic continuation to ${\mathbb{C}}$ with a simple pole at $s=1$ of residue $\frac{3}{\pi}$.
2. The function $z\mapsto E(z,s)$ is an eigenfunction with eigenvalue $s(1-s)$ under $\Delta_0$.
3. The function $\widehat{E}_2$ is a weight two harmonic Maass form which satisfies $$\xi_{2}\!\left(\widehat{E}_2\right) = \frac{3}{\pi}.$$
4. Denoting by $\operatorname{CT}_{s=s_0}(f(s))$ the constant term in the Laurent expansion of the analytic continuation of a function $f$ around $s=s_0$, we have $$\widehat{E}_2(z)=\operatorname{CT}_{s=1} \left(\xi_{0}\!\left(E(z,s)\right)\right).$$
In light of Lemma \[lem:Eprop\] (1), it is natural to define, for some $\mathcal{C}\in{\mathbb{C}}$, $$\label{defEz}
\mathcal{E}(z):=\lim_{s\to 1} \left(4\pi E(z,s)-\tfrac{12}{s-1}\right)+\mathcal{C}.$$ Letting $\zeta(s)$ denote the Riemann zeta function, we specifically choose $\mathcal{C}:=-24\gamma +24\log(2)+144\frac{\zeta'(2)}{\pi^{2}}$ so that, by [@GrossZagier Section II, (2.17) and (2.18)], $$\label{eqn:calEgrowth}
\lim\limits_{y \to \infty}\left(\mathcal{E}(z)-4\pi y +12\log(y)\right)=0.$$
Construction of the functions {#sec:construction}
=============================
In this section, we construct two weight two biharmonic functions ${\mathbb{G}}_{{\mathfrak{z}}}$ and ${\mathbb{J}}_n$ satisfying $$\label{eqn:GJxiops}
{
\xi_2({\mathbb{G}}_{{\mathfrak{z}}})={\mathbbm{g}}_{{\mathfrak{z}}},
}
\qquad \xi_2\circ\xi_0\circ\xi_2({\mathbb{J}}_n)=-j_n,$$ where ${\mathbbm{g}}_{{\mathfrak{z}}}$ is the weight zero sesquiharmonic Maass form that is defined in .
Similarly, ${\mathbb{J}}_n$ is constructed to have a singularity at $i\infty$ such that $\xi_{2}({\mathbb{J}}_n)$ is the function ${\mathbbm{j}}_n$ given in the introduction, i.e., the only singularity of ${\mathbb{J}}_n$ lies in its biharmonic part.
Recall the
[*automorphic Green’s function*]{}
$$\label{eqn:Greensdef}
G_s(z,{\mathfrak{z}}):=\sum_{\gamma\in{{\text {\rm SL}}}_2({\mathbb{Z}})} g_s\!\left(z,\gamma {\mathfrak{z}}\right).$$
Here, with $Q_s$ the Legendre function of the second kind, we define $$g_s(z,{\mathfrak{z}}):=-2Q_{s-1}\!\left(1+\tfrac{|z-{\mathfrak{z}}|^2}{2y{\mathbbm{y}}}\right), \quad {\mathfrak{z}}={\mathbbm{x}}+i{\mathbbm{y}}\in{\mathbb{H}}.$$ The series is convergent for ${\operatorname{Re}}(s)>1$, is an eigenfunction under $\Delta_0$ with eigenvalue $s(1-s)$, and has a meromorphic continuation to the whole $s$-plane (see [@Hejhal Chapter 7, Theorem 3.5]). For $\gamma\in{{\text {\rm SL}}}_2({\mathbb{R}})$, $g_s$ satisfies $g_{s}(\gamma z,\gamma{\mathfrak{z}})=g_{s}(z,{\mathfrak{z}})$, yielding that $G_{s}(z,{\mathfrak{z}})$ is ${{\text {\rm SL}}}_2({\mathbb{Z}})$-invariant in $z$ and ${\mathfrak{z}}$.
Using [@GZ2 Proposition 5.1] and the Kronecker limit formula, $G_s(z,{\mathfrak{z}})$ is related to ${\mathbbm{g}}_{{\mathfrak{z}}}$ by $$\label{eqn:Gzdef}
{\mathbbm{g}}_{{\mathfrak{z}}}(z)=\frac{1}{2}\lim_{s\to 1} \big(G_{s}(z,{\mathfrak{z}})+4\pi E({\mathfrak{z}},s)\big)-12.$$
In the next lemma, we collect some other useful properties of $G_s(z,{\mathfrak{z}})$.
\[lem:Greensprop\]
1. The function $s \mapsto G_{s}(z,{\mathfrak{z}})$ has a simple pole at $s=1$ with residue $-12$.
2. The limit in exists.
3. The function ${\mathbbm{g}}_{{\mathfrak{z}}}$ is sesquiharmonic with $\Delta_{0}({\mathbbm{g}}_{{\mathfrak{z}}})=6$.
4. The only singularity of ${\mathbbm{g}}_{{\mathfrak{z}}}$ in ${{\text {\rm SL}}}_2({\mathbb{Z}})\backslash\mathbb H$ is at $z={\mathfrak{z}}$ with principal part $4\omega_{{\mathfrak{z}}}\log(r_{{\mathfrak{z}}}(z))$.
To construct ${\mathbb{G}}_{{\mathfrak{z}}}$ and ${\mathbb{J}}_n$, we find natural preimages of certain polyharmonic Maass forms, such as ${\mathbbm{g}}_{{\mathfrak{z}}}$, under the $\xi$-operator. For this we study the Laurent expansions of eigenfunctions under $\Delta_\kappa$.
\[lem:Laurent\]
Suppose that $z\mapsto f(z,s)$ is an eigenfunction with eigenvalue $(s-\frac{\kappa}{2})(1-s-\frac{\kappa}{2})$ under $\Delta_{\kappa}$ and that $s\mapsto f(z,s)$ is meromorphic. Then for $s$ close to $1-\frac{\kappa}{2}$, we have the Laurent expansion $$f(z,s)=\sum_{m\gg -\infty} f_{m}(z)\left(s+\tfrac{\kappa}{2}-1\right)^{m}.$$ The coefficients $f_{m}$ have the following properties:
1. We have $$\Delta_{\kappa}\!\left(f_{m}\right) = (\kappa-1)f_{m-1}-f_{m-2}.$$
2. If $s\mapsto f(z,s)$ is holomorphic at $s=1-\frac{\kappa}{2}$, then $f_0$ (resp. $f_1$) is annihilated by $\Delta_{\kappa}$ (resp. $\Delta_{\kappa}^2$).
3. If $ z \mapsto f(z,1-\frac{\kappa}{2})$ vanishes identically, then $f_1$ is annihilated by $\Delta_{\kappa}$.
4. We have $$\xi_{2-\kappa}\left(\operatorname{CT}_{s=1-\frac{\kappa}{2}} \!\left(\frac{\partial}{\partial s} \xi_{\kappa}(f(z,\overline{s}))\right)\right) = (1-\kappa) \operatorname{CT}_{s=1-\frac{\kappa}{2}}\!\left(f(z,s)\right) +\operatorname{Res}_{s=1-\frac{\kappa}{2}}\!\left(f(z,s)\right).$$
\(1) The claim follows from the eigenfunction property by comparing coefficients in the Laurent expansions on both sides of the eigenvalue equation $\Delta_{\kappa}\!\left(f(z,s)\right)=\left(s-\tfrac{\kappa}{2}\right)\left(1-s-\tfrac{\kappa}{2}\right)f(z,s)$.\
(2) By (1) we have $$\label{eqn:Delf0f1}
\Delta_{\kappa}\!\left(f_{0}\right) = (\kappa-1) f_{-1}-f_{-2},\qquad \Delta_{\kappa}\!\left(f_{1}\right) = (\kappa-1)f_0-f_{-1}.$$ Since $s\mapsto f(z,s)$ is holomorphic at $s=1-\frac{\kappa}{2}$, we have that $f_{\ell}=0$ for $\ell<0$, yielding the claim.\
(3) The claim follows immediately from and the fact that $f(z,1-\frac{\kappa}{2})=f_0(z)$ if $f(z,s)$ is holomorphic at $s=1-\frac{\kappa}{2}$.\
(4) Noting that $$f_1(z)=\operatorname{CT}_{s=1-\frac{\kappa}{2}} \left(\frac{\partial}{\partial s} f(z,s)\right),\quad
f_0(z)=\operatorname{CT}_{s=1-\frac{\kappa}{2}} \!\left(f(z,s)\right),\quad
f_{-1}(z)=\operatorname{Res}_{s=1-\frac{\kappa}{2}} \!\left(f(z,s)\right),$$ it is not hard to conclude the statement using and the fact that $1-\frac{\kappa}{2}$ is real so that $$\operatorname{CT}_{\overline{s}=1-\frac{\kappa}{2}}\left(\frac{\partial}{\partial\overline{s}}\xi_{\kappa}(f(z,s))\right)=\operatorname{CT}_{s=1-\frac{\kappa}{2}}\left(\frac{\partial}{\partial s}\xi_{\kappa}(f(z,\overline{s}))\right). \qedhere$$
We are now ready to construct the functions ${\mathbb{G}}_{{\mathfrak{z}}}$ and ${\mathbb{J}}_n$. Namely, we set $$\begin{aligned}
\label{eqn:GGdef}
{\mathbb{G}}_{{\mathfrak{z}}}(z)&:=\frac{1}{2}
{\mathbb{G}}_{{\mathfrak{z}},1}(z)+ \frac{\pi}{6} \left(\overline{\mathcal{E}({\mathfrak{z}})}-\overline{\mathcal{C}}-12\right)\widehat{E}_2(z), \ \text{where} \ {\mathbb{G}}_{{\mathfrak{z}},s}(z):=\frac{\partial}{\partial s}\xi_0\!\left(G_{\overline{s}}(z,{\mathfrak{z}})\right),\\
\label{eqn:scrJndef}
{\mathbb{J}}_n(z)&:= -\pi\sqrt{n}\mathcal{F}_{-n}(z,1) +\frac{\overline{\mathcal{C}_n}}{12}\mathbb{E}(z) + a_n \widehat{E}_2(z),\end{aligned}$$ where $$\begin{aligned}
\mathbb{E}(z)&:=4\pi \left[\frac{\partial}{\partial s} \xi_0(E(z,\overline{s}))\right]_{s=1}+\frac{\pi}{3}\left(\overline{\mathcal{C}}-12\right) \widehat{E}_{2}(z)\notag,\\
\label{eqn:mathcalFdef}
\mathcal{F}_{-n}(z,s)&:=\frac{\partial^2}{\partial s^2} \xi_0\!\left(F_{-n}(z,\overline{s})\right)-2\frac{\partial}{\partial s} \xi_0\!\left(F_{-n}(z,\overline{s})\right)\end{aligned}$$ with $\mathcal{C}$ and $\mathcal{C}_n$ given in and , respectively, and with $a_n$ determined below in Subsection \[sec:FourierConstruction\]. Also define auxiliary functions $\mathcal{G}_{{\mathfrak{z}}}:=\xi_0({\mathbbm{g}}_{{\mathfrak{z}}})$, ${\mathbbm{j}}_n:=\xi_2({\mathbb{J}}_n)$, and $\mathcal{J}_n:=-\xi_0({\mathbbm{j}}_n)$ so that is implied by $$\begin{aligned}
\label{eqn:HGxi}\xi_{2}\!\left({\mathbb{G}}_{{\mathfrak{z}}}\right) &= {\mathbbm{g}}_{{\mathfrak{z}}},& \xi_{0}\!\left({\mathbbm{g}}_{{\mathfrak{z}}}\right) &= \mathcal{G}_{{\mathfrak{z}}},&\xi_{2}\!\left(\mathcal{G}_{{\mathfrak{z}}}\right)&=-6,\\
\label{eqn:Jxi}\xi_{2}\!\left({\mathbb{J}}_n\right)&={\mathbbm{j}}_{n},&\xi_{0}\!\left({\mathbbm{j}}_{n}\right)&=-{\mathcal{J}}_n,&\xi_{2}\!\left({\mathcal{J}}_n\right)&=j_n.\end{aligned}$$
Note that throughout the paper, we are using uppercase blackboard for depth four, lowercase blackboard for depth three, uppercase script for depth two, and standard letters for depth one. The exception to this rule is $\widehat{E}_2$, whose notation is standard; the analogous notation for the Eisenstein series may be found by comparing the functions in vertically.
\[lem:GJxi\] The functions ${\mathbb{G}}_{{\mathfrak{z}}}$ and ${\mathbb{J}}_n$ satisfy and we have $$\label{eqn:xiboldE}
\xi_{2}(\mathbb{E})=\mathcal{E}.$$
Lemma \[lem:Laurent\] (4) with $f(z,s)=E(z,s)$ and yields that $$4 \pi \xi_{2}\!\left(\left[\frac{\partial}{\partial s} \xi_0(E(z,\overline{s}))\right]_{s=1}\right)=\mathcal{E}(z)-\mathcal{C}+12.$$ Combining this with Lemma \[lem:Eprop\] (3) yields .
Plugging $f(z,s)=G_s(z,{\mathfrak{z}})$ into Lemma \[lem:Laurent\] (4) and using Lemma \[lem:Greensprop\] (1), we see that $$\begin{aligned}
\notag
\xi_2\!\left({\mathbb{G}}_{{\mathfrak{z}},1}(z)\right) &= \operatorname{CT}_{s=1}\!\left(G_{s}(z,{\mathfrak{z}})\right)+ \operatorname{Res}_{s=1} (G_s(z, {\mathfrak{z}}))=\lim_{s\to 1}\!\left(G_s(z,{\mathfrak{z}}) +\tfrac{12}{s-1}\right) -12\\
&= 2{\mathbbm{g}}_{{\mathfrak{z}}}(z)-\mathcal{E}({\mathfrak{z}})+\mathcal{C}+12,\label{eqn:ggzrewrite}\end{aligned}$$ by and . To show the identity for ${\mathbb{J}}_{n}$ in , first note that if $s \mapsto f(z,s)$ is holomorphic at $s=1$ and $z\mapsto f(z,s)$ is real-differentiable, then $$\begin{aligned}
\label{xs}
\left[\xi_\kappa\!\left(f(z,\overline{s})\right)\right]_{s=1}=\xi_\kappa\!\left(f(z,1)\right).\end{aligned}$$ Using this and then interchanging the $\xi$-operator with differentiation in $s$ and recalling that $F_{-n}(z,s)$ is an eigenfunction under $\Delta_0$ with eigenvalue $s(1-s)$, we conclude that $$\label{eqn:xiJJn}
\xi_{2}\!\left(\left[\frac{\partial^2}{\partial s^2} \xi_0\!\left(F_{-n}(z,\overline{s})\right)\right]_{s=1}\right) = 2F_{-n}(z,1)+2\left[\frac{\partial}{\partial s}F_{-n}(z,s)\right]_{s=1}.$$ Applying Lemma \[lem:Laurent\] (4) with $f(z,s)=F_{-n}(z,s) $, we furthermore have $$\label{eqn:xiFn1}
\xi_2\!\left(\left[\frac{\partial}{\partial s} \xi_0\!\left(F_{-n}(z,\overline{s})\right)\right]_{s=1}\right)=F_{-n}(z,1).$$ Combining and with the definition then gives $$\label{eqn:calFxi}
\xi_{2}\!\left(\mathcal{F}_{-n}(z,1)\right)=2\left[\frac{\partial}{\partial s}F_{-n}(z,s)\right]_{s=1}.$$ Applying $\xi_0$ to and pulling the $\xi$-operator inside, we conclude that $$\label{eqn:DeltaJJn}
\xi_0\circ\xi_2\!\left(\mathcal{F}_{-n}(z,1)\right)=2\left[\frac{\partial}{\partial s}\xi_0\!\left( F_{-n}(z,\overline{s})\right)\right]_{s=1}.$$ Applying $\xi_2$ to we then obtain from and that $$\xi_{2}\circ \xi_0\circ\xi_2\!\left(\mathcal{F}_{-n}(z,1)\right)=2F_{-n}(z,1)= \left(\pi\sqrt{n}\right)^{-1}\!\left(j_n(z)+ \mathcal{C}_n\right).$$ The claim then follows, using , Lemma \[lem:Eprop\] (4), and Lemma \[lem:Eprop\] (3).
Combining and together with , Lemma \[lem:Eprop\] (3), and Lemma \[lem:Eprop\] (4) yields the following: $$\label{eqn:diagram}\begin{split}
\xymatrix{
{\mathbb{G}}_{{\mathfrak{z}}}\ar[r]^{\xi_{2}}&{\mathbbm{g}}_{{\mathfrak{z}}}\ar[r]^{\xi_{0}}&\mathcal{G}_{{\mathfrak{z}}}\ar[r]^{\xi_{2}}&-6 ,\\
{\mathbb{J}}_n\ar[r]^{\xi_{2}}&{\mathbbm{j}}_{n}\ar[r]^{\xi_{0}}&-{\mathcal{J}}_{n}\ar[r]^{\xi_{2}}&-j_n
,\\
\mathbb{E}\ar[r]^{\xi_{2}}&\mathcal{E}\ar[r]^{\xi_{0}}&4\pi \widehat{E}_2\ar[r]^{\xi_{2}}&12.
}
\end{split}$$
In order to determine the principal parts of the polyharmonic Maass forms defined in and , we require the Taylor expansions of $\varphi_{m,s}$ and $\xi_0(g_s(z,{\mathfrak{z}}))$ around $s=1$; note that the principal parts of the analytic continuations to $s=1$ come from the values at $s=1$ of the corresponding seeds of the Poincaré series. We compute the first two coefficients of the Taylor expansion of the seeds in the following lemma.
\[lem:diffseeds\] Assume that $n\in{\mathbb{N}}$.
1. We have $$\begin{aligned}
\varphi_{-n,s}(z)=f_{-n,0}(z) + f_{-n,1}(z)(s-1)+O\!\left((s-1)^2\right),\end{aligned}$$ where $$\begin{aligned}
f_{-n,0}(z)&:=\frac{1}{2\pi\sqrt{n}}\left(q^{-n}-W_0(-2\pi ny)q^{-n}\right),\\
f_{-n,1}(z)&:=-\frac{1}{2\pi\sqrt{n}}\left(2\bm{{\mathcal{W}}}_0(-2\pi ny) q^{-n}+E_1(4\pi ny)q^{-n} \right).\end{aligned}$$
2. We have $$\label{eqn:xigsLaurent}
\xi_0\!\left(g_s(z,{\mathfrak{z}})\right)=\mathfrak{g}_{{\mathfrak{z}},0}(z) + \mathfrak{g}_{{\mathfrak{z}},1}(z)(s-1)+O\!\left((s-1)^2\right),$$ where $$\begin{aligned}
\mathfrak{g}_{{\mathfrak{z}},0}(z)&:=\xi_0\!\left(g_{1}(z,{\mathfrak{z}})\right),& &\\ \mathfrak{g}_{{\mathfrak{z}},1}(z)&:=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-2}B(r_{{\mathfrak{z}}}(z)) X_{{\mathfrak{z}}}^{-1}(z),
&\textnormal{with }B(r)&:= \tfrac{4r^2}{\left(1-r^2\right)^2} \left[\frac{\partial}{\partial s} \frac{\partial}{\partial w}Q_{s-1}(w)\right]_{\substack{\hspace{-12pt}s=1,\\w=\tfrac{1+r^2}{1-r^2}}}.\end{aligned}$$ Moreover, we have $$\label{eqn:rlim}
\lim_{r\to 0^+} B(r)=0.$$
\(1) Since $s \mapsto \varphi_{-n,s}(z)$ is holomorphic at $s=1$, we have a Taylor expansion of the shape ; we next explicitly determine the Taylor coefficients. We obtain $f_{-n,0}(z)= (2\pi\sqrt{n})^{-1}(q^{-n}-\overline{q}^{n})$. Evaluating $I_{\frac12}(w) = \frac{1}{\sqrt{2\pi w}}(e^w-e^{-w})$, the claim for $f_{-n,0}$ then follows by noting that $\Gamma(1,w)=e^{-w}$ for $w>0$ and using Lemma \[lem:relationsofnonhol\] to evaluate $\overline{q}^{n}=W_0(-2\pi ny)q^{-n}$.
To determine $f_{-n,1}$, we observe that by definition $$f_{-n,1}(z)=e^{-2\pi i nx} y^{\frac{1}{2}} \left[\frac{\partial}{\partial s}I_{s-\frac12}(2\pi n y)\right]_{s=1}.$$ Using [@NIST 10.38.6], we obtain that $$\left[\frac{\partial}{\partial s}I_{s-\frac12}(w)\right]_{s=1} = -(2\pi w)^{-\frac{1}{2}} \left(E_1(2w) e^w + {\operatorname{Ei}}(2w)e^{-w}\right).$$ Hence, plugging in $w=2\pi ny$, we obtain $$f_{-n,1}(z)=-\left(2\pi\sqrt{n}\right)^{-1} e^{-2\pi i nx}\left(E_1(4\pi ny) e^{2\pi ny} + {\operatorname{Ei}}(4\pi ny)e^{-2\pi ny}\right).$$ Using Lemma \[lem:relationsofnonhol\], one sees that for $w>0$ $$\begin{aligned}
{\operatorname{Ei}}(w) &= W_2\left(\tfrac{w}{2}\right)+w^{-1}e^w.\end{aligned}$$ Applying integration by parts to the definition of $\bm{{\mathcal{W}}}_0$, this implies $${\operatorname{Ei}}(4\pi n y)e^{-4\pi ny}q^{-n}=2\bm{{\mathcal{W}}}_0(-2\pi n y)q^{-n},$$ from which we conclude the claim.
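As an aside, the two Bessel-function identities used in this part (the closed form of $I_{\frac12}$ and the order-derivative formula from [@NIST 10.38.6]) can be sanity-checked numerically; the following short sketch, which is not part of the proof, does so with the mpmath library.

```python
# Numerical sanity check (not part of the proof) of the two Bessel identities
# used above, carried out with mpmath.
from mpmath import mp, mpf, besseli, e1, ei, exp, sqrt, pi, diff

mp.dps = 30
w = mpf("2.7")  # any w > 0

# I_{1/2}(w) = (e^w - e^{-w}) / sqrt(2*pi*w)
print(besseli(mpf("0.5"), w) - (exp(w) - exp(-w)) / sqrt(2*pi*w))

# [d/ds I_{s-1/2}(w)]_{s=1} = -(2*pi*w)^{-1/2} * (E_1(2w) e^w + Ei(2w) e^{-w})
lhs = diff(lambda nu: besseli(nu, w), mpf("0.5"))
rhs = -(2*pi*w)**mpf("-0.5") * (e1(2*w)*exp(w) + ei(2*w)*exp(-w))
print(lhs - rhs)  # both printed differences are negligibly small
```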
\(2) Define $$\label{eqn:frakgdef}
\mathfrak{g}_s(z,{\mathfrak{z}}):=\frac{\partial}{\partial s}\xi_0\!\left(g_{s}(z,{\mathfrak{z}})\right).$$ By Lemma \[lem:Greensprop\] (1), the $(s-1)^{-1}$ term in the Laurent expansion of $g_s$ is constant as a function of $z$, and hence annihilated by $\xi_0$. Thus is equivalent to showing that $\mathfrak{g}_1(z,{\mathfrak{z}})=\mathfrak{g}_{{\mathfrak{z}},1}(z)$. The chain rule yields $$\label{eqn:frakg}
\mathfrak{g}_1(z,{\mathfrak{z}})=-2\left[\frac{\partial}{\partial s} \frac{\partial}{\partial w}Q_{s-1}(w)\right]_{s=1,\, w=1+\frac{|z-{\mathfrak{z}}|^2}{2y{\mathbbm{y}}}} \xi_{0}\left(1+\frac{|z-{\mathfrak{z}}|^2}{2y{\mathbbm{y}}}\right).$$ A direct computation gives $$\begin{aligned}
\label{eqn:xicoshr}
\xi_0\!\left(1+\tfrac{|z-{\mathfrak{z}}|^2}{2y{\mathbbm{y}}}\right)=-2\left(\tfrac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-2} \frac{r_{{\mathfrak{z}}}^2(z)}{\left(1-r_{{\mathfrak{z}}}^2(z)\right)^2} X_{{\mathfrak{z}}}^{-1}(z).\end{aligned}$$ Since $$\label{eqn:coshr}
1+\tfrac{|z-{\mathfrak{z}}|^2}{2y{\mathbbm{y}}}=\cosh(d(z,{\mathfrak{z}})) = \tfrac{2}{1-r_{{\mathfrak{z}}}^2(z)}-1 = \tfrac{1+r_{{\mathfrak{z}}}^2(z)}{1-r_{{\mathfrak{z}}}^2(z)},$$ we conclude that $\mathfrak{g}_1(z,{\mathfrak{z}})=\mathfrak{g}_{{\mathfrak{z}},1}(z)$, establishing .
To evaluate the limit , we use (see [@GrossZagier Section II, (2.5)]) $$Q_{s-1}(w)=\int_{0}^{\infty}\left(w+\sqrt{w^2-1}\cosh(u)\right)^{-s} du.$$ It is then not hard to compute $$\begin{gathered}
\label{Qdiff}
B(r)=-4 r\int_0^\infty \frac{1+\log\left(1-r^2\right)-\log\left(1+r^2+2r\cosh(u)\right)}{\left(1+re^u\right)^2\left(1+re^{-u}\right)^2} \left(r+\frac{\left(1+r^2\right)}{2}\cosh(u)\right)du.\end{gathered}$$ We next determine the limit of this expression as $r\to 0^+$. By evaluating $$\int_{0}^{\infty} \frac{r+\frac{1+r^2}{2}\cosh(u)}{\left(1+re^u\right)^2\left(1+re^{-u}\right)^2} du = \frac{1}{4r},$$ one can show that the limit of as $r\to 0^+$ equals $$\lim_{r\to 0^+} \left( r \int_{0}^\infty \frac{\log\left(1+r e^u\right)}{\left(1+re^u\right)^2} e^u du\right)-1.$$ The claim then follows since, substituting $v=re^{u}$, we have $r \int_{0}^\infty \frac{\log\left(1+r e^u\right)}{\left(1+re^u\right)^2} e^u\, du=\int_{r}^{\infty} \frac{\log(1+v)}{(1+v)^2}\, dv=\frac{1+\log(1+r)}{1+r}\to 1$ as $r\to 0^+$.
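For readers who want a quick independent check of the statement $\lim_{r\to 0^+}B(r)=0$, the following mpmath computation (again not part of the argument) evaluates $B(r)$ directly from the integral formula above for a few small values of $r$.

```python
# Quick numerical check (not part of the argument) that B(r) tends to 0 as
# r -> 0^+, evaluating B(r) directly from the integral formula above.
from mpmath import mp, mpf, log, cosh, exp, quad, inf

mp.dps = 20

def B(r):
    r = mpf(r)
    def integrand(u):
        num = 1 + log(1 - r**2) - log(1 + r**2 + 2*r*cosh(u))
        den = (1 + r*exp(u))**2 * (1 + r*exp(-u))**2
        return num / den * (r + (1 + r**2)/2 * cosh(u))
    return -4*r*quad(integrand, [0, inf])

for r in ("0.1", "0.01", "0.001"):
    print(r, B(r))  # the printed values shrink towards 0
```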
Fourier expansions {#sec:FourierElliptic}
==================
In this section, we investigate the shape of the Fourier expansions of biharmonic Maass forms.
Fourier expansions of sesquiharmonic Maass forms
------------------------------------------------
The following shapes of Fourier expansions for sesquiharmonic Maass forms follow by Lemmas \[lem:xiFourier\] and \[lem:Wgrowth\].
\[lem:sesquiFourier\] If $\mathcal{M}$ is translation-invariant, sesquiharmonic of weight $\kappa\in{\mathbb{Z}}\setminus\{1\}$, and grows at most linear exponentially at $i\infty$, then for $y\gg0$ we have $\mathcal{M}=\mathcal{M}^{++}+\mathcal{M}^{+-}+\mathcal{M}^{--}$, where $$\begin{aligned}
\mathcal{M}^{++}(z)&:=\sum_{m\gg -\infty} c_{\mathcal{M}}^{++}(m) q^m,\\
\mathcal{M}^{+-}(z)&:=c_{\mathcal{M}}^{+-}(0) y^{1-\kappa}+\sum_{\substack{m\ll \infty\\ m\neq 0}}c_{\mathcal{M}}^{+-}(m)
W_\kappa(2\pi my)q^m,\\
\mathcal{M}^{--}(z)&:=c_{\mathcal{M}}^{--}(0)\log(y)+\sum_{\substack{m\gg -\infty\\ m\neq 0}} c_{\mathcal{M}}^{--}(m)\bm{{\mathcal{W}}}_{\kappa}(2\pi my) q^m.\end{aligned}$$ Moreover, $\mathcal{M}$ is harmonic if and only if $\mathcal{M}^{--}(z)=0$.
Fourier expansions of biharmonic Maass forms
--------------------------------------------
A direct calculation gives the following shape of the constant term of the biharmonic part of the Fourier expansion.
\[lem:Fourierconstant\] The constant term of the Fourier expansion of a weight $\kappa\in{\mathbb{Z}}\setminus\{1\}$ biharmonic Maass form $F$ has the shape $$\label{eqn:Fourierconstant}
c_{F}^{+++}(0)+ c_{F}^{++-}(0) y^{1-\kappa} + c_{F}^{+--}(0) \log(y)+ c_{F}^{---}(0)y^{1-\kappa} \left(1+(\kappa-1)\log(y)\right).$$ Moreover, we have $$\xi_{\kappa}\!\left(y^{1-\kappa} (1+(\kappa-1)\log(y))\right)=-(\kappa-1)^2\log(y).$$
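The displayed action of $\xi_{\kappa}$ can be verified by a one-line symbolic computation, recalling the standard definition $\xi_{\kappa}=2iy^{\kappa}\overline{\frac{\partial}{\partial\overline{z}}}$, which on real-valued functions of $y$ alone reduces to $y^{\kappa}\frac{\partial}{\partial y}$; the following sympy snippet is an illustrative aside, not part of the paper.

```python
# Symbolic check of the identity above, using that xi_kappa acts on
# real-valued functions of y alone as y^kappa * d/dy.
import sympy as sp

y = sp.symbols('y', positive=True)
kappa = sp.symbols('kappa', real=True)

f = y**(1 - kappa) * (1 + (kappa - 1)*sp.log(y))
print(sp.simplify(y**kappa * sp.diff(f, y)))  # simplifies to -(kappa - 1)**2*log(y)
```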
Fourier expansions of the functions from Section \[sec:construction\] {#sec:FourierConstruction}
---------------------------------------------------------------------
We now determine the shapes of the Fourier expansions of the functions from Section \[sec:construction\]. For this, we complete the definition by fixing $a_n$. Specifically, since Lemma \[lem:Eprop\] (3) implies that $\xi_{2}(a_n \widehat{E}_2)=\frac{3}{\pi}\overline{a_n}$ is a constant, we may choose $a_n$ so that the constant term of the holomorphic part of the Fourier expansion of ${\mathbbm{j}}_n$ vanishes for $n\neq 0$ and the constant term is $1$ for $n=0$. For $n=0$ we must verify that the holomorphic part of the constant term is indeed equal to $1$ in the explicit formula ${\mathbbm{j}}_0(z)=\frac{1}{6}\log(y^6|\Delta(z)|)+1$. For this, we use the product expansion of $\Delta$ to show that as $y\to\infty$ $$\label{eqn:logDeltaFourier}
\frac{1}{6}\log\left(y^6|\Delta(z)|\right)+1=
1- \frac{\pi}{3}y+\log(y)+o(1).$$
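As an illustration (not needed for the argument), the asymptotic above can be observed numerically from the product expansion $\Delta(z)=q\prod_{n\geq 1}(1-q^n)^{24}$ evaluated at $z=iy$.

```python
# Numerical illustration of the asymptotic above, using the product
# expansion Delta(z) = q * prod_{n >= 1} (1 - q^n)^24 at z = iy.
import math

def log_abs_delta(y, terms=200):
    # log|Delta(iy)| = -2*pi*y + 24 * sum_{n>=1} log(1 - e^{-2*pi*n*y})
    s = -2*math.pi*y
    for n in range(1, terms + 1):
        s += 24*math.log(1 - math.exp(-2*math.pi*n*y))
    return s

for y in (1.0, 2.0, 4.0):
    lhs = (6*math.log(y) + log_abs_delta(y))/6 + 1
    rhs = 1 - math.pi*y/3 + math.log(y)
    print(y, lhs - rhs)  # the difference (the o(1) term) tends to 0 as y grows
```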
\[lem:Fourierexps\]
1. There exist $c_{j_n}(m), c_{{\mathcal{J}}_n}^{+}(m), c_{{\mathcal{J}}_n}^{-}(m), c_{{\mathbbm{j}}_n}^{++} (m), c_{{\mathbbm{j}}_n}^{+-}(m)$, and $c_{{\mathbbm{j}}_n}^{--}(m)\in{\mathbb{C}}$ such that $$\begin{aligned}
j_n(z)&=q^{-n}+\sum_{m\geq 1} c_{j_n}(m) q^m, \\
{\mathcal{J}}_{n}(z)&=\sum_{m\geq 0} c_{{\mathcal{J}}_{n}}^+(m)q^m +4\pi n
\delta_{n\neq 0}
W_2(2\pi ny)q^n
-\delta_{n=0} \frac{1}{y}+ \sum_{m\leq-1} c_{{\mathcal{J}}_{n}}^-(m)W_2(2\pi my)q^m,\\
{\mathbbm{j}}_n(z)&=\delta_{n=0}+\sum_{m\geq 1} c_{{\mathbbm{j}}_n}^{++}(m)q^m + c_{{\mathbbm{j}}_n}^{+-}(0) y+ \sum_{m\leq -1} c_{{\mathbbm{j}}_n}^{+-}(m)W_{0}(2\pi my) q^m
\notag\\
&\hspace{2.15cm}
+\delta_{n=0}\log(y) +2\delta_{n\neq 0}\bm{{\mathcal{W}}}_{0}(-2\pi ny) q^{-n} +\sum_{m \geq 1} c_{{\mathbbm{j}}_n}^{--}(m)\bm{{\mathcal{W}}}_{0}(2\pi my)
q^m.\end{aligned}$$ Here $\delta_S:=1$ if some statement $S$ is true and $0$ otherwise.
2. There exist constants $c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{++}(m)$, $c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{+-}(m)$, $c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(m)$, $c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{++-}(m)$, and $c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+--}(m)\in{\mathbb{C}}$ such that for $y$ sufficiently large $$\begin{aligned}
\mathcal{G}_{{\mathfrak{z}}}(z)&= - 4 \pi \sum_{m \geq 1 } m\overline{c^{+-}_{{\mathbbm{g}}_{{\mathfrak{z}}}}(-m)} q^m + \frac{6}{y}, \\
{\mathbbm{g}}_{{\mathfrak{z}}}(z) &= \sum_{m\geq 1} c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{++}(m) q^m +\sum_{m \leq -1} c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{+-}(m)W_0(2\pi my)q^m+6\log(y),\\
{\mathbb{G}}_{{\mathfrak{z}}}(z)&=\sum_{m\geq 0} c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(m)q^m + \sum_{m\leq -1} c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{++-}(m)
W_{2}(2\pi my) q^m \notag\\
& \hspace{4.0cm}-\frac{6}{y}\left(1+\log(y)\right)+\sum_{m \geq 1}c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+--}(m)\bm{{\mathcal{W}}}_{2}(2\pi my)q^m.\end{aligned}$$
3. There exist constants $c_{\mathbb{E}}^{+++} (m)$, $c_{\mathbb{E}}^{++-}(m)$, $c_{\mathbb{E}}^{+--}(m)$, $c_{\mathcal{E}}^{++}(m)$, and $c_{\mathcal{E}}^{+-}(m)\in{\mathbb{C}}$ such that $$\begin{aligned}
\xi_0(\mathcal E(z))&=4\pi \widehat{E}_2(z) = 4\pi \sum_{m\geq 0} c_{E_2} (m) q^m -\frac{12}{y},\\
\mathcal E(z)& = \sum_{m\geq 1 } c_{\mathcal E}^{++}(m) q^m+ 4\pi y +\sum_{m\leq -1}c_{\mathcal E}^{+-}(m) W_{0}(2\pi my) q^m- 12\log(y),\\
\nonumber \mathbb E(z) &= \sum_{m\geq 0} c_{\mathbb E}^{+++} (m) q^m + \sum_{m\leq -1} c_{\mathbb E}^{++-}(m) W_2(2\pi my)q^m \\&\hspace{3.2cm}+ 4\pi \log(y) +\sum_{m\geq 1} c_{\mathbb E}^{+--}(m)\bm{{\mathcal{W}}}_{2}(2\pi my)q^m+ \frac{12}{y}(1+\log(y)).
$$
\(1) Since the expansion for $j_n$ is well-known, it is enough to show the expansion for ${\mathbbm{j}}_n$. The expansion for ${\mathcal{J}}_n$ then follows by applying $\xi_0$, employing and Lemma \[lem:xiFourier\]. We now use , then apply $\frac{\partial}{\partial s}$ to the Fourier expansion of $F_{-n}(z,s)$ given in [@Niebur Theorem 1], and employ Lemma \[lem:diffseeds\] (1) to determine the contribution to the principal part from the first term in . Combining this with and , we see that the principal part of ${\mathbbm{j}}_n$ is the growing part of $$\label{eqn:growthjjn}
c_{{\mathbbm{j}}_n}^{+-}(0) y+\delta_{n=0}\log(y)+\delta_{n\neq 0}\left(2\bm{{\mathcal{W}}}_0(-2\pi ny)q^{-n}+E_1(4\pi ny)q^{-n}\right).$$ However, by and the asymptotic growth of the incomplete gamma function [@NIST 8.11.2], $E_1(4\pi ny)q^{-n}$ decays exponentially as $y\to\infty$ and thus does not contribute to the principal part. The constant term of the holomorphic part of ${\mathbbm{j}}_n$ is determined by the choice of $a_n$. In the special case $n=0$, the evaluation $c_{{\mathbbm{j}}_0}^{+-}(0)=-\frac{\pi}{3}$ implies that matches .
\(2) We first claim that ${\mathbbm{g}}_{{\mathfrak{z}}}(z)-6\log(y)$ vanishes as $y\to\infty$. By [@GrossZagier Section II, (2.19)] we have $$\label{eqn:Gs*lim}
G_{s}(z,{\mathfrak{z}}) = \frac{4\pi}{1-2s}y^{1-s} E({\mathfrak{z}},s)+O_s\!\left(e^{-y}\right) \quad (\text{as}\ y \to \infty),$$ where the error has no pole at $s=1$. Combining with and implies that $$\label{eqn:aG1++(0)}
{\mathbbm{g}}_{{\mathfrak{z}}}(z)=\lim_{s\to 1} \left( 2\pi \left(\frac{y^{1-s}}{1-2s}+1\right)E({\mathfrak{z}},s)\right)-12
+O\!\left(e^{-y}\right)\qquad\textnormal{as }y\to\infty.$$ From [@GrossZagier p. 241, second displayed formula], we have $$2\pi E({\mathfrak{z}},s)=\frac{6}{s-1}+ O(1), \qquad 1+\frac{y^{1-s}}{1-2s}=\left(\log(y)+2\right)(s-1)+O_y\left((s-1)^2\right),$$ and hence the right-hand side of equals $6\log(y)+O(e^{-y})$. We conclude from that ${\mathbbm{g}}_{{\mathfrak{z}}}(z)-6\log(y)$ vanishes as $y\to\infty$.
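The second of the two Taylor expansions just quoted can also be checked symbolically; the following sympy snippet (an aside, not part of the proof) expands $1+\frac{y^{1-s}}{1-2s}$ after writing $s=1+\varepsilon$.

```python
# Symbolic check of the Taylor expansion of 1 + y^(1-s)/(1-2s) around s = 1,
# written in the variable eps with s = 1 + eps.
import sympy as sp

eps = sp.symbols('eps')
y = sp.symbols('y', positive=True)

expr = 1 + y**(-eps)/(-1 - 2*eps)        # equals 1 + y^(1-s)/(1-2s) at s = 1 + eps
print(sp.series(expr, eps, 0, 2))        # eps*(log(y) + 2) + O(eps**2)
```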
Since the sesquiharmonic part of ${\mathbbm{g}}_{{\mathfrak{z}}}$ is $6\log(y)$ by Lemma \[lem:Greensprop\] (3), the expansion for ${\mathbbm{g}}_{{\mathfrak{z}}}$ follows from the expansion in Lemma \[lem:sesquiFourier\] by comparing the asymptotics in Lemma \[lem:Wgrowth\] with . The expansion for $\mathcal{G}_{{\mathfrak{z}}}$ then follows by applying $\xi_0$ and using Lemma \[lem:xiFourier\].
Finally, by explicitly computing a pre-image under $\xi_2$ of the sesquiharmonic part of ${\mathbbm{g}}_{{\mathfrak{z}}}$, we conclude that the biharmonic part of the expansion of ${\mathbb{G}}_{{\mathfrak{z}}}(z)$ is $-6(1+\log(y))y^{-1}$. Subtracting this from ${\mathbb{G}}_{{\mathfrak{z}}}$ yields a sesquiharmonic function which is bounded as $y\to \infty$ by and . We may thus use Lemma \[lem:sesquiFourier\] and note the asymptotics in Lemma \[lem:Wgrowth\] to compute the shape of the rest of the expansion.
\(3) The fact that $\xi_0(\mathcal E)=4\pi\widehat{E}_2$ follows from Lemma \[lem:Eprop\] (4). Using Lemma \[lem:Wgrowth\], we obtain the claim for $\mathcal{E}$ directly from . Noting the relationship between $\mathbb{E}$ and $\mathcal{E}$ together with $\xi_2(y^{-1})=-1$ and $\xi_2(\log(y))=y$, the constant term of $\mathbb{E}$ is then obtained by Lemma \[lem:Fourierconstant\]. After subtracting this term, the remaining function is sesquiharmonic and we obtain the Fourier expansion of $\mathbb{E}$ by Lemma \[lem:sesquiFourier\] and Lemma \[lem:Wgrowth\].
Elliptic expansions
===================
Elliptic expansions of polyharmonic Maass forms
-----------------------------------------------
For a weight $\kappa$ non-holomorphic modular form $F$, the
elliptic expansion
around the point ${\mathfrak{z}}\in{\mathbb{H}}$ is the unique expansion of the type $$F(z)=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa}\sum_{m\in{\mathbb{Z}}} c_{F,{\mathfrak{z}}}\!\left(r_{{\mathfrak{z}}}(z);m\right) X_{{\mathfrak{z}}}^m(z), \qquad r_{\mathfrak{z}}(z) \ll 1,$$ where $r_{{\mathfrak{z}}}$ and $X_{{\mathfrak{z}}}$ are defined in . If $F$ is a polar polyharmonic Maass form of depth $\ell$, then $c_{F,{\mathfrak{z}}}(r;m)$ satisfies a differential equation with $\ell$ independent solutions for each $m\in{\mathbb{Z}}$. We choose a basis of solutions $B_{\kappa,j}(r;m)$ for $1\leq j\leq \ell$ such that $(z-\overline{{\mathfrak{z}}})^{-\kappa}B_{\kappa,j}(r_{{\mathfrak{z}}}(z);m)X_{{\mathfrak{z}}}^{m}(z)$ has depth $j$, i.e., $j\in{\mathbb{N}}_0$ is minimal with $$\xi_{\kappa,z}^{j}\!\left(
\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa}
B_{\kappa,j}\!\left(r_{{\mathfrak{z}}}(z);m\right)X_{{\mathfrak{z}}}^{m}(z)\right)=0.$$ We then write, iterating “$+$” $\ell-j$ times and “$-$” $j-1$ times, $$c_{F,{\mathfrak{z}}}\!\left(r_{{\mathfrak{z}}}(z);m\right) = \sum_{j=1}^{\ell}c_{F,{\mathfrak{z}}}^{+\ldots+-\ldots-}(m) B_{\kappa,j}\!\left(r_{{\mathfrak{z}}}(z);m\right).$$
A direct calculation gives the following lemma.
\[lem:xiellexp\] Let $g_m:(0,1) \to {\mathbb{R}}$ be a differentiable function and define $$f_m(z):=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa} g_m(r_{{\mathfrak{z}}}(z)) X_{{\mathfrak{z}}}^m(z).$$ Then the following hold.
1. For $\kappa\in {\mathbb{Z}}$, we have $$\xi_{\kappa}\!\left(f_m(z)\right)= -\frac{1}{2}\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{\kappa-2} \left(1-r_{{\mathfrak{z}}}^2(z)\right)^{\kappa}r_{{\mathfrak{z}}}^{2m+1}(z) g_{m}'\!\left(r_{{\mathfrak{z}}}(z)\right)X_{{\mathfrak{z}}}^{-m-1} (z),$$
2. The function $f_m$ has depth $j$ if and only if there exist $a_{\ell}\, (1 \leq \ell \leq j-1)$ with $a_{j-1}\neq 0$ for which $$g_m'(r) =\left(1-r^2\right)^{-\kappa}r^{-2m-1} \sum_{\ell=1}^{j-1} a_{\ell} B_{2-\kappa,\ell}(r;-m-1).$$
By Lemma \[lem:xiellexp\] (2), one may choose $B_{\kappa,j}(r;m)$ such that, for some $
C_m
\neq 0$, $$\label{eqn:Bxirel}
B_{\kappa,j}'(r;m)= C_m \left(1-r^2\right)^{-\kappa}r^{-2m-1} B_{2-\kappa,j-1}(r;-m-1).$$ This uniquely determines $r\mapsto B_{\kappa,j}(r;m)$ up to an additive constant. If $B_{\kappa,j}(r;m)$ satisfies and $f$ has depth $j$, then we say that $f$ has
pure depth
$j$ if there exist constants $c_{{\mathfrak{z}}}(m)\in{\mathbb{C}}$ such that $$f(z)=(z-\overline{{\mathfrak{z}}})^{-\kappa}\sum_{m\in {\mathbb{Z}}} c_{{\mathfrak{z}}}(m) B_{\kappa,j}(r;m) X_{{\mathfrak{z}}}^m(z).$$
Since any function $f$ of depth $\ell$ naturally (and uniquely) splits into $f=\sum_{j=1}^{\ell} f_j$ with $f_j$ of pure depth $j$, we call $f_j$ the
depth $j$ part of the elliptic expansion of $f$.
General elliptic expansions of sesquiharmonic Maass forms
---------------------------------------------------------
We next explicitly choose a basis of functions $B_{\kappa,j}(r;m)$ in the special case that $F$ is sesquiharmonic. For this we define $$\begin{aligned}
\label{eqn:B23def}
B_{\kappa,2}(r;m)&:=\beta_{t_0}(1-r^2;1-\kappa,-m),& B_{\kappa,3}(r;m)&:=\bm{\beta}_{\kappa-1,-m}(r),\qquad 0<t_0,r<1\end{aligned}$$ with $$\begin{aligned}
\beta_{t_0}(r;a,b)&:=-\int_{r}^{1-t_0} t^{a-1}(1-t)^{b-1}dt-\sum_{\substack{n\geq 0\\ n\neq -b}} \frac{(-1)^n}{n+b}\binom{a-1}{n} t_0^{n+b}-(-1)^b \delta_{a\in{\mathbb{N}}}\delta_{0\leq -b<a}\log\!\left(t_0\right),\\
\bm{\beta}_{a,b}(r)&:=-2\int_0^{r}t^{2b-1}\left(1-t^2\right)^{-a-1}\beta_{t_0}\!\left(1-t^2; a, 1-b\right)dt.\end{aligned}$$ It turns out that $\beta_{t_0}$ is independent of the choice of $t_0$ (see Lemma \[lem:betabnd\] below). We however leave the dependence in the notation to distinguish it from the incomplete beta function.
\[def:finorder\] We say that a function $\mathcal{M}$ has
finite order
at ${\mathfrak{z}}$ if there exists $m_0\in{\mathbb{R}}$ such that $r_{{\mathfrak{z}}}^{m_0}(z)\mathcal{M}(z)$ does not have a singularity at ${\mathfrak{z}}$.
\[lem:sesquiellexp\] If $\mathcal{M}$ is sesquiharmonic of weight $\kappa$ and has singularities of finite order, then it has an expansion of the type $\mathcal{M}=\mathcal{M}_{{\mathfrak{z}}}^{++} + \mathcal{M}_{{\mathfrak{z}}}^{+-}+\mathcal{M}_{{\mathfrak{z}}}^{--}$ for $r_{{\mathfrak{z}}}(z)$ sufficiently small with $$\begin{aligned}
\mathcal{M}_{{\mathfrak{z}}}^{++}(z)&:=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa}\sum_{m\gg -\infty} c_{\mathcal{M},{\mathfrak{z}}}^{++}(m)X_{{\mathfrak{z}}}^m(z),\\
\mathcal{M}_{{\mathfrak{z}}}^{+-}(z)&:=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa}\sum_{m\ll \infty} c_{\mathcal{M},{\mathfrak{z}}}^{+-}(m)\beta_{t_0}\!\left(1-r_{{\mathfrak{z}}}^2(z);1-\kappa,-m\!\right)X_{{\mathfrak{z}}}^m(z),\\
\mathcal{M}_{{\mathfrak{z}}}^{--}(z)&:=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-\kappa}\sum_{m\gg -\infty} c_{\mathcal{M},{\mathfrak{z}}}^{--}(m)\bm{\beta}_{\kappa-1,-m}\!\left(r_{{\mathfrak{z}}}(z)\right)X_{{\mathfrak{z}}}^m(z)
\end{aligned}$$ for some constants $c_{\mathcal{M},{\mathfrak{z}}}^{++}(m),c_{\mathcal{M},{\mathfrak{z}}}^{+-}(m)$, and $c_{\mathcal{M},{\mathfrak{z}}}^{--}(m)\in{\mathbb{C}}$. Moreover, the special functions in this expansion, i.e., those given in , satisfy with $C_m=-2$.
We use Lemma \[lem:xiellexp\] with $g_m=g_{m,j}$, where for $j\in\{1,2,3\}$ we define $$\begin{aligned}
g_{m,1}(r):=1,\qquad g_{m,2}(r):=\beta_{t_0}\left(1-r^2;1-\kappa,-m\right),\qquad g_{m,3}(r):=\bm{\beta}_{\kappa-1,-m}(r).
\end{aligned}$$ We show that $\xi_{\kappa}^3$ annihilates the corresponding functions $f_m=f_{m,j}$ and $f_{m,j}$ has depth $j$. Clearly $f_1$ has depth one as it is meromorphic. Computing $$\label{eqn:B2diff}
g_{m,2}'(r)=-2r\left[\frac{\partial}{\partial w} \beta_{t_0}(w;1-\kappa,-m)\right]_{w=1-r^2} = -2 \left(1-r^2\right)^{-\kappa} r^{-2m-1},$$ we see from Lemma \[lem:xiellexp\] (1) that $$\label{eqn:xiB2}
\xi_{\kappa}\!\left(f_{m,2}(z)\right)=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{
\kappa-2}X_{{\mathfrak{z}}}^{-m-1}(z)$$ is meromorphic, and hence $f_{m,2}$ has depth two.
Finally we compute $$\label{eqn:B3diff}
\bm{\beta}_{\kappa-1,-m}'(r)= -2 r^{-2m-1} \left(1-r^2\right)^{-\kappa}\beta_{t_0}\left(1-r^2;\kappa-1,m+1\right),$$ from which we conclude via Lemma \[lem:xiellexp\] (1) that $$\label{eqn:xiB3}
\xi_{\kappa}\!\left(f_{m,3}(z)\right)=
\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{\kappa-2}
\beta_{t_0}\left(1-r_{{\mathfrak{z}}}^2(z);\kappa-1,m+1\right)X_{{\mathfrak{z}}}^{-m-1}(z).$$ Thus $f_{m,3}$ has depth three from the above calculation.
The following lemma follows directly by using the binomial series.
\[lem:betabnd\] For $0<w<1$ and $0<t_0<1$ we have $$ \beta_{t_0}(w;a,b)=- \sum_{n\in{\mathbb{N}}_0\setminus\{-b\}} \frac{(-1)^n}{n+b}\binom{a-1}{n}(1-w)^{n+b}-(-1)^b \delta_{a\in{\mathbb{N}}}\delta_{0\leq -b<a}\log(1-w).$$ In particular, $\beta_{t_0}$ is independent of $t_0$ and as $w\to 1^-$ we have $$\beta_{t_0}(w;a,b)\sim \begin{cases}
-\frac{1}{b}(1-w)^{b} & \text{if }b\neq 0,\\
(a-1) \delta_{a\notin{\mathbb{N}}}(1-w) -\log(1-w)\delta_{a\in{\mathbb{N}}}&\text{if }b=0.\end{cases}$$
Lemma \[lem:betabnd\] then directly implies the following corollary.
\[cor:boldbetabnd\] As $r\to 0^+$, we have $$\bm{\beta}_{\kappa-1,-m}(r)\sim
\begin{cases}-\frac{1}{m+1}r^2&\text{if }m\neq -1,\\
\frac{2-\kappa}{2} r^4\delta_{\kappa<2} + 2 r^2\log(r) \delta_{\kappa\geq 2} &\text{if }m=-1.
\end{cases}$$
Using Lemma \[lem:betabnd\] and Corollary \[cor:boldbetabnd\], one may determine those terms in the elliptic expansion that contribute to the principal part and those that do not.
\[lem:sesquiellpp\] Suppose that $\mathcal{M}$ is sesquiharmonic of weight $\kappa$ and has singularities of finite order. Then the following hold:
1. The principal part of the meromorphic part $\mathcal{M}_{{\mathfrak{z}}}^{++}$ of the elliptic expansion of $\mathcal{M}$ precisely comes from those $m$ with $m<0$.
2. If $\kappa\in-{\mathbb{N}}_0$, then the principal part of the harmonic part $\mathcal{M}_{{\mathfrak{z}}}^{+-}$ of the elliptic expansion of $\mathcal{M}$ precisely comes from those $m$ with $m\geq 0$.
3. If $\kappa\notin-{\mathbb{N}}_0$, then the principal part of the harmonic part $\mathcal{M}_{{\mathfrak{z}}}^{+-}$ of the elliptic expansion of $\mathcal{M}$ precisely comes from those $m$ with $m>0$.
4. The principal part of the sesquiharmonic part $\mathcal{M}_{{\mathfrak{z}}}^{--}$ of $\mathcal{M}$ precisely comes from those $m$ with $m\leq -3$.
Biharmonic parts of elliptic expansions
---------------------------------------
For biharmonic forms we only require the coefficient of $X_{{\mathfrak{z}}}^{-1}(z)$ of the biharmonic part of their elliptic expansion. By Lemma \[lem:betabnd\] and Corollary \[cor:boldbetabnd\], $B_{2,2}(r;-1)=\beta_{t_0}(1-r^2;-1,1)$ and $B_{2,3}(r;-1)=\bm{\beta}_{1,1}(r)$ both vanish as $r\to 0^+$. We next show that $B_{2,4}(r;-1)$ may be chosen so that it also decays as $r\to 0^+$. The following lemma follows directly by and Lemma \[lem:betabnd\].
\[lem:doublyharmonicellexp\] The function $$B_{2,4}(r;-1):=\frac{\log\left(1-r^2\right)+r^2}{1-r^2}$$ satisfies with $C_m=-2$, and hence the corresponding function from Lemma \[lem:xiellexp\] has depth four. In particular, $\lim_{r\to 0^+} B_{2,4}(r;-1)=0$.
Elliptic expansions of the functions from Section \[sec:construction\]
----------------------------------------------------------------------
We next explicitly determine the shape of the expansions of the functions defined in Section \[sec:construction\].
\[lem:ellexps\]
1. There exist $c_{j_n,{\mathfrak{z}}}(m)$, $ c_{{\mathcal{J}}_n,{\mathfrak{z}}}^{+}(m)$, $c_{{\mathcal{J}}_n,{\mathfrak{z}}}^-(m)$, $c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{++}(m)$, $c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{+-}(m)$, and $c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{--}(m)\in{\mathbb{C}}$ such that we have for $r_{{\mathfrak{z}}}(z)$ sufficiently small $$\begin{aligned}
j_n(z)&=\sum_{m\geq 0} c_{j_n,{\mathfrak{z}}}(m) X_{{\mathfrak{z}}}^m(z),\\
{\mathcal{J}}_{n}(z)&=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-2}\left(\sum_{m\geq 0} c_{{\mathcal{J}}_{n},{\mathfrak{z}}}^+(m)X_{{\mathfrak{z}}}^m(z) + \sum_{m\leq -1}c_{{\mathcal{J}}_{n},{\mathfrak{z}}}^-(m) \beta_{t_0}\!\left(1-r_{{\mathfrak{z}}}^2(z);-1,-m\right)X_{{\mathfrak{z}}}^m(z)\right),\\
\notag {\mathbbm{j}}_n(z)&=\sum_{m\geq 0} c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{++}(m)X_{{\mathfrak{z}}}^m(z) + \sum_{m\leq-1} c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{+-}(m)\beta_{t_0}\!\left(1-r_{{\mathfrak{z}}}^2(z);1,-m\right)X_{{\mathfrak{z}}}^m(z)\\
&\hspace{2.8in} + \sum_{m\geq 0}c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{--}(m)\bm{\beta}_{-1,-m}\!\left(r_{{\mathfrak{z}}}(z)\right)X_{{\mathfrak{z}}}^m(z).\end{aligned}$$
2. For every $w\in{\mathbb{H}}$, there exist $c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{+++}(m)$, $c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{++-}(m)$, $c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{+--}(m)$, $c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{++}(m)$, $c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{+-}(m)$, and $c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{--}(m)\in{\mathbb{C}}$ such that for $r_{w}(z)$ sufficiently small we have $$\begin{aligned}
\mathcal{G}_{{\mathfrak{z}}}(z)&= \left(\frac{z-\overline{w}}{2\sqrt{{\operatorname{Im}}(w)}}\right)^{-2}\!\!\left(c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{+-}(0)X_{w}^{-1}(z)+ \sum_{m \geq 0} \overline{c^{+-}_{{\mathbbm{g}}_{{\mathfrak{z}}},w}(-m-1)} X_w^{m}(z) + \frac{6 r_{w}^2(z)}{1- r_w^2(z)} X_w^{-1} (z) \right),\\
{\mathbbm{g}}_{{\mathfrak{z}}}(z) &= \sum_{m\geq 0} c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{++}(m) X_{w}^m(z) +\sum_{m\leq 0} c_{{\mathbbm{g}}_{{\mathfrak{z}}},w}^{+-}(m)\beta_{t_0}\!\left(1-r_{w}^2(z);1,-m\right) X_{w}^m(z)+ 6 \log\!\left(1-r_{w}^2(z)\right),\\
{\mathbb{G}}_{{\mathfrak{z}}}(z)&=\left(\frac{z-\overline{w}}{2\sqrt{{\operatorname{Im}}(w)}}\right)^{-2}\Bigg(\sum_{m\geq 0} c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{+++}(m)X_{w}^{m}(z) + \sum_{m\leq -1} c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{++-}(m)\beta_{t_0}\!\left(1-r_{w}^2(z);-1,-m\right) X_{w}^m(z)\\
&\hspace{1.1in} +\sum_{m\geq -1} c_{{\mathbb{G}}_{{\mathfrak{z}}},w}^{+--}(m)\bm{\beta}_{1,-m}\!\left(r_{w}(z)\right) X_{w}^m(z)-6\frac{\log\!\left(1-r_{w}^2(z)\right)+r_w^2(z)}{1-r_{w}^2(z)}X_{w}^{-1}(z)\Bigg).\end{aligned}$$ Moreover, $c_{ {\mathbbm{g}}_{{\mathfrak{z}}},w}^{+-}(0)=0$ unless $w$ is equivalent to ${\mathfrak{z}}$, in which case we have $c_{{\mathbbm{g}}_{{\mathfrak{z}}},{\mathfrak{z}}}^{+-}(0)=-\frac{\omega_{{\mathfrak{z}}}}{2}$.
\(1) Lemma \[lem:sesquiellpp\] gives the claim, up to the vanishing of the terms $m=-1$ and $m=-2$ in the sesquiharmonic part. Applying and then to these terms, this vanishing follows from the fact that $j_n=\Delta_0({\mathbbm{j}}_n)$ does not have a singularity. The other claims then follow by and .
\(2) We begin by proving the expansion for ${\mathbb{G}}_{{\mathfrak{z}}}$ and we first determine its biharmonic part. Since ${\mathbbm{g}}_{{\mathfrak{z}}}(z)-6\log(y)$ is harmonic by Lemma \[lem:Greensprop\] (3), $\xi_2$ maps the biharmonic part of ${\mathbb{G}}_{{\mathfrak{z}}}$ to $6$ times the sesquiharmonic part of the elliptic expansion of $\log(y)$, which by may be computed via $$\label{eqn:logy}
\log(y)=\log({\operatorname{Im}}(w))+\log\left(1-r_{w}^2(z)\right)-{\operatorname{Log}}\!\left(1-X_{w}(z)\right)-{\operatorname{Log}}\!\left(1-\overline{X_{w}(z)}\right).$$ Since the third and fourth terms are harmonic, they do not contribute to the sesquiharmonic part, and hence the sesquiharmonic part of ${\mathbbm{g}}_{{\mathfrak{z}}}$ is precisely $6\log(1-r_{w}^2(z))=-6B_{0,3}(r_{w}(z);0)$. Hence, by Lemma \[lem:doublyharmonicellexp\] and Lemma \[lem:xiellexp\] (1), we see that $$\label{eqn:doublyHHhat}
{\mathbb{G}}_{{\mathfrak{z}}}(z)+6\left(\frac{z-\overline{w}}{2\sqrt{{\operatorname{Im}}(w)}}\right)^{-2}\frac{\log\!\left(1-r_{w}^2(z)\right)+r_w^2(z)}{1-r_{w}^2(z)}X_{w}^{-1}(z)$$ is sesquiharmonic. Choosing the function from Lemma \[lem:doublyharmonicellexp\] as a basis element for the biharmonic part of the elliptic expansion, is precisely the sum of the terms in the elliptic expansion of ${\mathbb{G}}_{{\mathfrak{z}}}$ of depth at most three. The shape of this expansion is given in Lemma \[lem:sesquiellexp\], and we are left to determine the ranges of the summation. Recall that $\xi_2$ sends the $X_{{\mathfrak{z}}}^{m}(z)$ term to an $X_{{\mathfrak{z}}}^{-m-1}(z)$ term by Lemma \[lem:xiellexp\] and the principal part of $\xi_{2}(\mathbb{G}_{{\mathfrak{z}}})={\mathbbm{g}}_{{\mathfrak{z}}}$ is a constant multiple of $\log(r_{{\mathfrak{z}}}(z))=-\frac{1}{2}\beta_{t_0}(1-r_{{\mathfrak{z}}}^2(z);1;0)$, which follows directly from Lemma \[lem:betabnd\]. Thus the sesquiharmonic principal part comes from an $m=-1$ term in the elliptic expansion of ${\mathbb{G}}_{{\mathfrak{z}}}$ and it remains to explicitly compute the meromorphic principal part of ${\mathbb{G}}_{{\mathfrak{z}}}$. Since for ${\operatorname{Re}}(s)>1$ the principal part of ${\mathbb{G}}_{{\mathfrak{z}},s}$ comes from $2\omega_{{\mathfrak{z}}}\mathfrak{g}_{s}(z,{\mathfrak{z}})$, taking the limit we conclude that the principal part of ${\mathbb{G}}_{{\mathfrak{z}}}$ comes from $\omega_{{\mathfrak{z}}}\mathfrak{g}_{{\mathfrak{z}},1}$ by Lemma \[lem:diffseeds\] (2). Since $g_{s}(z,{\mathfrak{z}})$ is an eigenfunction under $\Delta_0$ with eigenvalue $s(1-s)$, we may use Lemma \[lem:Laurent\] (2) with $\kappa=0$ to conclude that $\mathfrak{g}_{{\mathfrak{z}},1}$ is annihilated by $\Delta_2^2$. Since $\mathfrak{g}_{{\mathfrak{z}},1}(z)=(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}})^{-2} B(r_{{\mathfrak{z}}}(z)) X_{{\mathfrak{z}}}^{-1}(z)$ is biharmonic, we may hence write the elliptic expansion of $\mathfrak{g}_{{\mathfrak{z}},1}$ as $$\mathfrak{g}_{{\mathfrak{z}},1}(z)=\left(\frac{z-\overline{{\mathfrak{z}}}}{2\sqrt{{\mathbbm{y}}}}\right)^{-2}\sum_{j=1}^4 a_jB_{2,j}\!\left(r_{{\mathfrak{z}}}(z);-1\right) X_{{\mathfrak{z}}}^{-1}(z)$$ for some $a_j\in{\mathbb{C}}$, and we see that $c_{{\mathbb{G}}_{{\mathfrak{z}}},{\mathfrak{z}}}^{+++}(-1)=\omega_{{\mathfrak{z}}}a_1$.
Moreover, Lemmas \[lem:sesquiellexp\] and \[lem:doublyharmonicellexp\] together with and imply that $$\label{eqn:constQs-1}
B(r)= a_1+a_2\beta_{t_0}\!\left(1-r^2; -1,1\right) +a_3\bm{\beta}_{1,1}(r)+a_4\frac{\log\!\left(1-r^2\right)+r^2}{1-r^2}.$$ We then note that by Lemma \[lem:betabnd\] and Corollary \[cor:boldbetabnd\], the limit $r\to 0^+$ on the right-hand side of equals $a_1$. Using to evaluate the limit on the left-hand side of , we obtain $a_1=0$. We hence conclude the claim for ${\mathbb{G}}_{{\mathfrak{z}}}$ by Lemma \[lem:sesquiellpp\].
In order to obtain the elliptic expansion for ${\mathbbm{g}}_{{\mathfrak{z}}}$, we apply the operator $\xi_2$ to the expansion for ${\mathbb{G}}_{{\mathfrak{z}}}$. Using Lemma \[lem:xiellexp\] (1) and (together with and ) and Lemma \[lem:doublyharmonicellexp\], we conclude the claimed expansion. Similarly, we apply $\xi_0$ to the expansion of ${\mathbbm{g}}_{{\mathfrak{z}}}$ and use Lemma \[lem:xiellexp\] and to obtain the claimed expansion for $\mathcal{G}_{{\mathfrak{z}}}$. $\qedhere$
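As a small numerical aside (not part of the proof), the elliptic expansion of $\log(y)$ used in the proof above can be spot-checked for sample points, assuming the usual definitions $X_{w}(z)=\frac{z-w}{z-\overline{w}}$ and $r_{w}(z)=|X_{w}(z)|$.

```python
# Numerical spot check of the elliptic expansion of log(y) used above,
# assuming X_w(z) = (z - w)/(z - conj(w)) and r_w(z) = |X_w(z)|.
import cmath, math

z, w = 0.3 + 1.7j, -0.4 + 0.9j  # arbitrary points in the upper half-plane
X = (z - w) / (z - w.conjugate())
r2 = abs(X)**2

lhs = math.log(z.imag)
rhs = (math.log(w.imag) + math.log(1 - r2)
       - cmath.log(1 - X) - cmath.log(1 - X.conjugate()))
print(lhs, rhs)  # rhs has negligible imaginary part and matches lhs
```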
Evaluating the inner product and the proof of Theorem \[thm:jninnergen\] {#sec:jninner}
========================================================================
To define the regularization of the inner product used in this paper, we set $$\mathcal F_{T,{\mathfrak{z}}_1,\ldots,{\mathfrak{z}}_\ell,\varepsilon_1,\ldots,\varepsilon_\ell} := \mathcal F_T\setminus \bigcup_{j=1}^{\ell}\left(\mathcal B_{\varepsilon_j}({\mathfrak{z}}_j)\cap \mathcal F_T\right),$$ where $\mathcal F_T$ is the usual fundamental domain cut off at height $T$ and $$\mathcal{B}_{\varepsilon_j}\!\left({\mathfrak{z}}_j\right):=\left\{ z\in{\mathbb{H}}:r_{{\mathfrak{z}}_j}(z)<\varepsilon_j\right\}.$$ For two functions $g$ and $h$ satisfying weight $k$ modularity and whose singularities in the fundamental domain lie in $\{{\mathfrak{z}}_1,\dots, {\mathfrak{z}}_{\ell},i\infty\}$, we define the generalized inner product (in case of existence) $$\label{eqn:innerdef}
\left<g,h\right>:=\lim_{T\to\infty}\lim_{\varepsilon_\ell\to 0^+}\cdots\lim_{\varepsilon_{1}\to 0^+}\left<g,h\right>_{T,\varepsilon_1,\dots,\varepsilon_{\ell}},$$ where $$\left<g,h\right>_{T,\varepsilon_1,\dots,\varepsilon_{\ell}}:=\int_{\mathcal{F}_{T,\mathfrak{z}_1,\dots,\mathfrak{z}_{\ell},\varepsilon_1,\dots,\varepsilon_{\ell}}}
g(z)\overline{h(z)} y^k \frac{dxdy}{y^2}.$$
Before explicitly computing the inner product with $j_n$, we give the following lemma for evaluating the inner product, which follows by repeated usage of Stokes’ Theorem.
\[lem:innerggen\] Suppose that $\mathbb{F}_1,\mathbb{F}_2:{\mathbb{H}}\to{\mathbb{C}}$ satisfy weight two modularity and that $\xi_2\circ \Delta_{2}(\mathbb{F}_2)=C$ is a constant. Then, denoting $\mathbbm{f}_j:=\xi_{2}(\mathbb{F}_j)$, $
\mathcal F_j:=\xi_0(\mathbbm{f}_j)
$, and $f_j:=-\xi_2(\mathcal F_j)$, we have $$\begin{gathered}
\label{eqn:innerggen}
\left<f_1,\mathbbm{f}_2\right>_{T,\varepsilon_1,\dots,\varepsilon_{\ell}} = \overline{\int_{\partial \mathcal{F}_{T,\mathfrak{z}_1,\dots,\mathfrak{z}_{\ell},\varepsilon_1,\dots,\varepsilon_{\ell}}} \mathcal{F}_1(z)\mathbbm{f}_{2}(z) dz}\\
-\int_{\partial \mathcal{F}_{T,\mathfrak{z}_1,\dots,\mathfrak{z}_{\ell},\varepsilon_1,\dots,\varepsilon_{\ell}}}\mathbbm{f}_1(z)\mathcal{F}_{2}(z)dz-\overline{C}\overline{\int_{\partial \mathcal{F}_{T,\mathfrak{z}_1,\dots,\mathfrak{z}_{\ell},\varepsilon_1,\dots,\varepsilon_{\ell}}} \mathbb{F}_1(z)dz}.\end{gathered}$$ Here the integral along the boundary $\partial \mathcal{F}_{T}$ is oriented counter-clockwise and the integral along the boundary $\partial \mathcal{B}_{\varepsilon_j}(\mathfrak{z}_j)$ is oriented clockwise. In particular $\left<f_1,\mathbbm{f}_2\right>$ equals the limit as $T\to \infty$, $\varepsilon_j\to 0^+$, $1\leq j \leq \ell$, of the right-hand side of , assuming that the regularized integrals exist.
We are now ready to prove Theorem \[thm:jninnergen\] with the explicit constant $$\label{definecn}
c_n:= 6\overline{ c_{{\mathbb{J}}_n}^{+++}(0)}+ 6 c_{{\mathbbm{j}}_n}^{+-}(0).$$
We begin by recalling the well-known fact that every meromorphic modular form $f$ may be written as a product of the form (for example, see [@Pe7 (61)]) $$f(z)=c \Delta(z)^{{{\text {\rm ord}}}_{i\infty}(f)}\prod_{{\mathfrak{z}}\in{{\text {\rm SL}}}_2({\mathbb{Z}})\backslash{\mathbb{H}}} \left(\Delta(z)\Big(j(z)-j({\mathfrak{z}})\Big)\right)^{\frac{{{\text {\rm ord}}}_{{\mathfrak{z}}}(f)}{\omega_{{\mathfrak{z}}}}}$$ with $c\in {\mathbb{C}}$. In particular, if ${{\text {\rm ord}}}_{i\infty}(f)=0$ and $f$ is normalized so that $c=1$, then we see that $$\label{logfG}
\log\left(y^{\frac{k}{2}}|f(z)|\right)= \sum_{{\mathfrak{z}}\in {{\text {\rm SL}}}_2({\mathbb{Z}})\backslash {\mathbb{H}}} \frac{{{\text {\rm ord}}}_{\mathfrak{z}}(f)}{\omega_{{\mathfrak{z}}}} {\mathbbm{g}}_{{\mathfrak{z}}}(z).$$ By linearity of the inner product, it suffices to prove , because Theorem \[thm:jninnergen\] follows from and together with the valence formula.
We take $\mathbb{F}_1:={\mathbb{J}}_n$ and $\mathbb{F}_2:={\mathbb{G}}_{{\mathfrak{z}}}$ in Lemma \[lem:innerggen\] and evaluate the contribution from the three terms in . We have $\mathbbm{f}_1={\mathbbm{j}}_n$, $\mathcal{F}_1=-{\mathcal{J}}_n$, $f_1=j_n$, $\mathbbm{f}_2={\mathbbm{g}}_{{\mathfrak{z}}}$, $\mathcal{F}_2=\mathcal{G}_{{\mathfrak{z}}}$, and $C=6$.
We first compute the contribution to the three terms in along the boundary near $i\infty$: $$\label{eqn:boundaryinfty}
\left\{x+iT: -\frac{1}{2}\leq x\leq \frac{1}{2}\right\}.$$ Since the integral is oriented counter-clockwise from $\frac{1}{2}$ to $-\frac{1}{2}$, these yield the negative of the constant terms of the corresponding Fourier expansions. By Lemma \[lem:Fourierexps\], the first term equals $$\begin{gathered}
\label{eqn:innerinfty1}
\sum_{m\geq 1} \overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{++}(m) c_{{\mathcal{J}}_n}^{-}(-m)}W_2(-2\pi mT) -6\delta_{n=0}\frac{\log(T)}{T}\\
+ \sum_{m\leq -1}\overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{+-}(m)}\left(\overline{c_{{\mathcal{J}}_n}^+(-m)}+4\pi n \delta_{m=-n}
\delta_{n\neq 0}
W_2(2\pi nT)\right) W_0(2\pi mT)+ 6\overline{c_{{\mathcal{J}}_n}^+(0)}\log(T).\end{gathered}$$ By Lemma \[lem:Wgrowth\], every term in other than the term with $\log(T)$ vanishes as $T\to\infty$.
By Lemma \[lem:Fourierexps\], the contribution from from the second term in equals $$\label{eqn:innerinfty2}
-4\pi \sum_{m\geq 1}m \overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}}}^{+-}(-m)}\left(c_{{\mathbbm{j}}_n}^{+-}(-m)W_0(-2\pi mT)+2\delta_{m=-n}\bm{{\mathcal{W}}}_{0}(-2\pi nT)\right)+6\delta_{n=0}\frac{1+\log(T)}{T}+6 c_{{\mathbbm{j}}_n}^{+-}(0).$$ By Lemma \[lem:Wgrowth\], every term in other than $6c_{{\mathbbm{j}}_n}^{+-}(0)$ vanishes as $T\to\infty$.
The third integral yields $6$ times the complex conjugate of the constant term of ${\mathbb{J}}_n$, which has the shape in . Since $\kappa=2>1$, every term in other than $$\label{eqn:innerinfty3}
6\overline{c_{{\mathbb{J}}_n}^{+++}(0)} + 6\overline{c_{{\mathbb{J}}_{n}}^{+--}(0)}\log(T)$$ vanishes as $T\to\infty$. However, since $\xi_{2}\!\left(\log(y)\right) = y$ and $\xi_0(y)=1$, we conclude from that $$\label{duality}
c_{{\mathbb{J}}_{n}}^{+--}(0)=\overline{c_{{\mathbbm{j}}_n}^{+-}(0)} = -c_{{\mathcal{J}}_{n}}^+(0).$$ Combining , , , and , we conclude that the limit $T\to \infty$ of the contribution to from the integral along is overall $$\label{eqn:innerinfty}
6
\overline{c_{{\mathbb{J}}_n}^{+++}(0)}+
6
c_{{\mathbbm{j}}_n}^{+-}(0).$$
We next compute the contribution from the integral along $\partial\!\left( \mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap\mathcal{F}\right)$. Note that for a function $F$ satisfying weight two modularity we have $$\int_{\partial\left(\mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap \mathcal{F}\right)} F(z) dz = \frac{1}{\omega_{{\mathfrak{z}}}}\int_{\partial\mathcal{B}_{\varepsilon}({\mathfrak{z}})}F(z)dz.$$ Moreover, a straightforward calculation (substituting $w=X_{{\mathfrak{z}}}(z)$, under which $(z-\overline{{\mathfrak{z}}})^{-2}X_{{\mathfrak{z}}}^{\ell}(z)\,dz=({\mathfrak{z}}-\overline{{\mathfrak{z}}})^{-1}w^{\ell}\,dw$) yields that, for $\ell\in {\mathbb{Z}}$, $$\label{int2}
\frac{1}{2\pi i}\int_{\partial \mathcal{B}_{\varepsilon}({\mathfrak{z}})} (z-\overline{{\mathfrak{z}}})^{-2}X_{{\mathfrak{z}}}^{\ell}(z)dz=
\begin{cases}
({\mathfrak{z}}-\overline{{\mathfrak{z}}})^{-1}&\text{if }\ell=-1,\\
0 &\text{otherwise},
\end{cases}$$ where the integral is taken counter-clockwise. Thus we need to determine the coefficient of $X_{{\mathfrak{z}}}^{-1}(z)$ of the corresponding elliptic expansions. Noting that the orientation of is the opposite of the orientation in , Lemma \[lem:ellexps\] implies that the contribution along $\partial\!\left(\mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap\mathcal{F}\right)$ from the first term in is $$\begin{aligned}
&\frac{4\pi}{\omega_{{\mathfrak{z}}}}\!\Bigg(\sum_{m\geq 0} \overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}},{\mathfrak{z}}}^{++}(m)c_{{\mathcal{J}}_{n},{\mathfrak{z}}}^{-}(-m-1)}\beta_{t_0}\!\Big(1-\varepsilon^2;-1,m+1\Big)\\
&\hspace{0.8cm}+ \sum_{m\leq -1}\!\! \overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}},{\mathfrak{z}}}^{+-}(m)c_{{\mathcal{J}}_n,{\mathfrak{z}}}^{+}(-m-1)}\beta_{t_0}\!\left(1-\varepsilon^2;1,-m\right)\\
&\hspace{0.8cm}+\Big(\overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}},{\mathfrak{z}}}(0)}\beta_{t_0}\left(1-\varepsilon^2;1,0\right)+6\log\!\left(1-\varepsilon^2\right)\Big)\overline{c_{{\mathcal{J}}_n,{\mathfrak{z}}}^{-}(-1)}\beta_{t_0}\!\left(1-\varepsilon^2;-1,1\right)\Bigg).\end{aligned}$$ By Lemma \[lem:betabnd\], each of these terms vanishes as $\varepsilon\to 0^+$.
By Lemma \[lem:ellexps\], the integral along $\partial(\mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap\mathcal{F})$ from the second term in evaluates as $$\frac{4\pi}{\omega_{{\mathfrak{z}}}}\Bigg(\left(-\frac{\omega_{{\mathfrak{z}}}}{2}+\frac{6\varepsilon^2}{1-\varepsilon^2}\right)\left(c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{++}(0)+c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{--}(0)\bm{\beta}_{-1,0}(\varepsilon)\right)+\sum_{m\leq -1}\overline{c_{{\mathbbm{g}}_{{\mathfrak{z}}},{\mathfrak{z}}}^{+-}(m)}c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{+-}(m)\beta_{t_0}\!\left(1-\varepsilon^2;1,-m\right)\Bigg).$$Lemma \[lem:betabnd\] implies that this converges to $-2\pi c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{++}(0)$ as $\varepsilon\to 0^+$. Taking the limit $r=\varepsilon\to 0^+$ in the expansion of ${\mathbbm{j}}_n$ in Lemma \[lem:ellexps\] (1) and using the bounds in Lemma \[lem:betabnd\] and Corollary \[cor:boldbetabnd\] then gives $c_{{\mathbbm{j}}_n,{\mathfrak{z}}}^{++}(0)={\mathbbm{j}}_n({\mathfrak{z}})$.
Since ${\mathbb{J}}_n$ does not have a singularity at ${\mathfrak{z}}$, there is no contribution from the third integral in . Hence we conclude that the integral along $\partial(\mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap\mathcal{F})$ altogether contributes $-2\pi {\mathbbm{j}}_n({\mathfrak{z}})$. Thus the overall inner product is, using , $$\label{Gzj}
\langle j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\rangle = 6\overline{c_{{\mathbb{J}}_n}^{+++}(0)}+6 c_{{\mathbbm{j}}_n}^{+-}(0)-2\pi{\mathbbm{j}}_n({\mathfrak{z}}) = -2\pi{\mathbbm{j}}_n({\mathfrak{z}})+c_n.$$
A generating function and the proof of Theorem \[thm:Fdivgen\] and Corollary \[cor:Fdiv\] {#sec:Fdiv}
=========================================================================================
The functions needed for Theorem \[thm:Fdivgen\] are $$\label{eqn:HHdef}
\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}:=-\frac{1}{2\pi}{\mathbb{G}}_{{\mathfrak{z}}} -\frac{1}{4\pi}\mathbb E,\qquad \widehat{{\mathbb{I}}}_{{\mathfrak{z}}}:={\mathbb{G}}_{{\mathfrak{z}}}.$$
We compute the inner product $\left<j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\right>$ in another way and then compare with the evaluation in Theorem \[thm:jninnergen\]. Namely, we apply Stokes’ Theorem with to instead write $$\left<j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\right>_{T,\varepsilon} = \overline{\left<\xi_{2}\!\left({\mathbb{G}}_{{\mathfrak{z}}}\right) ,j_n\right>_{T,\varepsilon}} = -\int_{\partial \mathcal{F}_{T,{\mathfrak{z}},\varepsilon}}{\mathbb{G}}_{{\mathfrak{z}}}(z)j_{n}(z)dz.$$ By Lemma \[lem:Fourierexps\], the integral along the boundary near $i\infty$ contributes $$c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(n)+c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+--}(n)
\delta_{n\neq 0}
\bm{{\mathcal{W}}}_{2}(2\pi nT)-\delta_{n=0}\frac{6}{T}(1+\log(T)) + \sum_{m\geq 1}c_{j_n}(m)c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{++-}(-m)W_2(-2\pi mT).$$ By Lemma \[lem:Wgrowth\], every term other than $c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(n)$ vanishes as $T\to\infty$.
We then use the elliptic expansions of $j_n$ and ${\mathbb{G}}_{{\mathfrak{z}}}$ in Lemma \[lem:ellexps\], Lemma \[lem:betabnd\], and Corollary \[cor:boldbetabnd\] to show that the integral along $\partial\!\left(\mathcal{B}_{\varepsilon}({\mathfrak{z}})\cap\mathcal{F}\right)$ vanishes. Thus $$\label{eqn:jninnerotherway}
\left<j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\right> = c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(n).$$ Taking the generating function of both sides of and recalling , we see that $$\sum_{n\geq 0} \left<j_n,{\mathbbm{g}}_{{\mathfrak{z}}}\right>q^n = {\mathbb{G}}_{{\mathfrak{z}}}^{+++}(z)=\widehat{{\mathbb{I}}}_{{\mathfrak{z}}}^{+++}(z).$$
It remains to show that ${\mathbb{H}}_{{\mathfrak{z}}}$ is the holomorphic part of $\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}$, which is a weight two sesquiharmonic Maass form because the biharmonic part of $2{\mathbb{G}}_{{\mathfrak{z}}}+\mathbb{E}$ vanishes by Lemma \[lem:Fourierexps\]. For this, we plug into and use to obtain $$\label{eqn:boldJncoeff}
{\mathbbm{j}}_n({\mathfrak{z}})=-\frac{1}{2\pi}c_{{\mathbb{G}}_{{\mathfrak{z}}}}^{+++}(n)+\frac{3}{\pi}\overline{c_{{\mathbb{J}}_n}^{+++}(0)} -\frac{3}{\pi} \overline{c_{{\mathcal{J}}_n}^{+}(0)}.$$ Again by , to show that ${\mathbb{H}}_{{\mathfrak{z}}}$ is the holomorphic part of $\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}$ it remains to prove that $12\overline{c_{{\mathcal{J}}_n}^{+}(0)}-12\overline{c_{{\mathbb{J}}_n}^{+++}(0)}$ is the $n$-th Fourier coefficient of the holomorphic part of $\mathbb{E}$. In order to see this, we next compute $\left<j_n,\mathcal{E}\right>_{T}$. Using Stokes’ Theorem and noting gives $$\label{jE}
\langle j_n, \mathcal{E}\rangle_{T} = \overline{\langle\xi_2(\mathbb{E}),j_n\rangle_{T}} = -\int_{\partial \mathcal F_T}\mathbb E(z)j_n(z) dz.$$ Plugging in Lemma \[lem:Fourierexps\], we obtain that equals $$\begin{gathered}
\label{Ejfirst}
c_{\mathbb E}^{+++}(n)+c_{\mathbb{E}}^{+--}(n)
\delta_{n\neq 0}
\bm{{\mathcal{W}}}_{2}(2\pi nT) + \delta_{n=0}\left( c_{\mathbb{E}}^{+--}(0)\log(T) + \frac{12}{T}\left(1+\log(T)\right)\right)\\
+\sum_{m\geq 1} c_{j_n}(m)c_{\mathbb E}^{++-}(-m) W_2(-2\pi mT).\end{gathered}$$ We see by Lemma \[lem:Wgrowth\] that becomes $$\label{eqn:innerEisen1}
\left<j_n,\mathcal{E}\right>_T =c_{\mathbb E}^{+++}(n)+\delta_{n=0} c_{\mathbb{E}}^{+--}(0)\log(T)+O\!\left(\tfrac{\log(T)}{T}\right).$$
We next use Lemma \[lem:innerggen\] with $\mathbb{F}_1={\mathbb{J}}_n$ and $\mathbb{F}_2=\mathbb{E}$ to compute the inner product another way. We have $\mathbbm{f}_1={\mathbbm{j}}_n$, $\mathcal{F}_1=-\mathcal{J}_n$, $f_1=j_n$, $\mathbbm{f}_2=\mathcal{E}$ by , $\mathcal{F}_{2}=\xi_0(\mathcal{E})$, and $C=-12$ in this case. We plug in Lemma \[lem:Fourierexps\] to the first term from Lemma \[lem:innerggen\] to obtain $$\begin{gathered}
\sum_{m\geq 1}\overline{c_{{\mathcal{J}}_n}^+(m)} \ \overline{c_{\mathcal{E}}^{+-}(-m)} W_0(-2\pi mT) +4\pi n W_{2}(2\pi n T) \overline{ c_{\mathcal E}^{+-}(-n)} W_{0}(-2\pi nT)\\
+\left( \overline{c_{{\mathcal{J}}_n}^+(0)}-\delta_{n=0}\frac{1}{T}\right) \left(4\pi T-12\log(T)\right) + \sum_{m\leq -1}\overline{c_{{\mathcal{J}}_n}^-(m)} W_2(2\pi mT) \overline{c_{\mathcal E}^{++}(-m)}.\end{gathered}$$ By Lemma \[lem:Wgrowth\], this becomes $$\label{vanish1}
-4\pi \delta_{n=0}+\overline{c_{{\mathcal{J}}_n}^+(0)}\!\left(4\pi T-12\log(T)\right) +O\!\left(\tfrac{\log(T)}{T}\right).$$ By Lemma \[lem:Fourierexps\], the second term in Lemma \[lem:innerggen\] equals $$\begin{gathered}
4\pi \sum_{m\leq -1} c_{{\mathbbm{j}}_n}^{+-}(m) c_{E_2}(-m) W_0(2\pi mT) + 8\pi\delta_{n\neq 0} \bm{{\mathcal{W}}}_0(-2\pi nT) c_{E_2}(n)\\
+ \left(c_{{\mathbbm{j}}_n}^{+-}(0) T + \delta_{n=0}(1+\log(T))\right)\left(4\pi c_{E_2}(0)-\tfrac{12}{T}\right).\end{gathered}$$ Using Lemma \[lem:Wgrowth\], , and the fact that $c_{E_2}(0)=1$, this becomes $$\label{eqn:innerEsecond}
-4\pi \overline{c_{{\mathcal{J}}_n}^+(0)}T+12\overline{c_{{\mathcal{J}}_n}^+(0)}+4\pi \delta_{n=0}\log(T)+4\pi\delta_{n=0} +O\!\left(\tfrac{\log(T)}{T}\right).$$ By the contribution from the last term in Lemma \[lem:innerggen\] is, using $$\label{eqn:innerEthird}
-12 \overline{c_{{\mathbb{J}}_n}^{+++}(0)} + 12 \overline{c_{{\mathcal{J}}_n}^+(0)} \log(T)+o(1).$$ Combining the respective evaluations , , and , we get $$\begin{aligned}
\label{combining}
\left<j_n,\mathcal{E}\right>_T&=4\pi \delta_{n=0} \log(T)+ 12 \overline{c_{{\mathcal{J}}_n}^+(0)} - 12\overline{c_{{\mathbb{J}}_n}^{+++}(0)} +o(1).\end{aligned}$$ Comparing the constant terms in the asymptotic expansions of and gives that $$c_{\mathbb E}^{+++}(n) =12 \overline{c_{{\mathcal{J}}_n}^{+}(0)} - 12 \overline{c_{{\mathbb{J}}_n}^{+++}(0)}.$$ Plugging this into , we conclude that ${\mathbbm{j}}_n({\mathfrak{z}})$ is the $n$-th coefficient of $\widehat{{\mathbb{H}}}_{{\mathfrak{z}}}$, as claimed.
We conclude with the proof of Corollary \[cor:Fdiv\].
The claim follows from Theorem \[thm:Fdivgen\], , , , , and the valence formula.
[99]{}
T. Asai, M. Kaneko, and H. Ninomiya, *Zeros of certain modular functions and an application*, Comm. Math. Univ. Sancti Pauli **46** (1997), 93–101.
R. Borcherds, *Automorphic forms with singularities on Grassmannians*, Invent. Math. **132** (1998), 491–562.
K. Bringmann, N. Diamantis, and S. Ehlen, *Regularized inner products and errors of modularity*, Int. Math. Res. Not. **2017** (2017), 7420–7458.
K. Bringmann, S. Ehlen, and M. Schwagenscheidt, *On the modular completion of certain generating functions*, submitted for publication, arXiv:1804.07589.
J. Bruinier and J. Funke, *Traces of CM values of modular functions*, J. Reine Angew. Math. **594** (2006), 1–33.
J. Bruinier and J. Funke, *On two geometric theta lifts*, Duke Math. J. **125** (2004), no. 1, 45–90.
J. Bruinier, W. Kohnen, and K. Ono, *The arithmetic of the values of modular functions and the divisors of modular forms*, Compositio Math. **130** (2004), 552–566.
Digital Library of Mathematical Functions, National Institute of Standards and Technology, website: http://dlmf.nist.gov/.
S. Ehlen and S. Sankaran, *On two arithmetic theta lifts*, Compositio Math. **154** (2018), 2090–2149.
J. Fay, *Fourier coefficients of the resolvent for a Fuchsian group*, J. Reine Angew. Math. **293-294** (1977), 143–203.
B. Gross and D. Zagier, *On singular moduli*, J. Reine Angew. Math. **355** (1985), 191–220.
B. Gross and D. Zagier, *Heegner points and derivatives of $L$-series*, Invent. Math. **84** (1986), 225–320.
J. Harvey and G. Moore, *Algebras, BPS states, and strings*, Nuclear Phys. B **463** (1996), 315–368.
D. Hejhal, *The Selberg trace formula for ${{\text {\rm SL}}}_2({\mathbb{R}})$, Volume 2*, Lecture Notes in Mathematics **1001**, Springer–Verlag, 1983.
S. Herrero, Ö. Imamo$\overline{\text{g}}$lu, A. von Pippich, and Á. Tóth, *A Jensen–Rohrlich type formula for the hyperbolic 3-space*, Trans. Amer. Math. Soc., to appear, online 2018 (DOI: 10.1090/tran/7484).
J. Lagarias and R. Rhoades, *Polyharmonic Maass forms for $\operatorname{PSL}(2,{\mathbb{Z}})$*, Ramanujan J. **41** (2016), 191–232.
D. Niebur, *A class of nonanalytic automorphic functions*, Nagoya Math. J. **52** (1973), 133–145.
H. Petersson, *Automorphe Formen als metrische Invarianten I*, Math. Nachr. **1** (1948), 158–212.
H. Petersson, *Über automorphe Orthogonalfunktionen und die Konstruktion der automorphen Formen von positiver reeller Dimension*, Math. Ann. **127** (1954), 33–81.
D. Rohrlich, *A modular version of Jensen’s formula*, Math. Proc. Cambridge Phil. Soc. **95** (1984), 15–20.
D. Zagier, *Traces of singular moduli*, in “Motives, polylogarithms and Hodge Theory” (eds. F. Bogomolov and L. Katzarkov), Int. Press Lect. Ser. **3** (2002), 209–244.
[^1]: The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation and the research leading to these results receives funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant agreement n. 335220 - AQSER. The research of the second author was supported by grants from the Research Grants Council of the Hong Kong SAR, China (project numbers HKU 17302515, 17316416, 17301317, and 17303618).
---
abstract: 'This paper is devoted to the study of the stability and stabilizability of the heat equation in a non-cylindrical domain. Interestingly, there is a class of initial values for which the system is no longer exponentially stable: the system is only polynomially stable or only analogously exponentially stable. Rapid exponential stabilization of the system is then obtained by the backstepping method.'
author:
- 'Lingfei Li[^1], Yujing Tang[^2] and Hang Gao[^3]'
title: '**The stability and rapid exponential stabilization of heat equation in non-cylindrical domain [^4]**'
---
[**Key Words**]{}: non-cylindrical domain, heat equation, stability, exponential stabilization
Introduction and Main results
=============================
It is well known that the stability and stabilization problems for both linear and nonlinear partial differential equations have been studied extensively (see \[1-4\] and the references therein). In general, the study of stability for a given system proceeds along the following lines. We first ask whether the solution is stable or not. Then, if it is unstable, we try to find a control to stabilize the system. And if the solution decays at a slow rate, one wants to force it to decay with an arbitrarily prescribed decay rate. These problems are called stability, stabilization and rapid stabilization, respectively.
There are many methods to study stabilization, including pole placement, the control Lyapunov function method, the backstepping method and so on. The backstepping method, which was initiated in \[5\] and \[6\], has become a standard method for finite-dimensional control systems. The application of the continuous backstepping method to parabolic equations was first given in \[7\] and \[8\]. In the last decade, the backstepping method has been widely used to study the stability of partial differential equations, such as the wave equation, the Korteweg-de Vries equation, and the Kuramoto-Sivashinsky equation (see \[9\]-\[11\]). The scheme of the backstepping method is as follows. Initially, an exponentially stable target system is established. Then, the PDE system is transformed into the target system by a Volterra transformation. At the same time, the PDE describing the transformation kernel is obtained. Thus the stabilization problem is converted into the well-posedness problem for the kernel and the invertibility of the transformation. Finally, an explicit full-state feedback control is given to demonstrate successful stabilization of the unstable system.
Many problems in the real world involve non-cylindrical regions, such as controlled annealing of a solid in a fluid medium (\[12\]), vibration control of an extendible flexible beam (\[12\]), phase change and heat transfer. A PDE in a non-cylindrical domain is closely related to equations with time-dependent coefficients. A typical system with time-dependent coefficients is the Czochralski crystal growth problem (\[13\]), which is modeled by a heat equation with time-dependent coefficients. Hence, research on problems in non-cylindrical regions has important practical significance. Time-dependent domains and coefficients lead to additional complexity and difficulty. In most papers concerning stabilization, systems are considered in cylindrical domains. Only a few results have been found for systems in non-cylindrical domains (see \[12\], \[14\], \[15\]). In \[12\], the authors considered the problem of the stabilization and the control of distributed systems with time-dependent spatial domains. The evolution of the spatial domains with time is described by a finite-dimensional system of ordinary differential equations depending on the control. Namely, the dynamical behavior of the distributed system is controlled by manipulating its spatial domain. The length of the time interval is finite. In \[14\], stabilization of the heat equation with a time-varying spatial domain is discussed. The well-posedness of the kernel, which depends on time $t$, is proved by successive approximation. There is a restriction on the boundary of the moving domain. More precisely, the boundary function $l(t)$ is analytic and its $j$th derivative satisfies: $$|\partial_{t}^{j}l(t)|\leq M^{j+1}j!,\quad j\geq0.$$ If the boundary is unbounded with respect to $t$, then the assumption in \[14\] does not hold. In \[15\], the authors extended the backstepping-based observer design in \[14\] to the state estimation of parabolic PDEs with a time-dependent spatial domain. As far as we know, the stability and stabilizability of parabolic equations in unbounded non-cylindrical domains have not been discussed in detail yet.
In this paper, we mainly focus on the stability and rapid stabilization of the one-dimensional heat equation in a non-cylindrical domain. Let us begin with the following system $$\label{b1}
\left\{\begin{array}{ll}
u_{t}-u_{xx}=0
&\mbox{in}\ {Q}_t,\\[3mm]
u(0,t)=0,u(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
u(x,0)=u_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${Q}_t=\{(x,t)| x\in (0,l(t))=\Omega_{t},t\in(0,+\infty), l(t)=(1+kt)^{\alpha},\alpha>0, k>0\}$. The existence and uniqueness of solutions to parabolic equations in non-cylindrical domains are investigated in \[16\]. The following lemma can be proved by the method in \[16\].
\[b3\] If $u_{0}(x)\in L^2(\Omega)$, then system (1.1) has a unique weak solution $u$ in the following space $$C([0,+\infty);L^2(\Omega_{t}))\cap L^2(0,+\infty;H^1_{0}(\Omega_{t})),$$ and there exists a positive constant $C$ independent of $u_{0}$ and ${Q}_t$ such that $$\|u\|_{L^{\infty}(0,+\infty;L^2(\Omega_{t}))}+ \|\nabla u\|_{L^{2}({Q}_t)}\leq C\|u_{0}\|_{L^2(\Omega)}.$$ Moreover, if $u_{0}(\cdot)\in H^1_{0}(\Omega)$, then system (1.1) has a unique strong solution $u$ in the class $$C([0,+\infty);H^1_{0}(\Omega_{t}))\cap L^2(0,+\infty;H^2\cap H^1_{0}(\Omega_{t}))\cap H^1(0,+\infty;L^2(\Omega_{t})),$$ and there exists a positive constant $C$ independent of $u_{0}$ and ${Q}_t$ such that $$\|u\|_{L^{\infty}(0,+\infty;H^1_{0}(\Omega_{t}))}+ \|u_t\|_{L^{2}({Q}_t)}\leq C\|u_{0}\|_{H^1_{0}(\Omega)}.$$
The proof of Lemma 1.1 will be given in the Appendix. In what follows, the definitions of stability for system (1.1) are given.
\[0\] System $(1.1)$ is said to be exponentially stable, if for any given $u_{0}\in L^2(\Omega)$, there exist $C>0$, $ \alpha>0$ and $t_{0}>0$ such that for every $t\geq t_{0}$, $\|u\|_{L^{2}(\Omega_{t})}\leq C e^{-\alpha t}$. System $(1.1)$ is said to be analogously exponentially stable, if for any given $u_{0}\in L^2(\Omega)$, there exist $C>0$, $ C_{1}>0$, $t_{0}>0$ and $\beta$ with $ 0<\beta<1$ such that for every $t\geq t_{0}$, $\|u\|_{L^{2}(\Omega_{t})}\leq C e^{-C_{1}{t}^{\beta}}$. And system $(1.1)$ is said to be polynomially stable, if for any given $u_{0}\in L^2(\Omega)$, there exist $\gamma>0$, $t_{0}>0$ and $C>0$ such that for every $t\geq t_{0}$, $\|u\|_{L^{2}(\Omega_{t})}\leq C{(\varphi(t))^{-\gamma}},$ where $\varphi(t)$ is a polynomial with respect to $t$ .
It is easy to see that an exponentially stable system must be analogously exponentially stable, and an analogously exponentially stable system must be polynomially stable. By an energy estimate, we can show that system (1.1) is polynomially (or analogously exponentially) stable. What we want to know is whether the polynomially (or analogously exponentially) stable system is in fact exponentially stable. The conclusion is that the corresponding solution ${u}$ is only polynomially stable for $\alpha\geq\frac{1}{2}$; more precisely, the $L^{2}$ norm of the solution has a polynomial lower bound for some initial values. Similarly, system $(1.1)$ is only analogously exponentially stable for $0<\alpha<\frac{1}{2}$, since a corresponding lower bound on the solution can also be found. Hence, we have the following theorems.
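As a rough numerical illustration (a sketch only; the model profiles below are chosen for this purpose and are not derived from system (1.1)), the following snippet compares the three types of decay for large times; the constants $C$, $C_{1}$, $t_{0}$ in the definitions absorb the behaviour for small $t$.

```python
from math import exp

# model decay profiles: exponential, "analogously exponential", polynomial
for t in (1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:6.0f}   e^-t = {exp(-t):.3e}   "
          f"e^(-sqrt t) = {exp(-t ** 0.5):.3e}   (1+t)^-2 = {(1.0 + t) ** -2:.3e}")
```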
\[02\] System $(1.1)$ is only polynomially stable for $\alpha\geq\frac{1}{2}$.
\[01\] System $(1.1)$ is only analogously exponentially stable for $0<\alpha<\frac{1}{2}$.
\[01\] It is well known that the heat equation on a cylindrical domain is exponentially stable. If $\alpha=1$, then the boundary is a line. It can be seen from Theorem 1.1 that the system is polynomially stable as soon as the boundary of the domain is tilted a little bit, namely when $k$ is small enough. When the boundary of the domain is inclined so steeply that it is close to the $x$-axis, i.e., when $k$ is large enough, the system is still polynomially stable. But if the boundary is $(1+kt)^\alpha$ with $0<\alpha<\frac{1}{2}$, system (1.1) is only analogously exponentially stable. The relation between the stability and the boundary curve is shown in Figure 1.
![The relation between the stability and the boundary curve](stable1.eps "fig:"){width="47.00000%" height="5.5cm"} ![The relation between the stability and the boundary curve](stable2.eps "fig:"){width="47.00000%" height="5.5cm"}
Since system (1.1) decays more slowly than any exponential rate, the goal of this paper is to construct a control that forces the solution to decay at a desired rate. It is easy to see that there exists an internal feedback control leading to exponential decay. However, we are interested in a boundary feedback control that stabilizes the system exponentially. Let us consider the system with boundary control
$$\label{b1}
\left\{\begin{array}{ll}
u_{t}-u_{xx}=0,
&\mbox{in}\ {Q}_t,\\[3mm]
u(0,t)=0,u(l(t),t)=U(t), &\mbox{in}\ (0,+\infty), \\[3mm]
u(x,0)=u_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$
where ${Q}_t=\{(x,t)| x\in (0,l(t))=\Omega_{t},t\in(0,+\infty), l(t)=(1+kt)^{\alpha},0<\alpha\leq1, k>0\}$. The well-posedness of solutions to (1.2) can be obtained from the results in \[16\] and Lemma 1.1.
\[b3\] Assume $U(t)\in H^1(0,+\infty)$ and $\sqrt{l(t)}U'(t)\in L^2(0,+\infty)$. If $u_{0}(x)\in L^2(\Omega)$, system (1.2) has a unique weak solution $u$ in the class $$C([0,+\infty);L^2(\Omega_{t}))\cap L^2(0,+\infty;H^1(\Omega_{t})).$$ Moreover, if $u_{0}(x)\in H^1(\Omega)$ and $u_{0}(1)=U(0)$, then system (1.2) has a unique strong solution $u$ in the class $$C([0,+\infty);H^1(\Omega_{t}))\cap L^2(0,+\infty;H^2(\Omega_{t}))\cap H^1(0,+\infty;L^2(\Omega_{t})).$$
The proof of Lemma 1.2 will be given in the Appendix.
\[02\] Assume that $u_{0}\in L^2(\Omega)$ and $0<\alpha\leq1$. Then there exists a boundary feedback control $U(t)\in H^{1}(0,+\infty)$ such that the solution of the closed-loop system $(1.2)$ decays exponentially to zero.
\[01\] When the growth order of $l(t)$ exceeds 1, this linear feedback control no longer works. However, we cannot conclude that the system is not exponentially stabilizable by feedback, since other forms of feedback control may exist.
This paper is organized as follows. In Section 2, the stability of (1.1) is established. In Section 3, the feedback stabilization of (1.2) for $0<\alpha\leq 1$ is proved. Finally, the appendix is given in Section 4.
The stability of (1.1)
======================
\[01\] Let us assume that $u_{0}\in C(\overline{\Omega})$, $u_{0}>0$, and $\alpha\geq\frac{1}{2}$. Then there is some $t_{0}$ such that the solution of system (1.1) satisfies $$\|u\|_{L^2(0,l(t))}\geq C (1+kt)^{-\beta},\quad t\geq t_{0},$$ where $C>0$ depends on $k,\alpha,t_{0}$, and $\beta>0$ depends on $k,\alpha$.
[**Proof of Lemma 2.1**]{} (1) Let $u_{0}(x)=\sin(\pi x)e^{-\frac{k\alpha x^{2}}{4}}$. We seek a solution of the form $u(x,t)=\sin\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)}$ to the following system, $$\label{b1}
\left\{\begin{array}{ll}
u_{t}-u_{xx}=0
&\mbox{in}\ {Q}_t,\\[3mm]
u(0,t)=0,u(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
u(x,0)=u_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where $\alpha\geq\frac{1}{2}$.
By straightforward computations, we get that the partial derivatives are $$\begin{array}{ll}
\frac{\partial u}{\partial t}=-\frac{\alpha \pi kx}{(1+kt)^{\alpha+1}}\cos\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)}+\sin\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)} \frac{\partial c }{\partial t},\\[3mm]
\frac{\partial u}{\partial x}=\frac{\pi}{(1+kt)^{\alpha}}\cos\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)}+\sin{\frac{\pi x}{(1+kt)^\alpha}}e^{c(x,t)} \frac{\partial c}{\partial x},\\[3mm]
\frac{\partial^{2} u}{\partial x^{2}}=
-\frac{\pi^{2}}{(1+kt)^{2\alpha}}\sin\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)}+\frac{2 \pi }{(1+kt)^{\alpha}}\cos\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)} \frac{\partial c}{\partial x}\\[3mm]
+\sin\frac{\pi x}{(1+kt)^{\alpha}}e^{c(x,t)}( (\frac{\partial c}{\partial x})^{2}+\frac{\partial^{2} c}{\partial x^{2}}).
\end{array}$$ Substituting (2.2) into the equation in (2.1) and comparing the coefficients of sine and cosine, one has the equation satisfied by $c(x,t)$ as follows $$\label{b6}
\left\{\begin{array}{ll}
\frac{\partial c}{\partial x}=-\frac{\alpha k x}{2(1+kt)},\\[3mm]
\frac{\partial c}{\partial t}=-\frac{\pi^{2}}{(1+kt)^{2\alpha}}+\frac{\alpha^{2}k^{2}x^{2}}{4(1+kt)^2}-\frac{\alpha k }{2(1+kt)}.
\end{array}
\right.$$
When $\alpha=\frac{1}{2}$, integrating (2.3) with respect to $x$ and $t$, respectively, we obtain $$\label{b7}
\left\{\begin{array}{ll}
c(x,t)- c(0,t)=-\frac{\alpha kx^{2}}{4(1+kt)},\\[3mm]
c(x,t)- c(x,0)=-\frac{\pi^{2}\ln(1+kt)}{k}-\frac{k\alpha^{2}x^{2}}{4(1+kt)}+
\frac{k\alpha^{2}x^{2}}{4}-\frac{\alpha}{2}\ln(1+kt).
\end{array}
\right.$$ Taking $c(0,0)=0$, then we have $$c(x,t)=-\frac{\pi^{2}\ln(1+kt)}{k}-\frac{\alpha}{2}\ln(1+kt)-
\frac{k\alpha^{2}x^{2}}{4(1+kt)}+\frac{k\alpha^{2}x^{2}}{4}-\frac{k\alpha x^{2}}{4}.$$ Hence, the expression of the solution to (2.1) is $$\label{b8}
\begin{array}{ll}
u&=\sin\frac{\pi x}{(1+kt)^{\alpha}}(1+kt)^{-\frac{\alpha}{2}-
\frac{\pi^{2}}{k}}e^{\frac{k^{2}\alpha^{2}x^{2}t}{4(1+kt)}}e^{-\frac{k\alpha x^{2}}{4}}.
\end{array}$$
Now let us estimate the norm $$\label{b10}
\begin{array}{ll}
\|u\|^{2}_{L^{2}(0,l(t))}
&= \int^{(1+kt)^\alpha}_{0}(\sin\frac{\pi x}{(1+kt)^{\alpha}})^{2}{(1+kt)}^{-\alpha}(1+kt)^{-\frac{2\pi^{2}}{k}}
e^{\frac{k^{2}\alpha^{2}x^{2}t}{2(1+kt)}}e^{-\frac{k\alpha x^{2}}{2}}dx\\[3mm]
&\geq(1+kt)^{-\frac{2\pi^{2}}{k}}(1+kt)^{-\alpha}\int^{(1+kt)^\alpha}_{0}(\sin\frac{\pi x}{(1+kt)^{\alpha}})^{2}e^{-\frac{k\alpha x^{2}}{2}}dx.
\end{array}$$ Thanks to the following inequality $$|\sin\theta|\geq\frac{2}{\pi}|\theta|, \theta\in [-\frac{\pi}{2},\frac{\pi}{2}],$$ we get $$\label{b11}
\begin{array}{ll}
\|u\|^{2}_{L^{2}(0,l(t))}&\geq\|u\|^{2}_{L^{2}(0,\frac{l(t)}{2})}\\[3mm]
&\geq (1+kt)^{-\frac{2\pi^{2}}{k}}(1+kt)^{-\alpha}\int^{\frac{(1+kt)^\alpha}{2}}_{0}\frac{4}{\pi^{2}}({\frac{\pi x}{(1+kt)^{\alpha}}})^{2}e^{-\frac{k\alpha x^{2}}{2}}dx \\[3mm]
&\geq 4 (1+kt)^{-\frac{2\pi^{2}}{k}}(1+kt)^{-3\alpha}\int^{\frac{(1+kt)^\alpha}{2}}_{0} x^{2}e^{-\frac{k\alpha x^{2}}{2}}dx.
\end{array}$$ Set $x=\sqrt{\frac{2}{k\alpha}}y$, the integral in (2.7) turns to be $$\label{b12}
\begin{array}{ll}
\int^{\frac{(1+kt)^\alpha}{2}}_{0} x^{2}e^{-\frac{k\alpha x^{2}}{2}}dx
&= ({\frac{2}{k\alpha}})^{\frac{3}{2}}\int_{0}^{\sqrt{\frac{k\alpha}{2}}
\frac{(1+kt)^\alpha}{2}}y^{2}e^{-y^{2}}dy\\[3mm]
&=({\frac{2}{k\alpha}})^{\frac{3}{2}}(-\frac{y}{2}e^{-y^{2}}|_{0}^
{\sqrt{\frac{k\alpha}{2}}\frac{(1+kt)^\alpha}{2}}+
\int^{\sqrt{\frac{k\alpha}{2}}\frac{(1+kt)^{\alpha}}{2}}_{0}\frac{e^{-y^{2}}}{2}dy)\\[3mm]
&\geq ({\frac{2}{k\alpha}})^{\frac{3}{2}}(-\frac{1}{8}+\frac{\sqrt{\pi}}{8}) \\[3mm]
&\geq {C_1 },
\end{array}$$ where $C_{1}=({\frac{2}{k\alpha}})^{\frac{3}{2}}(-\frac{1}{8}+\frac{\sqrt{\pi}}{8}).$ The inequalities $$-\frac{y}{2}e^{-y^{2}}|_{y=\sqrt{\frac{k\alpha}{2}}\frac{(1+kt)^\alpha}{2}}\geq -\frac{1}{8}$$ $$\int^{{\sqrt{\frac{k\alpha}{2}}\frac{(1+kt)^{\alpha}}{2}}}_{0}\frac{e^{-y^{2}}}{2}dy\geq \frac{\sqrt{\pi}}{8}$$ hold for $t\geq t_{0}$ due to the facts $$\lim_{y\rightarrow\infty} ye^{-y^{2}}=0, \int^{\infty}_{0}e^{-y^{2}}dy=\frac{\sqrt{\pi}}{2}.$$ Combining (2.7) and (2.8), we derive that $$\|u\|_{L^2(0,l(t))}\geq C_{2} (1+kt)^{-\frac{\pi^{2}}{k}}(1+kt)^{-\frac{3\alpha}{2}},$$ where $C_{2}=2\sqrt{C_{1}}.$
In the case of $\alpha>\frac{1}{2}$, we can get a similar estimate. Integrating (2.3), we have $$\label{b7}
\left\{\begin{array}{ll}
c(x,t)- c(0,t)=-\frac{k\alpha x^{2}}{4(1+kt)},\\[3mm]
c(x,t)- c(x,0)=-\frac{\pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)
-\frac{k\alpha^{2}x^{2}}{4(1+kt)}+\frac{k\alpha^{2}x^{2}}{4}-\frac{\alpha}{2}\ln(1+kt).
\end{array}
\right.$$ Taking $c(0,0)=0$, we obtain that $$c(x,t)=-\frac{\pi^{2}}{(1-2\alpha)k}((1+kt)^{(1-2\alpha)}-1)-\frac{\alpha}{2}\ln(1+kt)-
\frac{k\alpha^{2}x^{2}}{4(1+kt)}+\frac{k\alpha^{2}x^{2}}{4}-\frac{k\alpha x^{2}}{4}.$$ Hence, the solution to (2.1) is $$\label{b8}
\begin{array}{ll}
u&=\sin\frac{\pi x}{(1+kt)^{\alpha}}e^{-\frac{\pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}(1+kt)^
{-\frac{\alpha}{2}}e^{\frac{k^{2}\alpha^{2}x^{2}t}{4(1+kt)}}e^{-\frac{k\alpha x^{2}}{4}}.
\end{array}$$ By similar estimate, the norm of the solution turns out to be $$\label{b10}
\begin{array}{ll}
\|u\|^{2}_{L^{2}(0,l(t))}
&\geq \int^{\frac{(1+kt)^\alpha}{2}}_{0}(\sin\frac{\pi x}{(1+kt)^{\alpha}})^{2}{(1+kt)}^{-\alpha}e^{-\frac{2 \pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}e^
{\frac{k^{2}\alpha^{2}x^{2}t}{2(1+kt)}}e^{-\frac{k\alpha x^{2}}{2}}dx\\[3mm]
&\geq e^{-\frac{2 \pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}(1+kt)^{-\alpha}\int^{\frac{(1+kt)^\alpha}{2}}_{0}(\sin\frac{\pi x}{(1+kt)^{\alpha}})^{2}e^{-\frac{k\alpha x^{2}}{2}}dx\\[3mm]
&\geq C_{3} e^{-\frac{2 \pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}(1+kt)^{-\alpha}.
\end{array}$$ We derive that $$\label{b11}
\begin{array}{ll}
\|u\|_{L^2(0,l(t))}&\geq C_{4} e^{-\frac{ \pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}(1+kt)^{-\frac{\alpha}{2}}\\[3mm]
&\geq C_{4} e^{\frac{ \pi^{2}}{k(1-2\alpha)}} (1+kt)^{-\frac{\alpha}{2}}.
\end{array}$$
(2) Assume $\alpha=\frac{1}{2}$. Let ${v}_{0}=A\sin(\pi x)e^{-\frac{k\alpha x^{2}}{4}}$ with $A>0$; then, by step (1), the corresponding solution ${v}$ is only polynomially stable. For any ${u}_{0}\in C(\overline{\Omega})$ with ${u}_{0}>0$, there exists such a ${v}_{0}$ with ${u}_{0}\geq {v}_{0}$ (it suffices to take $A$ small enough). Let $w={u}-{v}$; then $w$ is the solution of the following system $$\label{b13}
\left\{\begin{array}{ll}
w_{t}-w_{xx}=0,
&\mbox{in}\ {Q}_t,\\[3mm]
w(0,t)=w(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
w(x,0)=w_{0}(x)={u}_{0}(x)-{v}_{0}(x)\geq0, &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${Q}_t=\{(x,t)| x\in (0,l(t)),t\in(0,+\infty), l(t)=(1+kt)^{\alpha}\}$. If we reduce the system (2.13) to a variable coefficient parabolic equation in the cylindrical domain, it is clear that the comparison principle holds. Therefore $w\geq0 $ in ${Q}_t $.
Since ${v}\geq0$ in ${{Q}}_t$ , we get ${u}\geq {v}\geq0$ in ${{Q}}_t$. Then we arrive at $$\|{u}\|_{L^2(0,l(t))}\geq\|{v}\|_{L^2(0,l(t))}\geq C_{2}(1+kt)^{-\frac{\pi^{2}}{k}}(1+kt)^{-\frac{3\alpha}{2}} ,t\geq t_{0}.$$
If $\alpha>\frac{1}{2}$, ${u} _{0}\in C(\overline{\Omega}),{u} _{0}>0$, we have similar estimate.
Hence, we can take $\beta=\frac{\pi^{2}}{k}+\frac{3\alpha}{2}$ for $\alpha=\frac{1}{2}$ and $\beta=\frac{\alpha}{2}$ for $\alpha>\frac{1}{2}$. $\Box$
\[02\] Let us assume that $u_{0}\in C(\overline{\Omega})$, $u_{0}>0$, and $0<\alpha<\frac{1}{2}$. Then there is some $t_{0}$ such that the solution of system (1.1) satisfies $$\|u\|_{L^2(0,l(t))}\geq C_{1} (1+kt)^{-\frac{\alpha}{2}}e^{-C_{2}t^{1-2\alpha}},\quad t\geq t_{0},$$ where $C_{1},C_{2}>0$ depend on $k,\alpha,t_{0}$.
[**Proof of Corollary 2.1:**]{} The proof can be obtained by a simple modification of the first step in the proof of Lemma 2.1. Indeed, integrating (2.3) for $0<\alpha<\frac{1}{2}$, we also have (2.9),(2.10) and (2.11). Namely, $$\|u\|^{2}_{L^{2}(0,l(t))}\geq C_{3} e^{-\frac{2 \pi^{2}}{k(1-2\alpha)}((1+kt)^{(1-2\alpha)}-1)}(1+kt)^{-\alpha}.$$ Due to $0<1-2\alpha<1$, we get that
$$\label{b11}
\begin{array}{ll}
\|u\|_{L^2(0,l(t))}&\geq C_{4} (1+kt)^{-\frac{\alpha}{2}}e^{-C_{2}(1+kt)^{1-2\alpha}}\\[3mm]
&\geq C'_{4} (1+kt)^{-\frac{\alpha}{2}}e^{-Ct^{1-2\alpha}},t\geq t_{0}.
\end{array}$$
Thanks to the inequality $$(a+b)^\alpha\leq a^\alpha+b^{\alpha},\quad\mbox{if}\quad a>0,b>0,0<\alpha<1,$$ the last inequality in (2.15) holds. $\Box$
[**Proof of Theorem 1.1:**]{}
Set $x=(1+kt)^{\alpha}y, y\in(0,1), w(y,t)=u((1+kt)^{\alpha}y,t)$. System (1.1) can be rewritten as $$\label{b3}
\left\{\begin{array}{ll}
w_{t}-\frac{k\alpha yw_{y}}{1+kt}-\frac{w_{yy}}{(1+kt)^{2\alpha}}=0,
&\mbox{in}\ {Q},\\[3mm]
w(0,t)=w(1,t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
w(y,0)=u_{0}(y), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${Q}=\{(y,t)| y\in (0,1),t\in(0,+\infty)\}$. System (2.16) is a variable coefficient parabolic equation in the cylindrical domain ${Q}$.
The stability will be shown by the classical energy estimate. Multiplying (2.16) by $w$ and integrating with respect to $y$, we get $$\label{b4}
\frac{1}{2}\frac{d}{dt}\int ^{1}_{0}w^{2}dy+ \frac{k\alpha}{2(1+kt)}\int ^{1}_{0}w^{2}dy+\frac{1}{(1+kt)^{2\alpha}}\int ^{1}_{0}w_{y}^{2}dy=0.$$ Let $E(t)=\int ^{1}_{0}w^{2}dy$. By Poincaré’s inequality, we see that $$\label{b4}
\frac{1}{2}\frac{d}{dt}E(t)+ \frac{k\alpha}{2(1+kt)}E(t)+\frac{C}{(1+kt)^{2\alpha}}E(t)\leq 0$$ for a suitable $C>0$.
Using Gronwall inequality, we deduce that for $\alpha=\frac{1}{2}$ $$\label{b1}
E(t)\leq E(0)(1+kt)^{-(\alpha+\frac{2C}{k})}.$$ Consequently, $$\|u\|_{L^2(0,l(t))}= E(t)^{ \frac{1}{2}}\leq\|u_{0}\|_{L^2(0,1)}(1+kt)^{-(\frac{\alpha}{2}+\frac{C}{k})}
\leq\|u_{0}\|_{L^2(0,1)}(1+kt)^{-\frac{\alpha}{2}}.$$ Namely, the solution is polynomially stable in the sense of $L^{2}$ norm.
For $\alpha>\frac{1}{2} $, the energy estimate is $$\label{b4}
E(t)\leq E(0)(1+kt)^{-\alpha} e^{\frac{2C(1-(1+kt)^{1-2\alpha})}{k(1-2\alpha)}}\leq E(0) (1+kt)^{-\alpha}.$$ Thus,$$\|u\|_{L^2(0,l(t))}\leq\|u_{0}\|_{L^2(0,1)}(1+kt)^{-\frac{\alpha}{2}}.$$
By the estimates above together with Lemma 2.1, we finish the proof of Theorem 1.1. $\Box$
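As a numerical sanity check of the energy argument (a sketch only, not part of the proof), one can discretize the transformed problem (2.16) with explicit finite differences and compare $E(t)^{1/2}=\|w(\cdot,t)\|_{L^{2}(0,1)}$ with the Gronwall bound coming from (2.18)–(2.19). The grid, the time horizon, the values of $k$ and $\alpha$, the initial datum $\sin(\pi y)$, and the use of the Poincaré constant $\pi^{2}$ of the unit interval are choices made only for this illustration.

```python
import numpy as np

k, alpha = 10.0, 0.5
N = 50
y = np.linspace(0.0, 1.0, N + 1)
dy = y[1] - y[0]
w = np.sin(np.pi * y)                         # initial datum, vanishing at y = 0 and y = 1
dt = 0.4 * dy ** 2                            # explicit scheme; dt <= dy^2/2 suffices here
norm0 = np.sqrt(np.sum(w ** 2) * dy)
t, step = 0.0, 0
while t < 20.0:
    a = k * alpha / (1.0 + k * t)             # coefficient of y*w_y in (2.16)
    d = (1.0 + k * t) ** (-2.0 * alpha)       # diffusion coefficient in (2.16)
    wy = (w[2:] - w[:-2]) / (2.0 * dy)
    wyy = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dy ** 2
    w[1:-1] += dt * (a * y[1:-1] * wy + d * wyy)
    w[0] = w[-1] = 0.0
    t += dt
    step += 1
    if step % 25000 == 0:
        norm = np.sqrt(np.sum(w ** 2) * dy)
        bound = norm0 * (1.0 + k * t) ** (-(alpha + 2.0 * np.pi ** 2 / k) / 2.0)
        print(f"t = {t:5.1f}   ||w(.,t)||_L2 = {norm:.3e}   Gronwall bound = {bound:.3e}")
```

The printed norms stay below the polynomial bound, consistent with the estimate above.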
[**Proof of Theorem 1.2:**]{}
From (2.18), we have for $0<\alpha<\frac{1}{2}$, $$\label{b4}
E(t)\leq E(0)(1+kt)^{-\alpha} e^{\frac{2C(1-(1+kt)^{1-2\alpha})}{k(1-2\alpha)}}\leq E(0) e^{\frac{2C(1-(1+kt)^{1-2\alpha})}{k(1-2\alpha)}}.$$ Thanks to $(1+kt)^{1-2\alpha}\geq(kt)^{1-2\alpha}$, we get $$\|u\|_{L^2(0,l(t))}\leq\|u_{0}\|_{L^2(0,1)} e^{\frac{C(1-(1+kt)^{1-2\alpha})}{k(1-2\alpha)}}\leq C_{1} e^{-C_{2}t^{1-2\alpha}}.$$ Hence, in the case of $0<\alpha<\frac{1}{2}$, $(1.1)$ is analogously exponentially stable in the sense of $L^{2}$ norm.
Then, in view of Corollary 2.1, we complete the proof of Theorem 1.2.
$\Box$
\[02\] Let $$l_{1}(t)=(1+kt)^{\alpha_1}, l_{2}(t)=(1+kt)^{\alpha_2} (0<\alpha_{1}<\alpha_{2}),$$ $$\Omega^{1}_{t}=\{x| 0<x< l_{1}(t)\}, \Omega^{2}_{t}=\{x| 0< x < l_{2}(t)\},$$ $$u_{0}\in H^{1}_{0}(0,1),\quad u_{0}\geq 0\ (\mbox{or } u_{0}\leq 0).$$ Suppose $L(t)$ is a smooth curve between the curves $l_{1}(t)$ and $l_{2}(t)$, with $L(0)=1$. Let $\Omega_{t}=\{x| 0<x< L(t)\}$. If the solutions corresponding to $l_{1}(t)$ and $l_{2}(t)$ are only polynomially (or analogously exponentially) stable, then the solution of $(1.1)$ with $u_{0}$ and boundary curve $L(t)$ is only polynomially (or analogously exponentially) stable.
[**Proof of Corollary 2.2:**]{}
Denote by $u_{1}$ the solution corresponding to $u_{0}$ and $ l_{1}(t)$, and $u_{2}$ the solution corresponding to $u_{0}$ and $ l_{2}(t)$. Assume $u_{1}$ and $u_{2}$ are only polynomially stable. Denote the solution corresponding to $u_{0}$ and $ L(t)$ by $v$. We have that $v|_{\Omega^{1}_{t}\times(0,\infty)}$ is the solution of the following system $$\label{b1}
\left\{\begin{array}{ll}
v_{t}-\Delta v=0,
&\mbox{in}\ \Omega^{1}_{t}\times(0,\infty),\\[3mm]
v(0,t)=0,v(l_{1}(t),t)=\alpha(t), &\mbox{in}\ (0,+\infty), \\[3mm]
v(x,0)=u_{0}(x)\geq0, &\mbox{in}\ \Omega.
\end{array}
\right.$$ Therefore $\alpha(t)\geq0$ by the comparison principle for parabolic equations in cylindrical domains and the regularity of the solution in Lemma 1.1. Using the comparison principle again, we get $v\geq u_{1}\geq0$ in $\Omega^{1}_{t}\times(0,\infty)$. Then we arrive at $$\|v\|_{L^2(\Omega_{t})}\geq\|v\|_{L^2(\Omega^{1}_{t})}\geq\|u_{1}\|_{L^2(\Omega^{1}_{t})}\geq C_{1} (1+kt)^{-\beta},\quad t\geq t_{0}.$$ Similarly, $$C_{2}(1+kt)^{-\frac{\alpha_{2}}{2}}\geq\|u_{2}\|_{L^2(\Omega^{2}_{t})}\geq\|v\|_{L^2(\Omega_{t})},\quad t\geq t_{0}.$$ Hence the solution $v$ is only polynomially stable. $\Box$
Exponential stabilization
=========================
In this part, we follow the standard procedure of the backstepping method.
[**Step 1: The stability of the target system**]{}
We choose the stable target system $$\label{b15}
\left\{\begin{array}{ll}
w_{t}-w_{xx}+\lambda w=0,
&\mbox{in}\ {Q}_t,\\[3mm]
w(0,t)=0, w(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
w(x,0)=w_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${Q}_t=\{(x,t)| x\in (0,l(t)),t\in(0,+\infty), l(t)=(1+kt)^{\alpha}, k>0, \alpha>0\}, \lambda> 0$ will be determined later.
\[01\] System (3.1) is exponentially stable.
For convenience, we present the proof of Lemma 3.1, which can also be found in \[14\].
Proof of Lemma 3.1: Multiplying the equation in system $(3.1)$ by $w$ and integrating from $0$ to $l(t)$ with respect to $x$, we get $$\frac{1}{2}\frac{d}{dt}\int_{0}^{l(t)}w^{2}dx-ww_{x}|_{0}^{l(t)}+\int_{0}^{l(t)}w^{2}_{x}dx+\lambda\int_{0}^{l(t)}w^{2}dx=0.$$ Taking the boundary conditions into account, we have $$\label{b16}
\begin{array}{ll}
\frac{1}{2}\frac{d}{dt}\int_{0}^{l(t)}w^{2}dx&=-\int_{0}^{l(t)}w^{2}_{x}dx-\lambda\int_{0}^{l(t)}w^{2}dx\\[3mm]
&\leq -\lambda\int_{0}^{l(t)}w^{2}dx.
\end{array}$$ The stability result then follows from the Gronwall inequality: $$\|w\|_{L^{2}(0,l(t))}\leq\|w_{0}\|_{L^{2}(0,1)}e^{-\lambda t}.$$

[**Step 2: The equation of the kernel**]{}
We introduce the Volterra transformation as follows, $$w(x,t)=u(x,t)+\int_{0}^{x}p(x,y,t)u(y,t)dy.$$ Computing the partial derivatives directly, one gets $$\label{b17}
w_{x}=u_{x}(x,t)+p(x,x,t)u(x,t)+\int_{0}^{x}p_{x}(x,y,t)u(y,t)dy,$$
$$\label{b18}
w_{xx}=u_{xx}+\frac{dp(x,x,t)}{dx}u+p(x,x,t)u_{x}+p_{x}(x,x,t)u+\int_{0}^{x}p_{xx}udy,$$
$$\label{b19}
w_{t}=u_{t}+\int_{0}^{x}p(x,y,t)u_{t}(y,t)dy+\int_{0}^{x}p_{t}udy.$$
Taking into account the equation in (1.2) and integrating by parts, one has $$\label{b20}
\begin{array}{ll}
w_{t}&=u_{t}(x,t)+\int_{0}^{x}p_{t}udy+\int_{0}^{x}pu_{yy}(y,t)dy\\[3mm]
&=u_{t}(x,t)+\int_{0}^{x}p_{t}udy+\int_{0}^{x}p_{yy}(x,y,t)u(y,t)dy\\[3mm]
&+p(x,x,t)u_{x}(x,t)-p(x,0,t)u_{x}(0,t)
-p_{y}(x,x,t)u(x,t)+p_{y}(x,0,t)u(0,t).
\end{array}$$ According to (3.1),(3.4), (3.6), and taking $p(x,0,t)=0$, we derive that $$\label{b21}
\begin{array}{ll}
w_{t}-w_{xx}+\lambda w=-(2\frac{dp(x,x,t)}{dx}-\lambda )u+\int_{0}^{x}(p_{t}-p_{xx}+p_{yy}+\lambda p)udy
\end{array}.$$ Therefore, we choose kernel $p(x,y,t)$ defined on $ \mathbb{S}(t)=\{(x,y)|0\leq y\leq x\leq l(t)\}$ satisfying the following system, $$\label{b19}
\left\{\begin{array}{ll}
p_{t}-p_{xx}+p_{yy}+\lambda p=0,\\[3mm]
p(x,0,t)=0, \\[3mm]
\frac{dp(x,x,t)}{dx}=\frac{\lambda}{2}.
\end{array}
\right.$$ The stabilization problem is thus reduced to the existence of the kernel. Meanwhile, the boundary condition for $w$ yields the following state feedback control: $$U(t)=-\int_{0}^{l(t)}p(l(t),y,t)u(y,t)dy.$$

[**Step 3: The well-posedness and the estimate of the kernel**]{}
In \[4\], the backstepping method is extended to plants with time-varying coefficients and the explicit expression of the kernel is given. The kernel $p(x,y,t)$, defined on $\mathbb{S}(t)=\{(x,y)|0\leq y\leq x\leq l(t)\}$ and satisfying (3.8), is of the following form $$\label{b18}
p(x,y,t)=-\frac{y}{2}e^{-\lambda t}f(z,t),z=\sqrt{x^{2}-y^{2}},$$ where $$\label{b18}
\left\{\begin{array}{ll}
f_{t}=f_{zz}+\frac{3f_{z}}{z},\\[3mm]
f_{z}(0,t)=0,f(0,t)=-\lambda e^{\lambda t}:=F(t) .
\end{array}
\right.$$ The $C_{z,t}^{2,1}$ solution to this problem is $$\label{b18}
f(z,t)=\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}(\frac{z}{2})^{2n}F^{(n)}(t).$$ Taking into account the form of the kernel in \[4\], we can obtain the growth order of $p(x,y,t)$. The solution of (3.8) is $$\label{b22}
\begin{array}{ll}
p(x,y,t)&=-\frac{y}{2}e^{-\lambda t}f(z,t)\\[3mm]
&=-\frac{y}{2}e^{-\lambda t}\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}(\frac{z}{2})^{2n}F^{(n)}(t)\\[3mm]
&=\frac{y}{2}e^{-\lambda t}\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}(\frac{z}{2})^{2n}{\lambda}^{n+1}e^{\lambda t}\\[3mm]
&=\frac{y}{2}\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}(\frac{x^{2}-y^{2}}{4})^{n}{\lambda}^{n+1}\\[3mm]
&=\frac{y}{2l(t)}\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}(\frac{\frac{x^{2}}{l^{2}(t)}-\frac{y^{2}}{l^{2}(t)}}{4})^{n}l^{2n+1}(t){\lambda}^{n+1}.
\end{array}$$ The absolute value of the kernel is $$\label{b22}
\begin{array}{ll}
|p(x,y,t)|
&\leq\frac{1}{2}\Sigma_{n=0}^{\infty}\frac{{\lambda}^{n+1} l^{2n+1}(t)}{4^{n}n!(n+1)!}\\[3mm]
&\leq{\lambda}^{\frac{1}{2}}\Sigma_{n=0}^{\infty}\frac{({\frac{{\lambda}^{\frac{1}{2}} l(t)}{2}})^{n}}{n!}\frac{(\frac{{\lambda}^{\frac{1}{2}} l(t)}{2})^{n+1}}{(n+1)!}\\[3mm]
&\leq{\lambda}^{\frac{1}{2}}\Sigma_{n=0}^{\infty}\frac{({\frac{{\lambda}^{\frac{1}{2}} l(t)}{2}})^{n}}{n!}\Sigma_{n=0}^{\infty}\frac{({\frac{{\lambda}^{\frac{1}{2}} l(t)}{2}})^{n+1}}{(n+1)!}\\[3mm]
&\leq{\lambda}^{\frac{1}{2}}e^{\frac{{\lambda}^{\frac{1}{2}} l(t)}{2}}e^{\frac{{\lambda}^{\frac{1}{2}} l(t)}{2}}\\[3mm]
&\leq{\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}.
\end{array}$$

[**Step 4: The invertibility of the transformation**]{}
Let $$u(x,t)=w(x,t)-\int_{0}^{x}q(x,y,t)w(y,t)dy.$$ By similar arguments, the equation for the kernel $q(x,y,t)$ becomes $$\label{b29}
\left\{\begin{array}{ll}
q_{t}-q_{xx}+q_{yy}-\lambda q=0,\\[3mm]
q(x,0,t)=0, \\[3mm]
\frac{dq(x,x,t)}{dx}=\frac{\lambda }{2},
\end{array}
\right.$$ which is defined on $ \mathbb{S}(t)=\{(x,y)|0\leq y\leq x\leq l(t)\}$.
Proceeding as in the analysis of ${p}(x,y,t)$, one finds that the kernel ${q}(x,y,t)$ enjoys the same properties as ${p}(x,y,t)$, such as existence, uniqueness, and the estimate. Thus, one has $$\label{b32}
\begin{array}{ll}
|{q}|\leq {\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}
\end{array}.$$
[**Step 5: Rapid exponential stabilization of (1.2)**]{}
Now we will show the rapid exponential stabilization of (1.2). It is easy to see by the Hölder inequality that $$\label{b34}
\begin{array}{ll}
|\int_{0}^{x}q(x,y,t)w(y,t)dy|
&\leq {\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}\int_{0}^{x}|w|dy\\[3mm]
&\leq {\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}\|w\|_{L^{2}(0,l(t))}x^{\frac{1}{2}}.
\end{array}$$ The estimate for the solution of system (1.2) turns out to be $$\label{b34}
\begin{array}{ll}
\|u\|_{L^{2}(0,l(t))}&\leq\|w\|_{L^{2}(0,l(t))}+\frac{1}{\sqrt{2}}\|w\|_{L^{2}(0,l(t))}{l(t)} {\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}\\[3mm]
&\leq \|w_{0}\|_{L^{2}(0,1)}e^{-\lambda t}+\frac{1}{\sqrt{2}}{\lambda}^{\frac{1}{2}}l(t)\|w_{0}\|_{L^{2}(0,1)}e^{-\lambda t+{\lambda}^{\frac{1}{2}} (1+kt)^{\alpha}}.
\end{array}$$ Since $l(t)$ is unbounded as time $t$ tends to infinity, we need to restrict the growth of $l(t)$ in order to guarantee the exponential stability of the solution. It is readily verified that the solution is exponentially stable for $0<\alpha\leq1$, because the following inequality $$\label{b34}
\begin{array}{ll}
\|u\|_{L^{2}(0,l(t))}&\leq \|w_{0}\|_{L^{2}(0,1)}e^{-\lambda t}+\frac{1}{\sqrt{2}}{\lambda}^{\frac{1}{2}}l(t)\|w_{0}\|_{L^{2}(0,1)}e^{-\lambda t+{\lambda}^{\frac{1}{2}} (1+kt)}\\[3mm]
&\leq e^{-\frac{(\lambda-{\lambda}^{\frac{1}{2}}k) t}{2}}
\end{array}$$ holds for $t\geq t_{0}$ if $t_{0}$ is large enough and $\lambda>k^{2}$. Hence, we build a feedback control law to force the solution of the closed-loop system to decay exponentially to zero with arbitrarily prescribed decay rates.
It can also be checked that the feedback control belongs to $H^{1}(0,+\infty)$ for $0<\alpha\leq1.$ From (3.13) and (3.18), we get the following estimate $$\label{b35}
\begin{array}{ll}
\|U(t)\|^{2}_{L^{2}(0,+\infty)}&=\int^{+\infty}_{0}U^{2}(t)dt\\[3mm]
&=(\int^{+\infty}_{t_0}+\int^{t_{0}}_{0})U^{2}(t)dt\\[3mm]
&\leq(\int^{+\infty}_{t_0}+\int^{t_{0}}_{0})\|u\|^{2}_{L^{2}(0,l(t))}
{({\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)})}^{2}l(t)dt\\[3mm]
&\leq\int^{+\infty}_{t_0} e^{-(\lambda-{\lambda}^{\frac{1}{2}}k) t}{\lambda}e^{2{\lambda}^{\frac{1}{2}} (1+kt)}(1+kt)dt+\int^{t_{0}}_{0} \|u\|^{2}_{L^{2}(0,l(t))}{\lambda}e^{2{\lambda}^{\frac{1}{2}} l(t)}l(t)dt\\[3mm]
&\leq\int^{+\infty}_{t_0} e^{-(\lambda-3{\lambda}^{\frac{1}{2}}k) t}{\lambda}e^{2{\lambda}^{\frac{1}{2}}}(1+kt)dt+\int^{t_{0}}_{0} \|u\|^{2}_{L^{2}(0,l(t))}{\lambda}e^{2{\lambda}^{\frac{1}{2}} l(t)}l(t)dt\\[3mm]
&< \infty,
\end{array}$$ provided $\lambda>9k^{2}$, which means that $U(t)\in L^{2}(0,+\infty). $
On the other hand, $$\label{b36}
\begin{array}{ll}
U'(t)&=-l'(t)p(l(t),l(t),t)u(l(t),t)-\int_{0}^{l(t)}p_{x}(l(t),y,t)l'(t)u(y,t)dy\\[3mm]
&-\int_{0}^{l(t)}p_{t}(l(t),y,t)u(y,t)dy-\int_{0}^{l(t)}p(l(t),y,t)u_{t}(y,t)dy.
\end{array}$$
The first term belongs to $L^{2}(0,+\infty)$ provided $\lambda>25k^{2}$, due to $$\label{b37}
\begin{array}{ll}
\int_{t_0}^{+\infty}(l'(t)p(l(t),l(t),t)u(l(t),t))^{2}dt
&=\int_{t_0}^{+\infty}(l'(t)p(l(t),l(t),t)U(t))^{2}dt\\[3mm]
&\leq\int_{t_0}^{+\infty}{k^{2}\alpha}^{2}(1+kt)^{2\alpha-2}({\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)})^{4}l(t) e^{-(\lambda-k{\lambda}^{\frac{1}{2}}) t}dt\\[3mm]
&\leq\int_{t_0}^{+\infty}C(k,\alpha,\lambda)(1+kt)^{3\alpha-2}e^{-(\lambda-5{k\lambda}^{\frac{1}{2}}) t}dt\\[3mm]
&<\infty.
\end{array}$$
Using the expression of the kernel, we can calculate the derivatives with respect to $x$ and $t,$ $$\label{b37}
\begin{array}{ll}
p_{x}(x,y,t)&=-\frac{y}{2}e^{-\lambda t}f_{z}(z,t)\frac{x}{\sqrt{x^{2}-y^{2}}}\\[3mm]
&=-\frac{xy}{2z}e^{-\lambda t}\Sigma_{n=1}^{\infty}\frac{n}{n!(n+1)!}(\frac{z}{2})^{2n-1}F^{(n)}(t),
\end{array}$$
$$\label{b37}
\begin{array}{ll}
p_{t}(x,y,t)=\frac{y}{2}\lambda e^{-\lambda t}f(z,t)-\frac{y}{2} e^{-\lambda t}f_{t}(z,t).
\end{array}$$
The absolute value of $p_{t}(x,y,t)$ is $$\label{b37}
\begin{array}{ll}
|p_{t}(x,y,t)|&\leq\lambda|p|+|\frac{y}{2}e^{-\lambda t}\Sigma_{n=0}^{\infty}\frac{1}{n!(n+1)!}
(\frac{z}{2})^{2n}\lambda^{n+2}e^{\lambda t}|\\[3mm]
&\leq2\lambda|p|.
\end{array}$$ The absolute value of $p_{x}(x,y,t)$ is $$\label{b37}
\begin{array}{ll}
|p_{x}|&\leq
{\frac{xy}{4}}e^{-\lambda t}\Sigma_{n=1}^{\infty}\frac{1}{n!n!}(\frac{z}{2})^{2n-2}F^{(n)}(t)\\[3mm]
&\leq x^{2}e^{-\lambda t}\Sigma_{n=1}^{\infty}\frac{1}{n!n!}
\frac{(x^{2}-y^{2})^{n-1}}{4^{n}}{\lambda}^{n+1}e^{\lambda t}\\[3mm]
&\leq
(\frac{x}{l(t)})^{2}\Sigma_{n=1}^{\infty}\frac{1}{n!n!4^{n}}
(\frac{x^{2}-y^{2}}{l^{2}(t)})^{n-1}l^{2n}(t){\lambda}^{n+1}\\[3mm]
&\leq\Sigma_{n=1}^{\infty}\frac{1}{n!n!4^{n}}
l^{2n}(t){\lambda}^{n+1}\\[3mm]
&\leq\lambda e^{\lambda^{\frac{1}{2}}l(t)}.
\end{array}$$ The absolute value of the second term on the right hand of (3.20) is $$\label{b37}
\begin{array}{ll}
|\int_{0}^{l(t)}p_{x}(l(t),y,t)l'(t)u(y,t)dy|
&\leq\lambda e^{\lambda^{\frac{1}{2}}l(t)}l'(t)\|u\|_{L^{2}(0,l(t))}\sqrt{l(t)}\\[3mm]
&\leq C(k,\alpha,\lambda)(1+kt)^{\frac{3\alpha}{2}-1} e^{-\frac{(\lambda-3k{\lambda}^{\frac{1}{2}}) t}{2}}.
\end{array}$$ When $\lambda>9k^{2}$, the $L^{2}$ norm estimate of the second term becomes $$\label{b37}
\begin{array}{ll}
\int_{0}^{+\infty}(\int_{0}^{l(t)}p_{x}(l(t),y,t)l'(t)u(y,t)dy)^{2}dt
&\leq\int_{0}^{+\infty}C'(k,\alpha,\lambda)(1+kt)^{3\alpha-2} e^{-(\lambda-3k{\lambda}^{\frac{1}{2}}) t}dt\\[3mm]
&<\infty.
\end{array}$$ Similarly, the third term in (3.20) belongs to $L^{2}(0,+\infty)$ by the Hölder inequality.
Now let us deal with the fourth term in (3.20). According to the inverse transformation $$u(x,t)=w(x,t)-\int_{0}^{x}q(x,y,t)w(y,t)dy,$$ the derivative of $u$ with respect to $t$ is $$u_{t}(x,t)=w_{t}(x,t)-\int_{0}^{x}q_{t}(x,y,t)w(y,t)dy
-\int_{0}^{x}q(x,y,t)w_{t}(y,t)dy.$$ The $L^{2}$ norm of $u_{t}$ satisfies $$\label{b37}
\begin{array}{ll}
\|u_{t}\|_{L^{2}(0,l(t))}
&\leq\|w_{t}\|_{L^{2}(0,l(t))}+\bar{q}_{t}\sqrt{l(t)}\|w\|_{L^{2}(0,l(t))}
+\bar{q}\sqrt{l(t)}\|w_{t}\|_{L^{2}(0,l(t))}\\[3mm]
&\leq\bar{q}_{t}\sqrt{l(t)}\|w\|_{L^{2}(0,l(t))}
+2\bar{q}\sqrt{l(t)}\|w_{t}\|_{L^{2}(0,l(t))}\\[3mm]
&\leq2(\lambda+1)\bar{q}\sqrt{l(t)}(\|w\|_{L^{2}(0,l(t))}
+\|w_{t}\|_{L^{2}(0,l(t))}),
\end{array}$$ where $\bar{q}_{t}$ denotes the upper bound of $|{q}_{t}|$ and $\bar{q}=\bar{p}={\lambda}^{\frac{1}{2}}e^{{\lambda}^{\frac{1}{2}} l(t)}$. The last inequality in (3.28) is a consequence of $\bar{q}_{t}\leq2\lambda\bar{q}$. In order to estimate (3.28), let us introduce the change of variables $$w=\tilde{w}e^{-\lambda t}$$ in system (3.1); then $\tilde{w}$ is the solution to $$\label{b15}
\left\{\begin{array}{ll}
\tilde{w}_{t}-\tilde{w}_{xx}=0,
&\mbox{in}\ {Q}_t,\\[3mm]
\tilde{w}(0,t)=0, \tilde{w}(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
\tilde{w}(x,0)=w_{0}(x), &\mbox{in}\ \Omega=(0,1).
\end{array}
\right.$$ If $w_{0}\in H^{1}_{0}(0,1)$, system (3.29) has a unique strong solution. And we have the following estimates by Lemma 4.2 and the change of variable, $$\|\tilde{w}\|_{L^{2}(0,l(t))}+\|\tilde{w}_{t}\|_{L^{2}(Q_{t})}\leq C\|{w}_{0}\|_{H^{1}_{0}(0,1)},$$ $$\|{w}\|_{L^{2}(0,l(t))}\leq Ce^{-\lambda t}\|{w}_{0}\|_{H^{1}_{0}(0,1)},$$ $$\|{w}_{t}\|_{L^{2}(0,l(t))}\leq e^{-\lambda t}\|{\tilde{w}}_{t}\|_{L^{2}(0,l(t))}+\lambda e^{-\lambda t}\|{\tilde{w}}\|_{L^{2}(0,l(t))}.$$ Now we will estimate the last term in (3.20). By (3.28) and H$\ddot{o}$lder inequality, we get $$\label{b37}
\begin{array}{ll}
&\int_{0}^{+\infty}(\int_{0}^{l(t)}p(l(t),y,t)u_{t}(y,t)dy)^{2}dt\\[3mm]
&\leq\int_{0}^{+\infty}\bar{p}l(t)\|u_{t}\|^{2}_{L^{2}(0,l(t))}dt\\[3mm]
&\leq\int_{0}^{+\infty}\bar{p}l(t)8(\lambda+1)^{2}\bar{q}^{2}l(t)(\|w\|^{2}_{L^{2}(0,l(t))}
+\|w_{t}\|^{2}_{L^{2}(0,l(t))})dt
\\[3mm]
&\leq C(\lambda)\int_{0}^{+\infty}e^{3{\lambda}^{\frac{1}{2}} l(t)}l^{2}(t)(\|w\|^{2}_{L^{2}(0,l(t))}
+\|w_{t}\|^{2}_{L^{2}(0,l(t))})dt.
\end{array}$$ Combining (3.30), (3.31) with (3.32), we obtain that $$\label{b37}
\begin{array}{ll}
&\int_{0}^{+\infty}(\int_{0}^{l(t)}p(l(t),y,t)u_{t}(y,t)dy)^{2}dt\\[3mm]
&\leq C(\lambda)\int_{0}^{+\infty}e^{3{\lambda}^{\frac{1}{2}} l(t)}l^{2}(t)(\|w\|^{2}_{L^{2}(0,l(t))}
+2e^{-2\lambda t}\|{\tilde{w}}_{t}\|^{2}_{L^{2}(0,l(t))}+2\lambda ^{2}e^{-2\lambda t}\|{\tilde{w}}\|^{2}_{L^{2}(0,l(t))})dt\\[3mm]
&\leq C'(\lambda)\int_{0}^{+\infty}e^{3{\lambda}^{\frac{1}{2}} l(t)-\lambda t}l^{2}(t)\|{w}_{0}\|^{2}_{H^{1}_{0}(0,1)}dt
+C'(\lambda)\int_{0}^{t_{0}}e^{3{\lambda}^{\frac{1}{2}} l(t)}l^{2}(t)e^{-2\lambda t}\|{\tilde{w}}_{t}\|^{2}_{L^{2}(0,l(t))}dt\\[3mm]
&+C'(\lambda)\int_{t_{0}}^{+\infty}\|{\tilde{w}}_{t}\|^{2}_{L^{2}(0,l(t))}dt\\[3mm]
&<+\infty.
\end{array}$$
Thus, we derive that $U(t)\in H^{1}(0,+\infty)$ if $w_{0}\in H^{1}_{0}(0,1)$. If $w_{0}\in L^{2}(0,1)$, the solution of system (3.29) satisfies $w(\cdot,T)\in H^{1}_{0}(0,l(T))$ at some time $T$ by the regularity of the heat equation.
Similarly, one can also prove $\sqrt{l(t)}U'(t)\in L^{2}(0,+\infty)$ for large enough $\lambda$.
In conclusion, we should take $\lambda>25k^{2}$ in order to guarantee the exponential stabilizability, $U(t)\in H^{1}(0,+\infty)$ and $\sqrt{l(t)}U'(t)\in L^{2}(0,+\infty)$.
\[03\] The well-posedness of the solution to system (1.2) with the feedback control can be seen from the well-posedness of solutions to system (3.1) and system (3.8).
\[03\] When the growth order of $l(t)$ is less than 1, for example $l(t)=1+\ln(1+t)$ or $l(t)=1+\sin t$, the linear feedback control does work, by (3.17).
\[03\] Since the PDE on the non-cylindrical domain can be converted to the equation with time-dependent coefficients, the results of this paper may be extended to some parabolic equations with time-dependent coefficients, for which the explicit expression of the kernel can not be obtained. These are problems which we will consider next.
$\Box$
Appendix
========
Let us start with the following system $$\label{b1}
\left\{\begin{array}{ll}
u_{t}-u_{xx}=f(x,t),
&\mbox{in}\ {{Q}}_T,\\[3mm]
u(0,t)=0,u(l(t),t)=0, &\mbox{in}\ (0,T), \\[3mm]
u(x,0)=u_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${{Q}}_T=\{(x,t)| x\in (0,l(t))=\Omega_{t},t\in(0,T), l(t)=(1+kt)^{\alpha},k>0,\alpha>0\}$.
(\[16\]) If $u_{0}\in L^{2}(\Omega)$ and $f\in L^{2}(0,T;H^{-1}(\Omega_{t}))$, there exists a unique weak solution of (4.1) in the following space $$u\in C([0,T];L^2(\Omega_{t}))\cap L^2(0,T;H^1_{0}(\Omega_{t})).$$ Moreover, there exists a positive constant $C$ (independent of ${Q}_{T},u_{0},f$) such that $$\|u\|^{2}_{L^{\infty}(0,T;L^2(\Omega_{t}))}+ \|u\|^{2}_{L^{2}(0,T;H^{1}_{0}(\Omega_t))}\leq C[\|u_{0}\|^{2}_{L^2(\Omega)}+\|f\|^{2}_{L^2(0,T;H^{-1}(\Omega_t))}].$$
If $u_{0}\in H^{1}_{0}(\Omega)$ and $f\in L^{2}(0,T;L^{2}(\Omega_{t}))$, problem (4.1) has a unique strong solution $u$, $$u\in C([0,T];H^1_{0}(\Omega_{t}))\cap L^2(0,T;H^2\cap H^1_{0}(\Omega_{t}))\cap H^1(0,T;L^2(\Omega_{t})).$$ Moreover, there exists a positive constant $C$ (independent of ${Q}_{T},u_{0},f$) such that $$\|u\|^{2}_{L^{\infty}(0,T;H^{1}_{0}({\Omega}_t))}+ \|u\|^{2}_{L^{2}(0,T;H^{2}({\Omega}_t))}
\leq C[\|u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+\|f\|^{2}_{L^2(0,T;L^{2}(\Omega_{t}))}].$$
[**Proof of Lemma 4.2:**]{} The authors (\[16\]) deduced the energy estimate $$\|u\|^{2}_{L^{\infty}(0,T;H^{1}_{0}({\Omega}_t))}+ \|u\|^{2}_{L^{2}(0,T;H^{2}({\Omega}_t))}
\leq C[\|u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+\|f\|^{2}_{L^2(Q_{T})}],$$ with $C>0$ depending on ${Q}_{T}$.
If $f\in L^{2}(0,T;L^{2}(\Omega_{t}))$, a slight variation of the argument also gives $$\|u\|^{2}_{L^{\infty}(0,T;H^{1}_{0}({\Omega}_t))}+ \|u\|^{2}_{L^{2}(0,T;H^{2}({\Omega}_t))}
\leq C[\|u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+\|f\|^{2}_{L^2(0,T;L^{2}(\Omega_{t}))}],$$ with $C>0$ independent of ${Q}_{T}$. Indeed, instead of applying the Gronwall inequality to ((14) in \[16\]), namely $$\frac{d}{dt}\int_{{\Omega}_t}|\nabla u|^{2}dx\leq-\frac{1}{2}\int_{\Omega_t}|\Delta u|^{2}dx+\| f\|_{L^2(\Omega_{t})}\|\Delta u\|_{L^2(\Omega_{t})}+c\|\nabla u\|^{2}_{L^2(\Omega_{t})},$$ we integrate (4.5) with respect to $t$ from $0$ to $t$: $$\label{b37}
\begin{array}{ll}
&\int_{{\Omega}_t}|\nabla u|^{2}dx+\frac{1}{2}\int^{t}_{0}\int_{\Omega_t}|\Delta u|^{2}dxdt\\[3mm]
&\leq\int_{{\Omega}}|\nabla u_{0}|^{2}dx+\int^{t}_{0}\| f\|_{L^2(\Omega_{t})}\| \Delta u\|_{L^2(\Omega_{t})}dt+c\int^{t}_{0}\|\nabla u\|^{2}_{L^2(\Omega_{t})}dt.\\[3mm]
\end{array}$$ Using the Young inequality, we have $$\label{b37}
\begin{array}{ll}
&\int_{{\Omega}_t}|\nabla u|^{2}dx+\frac{1}{2}\int^{t}_{0}\int_{\Omega_t}|\Delta u|^{2}dxdt\\[3mm]
&\leq\|\nabla u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+c\|f\|^{2}_{L^2(0,T;L^{2}(\Omega_{t}))}+\frac{1}{4}\int^{t}_{0}\|\Delta u\|^{2}_{L^2(\Omega_{t})}dt+ c\int^{t}_{0}\|\nabla u\|^{2}_{L^2(\Omega_{t})}dt.\\[3mm]
\end{array}$$ Then, in view of (4.2) we have $$\label{b37}
\begin{array}{ll}
\int_{{\Omega}_t}|\nabla u|^{2}dx+\frac{1}{4}\int^{t}_{0}\int_{\Omega_t}|\Delta u|^{2}dxdt\leq C[\|\nabla u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+\|f\|_{L^2(0,T;L^{2}(\Omega_{t}))}].
\end{array}$$ for a suitable $C>0$. $\Box$
Due to the uniform estimate and the non-blow-up property of the solution, the solution exists on the whole interval $(0,+\infty)$. Lemma 1.1 follows directly from Lemma 4.1 and Lemma 4.2. We also obtain the well-posedness of the following system $$\label{b1}
\left\{\begin{array}{ll}
u_{t}-u_{xx}=f(x,t),
&\mbox{in}\ {Q}_t,\\[3mm]
u(0,t)=0,u(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
u(x,0)=u_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where ${Q}_t=\{(x,t)| x\in (0,l(t))=\Omega_{t},t\in(0,+\infty), l(t)=(1+kt)^{\alpha},k>0,\alpha>0\}$.
If $u_{0}\in L^{2}(\Omega)$ and $f\in L^{2}(0,+\infty;H^{-1}(\Omega_{t}))$, there exists a unique weak solution of (4.9) in the following space $$u\in C([0,+\infty);L^2(\Omega_{t}))\cap L^2(0,+\infty;H^1_{0}(\Omega_{t})).$$ Moreover, there exists a positive constant $C$ (independent of ${Q}_{t},u_{0},f$) such that $$\|u\|^{2}_{L^{\infty}(0,+\infty;L^2(\Omega_{t}))}+ \|u\|^{2}_{L^{2}(0,+\infty;H^{1}_{0}(\Omega_t))}\leq C[\|u_{0}\|^{2}_{L^2(\Omega)}+\|f\|^{2}_{L^2(0,+\infty;H^{-1}(\Omega_t))}].$$
If $u_{0}\in H^{1}_{0}(\Omega)$ and $f\in L^{2}(0,+\infty;L^{2}(\Omega_{t}))$, problem (4.9) has a unique strong solution $$u\in C([0,+\infty);H^1_{0}(\Omega_{t}))\cap L^2(0,+\infty;H^2\cap H^1_{0}(\Omega_{t}))\cap H^1(0,+\infty;L^2(\Omega_{t})).$$ Moreover, there exists a positive constant $C$ (independent of $Q_{t},u_{0},f$) such that $$\|u\|^{2}_{L^{\infty}(0,+\infty;H^{1}_{0}({\Omega}_t))}+ \|u\|^{2}_{L^{2}(0,+\infty;H^{2}({\Omega}_t))}
\leq C[\|u_{0}\|^{2}_{H^{1}_{0}(\Omega)}+\|f\|^{2}_{L^2(0,+\infty;L^{2}(\Omega_{t}))}].$$
[**Proof of Lemma 1.2:**]{} System (1.2) can be converted to system (4.9) by the transformation $$\hat{u}=u-\frac{x}{l(t)}U(t).$$ Then $\hat{u}$ is the solution of $$\label{b1}
\left\{\begin{array}{ll}
\hat{ u}_{t}-\hat{u}_{xx}=f(x,t),
&\mbox{in}\ {Q}_t,\\[3mm]
\hat{u}(0,t)=0,\hat{u}(l(t),t)=0, &\mbox{in}\ (0,+\infty), \\[3mm]
\hat{u}(x,0)=\hat{u}_{0}(x), &\mbox{in}\ \Omega=(0,1),
\end{array}
\right.$$ where $f(x,t)=-\frac{xU'(t)}{l(t)}+\frac{xl'(t)U(t)}{l^{2}(t)}$. It is easy to check that $f(x,t)\in L^{2}(0,+\infty;L^{2}(\Omega_{t}))$ provided that $U(t)\in H^{1}(0,+\infty)$, $\sqrt{l(t)}U'(t)\in L^{2}(0,+\infty)$, and $\alpha\in (0,2]$. This proves Lemma 1.2.
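The algebra behind this choice of $f$ can be checked symbolically. The following sketch (an illustration only, using SymPy) substitutes $u_{t}=u_{xx}$ into $\hat{u}_{t}-\hat{u}_{xx}$ and recovers the expression for $f(x,t)$ above.

```python
import sympy as sp

# Symbolic check (a sketch): with u_t = u_xx, the lifted function
# uhat = u - x*U(t)/l(t) satisfies uhat_t - uhat_xx = f, with f as above.
x, t, k, alpha = sp.symbols('x t k alpha', positive=True)
U, u = sp.Function('U'), sp.Function('u')
l = (1 + k * t) ** alpha
uhat = u(x, t) - x / l * U(t)
residual = sp.diff(uhat, t) - sp.diff(uhat, x, 2)
residual = residual.subs(sp.diff(u(x, t), t), sp.diff(u(x, t), x, 2))  # use u_t = u_xx
f_expected = -x * sp.diff(U(t), t) / l + x * sp.diff(l, t) * U(t) / l ** 2
print(sp.simplify(residual - f_expected))   # expected output: 0
```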
[1]{} Roberto Triggiani. Boundary feedback stabilizability of parabolic equations. Appl. Math. Optim., 6 (1980) 201–220.

[2]{} I. Lasiecka, R. Triggiani. Stabilization and structural assignment of Dirichlet boundary feedback parabolic equations. SIAM J. Control Optim., 21 (1983) 766–830.

[3]{} V. Barbu, G. Wang. Feedback stabilization of semilinear heat equations. Abstract and Applied Analysis, 12 (2003) 697–714.

[4]{} A. Smyshlyaev, M. Krstic. On control design for PDEs with space-dependent diffusivity or time-dependent reactivity. Automatica, 41 (2005) 1601–1608.

[5]{} J. M. Coron, B. d'Andréa-Novel. Stabilization of a rotating body beam without damping. IEEE Trans. Automat. Control, 43 (1998) 608–618.

[6]{} W. J. Liu, M. Krstic. Backstepping boundary control of Burgers' equation with actuator dynamics. Systems $\&$ Control Letters, 41 (2000) 291–303.

[7]{} Weijiu Liu. Boundary feedback stabilization of an unstable heat equation. SIAM J. Control Optim., 42 (3) (2003) 1033–1043.

[8]{} A. Smyshlyaev, M. Krstic. Backstepping observers for a class of parabolic PDEs. Systems $\&$ Control Letters, 54 (2005) 613–625.

[9]{} M. Krstic, B. Z. Guo, A. Balogh, A. Smyshlyaev. Output-feedback stabilization of an unstable wave equation. Automatica, 44 (2008) 63–74.

[10]{} J. M. Coron, Q. Lü. Local rapid stabilization for a Korteweg–de Vries equation with a Neumann boundary control on the right. J. Math. Pures Appl., 102 (2014) 1080–1120.

[11]{} J. M. Coron, Q. Lü. Fredholm transform and local rapid stabilization for a Kuramoto–Sivashinsky equation. J. Diff. Equations, 259 (2015) 3683–3729.

[12]{} P. K. C. Wang. Stabilization and control of distributed systems with time-dependent spatial domains. J. Optim. Theor. Appl., 65 (2) (1990) 331–362.

[13]{} J. Derby, L. Atherton, P. Thomas, R. Brown. Finite-element methods for analysis of the dynamics and control of Czochralski crystal growth. J. Sci. Comput., 2 (4) (1987) 297–343.

[14]{} Mojtaba Izadi, Javad Abdollahi, Stevan S. Dubljevic. PDE backstepping control of moving boundary parabolic PDEs. Automatica, 54 (2015) 41–48.

[15]{} Mojtaba Izadi, Stevan Dubljevic. Backstepping output-feedback control of moving boundary parabolic PDEs. European Journal of Control, 21 (2015) 27–35.

[16]{} Juan Límaco, Luis A. Medeiros, Enrique Zuazua. Existence, uniqueness and controllability for parabolic equations in non-cylindrical domains. Mat. Contemp., 23 (2002) 49–70.
[^1]: School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China; School of Science, Northeast Electric Power University, Jilin 132012, China. E-mail address: lilf320@163.com.
[^2]: Experimental High School of Qiqihar City, Qiqihar 161000, China. E-mail address: 820299877@qq.com
[^3]: Corresponding author, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China. E-mail address: hanggao2013@126.com
[^4]: This work is partially supported by the NSF of China under grants 11471070 and 11371084.
---
bibliography:
- 'MON4.bib'
---
Introduction
============
Traffic grooming is the generic term for packing low rate signals into higher speed streams (see the surveys [@BeCo06; @DuttaR02; @ModianoL01; @SimmonsGS99; @ZhMu03]). By using traffic grooming, one can bypass the electronics in the nodes which are not sources or destinations of traffic, and therefore reduce the cost of the network. Here we consider unidirectional SONET/WDM ring networks. In that case, the routing is unique and we have to assign to each request between two nodes a wavelength and some bandwidth on this wavelength. If the traffic is uniform and if a given wavelength has capacity for at least $C$ requests, we can assign to each request at most $\frac{1}{C}$ of the bandwidth. $C$ is known as the *grooming ratio* or the *grooming factor*. Furthermore if the traffic requirement is symmetric, it can be easily shown (by exchanging wavelengths) that there always exists an optimal solution in which the same wavelength is given to each pair of symmetric requests. Thus without loss of generality we assign to each pair of symmetric requests, called a *circle*, the same wavelength. Then each circle uses $\frac{1}{C}$ of the bandwidth in the whole ring. If the two end-nodes of a circle are $i$ and $j$, we need one ADM at node $i$ and one at node $j$. The main point is that if two requests have a common end-node, they can share an ADM if they are assigned the same wavelength. For example, suppose that we have symmetric requests between nodes $1$ and $2$, and also between $2$ and $3$. If they are assigned two different wavelengths, then we need 4 ADMs, whereas if they are assigned the same wavelength we need only 3 ADMs.
The so called traffic grooming problem consists in minimizing the total number of ADMs to be used, in order to reduce the overall cost of the network.
Suppose we have a ring with $4$ nodes $\{1,2,3,4\}$ and all-to-all uniform traffic. There are therefore 6 circles (pairs of symmetric requests) $\{i,j\}$ for $1 \leq i < j \leq 4$. If there is no grooming we need 6 wavelengths (one per circle) and a total of 12 ADMs. If we have a grooming factor $C=2$, we can put on the same wavelength two circles, using 3 or 4 ADMs according to whether they share an end-node or not. For example we can put together $\{1,2\}$ and $\{2,3\}$ on one wavelength; $\{1,3\}$ and $\{3,4\}$ on a second wavelength, and $\{1,4\}$ and $\{2,4\}$ on a third one, for a total of 9 ADMs. If we allow a grooming factor $C=3$, we can use only 2 wavelengths. If we put together on one wavelength $\{1,2\}$, $\{2,3\}$, and $\{3,4\}$ and on the other one $\{1,3\}$, $\{2,4\}$, and $\{1,4\}$ we need 8 ADMs (solution *a*); but we can do better by putting on the first wavelength $\{1,2\}$, $\{2,3\}$ and $\{1,3\}$ and on the second one $\{1,4\}$, $\{2,4\}$ and $\{3,4\}$, using 7 ADMs (solution *b*).
Here we study the problem for a unidirectional SONET ring with $n$ nodes, grooming ratio $C$, and all-to-all uniform unitary traffic. This problem has been modeled as a graph partition problem in both [@BermondCICC03] and [@GHLO03]. In the all-to-all case the set of requests is modelled by the complete graph $K_n$. To a wavelength $k$ is associated a subgraph $B_k$ in which each edge corresponds to a pair of symmetric requests (that is, a circle) and each node to an ADM. The grooming constraint, i.e. the fact that a wavelength can carry at most $C$ requests, corresponds to the fact that the number of edges $|E(B_k)|$ of each subgraph $B_k$ is at most $C$. The cost corresponds to the total number of vertices used in the subgraphs, and the objective is therefore to minimize this number.\
<span style="font-variant:small-caps;">Traffic Grooming in the Ring</span>
- **<span style="font-variant:small-caps;">Input:</span>** Two integers $n$ and $C$.
- **<span style="font-variant:small-caps;">Output:</span>** Partition $E(K_n)$ into subgraphs $B_k$, $1\leq k \leq \Lambda$, s.t. $|E(B_k)|\leq C$ for all $k$.
- **<span style="font-variant:small-caps;">Objective:</span>** Minimize $\sum_{k=1}^\Lambda |V(B_k)|$.
In the example above with $n=4$ and $C=3$, solution *a* consists of a decomposition of $K_4$ into two paths with four vertices $[1,2,3,4]$ and $[2,4,1,3]$, while solution *b* corresponds to a decomposition into a triangle $(1,2,3)$ and a star with the edges $\{1,4\}$, $\{2,4\}$, and $\{3,4\}$.
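For such small instances the optimum can be verified by exhaustive search. The following sketch (an illustration only; the function name and the search strategy are ours, not taken from the references) brute-forces the minimum drop cost for $K_4$ and reproduces the values 9 (for $C=2$) and 7 (for $C=3$ and $C=4$) discussed above.

```python
from itertools import combinations

def min_drop_cost(n, C):
    """Exhaustive search: partition E(K_n) into subgraphs with at most C edges,
    minimising the total number of ADMs (sum of vertex counts over subgraphs)."""
    edges = list(combinations(range(1, n + 1), 2))
    best = [float("inf")]

    def search(remaining, cost):
        if cost >= best[0]:
            return
        if not remaining:
            best[0] = cost
            return
        first, rest = remaining[0], remaining[1:]
        for extra_size in range(C):                    # subgraph holding `first` gets 1..C edges
            for extra in combinations(rest, extra_size):
                verts = {v for e in (first,) + extra for v in e}
                search([e for e in rest if e not in extra], cost + len(verts))

    search(edges, 0)
    return best[0]

print(min_drop_cost(4, 2))   # 9 ADMs, as in the grooming with C = 2 above
print(min_drop_cost(4, 3))   # 7 ADMs, matching solution b
print(min_drop_cost(4, 4))   # 7 ADMs
```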
With the all-to-all set of requests, optimal constructions for a given grooming ratio $C$ have been obtained using tools of graph and design theory [@handbook], in particular for grooming ratio $C=3$ [@BermondC03], $C=4$ [@BermondCICC03; @Hu02], $C=5$ [@BermondCLY04], $C=6$ [@BermondCCGLM], $C=7$ [@cfgll] and $C\geq n(n-1)/6$ [@BCM03].
Graph decompositions have been extensively studied for other reasons as well. See [@Bryant] for an excellent survey, [@ColbournR99] for relevant material on designs with blocksize three, and [@handbook] for terminology in design theory.
Most of the papers on grooming deal with a single (static) traffic matrix. Some articles consider variable (dynamic) traffic, such as finding a solution which works for the maximum traffic demand [@BeMo00; @ZZM03] or for all request graphs with a given maximum degree [@MuSa08], but all keep a fixed grooming factor. In [@cqsupper] an interesting variation of the traffic grooming problem, grooming for two-period optical networks, has been introduced in order to capture some dynamic nature of the traffic. Informally, in the two-period grooming problem each time period supports different traffic requirements. During the first period of time there is all-to-all uniform traffic among $n$ nodes, each request using $1/C$ of the bandwidth; but during the second period there is all-to-all traffic only among a subset $V$ of $v$ nodes, each request now being allowed to use a larger fraction of the bandwidth, namely $1/C'$ where $C' < C$.
Denote by $X$ the set of $n$ nodes. The two-period grooming problem can then be expressed as follows:\
<span style="font-variant:small-caps;">Two-Period Grooming in the Ring</span>
- **<span style="font-variant:small-caps;">Input:</span>** Four integers $n$, $v$, $C$, and $C'$.
- **<span style="font-variant:small-caps;">Output:</span>** A partition (denoted $N(n,v;C,C')$) of $E(K_n)$ into subgraphs $B_k$, $1\leq k \leq \Lambda$, such that for all $k$, $|E(B_k)|\leq C$, and $|E(B_k)\cap (V\times V)|\leq C'$, with $V\subseteq X$, $|V|=v$.
- **<span style="font-variant:small-caps;">Objective:</span>** Minimize $\sum_{k=1}^\Lambda
|V(B_k)|$.
Following [@cqsupper], a grooming is denoted by $N(n,C)$. When the grooming $N(n,C)$ is *optimal*, i.e., it minimizes the total ADM cost, it is denoted by $\mathscr{ON}(n,C)$. Whether general or optimal, the drop cost of a grooming is denoted by $cost\ N(n,C)$ or $cost\ \mathscr{ON}(n,C)$, respectively.
A grooming of a two-period network $N(n,v;C,C')$ with grooming ratios $(C,C')$ coincides with a graph decomposition $(X,\mathcal{B})$ of $K_n$ (using standard design theory terminology, $\mathcal{B}$ is the set of all the *blocks* of the decomposition) such that $(X,\mathcal{B})$ is a grooming $N(n,C)$ in the first time period, and $(X,\mathcal{B})$ faithfully embeds a graph decomposition of $K_v$ such that $(V,\mathcal{D})$ is a grooming $N(v,C')$ in the second time period. Let $V \subseteq X$. The graph decomposition $(X,\cB)$ [*embeds*]{} the graph decomposition $(V,\cD)$ if there is a mapping $f: \cD \to \cB$ such that $D$ is a subgraph of $f(D)$ for every $D
\in \cD$. If $f$ is injective (i.e., one-to-one), then $(X,\cB)$ [*faithfully embeds*]{} $(V,\cD)$. This concept of faithfully embedding has been explored in [@ColbournLQ03; @Quattrocchi02].
We use the notation $\mathscr{ON}(n, v; C, C')$ to denote an optimal grooming $N(n, v; C, C')$.
As it turns out, an $\mathscr{ON}(n, v; C, C')$ does not always coincide with an $\mathscr{ON}(n,C)$. Generally we have $cost\ \mathscr{ON}(n, v; C, C') \geq cost\
\mathscr{ON}(n,C)$ (see Examples \[ex:2\] and \[ex:3\]). Of particular interest is the case when $cost\ \mathscr{ON}(n, v; C, C')$ = $cost\ \mathscr{ON}(n,C)$ (see Example \[ex:1\]).
\[ex:1\] Let $n=7$, $v=4$, $C=4$. Let $V=\{0,1,2,3\}$ and $W=\{a_0,a_1,a_2\}$. An optimal decomposition is given by the three triangles $(a_0,0,1)$, $(a_1,1,2)$, and $(a_2,2,3)$, and the three 4-cycles $(0,2,a_0,a_1)$, $(0,3,a_0,a_2)$, and $(1,3,a_1,a_2)$, giving a total cost of 21 ADMs.
This solution is valid and optimal for both $C'=1$ and $C'=2$, and it is optimal for the classical <span style="font-variant:small-caps;">Traffic Grooming in the Ring</span> problem when $n=7$ and $C=4$. Therefore, $cost\
\mathscr{ON}(7, 4; 4, 1) = cost\ \mathscr{ON}(7, 4; 4, 2)= cost\ \mathscr{ON}(7,4) = 21$.
\[ex:2\] Let $n=7$, $v=5$, $C=4$, and $C'=2$. Let $V=\{0,1,2,3,4\}$ and $W=\{a_0,a_1\}$. We see later that an optimal decomposition is given by the five kites $(a_0,1,2;0)$, $(a_0,3,4;1)$, $(a_1,
1, 3;2)$, $(a_1, 2, 4; 0)$ and $(a_0,a_1,0;1)$, plus the edge $\{0,3\}$, giving a total cost of 22 ADMs. So $cost\ \mathscr{ON}(7, 5; 4, 2) = 22$. Note that this decomposition is not a valid solution for $C'=1$, since there are subgraphs containing more than one edge with both end-vertices in $V$.
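The decomposition above can be checked mechanically. The following sketch (an illustration only; the vertex labels $a_0,a_1$ are encoded as strings) verifies that the five kites and the single edge partition $E(K_7)$, respect the grooming ratios $C=4$ and $C'=2$, and have drop cost 22.

```python
from itertools import combinations

V = set(range(5))                                     # the second-period nodes 0,...,4
def kite(x, y, z, u):                                 # kite (x,y,z;u): edges xy, xz, yz, zu
    return [frozenset(e) for e in ((x, y), (x, z), (y, z), (z, u))]

blocks = [kite('a0', 1, 2, 0), kite('a0', 3, 4, 1), kite('a1', 1, 3, 2),
          kite('a1', 2, 4, 0), kite('a0', 'a1', 0, 1), [frozenset((0, 3))]]

K7 = {frozenset(e) for e in combinations(list(V) + ['a0', 'a1'], 2)}
used = [e for b in blocks for e in b]
assert len(used) == len(set(used)) and set(used) == K7          # a partition of E(K_7)
assert all(len(b) <= 4 for b in blocks)                         # grooming ratio C = 4
assert all(sum(e <= V for e in b) <= 2 for b in blocks)         # second period: C' = 2
print("drop cost =", sum(len({v for e in b for v in e}) for b in blocks))   # 22 ADMs
```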
\[ex:3\] Let $n=7$, $v=5$, $C=4$, and $C'=1$. Let again $V=\{0,1,2,3,4\}$ and $W=\{a_0,a_1\}$. We see later that an optimal decomposition is given by the four $K_3$s $(a_0,1,2)$, $(a_0,3,4)$, $(a_1,0,3)$, and $(a_1,2,4)$, the $C_4$ $(0,1,a_1,a_0)$, plus the five edges $\{0,4\}$, $\{1,3\}$, $\{0,2\}$, $\{1,4\}$, and $\{2,3\}$, giving a total cost of 26 ADMs. So $cost\ \mathscr{ON}(7, 5;
4, 1) = 26$.
C.J. Colbourn, G. Quattrocchi and V.R. Syrotiuk [@cqsupper; @cqslower] completely solved the cases when $C=2$ and $C=3$ ($C'= 1$ or $2$). In this article we determine the minimum drop cost of an $N(n,v;4,C')$ for all $n \geq v \geq 0$ and $C' \in \{1,2,3\}$.
We are also interested in determining the minimum number of wavelengths, or *wavecost*, required in an assignment of wavelengths to a decomposition. Among the $\mathscr{ON}(n,4)$s one having the minimum wavecost is denoted by $\mathscr{MON}(n,4)$, and the corresponding minimum number of wavelengths by $wavecost \mathscr{MON}(n,4)$. We characterize the $\mathscr{ON}(n,v;C,C')$ whose wavecost is minimum among all $\mathscr{ON}(n,v;C,C')$s and denote one by $\mathscr{MON}(n,v;C,C')$; the wavecost is itself denoted by $wavecost \mathscr{MON}(n,v;C,C')$.
We deal separately with each value of $C' \in \{1,2,3\}$. Table \[costform\] summarizes the cost formulas for $n=v+w > 4$.
- $cost\ \mathscr{ON}(v+w,v;4,1) =\left\{\begin{array}{ll}
{v+w \choose 2}&\mbox{ if $v \leq w+1$}\\
\\
{v+w \choose 2} + {v \choose 2} - {\left\lfloor \frac{vw}{2}\right\rfloor}&\mbox{ if $v \geq w+1$}
\end{array}\right.$\
- $cost\ \mathscr{ON}(v+w,v;4,2) =\left\{\begin{array}{ll}
{v+w \choose 2}&\mbox{ if $v \leq 2w$}\\
\\
{v+w \choose 2} + {\left\lceil \frac{1}{2}{v \choose 2}\right\rceil} - \frac{vw}{2}+
\delta &\mbox{ if $v > 2w$ and $v$ even}\\
\\
\ \ \ \ \ \ \ \mbox{ where }\delta=\left\{\begin{array}{cl}
1 & \mbox{ if }w=2, \mbox{ or }\\
& \mbox{ if }w=4\mbox{ and }\\
&\ \ \ \ \ v\equiv 0 \pmod{4}\\
0 & \mbox{ otherwise}
\end{array}\right.\\
\\
{v+w \choose 2} + {\left\lceil \frac{1}{2}\left( {v \choose 2} - vw -{\left\lceil \frac{w}{2}\right\rceil}
\right)\right\rceil} + \delta&\mbox{ if $v >
2w$ and $v$ odd}\\
\\
\ \ \ \ \ \ \ \mbox{ where }\delta=\left\{\begin{array}{cl}
1& \mbox{ if }w=3 \mbox{ and }\\
&\ \ \ \ \ v\equiv 3 \pmod{4}
\\
0& \mbox{ otherwise}
\end{array}\right.\\
\end{array}\right.$
- $cost\ \mathscr{ON}(v+w,v;4,3)={v+w \choose 2}$
Notation and Preliminaries {#sec:prelim}
==========================
We establish some graph-theoretic notation to be used throughout. We denote the edge between $u$ and $v$ by $\{u,v\}$. $K_n$ denotes a complete graph on $n$ vertices and $K_X$ represents the complete graph on the vertex set $X$. A triangle with edges $\{\{x,y\},\{x,z\},\{y,z\}\}$ is denoted by $(x,y,z)$. A 4-cycle with edges $\{\{x,y\},\{y,z\},\{z,u\},\{u,x\}\}$ is denoted by $(x,y,z,u)$. A [*kite*]{} with edges $\{\{x,y\},\{x,z\},\{y,z\},\{z,u\}\}$ is denoted by $(x,y,z;u)$. The groomings to be produced also employ paths; the path on $k$ vertices $P_k$ is denoted by $[x_1,\dots,x_k]$ when it contains edges $\{x_i,x_{i+1}\}$ for $1 \leq i < k$. Now let $G=(X,E)$ be a graph. If $|X|$ is even, a set of $|X|/2$ disjoint edges in $E$ is a [*1-factor*]{}; a partition of $E$ into 1-factors is a [*1-factorization*]{}. Similarly, if $|X|$ is odd, a set of $(|X|-1)/2$ disjoint edges in $E$ is a [*near 1-factor*]{}; a partition of $E$ into near 1-factors is a [*near 1-factorization*]{}. We also employ well-known results on partial triple systems and group divisible designs with block size three; see [@ColbournR99] for background.
The vertices of the set $V$ are the integers modulo $v$, denoted by $0,1,\ldots,v-1$. The vertices not in $V$, that is, those in $X\setminus V$, form the set $W$ of size $w=n-v$ and are denoted by $a_0,\ldots,a_{w-1}$, the indices being taken modulo $w$.
Among graphs with three or fewer edges (i.e., when $C = 3$), the only graph with the minimum ratio (number of vertices over the number of edges) is the triangle. For $C=4$ three different such graphs have minimum ratio 1: the triangle, the 4-cycle, and the kite. This simplifies the problem substantially. Indeed, in contrast to the lower bounds in [@cqslower], in this case the lower bounds arise from an easy classification of the edges on $V$. We recall the complete characterization for optimal groomings with a grooming ratio of four:
\[lavbermcoudhu\][[@BermondCICC03; @Hu02]]{} $cost\,\mathscr{ON}(4,4)=7$ and, for $n\geq 5$, $cost\,\mathscr{ON}(n,4)={n\choose 2}$. Furthermore a $\mathscr{MON}(4,4)$ employs two wavelengths and can be realized by a kite and a $P_3$ (or a $K_3$ and a star), and a $\mathscr{MON}(n,4)$, $n\geq 5$, employs $\left\lceil\frac{n(n-1)}{8}\right\rceil$ wavelengths and can be realized by $t$ $K_3$s and $\left\lceil\frac{n(n-1)}{8}-t\right\rceil$ 4-cycles or kites, where $$\ t= \left \{
\begin{array}{ll}
0\ \ \ \mbox{if}\ \ n\equiv 0,1\pmod{8}\\
1\ \ \ \mbox{if}\ \ n\equiv 3,6\pmod{8}\\
2\ \ \ \mbox{if}\ \ n\equiv 4,5\pmod{8}\\
3\ \ \ \mbox{if}\ \ n\equiv 2,7\pmod{8}\\
\end{array}
\right..$$
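For instance, the number of triangles $t$ and the wavelength count in the theorem are immediate to compute; as observed in Section \[sec:lower\], $t$ coincides with $3\binom{n}{2} \bmod 4$. A small sketch (helper names ours):

```python
from math import comb

def triangles_in_MON(n):                  # the value t of Theorem [lavbermcoudhu], n >= 5
    return (3 * comb(n, 2)) % 4

def wavelengths_MON(n):                   # ceil(n(n-1)/8), n >= 5
    return -(-comb(n, 2) // 4)

# e.g. n = 5, 6, 7, 8, 9 give t = 2, 1, 3, 0, 0 and 3, 4, 6, 7, 9 wavelengths.
```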
In order to unify the treatment of the lower bounds, in a decomposition $N(v+w,v;4,C')$ for $C'\in\{1,2\}$, we call an edge with both ends in $V$ [*neutral*]{} if it appears in a triangle, 4-cycle, or kite; we call it [*positive*]{} otherwise. An edge with one end in $V$ and one in $W$ is a [*cross edge*]{}.
\[neutral\]
1. In an $N(v+w,v;4,C')$ with $C'\in\{1,2\}$, the number of neutral edges is at most $\frac{1}{2} C' vw$.
2. When $v$ is odd and $C'=2$, the number of neutral edges is at most $ vw - \frac{w}{2}$.
Every neutral edge appears in a subgraph having at least two cross edges. Thus the number of subgraphs containing one or more neutral edges is at most $\frac{1}{2}vw$. Each can contain at most $C'$ neutral edges, and hence there are at most $\frac{1}{2} C'vw$ neutral edges. This proves the first statement.
Suppose now that $C'=2$ and $v$ is odd. Any subgraph containing two neutral edges employs exactly two cross edges incident to the same vertex in $W$. Thus the number $\alpha$ of such subgraphs is at most $\frac{1}{2}w(v-1)$. The remaining neutral edges must arise (if present) in triangles, kites, or 4-cycles that again contain two cross edges but only one neutral edge; their number, $\beta$, must satisfy $\beta \leq \frac{vw}{2} - \alpha$. Therefore the number of neutral edges, $2\alpha+\beta$, satisfies $ 2\alpha+\beta \leq \frac{1}{2}w(v-1) + \frac{vw}{2} = vw -
\frac{w}{2}$.
When $C=3$ there are strong interactions among the decompositions placed on $V$, on $W$, and on the cross edges [@cqsupper; @cqslower]; fortunately here we shall see that the structure on $V$ suffices to determine the lower bounds. Because every $N(v+w,v;4,C')$ is an $N(v+w,v;4,C'+1)$ for $1 \leq C' \leq 3$, and $N(v+w,v;4,4)$ coincides with $N(v+w,4)$, $cost\ \mathscr{ON}(v+w,v;4,1)
\geq cost\ \mathscr{ON}(v+w,v;4,2) \geq cost\ \mathscr{ON}(v+w,v;4,3) \geq cost \
\mathscr{ON}(v+w,4)$. We use these obvious facts to establish lower and upper bounds without further comment.
Case $C'=1$ {#sec:1}
===========
$\mathscr{ON}(n,v;4,1)$
-----------------------
\[41\] Let $n=v+w \geq 5$.
1. $cost\ \mathscr{ON}(v+w,v;4,1) = cost \ \mathscr{ON}(v+w,4)$ when $v \leq w+1$.
2. $cost\
\mathscr{ON}(v+w,v;4,1) = \binom{v+w}{2}+\binom{v}{2} - \lfloor \frac{vw}{2} \rfloor$ when $v \geq
w+1$.
To prove the lower bound, we establish that $cost\ \mathscr{ON}(v+w,v;4,1) \geq \binom{v+w}{2}+\binom{v}{2} - \lfloor
\frac{vw}{2} \rfloor$. It suffices to prove that the number of subgraphs employed in an $N(v+w,v;4,1)$ other than triangles, kites, and 4-cycles is at least $ \lceil \binom{v}{2} -
\frac{1}{2}vw \rceil = \binom{v}{2} - \lfloor \frac{1}{2}vw \rfloor$. By Lemma \[neutral\], this is a lower bound on the number of positive edges in any such decomposition; because each positive edge lies in a different subgraph of the decomposition, the lower bound follows.
Now we turn to the upper bounds. For the first statement, because an $\mathscr{ON}(v+w,v;4,1)$ is also an $\mathscr{ON}(v+w,v-1;4,1)$, it suffices to consider $v\in\{w,w+1\}$. When $v=w$, write $v
= 4s+t$ with $t \in \{0,3,5,6\}$. Form on $V$ a complete multipartite graph with $s$ classes of size four and one class of size $t$. Replace each edge $e=\{x,y\}$ of this graph by the 4-cycle $(x,y,a_x,a_y)$. On $\{x_1,\dots,x_{\ell},a_{x_1},\dots,a_{x_\ell}\}$ whenever $\{x_1,\dots,x_\ell\}$ forms a class of the multipartite graph, place a decomposition that is optimal for drop cost and uses 4, 7, 12, and 17 wavelengths when $\ell$ is 3, 4, 5, or 6, respectively (see Appendix \[ap:1\]).
Now let $v=w+1$. Let $V=\{0,\dots,v-1\}$ and $W=\{a_0,\dots,a_{v-2}\}$. Form triangles $(i,i+1,a_i)$ for $0\leq i < v-1$. Then form 4-cycles $(i,j+1,a_i,a_j)$ for $0 \leq i < j \leq
v-2$.
Finally, suppose that $v \geq w+2$. When $v$ is even, form a 1-factorization $F_0,\dots,F_{v-2}$ on $V$. For $0 \leq i < w$, let $\{ e_{ij} : 1 \leq j \leq \frac{v}{2}\}$ be the edges of $F_i$, and form triangles $T_{ij} = \{a_i\} \cup e_{ij}$. Now for $0 \leq i < w$; $1 \leq j \leq \lfloor
\frac{w}{2} \rfloor$; and furthermore $j \neq
\frac{w}{2}$ if $i \geq \frac{w}{2}$ and $w$ is even, adjoin edge $\{a_i,a_{i+j\bmod w}\}$ to $T_{ij}$ to form a kite. All edges of 1-factors $\{F_i: w \leq i < v-1\}$ are taken as $K_2$s.
When $v$ is odd, form a near 1-factorization $F_0,\dots,F_{v-1}$ on $V$, in which $F_{v-1}$ contains the edges $\{\{2h,2h+1\} : 0\leq h < \frac{v-1}{2}\}$, and near 1-factor $F_i$ misses vertex $i$ for $0\leq i < v$. Then form 4-cycles $(2h,2h+1,a_{2h+1},a_{2h})$ for $0 \leq h <
\lfloor \frac{w}{2} \rfloor$. For $0 \leq i < w$, let $\{ e_{ij} : 1 \leq j \leq \frac{v-1}{2}\}$ be the edges of $F_i$, and form triangles $T_{ij} = \{a_i\} \cup e_{ij}$. Without loss of generality we assume that $w-1 \in e_{01}$; when $w$ is odd, adjoin $\{w-1, a_{w-1}\}$ to $T_{01}$ to form a kite. Now for $0 \leq i < w$; $1 \leq j \leq \lfloor \frac{w}{2} \rfloor$; and furthermore $j \neq
\frac{w}{2}$ if $i \geq \frac{w}{2}$ and $w$ is even and $j \neq 1$ if $i = 2h $ for $0 \leq h < \lfloor \frac{w}{2} \rfloor$, adjoin edge $\{a_i,a_{i+j\bmod w}\}$ to $T_{ij}$ to form a kite. All edges of near 1-factors $\{F_i: w \leq i < v-1\}$ and the $\frac{v-1}{2} - \lfloor \frac{w}{2} \rfloor$ remaining edges of $F_{v-1}$ are taken as $K_2$s.
When $v \geq w+1$, each subgraph contains exactly one edge on $V$ and so their number is ${v
\choose 2}$. This fact is later used to prove Theorem \[mon41\].
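For even $v$, the construction just given is completely explicit. The following Python sketch builds it, using a standard round-robin 1-factorization and our own labels $(\mathrm{a},i)$ for the vertices of $W$; its drop cost can be checked against the formula of the theorem (e.g. $78$ for $v=8$, $w=4$).

```python
def one_factorization(v):                  # round-robin 1-factorization of K_v, v even
    edge = lambda x, y: frozenset((x, y))
    m = v - 1
    return [[edge(v - 1, r)] +
            [edge((r + k) % m, (r - k) % m) for k in range(1, v // 2)]
            for r in range(m)]

def construction_41_even(v, w):            # upper bound of Theorem [41], v even, v >= w+2
    assert v % 2 == 0 and v >= w + 2
    edge = lambda x, y: frozenset((x, y))
    F = one_factorization(v)
    a = [('a', i) for i in range(w)]       # the vertices of W
    graphs = []
    for i in range(w):
        for j, e in enumerate(F[i], start=1):
            x, y = tuple(e)
            g = {e, edge(a[i], x), edge(a[i], y)}          # triangle T_ij
            if j <= w // 2 and not (w % 2 == 0 and j == w // 2 and i >= w // 2):
                g.add(edge(a[i], a[(i + j) % w]))          # extended to a kite
            graphs.append(g)
    for i in range(w, v - 1):                              # leftover 1-factors as K_2s
        graphs.extend({e} for e in F[i])
    return graphs

# e.g. the decomposition for v = 8, w = 4 has drop cost
#   C(12,2) + C(8,2) - floor(8*4/2) = 78.
```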
$\mathscr{MON}(n,v;4,1)$
------------------------
\[mon41small\] Let $v+w \geq 5$. For $C'=1$ and $v \leq w$, $$wavecost \, \mathscr{MON}(v+w,v;4,1) = wavecost \, \mathscr{MON}(v+w,4).$$
We need only treat the cases when $v \in \{w,w-1\}$; the case with $v=w$ is handled in the proof of Theorem \[41\]. When $v=w-1$, the argument is identical to that proof, except that we choose $v=4s+t$ with $t \in \{0,1,2,3\}$ and place decompositions on $\{x_1,\dots,x_{\ell},a_{x_1},\dots,a_{x_\ell},a_v\}$ instead, with 1,3,6,9 wavelengths when $\ell
= 1,2,3,4$ respectively (see Appendix \[ap:2\]).
\[mon41\]When $v > w$, $$wavecost \, \mathscr{MON}(v+w,v;4,1) = \binom{v}{2}.$$
Since every edge on $V$ appears on a different wavelength, $\binom{v}{2}$ is a lower bound. As noted in the proof of Theorem \[41\] the constructions given there meet this bound.
The solutions used from Theorem \[41\] are (essentially) the only ones to minimize the number of graphs in an $\mathscr{ON}(v+w,v;4,1)$ with $v > w$. However, perhaps surprisingly, they are not the only ones to minimize the number of wavelengths. To see this, consider an $\mathscr{ON}(v+w,v;4,1)$ with $v > w > 2$ from Theorem \[41\]. Remove edges $\{a_0,a_1\}$, $\{a_0,a_2\}$, and $\{a_1,a_2\}$ from their kites, and form a triangle from them. This does not change the drop cost, so the result is also an $\mathscr{ON}(v+w,v;4,1)$. It has one more graph than the original. Despite this, it does not need an additional wavelength, since the triangle $(a_0,a_1,a_2)$ can share a wavelength with an edge on $V$. In this case, while minimizing the number of connected graphs serves to minimize the number of wavelengths, it is not the only way to do so.
Case $C'=2$ {#sec:2}
===========
$\mathscr{ON}(n,v;4,2)$
-----------------------
\[42e\] Let $v+w \geq 5$ and $v$ be even.
1. When $v \leq 2w$, $cost\
\mathscr{ON}(v+w,v;4,2) = cost \ \mathscr{ON}(v+w,4)$.
2. When $v \geq 2w+2$, $cost\
\mathscr{ON}(v+w,v;4,2) = \binom{v+w}{2}+ \lceil \frac{1}{2}\binom{v}{2} \rceil - \frac{vw}{2} +
\delta$, where $\delta = 1$ if $w=4$ or if $w=2$ and $v \equiv 0 \pmod{4}$, and $\delta=0$ otherwise.
By Lemma \[neutral\], $ \binom{v}{2} - vw$ is a lower bound on the number of positive edges in any $N(v+w,v;4,2)$; every subgraph of the decomposition containing a positive edge contains at most two positive edges. So the number of subgraphs employed in an $N(v+w,v;4,2)$ other than triangles, kites, and 4-cycles is at least $\lceil \frac{1}{2} \left ( \binom{v}{2} - vw \right ) \rceil$. The lower bound follows for $w \neq
2,4$.
As in the proof of Lemma \[neutral\], denote by $\alpha$ (resp. $\beta$) the number of subgraphs containing $2$ (resp $1$) neutral edges and so at least two cross edges. We have $2\alpha+\beta \leq
2\alpha+2\beta \leq vw $. Equality in the lower bound, when $v\equiv 0
\pmod{4}$, arises only when $\beta = 0$ and therefore to meet the bound an $\mathscr{ON}(w,4)$ must be placed on $W$ implying that $\delta = 1$ if $ w = 2$ or $4$. When $v \equiv 2 \pmod{4}$, we can have $2\alpha+\beta = vw -1$ and so $\beta = 1$. We can use an edge on $W$ in a graph with an edge on $V$. But when $w=4$, the five edges that would remain on $W$ require drop cost 6, and so $\delta = 1$.
Now we turn to the upper bounds. If $w \geq v-1$, apply Theorem \[41\]. Suppose that $w \leq
v-2$. Let $V = \{0,\dots,2t-1\}$ and $W=\{a_0,\dots,a_{w-1}\}$. Place an $ \mathscr{ON}(w,4)$ on $W$. Form a 1-factorization on $V$ containing factors $ \{F_0,\dots,F_{w-1},G_0,\dots,G_{2t-2-w}\}$ in which the last two 1-factors are $\{\{2h,2h+1\} : 0 \leq h < t\}$ and $\{\{2h+1,2h+2 \bmod 2t\}
: 0 \leq h < t\}$, whose union is a Hamilton cycle. For $0 \leq i < w$, form triangles $T_{ij}$ by adding $a_i$ to each edge $e_{ij} \in F_i$. For $0 \leq i < \mbox{min}(w,2t-1-w)$, observe that $H_i = F_i \cup G_i$ is a 2-factor containing even cycles. Hence there is a bijection $\sigma$ mapping edges of $F_i$ to edges of $G_i$ so that $e$ and $\sigma(e)$ share a vertex. Adjoin edge $\sigma(e_{ij})$ to the triangle $T_{ij}$ to form a kite. In this way, all edges between $V$ and $W$ appear in triangles or kites, and all edges on $V$ are employed when $v \leq 2w$. When $v \geq
2w+2$, the edges remaining on $V$ are those of the factors $G_{w},\dots,G_{v-2-w}$.
When $v\neq
2w+2$, the union of these edges is connected because the union of the last two is connected, and hence it can be partitioned into $P_3$s (and one $P_2$ when $v \equiv 2 \pmod{4}$) [@CarSch; @Yav]. When $w=2$ and $v \equiv 2 \pmod{4}$, the drop cost can be reduced by 1 as follows. Let $\{x,y\}$ be the $P_2$ in the decomposition, and let $\{x,z\} \in G_0$. Let $T$ be the triangle obtained by removing $\{x,z\}$ from its kite. Add $\{a_0,a_1\}$ to $T$ to form a kite. Add the $P_3$ $[y,x,z]$. In this way two isolated $P_2$s are replaced by a $P_3$, lowering the drop cost by 1.
When $v=2w+2$, we use a variant of this construction. Let $R$ be a graph with vertex set $V$ that is isomorphic to $\frac{v}{4}$ $K_4$s when $v \equiv 0 \pmod{4}$ and to $\frac{v-6}{4}$ $K_4$s and one $K_{3,3}$ when $v \equiv 2 \pmod{4}$. Let $F_1,\dots,F_{w-1},G_1,\dots,G_{w-1}$ be the 1-factors of a 1-factorization of the complement of $R$ (one always exists [@RosaWallis]). Proceed as above to form kites using $a_i$ for $1 \leq i < w$ and the edges of $F_i$ and $G_i$. For each $K_4$ of $R$ with vertices $\{p,q,r,s\}$, form kites $(a_0,q,p;r)$ and $(a_0,r,s;p)$. Then add the $P_3$ $[r,q,s]$. If $R$ contains a $K_{3,3}$ with bipartition $\{\{p,q,r\},\{s,t,u\}\}$, add kites $(a_0,s,p;t)$, $(a_0,q,t;r)$, and $(a_0,r,u;p)$. What remains is the $P_4$ $[r,s,q,u]$, which can be partitioned into a $P_2$ and a $P_3$.
In order to treat the odd case, we establish an easy preliminary result:
\[cocktail\] Let $w>3$ be a positive integer. The graph on $w$ vertices containing all edges except for $\lfloor \frac{w}{2}\rfloor$ disjoint edges (i.e., $K_w \setminus \lfloor \frac{w}{2} \rfloor K_2 $) can be partitioned into
1. 4-cycles when $w$ is even;
2. kites and 4-cycles when $w \equiv 1 \pmod{4}$; and
3. kites, 4-cycles, and exactly two triangles when $w \equiv 3 \pmod{4}$.
Let $W =
\{a_0,\dots,a_{w-1}\}$. When $w$ is even, form 4-cycles $\{(a_{2i},a_{2j},a_{2i+1},a_{2j+1}): 0
\leq i < j < \frac{w}{2}\}$ leaving uncovered the $\frac{w}{2}$ edges $\{a_{2i}, a_{2i+1}\}$. (This is also a consequence of a much more general result in [@FuR00].)
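The even case is easy to check mechanically; in the sketch below (vertex $a_i$ is represented by the integer $i$), the assertion verifies that the 4-cycles partition $K_w$ minus the matching.

```python
from itertools import combinations

def cocktail_even(w):                      # w even; vertex a_i is represented by i
    edge = lambda u, v: frozenset((u, v))
    matching = [frozenset((2 * i, 2 * i + 1)) for i in range(w // 2)]
    cycles = [{edge(2 * i, 2 * j), edge(2 * j, 2 * i + 1),
               edge(2 * i + 1, 2 * j + 1), edge(2 * j + 1, 2 * i)}
              for i, j in combinations(range(w // 2), 2)]
    used = [e for c in cycles for e in c]
    target = {edge(u, v) for u, v in combinations(range(w), 2)} - set(matching)
    assert len(used) == len(set(used)) and set(used) == target
    return cycles

cocktail_even(8)                           # raises no AssertionError
```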
When $w$ is odd, the proof is by induction on $w$, adding four new vertices at each step; we provide two base cases so that the induction covers all odd values of $w$.
For $w=5$, $K_5\setminus\{\{a_0,a_1\},\{a_2,a_3\}\}$ can be partitioned into the two kites $(a_2,a_4,a_0;a_3)$ and $(a_3,a_4,a_1;a_2)$.
For $w=7$, $K_7\setminus\{\{a_0,a_1\},\{a_2,a_3\}, \{a_4,a_5\}\}$ can be partitioned into the kites $(a_3,a_6,a_0;a_5)$, $(a_1,a_6,a_4; a_3)$ and $(a_5,a_6,a_2;a_1)$, and the $K_3$s $(a_0,a_2,a_4)$ and $(a_1,a_3,a_5)$.
By induction consider an optimal decomposition of $K_w-F$, with $F = \{\{a_{2h},a_{2h+1}\} : 0 \leq
h < \frac{w-1}{2}\}$. Add four vertices $a_w,a_{w+1},a_{w+2},a_{w+3}$. Add the $C_4$s $(a_{2h},a_w,a_{2h+1},a_{w+1})$ and $ (a_{2h},a_{w+2},a_{2h+1},a_{w+3})$ where $0 \leq h <
\frac{w-1}{2}$. Cover the edges of the $K_5$ on $\{a_{w-1},a_w,a_{w+1},a_{w+2},a_{w+3}\}$ minus the edges $\{a_{w-1},a_w\}$ and $\{a_{w+1},a_{w+2}\}$, using two kites as shown for the case when $w=5$.
\[42o\] Let $v+w \geq 5$ and $v$ be odd.
1. When $v \leq 2w-1$, $cost\
\mathscr{ON}(v+w,v;4,2) = cost \ \mathscr{ON}(v+w,4)$.
2. When $v
\geq 2w+1$, $cost\ \mathscr{ON}(v+w,v;4,2) = \binom{v+w}{2}+ \lceil \frac{1}{2} \left (
\binom{v}{2} - vw + \lceil \frac{w}{2} \rceil \right ) \rceil
+ \delta$, where $\delta = 1$ if $w=3$ and $v \equiv 3
\pmod{4}$, $0$ otherwise.
To prove the lower bound, it suffices to prove that the number of subgraphs employed in an $N(v+w,v;4,2)$ other than triangles, kites, and 4-cycles is at least $\lceil \frac{1}{2} \left ( \binom{v}{2} - vw + \lceil \frac{w}{2}
\rceil \right ) \rceil$. As in the proof of Theorem \[42e\], this follows from Lemma \[neutral\]. When $w =3$ and $v \equiv 3 \pmod{4}$, at least $\binom{v}{2} - 3v + 2$ edges are positive, an even number. To meet the bound, exactly one cross edge remains and exactly two edges on $W$ remain. These necessitate a further graph that is not a triangle, kite, or 4-cycle.
Now we turn to the upper bounds. By Theorem \[42e\], $cost\ \mathscr{ON}((v+1)+(w-1),v+1;4,2) =
cost \ \mathscr{ON}(v+w,4)$ when $v\leq 2w-3$. So suppose that $v \geq 2w-1$. Write $v=2t+1$.
When $w=t+1$, form a near 1-factorization on $V$ consisting of $2t+1$ near 1-factors, $F_0,\dots,F_{t},$ $G_0,\dots,G_{t-1}$. Without loss of generality, $F_i$ misses vertex $i$ for $0
\leq i \leq t$, and $F_t$ contains the edges $\{\{k,t+k+1\} : 0 \leq k < t\}$. The union of any two near 1-factors contains a nonnegative number of even cycles and a path with an even number of edges. For $0 \leq i \leq t$, form triangles $T_{ij}$ by adding $a_i$ to each edge $e_{ij} \in
F_i$. As in the proof of Theorem \[42e\], for $0 \leq i < t$, use the edges of $G_i$ to convert every triangle $T_{ij}$ into a kite. Then add edge $\{i,a_i\}$ to triangle $T_{ti}$ constructed from edge $\{i,t+1+i\}$. What remains is the single edge $\{t,a_t\}$ together with all edges on $W$.
When $w \not\in\{2,4\}$, place an $\mathscr{ON}(w,4)$ on $W$ of cost $\binom{w}{2}$ so that $a_t$ appears in a triangle in the decomposition, and use the edge $\{t,a_t\}$ to convert this to a kite. We use a decomposition having $1 \leq \delta \leq 4$ triangles, therefore getting a solution with at most 3 triangles. Such a decomposition exists by Theorem \[lavbermcoudhu\] if $ w \nequiv
0,1\pmod{8}$. If $ w \equiv 0,1\pmod{8}$ we build a solution using $4$ triangles as follows. If $w
\equiv 1 \pmod{8}$, form an $\mathscr{ON}(w-2,4)$ on vertices $\{0,\dots,w-3\}$ with $3$ triangles. Add the triangle $(w-3,w-2,w-1)$ and the 4-cycles $\{(2h,w-2,2h+1,w-1) : 0 \leq h <
\frac{w-3}{2}\}$. For $w =8$ a solution with 4 triangles is given in Appendix \[ap:3\]. In general, for $w \equiv 0 \pmod{8}$, form an $\mathscr{ON}(w-8,4)$ on vertices $\{0,\dots,w-9\}$ with $4$ triangles. Add the 4-cycles $\{(2h,w-2j,2h+1,w-2j + 1) : 0 \leq h < \frac{w-8}{2}\}; 1\leq
j \leq 4$ and an $\mathscr{ON}(8,4)$ without triangles on the $8$ vertices $\{w-8,\dots,w-1\}$.
Two values for $w$ remain. When $w=2$, an $\mathscr{ON}(5,3;4,1)$ is also an $\mathscr{ON}(5,3;4,2)$. The case when $v=7$ and $w=4$ is given in Appendix \[ap:3\]. The solution given has only $1$ triangle. Henceforth $w \leq t$. For $t>2$, form a near 1-factorization $\{F_0,\dots,F_{w-1},G_0,\dots,G_{2t-1-w}\}$ of $K_v\setminus C_t$, where $C_t$ is the $t$-cycle on $(0,1,\dots,t-1)$; such a factorization exists [@Plantholt81]. Name the factors so that the missing vertex in $F_i$ is $\lfloor i/2 \rfloor$ for $0 \leq i < w$ (this can be done, as every vertex $i$ satisfying $0 \leq i < t$ is the missing vertex in two of the near 1-factors). Form triangles using $F_0,\dots,F_{w-1}$ and convert to kites using $G_0,\dots,G_{w-1}$ as before. There remain $2(t-w)$ near 1-factors $G_w, \dots, G_{2t-1-w}$. For $ 0 \leq h < t-w$, $G_{w+2h}
\cup G_{w+2h+1}$ contains even cycles and an even path, and so partitions into $P_3$s. Then the edges remaining are (1) the edges of the $t$-cycle; (2) the edges $\{\{\lfloor i/2 \rfloor,a_i\}:0
\leq i < w\}$; and (3) all edges on $W$. For $0 \leq i < \lfloor \frac{w}{2} \rfloor$, form triangle $(i,a_{2i},a_{2i+1})$ and add edge $\{i,i+1\}$ to convert it to a kite. Edges $\{\{i,i+1
\bmod t\} : \lfloor \frac{w}{2} \rfloor \leq i < t\}$ of the cycle remain from (1); edge $\{
\frac{w-1}{2}, a_{w-1} \}$ remains when $w$ is odd, and no edge remains when $w$ is even, from (2); and all edges excepting a set of $\lfloor \frac{w}{2} \rfloor$ disjoint edges on $W$ remain.
When $w\neq 3$, we partition the remaining edges in (1) (which form a path of length $t-\lfloor
\frac{w}{2} \rfloor$), into $P_3$s when $t-\lfloor \frac{w}{2} \rfloor$ is even, and into $P_3$s and the $P_2$ $\{0,t-1\}$ when $t-\lfloor \frac{w}{2} \rfloor$ is odd. We adjoin edge $\{
\frac{w-1}{2}, a_{w-1} \}$ to the $P_3$ (from the $t$-cycle) containing the vertex $\frac{w-1}{2}$ to form a $P_4$. Finally, we apply Lemma \[cocktail\] to exhaust the remaining edges on $W$.
When $w=3$, the remaining edges are those of the path $[0,t-1,t-2,\dots,2,1,a_2]$ and edges $\{
\{a_{2},a_0\}, \{a_{2},a_1\}\}$. Include $\{ \{1,2\}, \{1,a_{2}\}, \{a_{2},a_0\}, \{a_{2},a_1\}\}$ in the decomposition, and partition the remainder into $P_3$s and, when $v \equiv 3 \pmod{4}$, one $P_2$ $\{0,t-1\}$.
The case when $t=2$ is done in Example \[ex:2\] (the construction is exactly that given above, except that we start with a near 1-factorization of $K_5\setminus\{\{0,1\},\{0,3\}\}$).
$\mathscr{MON}(n,v;4,2)$
------------------------
\[mon42small\] For $C'=2$ and $v \leq 2w$, $$wavecost \, \mathscr{MON}(v+w,v;4,2) = wavecost \, \mathscr{MON}(v+w,4) .$$
It suffices to prove the statement for $v \in
\{ 2w-2,2w-1,2w\}$. When $v=2w-1$, apply the construction given in the proof of Theorem \[42o\], where we noted that there are at most 3 triangles. The proof of Theorem \[42o\] provides explicit solutions when $w \in \{2,4\}$.
Now suppose that $v=2w$. In the proof of Theorem \[42e\], $\frac{v}{2}=w$ triangles containing one edge on $V$ and two edges between a vertex of $V$ and $a_{w-1}$ remain. Then convert $w-1$ triangles to kites using edges on $W$ incident to $a_{w-1}$. That leaves one triangle. When the remaining edges on the $w-1$ vertices of $W$ support a $\mathscr{MON}(w-1,4)$ that contains at most two triangles, we are done. It remains to treat the cases when $w-1 \equiv 2,7 \pmod{8}$ or $w-1=4$. For the first case, let $x$ be one vertex of the remaining triangle containing $a_{w-1}$, namely $(a_{w-1},x,y)$. Consider the pendant edge $\{x,t\} \in G_{w-2}$ used in a kite containing $a_{w-2}$. Delete $\{x,t\}$ from this kite and adjoin $\{a_{w-3},a_{w-2}\}$ to the unique triangle so formed, forming another kite. Finally adjoin $\{x,t\}$ to the triangle $(a_{w-1},x,y)$. Proceed as before, but partition all edges on $\{a_0,\dots,a_{w-2}\}$ except edge $\{a_{w-3},a_{w-2}\}$ into 4-cycles and kites. The case when $w-1 = 4$ is similar, but we leave three of the triangles arising from $F_{w-1}$ and partition $K_5\setminus P_3$ into two kites.
Now suppose that $v=2w- 2$. We do a construction similar to that above. In the proof of Theorem \[42e\], there remain $3 \frac{v}{2}=3(w-1)$ triangles joining $a_{w-3}$ (resp. $a_{w-2},a_{w-1}$) to $F_{w-3}$ (resp. $F_{w-2},F_{w-1}$). Then convert the $w-1$ triangles containing $a_{w-1}$ to kites using edges on $W$ incident to $a_{w-1}$, $w-2$ triangles containing $a_{w-2}$ to kites using the remaining edges on $W$ incident to $a_{w-2}$, and $w-3$ triangles containing $a_{w-3}$ to kites using edges on $W$ incident to $a_{w-3}$. That leaves three triangles. So, if $w-3 \equiv 0,1 \pmod{8}$ we are done. Otherwise, as above, choose in each of the three remaining triangles vertices $x_1,x_2,x_3$; consider the edges $\{x_1,t_1\}$ (resp. $\{x_2,t_2\}$) appearing in the kites containing $a_{w-4}$ and $x_1$ (resp. $a_{w-4}$ and $x_2$), and the edge $\{x_3,t_3\}$ in the kite containing $a_{w-5}$ and $x_3$. Delete these edges and adjoin them to the three remaining triangles. Finally adjoin the edges $\{a_{w-4},a_{w-5}\}$ and $\{a_{w-4},a_{w-6}\}$ to the two triangles obtained from the two kites containing $a_{w-4}$, and adjoin the edge $\{a_{w-5},a_{w-6}\}$ to the triangle obtained from the kite containing $a_{w-5}$. Proceed as before, but partition all edges on $\{a_0,\dots,a_{w-4}\}$ except the triangle $(a_{w-6},a_{w-5},a_{w-4})$ into 4-cycles and kites.
\[mon42\]
1. When $v > 2w$ is even, $$wavecost \, \mathscr{MON}(v+w,v;4,2) =
\left \lceil \left ( {2\binom{v}{2} + \binom{w}{2} } \right ) / 4 \right \rceil .$$
2. When $v > 2w$ is odd, $$wavecost \, \mathscr{MON}(v+w,v;4,2) = \left \lceil \left
( {2\binom{v}{2} + \frac{(w-1)(w+1)}{2} } \right ) / 4 \right \rceil .$$
First we treat the case when $v$ is even. Then (by Theorem \[42e\]) an $\mathscr{ON}(v+w,v;4,2)$ must employ $vw$ or $vw-1$ neutral edges, using all $vw$ edges between $V$ and $W$. Each such graph uses two edges on $V$ and none on $W$, except that a single graph may use one on $V$ and one on $W$. Now the edges of $V$ must appear on $\lceil \frac{1}{2} \binom{v}{2} \rceil$ different wavelengths, and these wavelengths use at most one edge on $W$ (when $v \equiv 2 \pmod{4}$). Thus at least $\lceil \binom{w}{2}/4 \rceil$ additional wavelengths are needed when $v \equiv 0 \pmod{4}$, for a total of $\lceil \binom{v}{2}/2 + \binom{w}{2}/4 \rceil$. When $v
\equiv 2 \pmod{4}$, at least $\lceil (\binom{w}{2}-1)/4 \rceil$ additional wavelengths are needed; again the total is $\lceil
\binom{v}{2}/2 + \binom{w}{2}/4 \rceil$. Theorem \[42e\] realizes this bound.
When $v$ is odd, first suppose that $w$ is even. In order to realize the bound of Theorem \[42o\] for drop cost, by Lemma \[neutral\], $\frac{w}{2}$ neutral edges appear in subgraphs with one neutral edge and all other neutral edges appear in subgraphs with two. In both cases, two edges between $V$ and $W$ are consumed by such a subgraph. When two neutral edges are used, no edge on $W$ can be used; when one neutral edge is used, one edge on $W$ can also be used. It follows that the number of wavelengths is at least $\frac{1}{2} (\binom{v}{2} -
\frac{w}{2}) + \frac{w}{2} + \frac{1}{4}(\binom{w}{2} -
\frac{w}{2})$. This establishes the lower bound. The case when $w$ is odd is similar. The proof of Theorem \[42o\] gives constructions with at most $3$ triangles and so establishes the upper bound except when $v \equiv 1 \pmod{4}$ and $w \equiv 3 \pmod{4}$, $w \neq 3$, where the construction employs one more graph than the number of wavelengths permitted. However, one graph included is the $P_2$ $\{0,t-1\}$, and in the decomposition on $W$, there is a triangle. These can be placed on the same wavelength to realize the bound.
When $v \equiv 1 \pmod{4}$ and $w \equiv 3 \pmod{4}$, $w \neq 3$, we place a disconnected graph, $P_2 \cup K_3$, on one wavelength in order to meet the bound. The construction of Theorem \[42o\] could be modified to avoid this by instead using a decomposition of $K_w \setminus (K_3
\cup \frac{w-3}{2} K_2)$ into 4-cycles and kites, and using the strategy used in the case for $w=3$. In this way, one could prove the slightly stronger result that the number of (connected) subgraphs in the decomposition matches the lower bound on number of wavelengths needed.
In Theorem \[mon41\], the number of wavelengths and the drop cost are minimized simultaneously by the constructions given; each constructed $\mathscr{ON}(v+w,v;4,1)$ has not only the minimum drop cost but also the minimum number of wavelengths over all $N(v+w,v;4,1)$s. This is not the case in Theorem \[mon42\]. For example, when $v > (1+\sqrt{2})w$, it is easy to construct an $N(v+w,v;4,2)$ that employs only $\lceil \binom{v}{2} / 2 \rceil$ wavelengths, which is often much less than are used in Theorem \[mon42\]. We emphasize therefore that a $\mathscr{MON}(v+w,v;4,2)$ minimizes the number of wavelengths over all $\mathscr{ON}(v+w,v;4,2)$s, [*not necessarily*]{} over all $N(v+w,v;4,2)$s.
Case $C'=3$ {#sec:3}
===========
$\mathscr{ON}(n,v;4,3)$
-----------------------
\[43\] Let $v+w \geq 5$.
1. When $w \geq 1$, $cost\ \mathscr{ON}(v+w,v;4,3) = cost \ \mathscr{ON}(v+w,4)$.
2. $cost\ \mathscr{ON}(v+0,v;4,3) = cost \ \mathscr{ON}(v,3)$.
The second statement is trivial. Moreover $cost \ \mathscr{ON}(n,4) = cost \ \mathscr{ON}(n,3)$ when $n\equiv 1,3 \pmod{6}$, and hence the first statement holds when $v+w \equiv 1,3 \pmod{6}$. To complete the proof it suffices to treat the upper bound when $w=1$.
When $v+1 \equiv 5 \pmod{6}$, there is a maximal partial triple system $(X,{\cal B})$ with $|X|=v+1$ covering all edges except those in the 4-cycle $(r,x,y,z)$. Set $W=\{r\}$, $V=X\setminus
W$, and add the 4-cycle to the decomposition to obtain an $\mathscr{ON}(v+1,v;4,3)$.
When $v \equiv 1,5 \pmod{6}$, set $\ell=v-1$ and when $v\equiv 3 \pmod{6}$ set $\ell=v-3$. Then $\ell$ is even. Form a maximal partial triple system $(V,{\cal B})$, $|V|=v$, covering all edges except those in an $\ell$-cycle $(0,1,\dots,\ell-1)$ [@ColbournR86]. Add a vertex $a_0$ and form kites $(a_0,2i,2i+1;(2i+2)\bmod \ell)$ for $0 \leq i < \frac{\ell}{2}$. For $i \in
\{\ell,\dots,v-1\}$, choose a triple $B_i \in {\cal B}$ so that $i \in B_i$ and $B_i = B_j$ only if $i=j$. Add $\{a_0,i\}$ to $B_i$ to form a kite. This yields an $\mathscr{ON}(v+1,v;4,3)$.
$\mathscr{MON}(n,v;4,3)$
------------------------
We focus first on lower bounds in Section \[sec:lower\] and then we provide constructions attaining these lower bounds in Section \[sec:upper\].
### Lower Bounds {#sec:lower}
When $C'=3$, Theorem \[43\] makes no attempt to minimize the number of wavelengths. We focus on this case here. Except when $n \in \{2,4\}$ or $v = n$, $cost\, \mathscr{ON}(n,v;4,3) =
\binom{n}{2}$, and every graph in an $\mathscr{ON}(n,v;4,3)$ is a triangle, kite, or 4-cycle. Let $\delta$, $\kappa$, and $\gamma$ denote the numbers of triangles, kites, and 4-cycles in the grooming, respectively. Then $3\delta + 4\kappa + 4\gamma = \binom{n}{2}$, and the number of wavelengths is $\delta+\kappa+\gamma$. Thus in order to minimize the number of wavelengths, we must minimize the number $\delta$ of triangles. We focus on this equivalent problem henceforth.
In an $\mathscr{ON}(n,v;4,3)$, for $0 \leq i \leq 3$ and $0 \leq j \leq 4$, let $\delta_{ij}$, $\kappa_{ij}$, and $\gamma_{ij}$ denote the number of triangles, kites, and 4-cycles, respectively, each having $i$ edges on $V$ and $j$ edges between $V$ and $W$. The only counts that can be nonzero are $\delta_{00}$, $\delta_{02}$, $\delta_{12}$, $\delta_{30}$; $\kappa_{00}$, $\kappa_{01}$, $\kappa_{02}$, $\kappa_{03}$, $\kappa_{12}$, $\kappa_{13}$, $\kappa_{22}$, $\kappa_{31}$; $\gamma_{00}$, $\gamma_{02}$, $\gamma_{04}$, $\gamma_{12}$, $\gamma_{22}$. We write $\sigma_{ij} =
\kappa_{ij} + \gamma_{ij}$ when we do not need to distinguish kites and 4-cycles. Our objective is to minimize $\delta_{00} + \delta_{02} + \delta_{12}+\delta_{30}$ subject to certain constraints; we adopt the strategy of [@cqslower] and treat this as a linear program.
Let $\varepsilon = 0$ when $v \equiv 1,3 \pmod{6}$, $\varepsilon = 2$ when $v \equiv 5 \pmod{6}$, and $\varepsilon = \frac{v}{2}$ when $v \equiv 0 \pmod{2}$. We specify the linear program in Figure \[linprog\]. The first row lists the primal variables. The second lists coefficients of the objective function to be minimized. The remainder list the coefficients of linear inequalities, with the final column providing the [*lower bound*]{} on the linear combination specified. The first inequality states that the number of edges on $V$ used is at least the total number on $V$, while the second specifies that the number of edges used between $V$ and $W$ is at most the total number between $V$ and $W$. For the third, when $v \equiv 5 \pmod{6}$ at least four edges on $V$ are not in triangles, and so at least two graphs containing edges of $V$ do not have a triangle on $V$; when $v \equiv 0 \pmod{2}$ every graph can induce at most two odd degree vertices on $V$, yet all are odd in the decomposition.
$\delta_{30}$ $\delta_{12}$ $\delta_{02}$ $\delta_{00}$ $\kappa_{31}$ $\sigma_{22}$ $\kappa_{13}$ $\sigma_{12}$ $\gamma_{04}$ $\kappa_{03}$ $\sigma_{02}$ $\kappa_{01}$ $\sigma_{00}$
--------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ----------------
1 1 1 1 0 0 0 0 0 0 0 0 0
3 1 0 0 3 2 1 1 0 0 0 0 0 $\binom{v}{2}$
0 -2 -2 0 -1 -2 -3 -2 -4 -3 -2 -1 0 $-vw$
0 1 0 0 0 1 1 1 0 0 0 0 0 $\varepsilon$
We do not solve this linear program. Rather we derive lower bounds by considering its dual. Let $y_1$, $y_2$, and $y_3$ be the dual variables. A dual feasible solution has $y_1 = \frac{1}{3}$, $y_2=1$, and $y_3=\frac{4}{3}$, yielding a dual objective function value of $\frac{1}{6}v(v-1) - vw
+ \frac{4}{3} \varepsilon$. Recall that every dual feasible solution gives a lower bound on all primal feasible solutions.
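Dual feasibility of $(y_1,y_2,y_3)=(\tfrac{1}{3},1,\tfrac{4}{3})$ amounts to thirteen columnwise inequalities $A^{T}y \le c$; a short sketch in exact arithmetic (the matrix is transcribed from Figure \[linprog\]) checks them all.

```python
from fractions import Fraction as F

c = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]              # objective coefficients
A = [[3, 1, 0, 0, 3, 2, 1, 1, 0, 0, 0, 0, 0],            # edges on V,        >= C(v,2)
     [0, -2, -2, 0, -1, -2, -3, -2, -4, -3, -2, -1, 0],  # minus cross edges, >= -v*w
     [0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]]            # third constraint,  >= epsilon
y = [F(1, 3), F(1), F(4, 3)]

assert all(sum(y[i] * A[i][j] for i in range(3)) <= c[j] for j in range(13))
# hence the number of triangles is at least v(v-1)/6 - v*w + 4*epsilon/3
```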
On the other hand, $3 \delta \equiv \binom{n}{2} \pmod{4}$ and so $\delta \equiv 9\delta \equiv
3\binom{n}{2} \pmod{4}$. The value of $3\binom{n}{2} \pmod{4}$ is in fact the value of $t$ given in Theorem \[lavbermcoudhu\]. Therefore if $x$ is a lower bound on $\delta$ in an $\mathscr{ON}(n,v;4,3)$, so is $\langle x \rangle_n$, where $\langle x \rangle_n$ denotes the smallest nonnegative integer $\overline{x}$ such that $\overline{x} \geq x$ and $\overline{x}
\equiv 3\binom{n}{2} \pmod{4}$.
The discussion above proves the general lower bound on the number of triangles:
\[lower43\] Let $v+w \geq 5$, and let
$$L(v,w) = \left \{ \begin{array}{ccl}
\frac{1}{6}v(v-1) - vw & \mbox{if} & v \equiv 1,3 \pmod{6}\\
\frac{1}{6}v(v-1) - vw + \frac{8}{3} & \mbox{if} & v \equiv 5 \pmod{6}\\
\frac{1}{6}v(v+3) - vw & \mbox{if} & v \equiv 0 \pmod{2}\\
\end{array} \right .$$
Then the number of triangles in an $\mathscr{ON}(v+w,v;4,3)$ is at least $$\delta_{\min}(v,w) = \langle L(v,w) \rangle_{v+w}$$
\[rmk:0\] In particular, if $v$ is odd and $w\geq \lceil \frac{v-1}{6}\rceil$ or if $v$ is even and $w\geq \lceil \frac{v-4}{6}\rceil$, then $L(v,w)\leq 0$ and the minimum number of triangles is $\delta_{\min}(v,w) = \langle 0 \rangle_{v+w} \leq 3$.
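The bound is easy to evaluate; the following sketch (function names ours, exact arithmetic via fractions) computes $L(v,w)$ and $\delta_{\min}(v,w)$.

```python
from fractions import Fraction as F
from math import comb, ceil

def L(v, w):
    if v % 2 == 0:
        return F(v * (v + 3), 6) - v * w
    if v % 6 in (1, 3):
        return F(v * (v - 1), 6) - v * w
    return F(v * (v - 1), 6) - v * w + F(8, 3)      # v = 5 (mod 6)

def delta_min(v, w):
    target = (3 * comb(v + w, 2)) % 4
    x = max(0, ceil(L(v, w)))
    while x % 4 != target:
        x += 1
    return x

# e.g. delta_min(11, 2) == 2, matching Case O4 (t = 1) below.
```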
### Upper Bounds {#sec:upper}
We first state two simple lemmas to be used intensively in the proof of Theorem \[thm:MON43\]. The following result shows that in fact we do not need to check *exactly* that the number of triangles of an optimal construction meets the bound of Theorem \[lower43\].
\[lemma:interval\] Any $\mathscr{ON}(v+w,v;4,3)$ is a $\mathscr{MON}(v+w,v;4,3)$ if the number of triangles that it contains is at most $\max(3,\lceil L(v,w)\rceil+3)$.
$$\max(3,\lceil L(v,w)\rceil+3)= \left \{ \begin{array}{ccl}
\max(3, \frac{1}{6}(v(v-1-6w)+18)) & \mbox{if} & v \equiv 1,3 \pmod{6}\\
\max(3, \frac{1}{6}(v(v-1-6w)+34)) & \mbox{if} & v \equiv 5 \pmod{6}\\
\max(3, \frac{1}{6}(v(v+3-6w)+18)) & \mbox{if} & v \equiv 0 \pmod{6}\\
\max(3,\frac{1}{6}(v(v+3-6w)+20)) & \mbox{if} & v \equiv 2,4 \pmod{6}\\
\end{array} \right.$$
In the closed interval $[\lceil L(v,w)\rceil,\lceil L(v,w)\rceil+3]$ there is exactly one integer congruent to $3\binom{n}{2} \pmod{4}$, and so necessarily exactly one integer equal to $\delta_{\min}(v,w)$.
Combining Remark \[rmk:0\] and Lemma \[lemma:interval\] we deduce that when $v$ is odd and $w\geq \lceil \frac{v-1}{6}\rceil$ or if $v$ is even and $w\geq \lceil \frac{v-4}{6}\rceil$, to prove the optimality of a construction it is enough to check that there are at most three triangles.
As a prelude to the constructions, let $(V,{\cal B})$ be a partial triple system, $V = \{0,\dots,
v-1\}$, and ${\cal B} = \{B_1,\dots,B_b\}$. Let $r_i$ be the number of blocks of ${\cal B}$ that contain $i \in V$. A [*headset*]{} is a multiset $S = \{s_1,\dots,s_b\}$ so that $s_k \in B_k$ for $1 \leq k \leq b$, and for $0 \leq i \leq v-1$ the number of occurrences of $i$ in $S$ is $\lfloor
\frac{r_i}{3} \rfloor$ or $\lceil \frac{r_i}{3} \rceil$.
\[equit\] Every partial triple system has a headset.
Form a bipartite graph $\Gamma$ with vertex set $V\cup{\cal B}$, and an edge $\{v,B\}$ for $v \in
V$ and $B \in {\cal B}$ if and only if $v \in B$. The graph $\Gamma$ admits an equitable 3-edge-colouring [@deW]; that is, the edges can be coloured green, white, and red so that every vertex of degree $d$ is incident with either $\lfloor d/3\rfloor$ or $\lceil d/3 \rceil$ edges of each colour. Then for $1 \leq k \leq b$, $B_k$ is incident to exactly three edges, and hence to exactly one edge $\{i_k,B_k\}$ that is green; set $s_k = i_k$. Then $(s_1,\dots,s_b)$ forms the headset.
\[thm:MON43\] Let $v+w \geq 5$. When $w \geq 1$, $$wavecost \, \mathscr{MON}(v+w,v;4,3) =
\left \lceil \left ( \binom{v+w}{2} + \delta_{\min}(v,w) \right ) / 4 \right \rceil .$$
The lower bound follows from Theorem \[lower43\], so we focus on the upper bound.
When $w \geq 1$, an $\mathscr{ON}(v+w,v;4,3)$ of cost $\binom{v+w}{2}$ is an $\mathscr{ON}(v+w,v-1;4,3)$. Let us show that it suffices to prove the statement for $w \leq
\frac{v+9}{6}$ when $v$ is odd, and for $w \leq \frac{v+4}{6}$ when $v$ is even. Equivalently, we show that if it is true for these values of $w$, then it follows for any $w$. Note that $\delta_{\min}(v,w) \leq 3$ if $\delta_{\min}(v+1,w-1)\leq 3$.
Indeed, let $v$ be even. If $w=\lfloor \frac{v+4}{6}\rfloor+1$, the result follows from the case for $v+1$ (odd) and $w-1=\lfloor \frac{v+4}{6}\rfloor \leq \frac{v+1+9}{6}$, in which case $\delta_{\min}(v+1,w-1)=\langle 0\rangle_{v+w}$. If $w=\lfloor \frac{v+4}{6}\rfloor+2$ it follows from the case for $v+1$ (odd) and $w-1=\lfloor \frac{v+4}{6}\rfloor + 1\leq \frac{v+1+9}{6}$, and $\delta_{\min}(v+1,w-1)=\langle 0\rangle_{v+w}$. If $w \geq \lfloor \frac{v+4}{6}\rfloor+3$ it follows from the case for $v+2$ (even) and $w-2$.
Let $v$ be odd. If $w=\lfloor \frac{v+9}{6}\rfloor+1$ it follows from the case for $v+1$ (even) and $w-1$, which has been already proved (in this case also $\delta_{\min}(v+1,w-1)=\langle
0\rangle_{v+w}$). If $w \geq \lfloor \frac{v+9}{6}\rfloor+2$ it follows from the case for $v+2$ (odd) and $w-2$.
In each case, we use the same general prescription. Given a partial triple system $(V,{\cal B})$, a headset $S = \{s_1,\dots,s_b\}$ is formed using Lemma \[equit\]. Add vertices $W=\{a_0,\dots,a_{w-1}\}$, a set disjoint from $V$ of size $w \geq 1$. For each $i$ let $D_i$ be a subset of $\{0,\ldots,w-1\}$, which is specified for each subcase, and that satisfies the following property: $|D_i|$ is at most the number of occurrences of $i$ in the headset $S$. Among the blocks $B_k$ such that $s_k=i$, we choose $|D_i|$ of them, namely the subset $\{B_k^j:j \in D_i\}$, and form $|D_i|$ kites by adding for each $j \in D_i$ the edge $\{a_j,i\}$ to the block $B_k^j$.
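The general prescription is itself algorithmic. The sketch below (helper names and the labels $(\mathrm{a},j)$ for $W$ are ours) assumes, as required above, that $|D_i|$ never exceeds the number of blocks headed by $i$, and returns the kites formed together with the blocks left as triangles.

```python
def general_prescription(blocks, S, D):
    """blocks: list of triples; S: headset with S[k] in blocks[k]; D: dict i -> D_i."""
    edge = lambda u, v: frozenset((u, v))
    headed = {}                                     # group the blocks by their head s_k
    for B, s in zip(blocks, S):
        headed.setdefault(s, []).append(B)
    kites, triangles = [], []
    for i, Bs in headed.items():
        Di = sorted(D.get(i, ()))
        for j, (x, y, z) in zip(Di, Bs):            # one kite per element of D_i
            tri = {edge(x, y), edge(x, z), edge(y, z)}
            kites.append(tri | {edge(('a', j), i)})
        for (x, y, z) in Bs[len(Di):]:              # unused blocks stay as triangles
            triangles.append({edge(x, y), edge(x, z), edge(y, z)})
    return kites, triangles
```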
The idea behind the construction is that if we can choose $|D_i|=w$, we use all the edges between $V$ and $W$ leaving a minimum number of triangles in the partition of $V$ (see Case **O1a**). Unfortunately it is not always possible to choose $|D_i|=w$, in particular when $w$ is greater than the number of occurrences of $i$ in the headset. So we distinguish different cases:\
[**Case O1a. $v=6t+1$ or $6t+3$ and $w \leq \frac{v-1}{6}$.**]{} Let $(V,{\cal B})$ be a Steiner triple system. For $0 \leq i < v$, let $D_i=\{0,\ldots,w-1\}$. Apply the general prescription. If $v=6t+1$, $i$ appears $t$ times in $S$ and $w \leq \frac{v-1}{6}=t$. If $v=6t+3$, $i$ appears $t$ or $t+1$ times in $S$ and $w \leq t$. In both cases $|D_i|$ is at most the number of occurrences of $i$ in $S$, so the construction applies and all the edges between $V$ and $W$ are used in the kites. All the edges on $V$ are used and $\frac{v(v-1)}{6}-vw$ triangles remain. Finally, it remains to partition the edges of $W$. When $w\not\in\{2,4\}$, form a $\mathscr{MON}(w,4)$ on $W$, and doing so we have at most $\delta_{\min}$ triangles. If $w=2$ or $w=4$ remove edges $\{a_0,0\}$ and $\{a_1,0\}$ from their kites and partition $K_W$ together with these edges into a triangle ($w=2$) or two kites ($w=4$).\
[**Case O1b. $v=6t+5$ and $w \leq \frac{v-1}{6}$.**]{} Form a partial triple system $(V,{\cal B})$ covering all edges except those in the $C_4$ $(0,1,2,3)$. For $0 \leq i \leq 3$, let $D_i=\{0,\ldots,w-2\}$ and for $4 \leq i <v$ $D_i=\{0,\ldots,w-1\}$. Apply the general prescription. Add the kites $(a_{w-1},1,2;3)$ and $(a_{w-1},3,0;1)$. Here again $i$ appears at least $t$ times in $S$ and $w \leq t$. So $D_i$ is at most the number of occurrences of $i$ in $S$. Again we have used all the edges on $V$ and all the edges between $V$ and $W$. It remains to partition the edges of $W$, and this can be done as in the Case [**O1a**]{}.\
[**Case O2. $v = 6t +3$ and $w = t+1$, $v
> 3$.**]{} Form a partial triple system covering all edges except those on the $v$-cycle $\{\{i,(i+1) \bmod v\} : 0 \leq i < v\}$ [@ColbournR86]. Set $D_i =
\{1,\dots,w-1\}$ for all $i$. Apply the general prescription. Adjoin edges from $a_{0}$ to a partition of the cycle, minus edge $\{0,v-1\}$, into $P_3$s. The only edge between $V$ and $W$ that remains is $\{a_0,v-1\}$. When an $\mathscr{ON}(w,4)$ exists having 1, 2, 3, or 4 triangles, this edge is used to convert a triangle to a kite. This handles all cases except when $w \in \{2,4\}$. In these cases, remove the pendant edge $\{a_1,v-1\}$ from its kite. When $w=2$, $\{a_0,a_1,v-1\}$ forms a triangle. When $w=4$, partition the edges on $W$ together with $\{a_0,v-1\}$ and $\{a_1,v-1\}$ into two kites.\
[**Case O3. $v =6t+1$ and $w = t+1$.**]{}
When $t=1$, a $\mathscr{MON}(7+2,7;4,3)$ has $\cB=\{(0,a_1,a_0;6)$, $(2,0,6;a_1)$, $ (3,0,4;a_1)$, $(1,0,5;a_1)$, $(3,6,5;a_0)$, $ (4,6,1;a_1)$, $(3,2,1;a_0)$, $(5,2,4;a_0)$, $ (a_0,2,a_1,3)\}$.
A solution with $t=2$ is given in Appendix \[ap:4\].
When $t \geq 3$, form a 3-GDD of type $6^t$ with groups $\{\{6p+q : 0 \leq q < 6\} : 0 \leq p <
t\}$. Let $D_{6p+q} = \{0,\dots,w-2\} \setminus \{p\}$ for $0 \leq p < t$ and $0 \leq q < 6$. Apply the general prescription. For $0 \leq p < t$, on $\{6p+q : 0 \leq q < 6\} \cup \{v-1\} \cup
\{a_{w-1},a_p\}$ place a $\mathscr{MON}(7+2,7;4,3)$ obtained from the solution $\cB$ for $t=1$, by replacing $q$ by $6p+q$ for $0 \leq q < 6$, $6$ by $v-1$, $a_0$ by $a_{w-1}$, and $a_1$ by $a_p$; then omit the kite $(a_p,6p,a_{w-1};v-1)$. All edges on $W$ remain; the edges $\{a_{w-1},6p\}$ and $\{a_p,6p\}$ remain for $0 \leq p < t$, and the edge $\{a_{w-1},v-1\}$ remains.
Add the kites $(a_{w-2},6(w-2),a_{w-1};v-1)$ and for $0 \leq j < w-2 =t-1$ $(6j,a_{w-1},a_j;a_{w-2})$. If $w-2\not\in\{2,4\}$, that is $t\not\in\{3,5\}$, place a $\mathscr{MON}(w-2,4)$ on $W-a_{w-2}-a_{w-1}$. Note that, as $3\binom{w-2}{2} \equiv
3\binom{v+w}{2} \pmod{4}$, we have the right number of triangles (at most $3$). If $w-2 \in\{2,4\}$ remove edges $\{a_0,w-2\}$ and $\{a_1,w-2\}$ from their kites, and partition $K_w$ together with these edges.\
[**Case O4. $v = 6t + 5$ and $w = t+1$.**]{}
For $t=0$, a $\mathscr{MON}(5+1,5;4,3)$ has kites $(3,a_0,0;1)$, $(1,a_0,2;3)$, $(1,3,4;a_0)$, and triangle $(0,2,4)$.
For $t=1$, let $V=\{0,\ldots,10\}$ and $W=\{a_0,a_1\}$. A $\mathscr{MON}(11+2,11;4,3)$ is formed by using an $\mathscr{MON}(5+1,5;4,3)$ on $\{0,1,2,3,4\} \cup \{a_0\}$, and a partition of the remaining edges, denoted by ${\cal Q}$, into 15 kites and a triangle. So we have two triangles, attaining $\delta_{min}(11,2)$ as $13\equiv 5 {\pmod 8}$. The partition of ${\cal Q}$ is as follows: the triangle $(a_0,a_1,10)$ and the kites $(0,6,5;a_0)$, $(1,8,6;a_0)$, $(2,9,7;a_0)$, $(3,10,8;a_0)$, $(4,6,9;a_0)$, $(8,9,0;a_1)$, $(5,7,1;a_1)$, $(5,8,2;a_1)$, $(6,7,3;a_1)$, $(5,10,4;a_1)$, $(3,9,5;a_1)$, $(2,10,6;a_1)$, $(0,10,7;a_1)$, $(4,7,8;a_1)$, and $(1,10,9;a_1)$.
For $t=2$, a $\mathscr{MON}(17+3,17;4,3)$ is given in Appendix \[ap:4\].
For $t \geq 3$, form a 3-GDD of type $6^t$ with groups $\{\{6p+q : 0 \leq q < 6\} : 0 \leq p <
t\}$. Let $D_{6p+q} = \{0,\dots,w-2\} \setminus \{p\}$ for $0 \leq p < t$ and $0 \leq q < 6$. Apply the general prescription. There remain uncovered for each $p$ the edges of the set ${\cal
Q}_p$ obtained from the complete graph on the set of vertices $\{6p+q: 0 \leq q < 6\} \cup
\{v-5,v-4,v-3,v-2,v-1\} \cup \{a_{w-1},a_p\}$ minus the complete graph on $\{v-5,v-4,v-3,v-2,v-1\}
\cup \{a_{w-1}\}$.
To deal with the edges of ${\cal Q}_p$, we start from a partition of ${\cal Q}$, where we replace pendant edges in kites as follows: Replace $\{a_1,4\}$ by $\{a_1,10\}$, $\{a_0,8\}$ by $\{a_0,10\}$, and $\{a_1,2\}$ by $\{a_0,8\}$. We delete the triangle $(a_0,a_1,10)$, resulting in a new partition of ${\cal Q}$ into 15 kites and the 3 edges $\{a_0,a_1\}$, $\{a_1,2\}$, and $\{a_1,4\}$. Then we obtain a partition of ${\cal Q}_p$ by replacing $\{0,1,2,3,4\}$ by $\{v-5,v-4,v-3,v-2,v-1\}$, $q+5$ by $6p+q$ for $0 \leq q < 6$, $a_0$ by $a_{w-1}$, and $a_1$ by $a_p$. At the end we get a partition of ${\cal Q}_p$ into 15 kites plus the 3 edges $\{a_{w-1},a_p\}$, $\{a_p,v-3\}$, and $\{a_p,v-1\}$.
Now the $3t$ edges $\{\{a_{w-1},a_p\}, \{a_p,v-3\}, \{a_p,v-1\}: 0 \leq p < t\}$ plus the uncovered edges of $K_W$ form a $K_{t+3}$ missing a triangle on $\{a_{w-1},v-3,v-1\}$. If $t+3 \equiv
2,3,4,5,6,7 \pmod{8}$, use Theorem \[lavbermcoudhu\] to form a $\mathscr{ON}(t+3,4)$ having a triangle $(v-3,v-1,a_{w-1})$ and 0, 1, or 2 other triangles; remove the triangle $(v-3,v-1,a_{w-1})$ to complete the solution with 1, 2, or 3 triangles (the triangle $(v-5,v-3,v-1)$ is still present). A variant is needed when $t+3 \equiv 0,1 \pmod{8}$. In these cases, form a $\mathscr{ON}(t+3,4)$ (having no triangles) in which $(v-3,a_{w-1},v-1;a_1)$ is a kite. Remove all edges of this kite, and use edge $\{a_1,v-1\}$ to convert triangle $(v-5,v-3,v-1)$ to a kite.
Finally, place a $\mathscr{MON}(5+1,5;4,3)$ on $\{v-5,v-4,v-3,v-2,v-1\} \cup \{a_0\}$. Altogether we have a partition of all the edges using at most 3 triangles.\
[**Case O5. $v = 6t+5$ and $w = t+2$.**]{}
When $t=0$, partition all edges on $\{0,1,2,3,4\} \cup \{a_0,a_1\}$ except $\{a_0,a_1\}$ into kites $(3,1,a_0;0)$, $ (3,2,a_1;0)$, $(a_1,1,4;2)$, $(0,1,2;a_0)$, and $(3,0,4;a_0)$. Then a $\mathscr{MON}(5+2,5;4,3)$ is obtained by removing pendant edges $\{a_0,0\}$ and $\{a_1,0\}$ and adding triangle $(a_0,a_1,0)$.
When $t=1$, a $\mathscr{MON}(11+3,11;4,3)$ on $\{0,\dots,10\} \cup \{a_0,a_1,a_2\}$ is obtained by taking the above partition on $\{0,1,2,3,4\} \cup \{a_0,a_1\}$, the triangle $(a_0,a_1,a_2)$, and a partition of the remaining edges (which form a graph called $\cal Q$) into 11 kites and 6 4-cycles as follows: kites $(2,9,7;a_0)$, $(4,5,10;a_0)$, $(2,10,6;a_1)$, $(4,6,9;a_2)$, $(7,10,0;a_2)$, $(6,8,1;a_2)$, $(5,8,2;a_2)$, $(5,9,3;a_2)$, $(7,8,4;a_2)$, $(6,7,5;a_2)$, and $(9,10,8;a_1)$; and 4-cycles $(0,6,a_0,5)$, $(0,8,a_0,9)$, $(1,5,a_1,7)$, $(1,9,a_1,10)$, $(3,6,a_2,7)$, and $(3,8,a_2,10)$.
A solution with $t=2$ is given in Appendix \[ap:4\].
When $t \geq 3$, form a 3-GDD of type $6^t$ with groups $\{\{6p+q : 0 \leq q < 6\} : 0 \leq p <
t\}$. Let $D_{6p+q} = \{0,\dots,w-3\} \setminus \{p\}$ for $0 \leq p < t$ and $0 \leq q < 6$. Apply the general prescription. Add a partition of the complete graph on $\{v-5,v-4,v-3,v-2,v-1\}
\cup \{a_{w-2},a_{w-1}\}$ as in the case when $t=0$. It remains to partition, for each $p$, $0 \leq
p < t$, the graph ${\cal Q}_p$ obtained from the complete graph on $\{6p+q : 0 \leq q < 6\} \cup
\{v-5,v-4,v-3,v-2,v-1\} \cup \{a_{w-2},a_{w-1},a_p\}$ minus the complete graph on $\{v-5,v-4,v-3,v-2,v-1\} \cup \{a_{w-2},a_{w-1}\}$. This partition is obtained from that of $\cal
Q$ by replacing $\{0,1,2,3,4\}$ by $\{v-5,v-4,v-3,v-2,v-1\}$, $a_0$ by $a_{w-2}$, $a_1$ by $a_{w-1}$, and $a_2$ by $a_p$. What remains is precisely the edges on $W$, so place a $\mathscr{MON}(w,4)$ on $W$ to complete the construction.\
[**Case O6. $v = 6t+3$ and $w = t+2$.**]{}
When $t=0$, a $\mathscr{MON}(3+2,3;4,3)$ has triangles $(a_0,0,1)$ and $(a_1,1,2)$ and 4-cycle $(0,2,a_0,a_1)$.
When $t=1$, on $\{0,\dots,8\} \cup \{a_0,a_1,a_2\}$, place kites $(2,6,4;a_0)$, $(0,8,4;a_1)$, $(0,5,7;a_1)$, $(3,6,0;a_2)$, $(1,7,4;a_2)$, $(5,8,2;a_2)$, $(1,6,5;a_2)$, $(2,7,3;a_2)$, $(3,8,1;a_2)$, $(3,5,a_0;a_2)$, $(7,a_0,6;a_2)$, $(6,8,a_1;a_2)$, $(7,a_2,8;a_0)$, and 4-cycle $(3,4,5,a_1)$. Adding the blocks of a $\mathscr{MON}(3+2,3;4,3)$ forms a $\mathscr{MON}(9+3,9;4,3)$.
A solution with $t=2$ is given in Appendix \[ap:4\].
When $t \geq 3$, form a 3-GDD of type $6^t$ with groups $\{\{6p+q : 0 \leq q < 6\} : 0 \leq p <
t\}$. Let $D_{6p+q} = \{0,\dots,w-3\} \setminus \{p\}$ for $0 \leq p < t$ and $0 \leq q < 6$. Apply the general prescription. For $0 \leq p < t$, on $\{6p+q : 0 \leq q < 6\} \cup
\{v-3,v-2,v-1\} \cup \{a_{w-2},a_{w-1},a_p\}$ place a $\mathscr{MON}(9+3,9;4,3)$, omitting a $\mathscr{MON}(3+2,3;4,3)$ on $\{a_{w-2},a_{w-1},v-3,v-2,v-1\}$. Place a $\mathscr{MON}(3+2,3;4,3)$ on $\{a_{w-2},a_{w-1},v-3,v-2,v-1\}$. Remove edges $\{a_0,a_{w-2}\}$ and $\{a_1,a_{w-1}\}$ from their kites, and convert the two triangles in the $\mathscr{MON}(3+2,3;4,3)$ to kites using these. What remains is all edges on $\{a_0,\dots,a_{w-3}\}$ and everything is in kites or 4-cycles excepting one triangle involving $a_0$ and one involving $a_1$. If $w-2 \equiv 0,1,3,6 \pmod{8}$, place a $\mathscr{MON}(w-2,4)$ on $\{a_0,\dots,a_{w-3}\}$. Otherwise partition all edges on $\{a_0,\dots,a_{w-3}\}$ except $\{a_0,a_2\}$ and $\{a_1,a_2\}$ into kites, 4-cycles, and at most one triangle, and use the last two edges to form kites with the excess triangles involving $a_0$ and $a_1$. The partition needed is easily produced for $w-2 \in \{4,5,7,9\}$ and hence by induction for all the required orders.\
[**Case E1. $v \equiv 0 \pmod{2}$ and $w \leq \frac{v+2}{6}$.**]{} Write $v=6t+s$ for $s
\in \{0,2,4\}$. Let $L=(V,E)$ be a graph with edges $$\{\{3i,3i+1\},\{3i,3i+2\},\{3i+1,3i+2\} : 0 \leq i < t\} \cup
\{\{i,3t+i\}: 0 \leq i < 3t\},$$ together with $\{6t,6t+1\}$ when $s=2$ and with $\{\{6t,6t+1\},\{6t,6t+2\},\{6t,6t+3\}\}$ when $s=4$. Let $(V,{\cal B})$ be a partial triple system covering all edges except those in $L$ (this is easily produced). Let $D_i=\{0,\dots,w-2\}$ for $0\leq i < v$. Apply the general prescription. For $0 \leq i < t$ and $j \in \{0,1,2\}$, form the 4-cycle $(a_{w-1},3i+((j+1) \bmod 3),3i+j,3t+3i+j)$. When $s=4$, form 4-cycle $(a_{w-1},6t+2,6t,6t+3)$. When $s \in \{2,4\}$, form a triangle $(a_{w-1},6t,6t+1)$. All edges on $V$ are used and all edges on $W$ remain. All edges between $V$ and $W$ are used. Except when $w
\in\{2,4\}$, or $w \equiv 2,7 \pmod{8}$ and $v \equiv 2,4 \pmod{6}$, form a $\mathscr{MON}(w,4)$ on $W$ to complete the proof. When $w\equiv 2,7 \pmod{8}$ and $v \equiv 2,4 \pmod{6}$, convert $(a_{w-1},6t,6t+1)$ to a kite using an edge of the $K_w$, and partition the $K_w\setminus K_2$ into kites and 4-cycles. When $w \in \{2,4\}$, remove edges $\{a_0,0\}$ and $\{a_1,0\}$ from their kites, and partition $K_w$ together with these edges.\
[**Case E2. $v \equiv 2 \pmod{6}$ and $w = \frac{v+4}{6}$.**]{} Choose $m$ as large as possible so that $m \leq \frac{v}{2}$, $m \leq \binom{w}{2}$, and $\binom{w}{2} - m \equiv 0
\pmod{4}$. Partition the $\binom{w}{2}$ edges on $W$ into sets $E_c$ and $E_o$ with $|E_c| = m$, so that the edges on $E_o$ can be partitioned into kites and 4-cycles; this is easily done. Place these kites and 4-cycles on $W$. Then let $\{e_i : 0 \leq i < m\}$ be the edges in $E_c$; let $a_{f_i} \in e_i$ when $0 \leq i < m$; $f_i=0$ when $m \leq i < \frac{v-2}{2}$; and $f_{(v-2)/2} =
1$ if $m < \frac{v}{2}$. Next form a 3-GDD of type $2^{v/2}$ on $V$ so that $\{\{2i,2i+1\} : 0 \leq
i < \frac{v}{2}\}$ forms the groups, and ${\cal B}$ forms the blocks. For $0 \leq i <
\frac{v}{2}$, let $D_{2i} = D_{2i+1} = \{0,\dots,w-1\} \setminus \{f_i\}$. Apply the general prescription. Now for $0 \leq i < \frac{v}{2}$, form the triangle $(a_{f_i},2i,2i+1)$ and for $0
\leq i < m$ add edge $e_i$ to form a kite. At most three triangles remain [*except when*]{} $v \in
\{14,20\}$, where four triangles remain. To treat these cases, we reduce the number of triangles; without loss of generality, the 3-GDD contains a triple $\{v-8,v-6,v-4\}$ in a kite with edge $\{a_1,v-8\}$. Remove this kite, and form kites $(a_0,v-7,v-8;v-6)$, $(a_0,v-5,v-6;v-4)$, $(a_0,v-3,v-4;v-8)$, and $(v-2,v-1,a_1;v-8)$.
\[caso3wgrande\] Let $v\geq 4$ and $\mu_3(v)$ be defined by:
$v$ $6$ $6t,t\geq 2$ $1+6t$ $2+6t$ $9$ $3+6t,t\geq 2$ $4$ $10$ $4+6t,t\geq 2$ $5+6t$
------------ ----- -------------- -------- -------- ----- ---------------- ----- ------ ---------------- --------
$\mu_3(v)$ $1$ $1+t$ $t$ $1+t$ $1$ $1+t$ $1$ $2$ $2+t$ $1+t$
Then $wavecost \, \mathscr{MON}(v+w,v;4,3) =\left\lceil\frac{(v+w)(v+w-1)}{8}\right\rceil$ if and only if $w\geq\mu_3(v)$.
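The threshold $\mu_3(v)$ in the table can be encoded directly (a sketch; the function name is ours):

```python
def mu_3(v):
    assert v >= 4
    t, r = divmod(v, 6)
    if r == 0:
        return 1 if v == 6 else 1 + t
    if r == 1:
        return t
    if r == 2:
        return 1 + t
    if r == 3:
        return 1 if v == 9 else 1 + t
    if r == 4:
        return 1 if v == 4 else (2 if v == 10 else 2 + t)
    return 1 + t                                    # r == 5
```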
Conclusions
===========
The determination of $cost \, \mathscr{ON}(n,v;\Ca,\Cb)$ appears to be easier for grooming ratio $C=4$ than for the case $C=3$ settled in [@cqsupper; @cqslower]. Nevertheless the very flexibility in choosing kites, 4-cycles, or triangles also results in a wide range of numbers of wavelengths among decompositions with optimal drop cost. This leads naturally to the question of minimizing the drop cost and the number of wavelengths simultaneously. In many cases, the minima for both can be realized by a single decomposition. However, it may happen that the two minimization criteria compete. Therefore we have determined the minimum number of wavelengths among all decompositions of lowest drop cost for the specified values of $n$, $v$, and $C'$.
Acknowledgments {#acknowledgments .unnumbered}
===============
Research of the authors is supported by MIUR-Italy (LG,GQ), European project IST FET AEOLUS, PACA region of France, Ministerio de Educación y Ciencia of Spain, European Regional Development Fund under project TEC2005-03575, Catalan Research Council under project 2005SGR00256, and COST action 293 GRAAL, and has been partially done in the context of the [crc Corso]{} with France Telecom.
Small constructions in the proof of Theorem \[41\] {#ap:1}
==================================================
$\mathscr{MON}(3+3,3;4,1)$: $\cB=\{(0,a_0,1;a_2)$, $(1,a_1,2;a_0)$, $(2,a_2,0;a_1)$, $(a_0,a_1,a_2)\}$.
$\mathscr{MON}(4+4,4;4,1)$: $\cB=\{(1,2,a_3;a_0)$, $(0,3,a_2;a_1)$, $(a_1,1,3;a_0)$, $(a_0,a_2,1;0)$, $(a_0,a_1,2;0)$, $(a_1,a_3,0;a_0)$, $(2,3,a_3,a_2)\}$.
$\mathscr{MON}(5+5,5;4,1)$: $\cB=\{(1,2,a_3;a_0)$, $(0,3,a_2;a_1)$, $(a_1,1,3;a_0)$, $(a_0,a_2,1;0)$, $(a_0,a_1,2;0)$, $(a_1,a_3,0;a_0)$, $(2,a_2,4;a_4)$, $(3,a_3,4)$, $(a_2,a_3,a_4)$, $(2,3,a_4)$, $(0,4,a_0,a_4)$, $(1,4,a_1,a_4)\}$.
$\mathscr{MON}(6+6,6;4,1)$: $\cB=\{(1,2,a_3;a_0)$, $(0,3,a_2;a_1)$, $(a_1,1,3;a_0)$, $(a_0,a_2,1;0)$, $(a_0,a_1,2;0)$, $(a_1,a_3,0;a_0)$, $(4,5,a_5;a_4)$, $(2,a_2,4;a_4)$, $(2,3,a_4;5)$, $(3,4,a_3)$, $(a_2,a_3,a_4)$, $(0,4,a_0,a_4)$, $(1,4,a_1,a_4)$, $(0,5,a_0,a_5)$, $(1,5,a_1,a_5)$, $(2,5,a_2,a_5)$, $(3,5,a_3,a_5)\}$.
Small constructions in the proof of Theorem \[mon41small\] {#ap:2}
==========================================================
$\mathscr{MON}(1+2,1;4,1)$: $\cB=\{(0,a_0,a_1)\}$.
$\mathscr{MON}(2+3,2;4,1)$: $\cB=\{(0,a_0,a_1)$, $(1,a_1,a_2)$, $(0,1,a_0,a_2) \}$.
$\mathscr{MON}(3+4,3;4,1)$: $\cB=\{(0,a_0,a_1)$, $(1,a_1,a_2)$, $(0,1,a_0,a_2)$, $(2,a_2,a_3)$, $(0,2,a_0,a_3)$, $(1,2,a_1,a_3) \}$.
$\mathscr{MON}(4+5,4;4,1)$: $\cB=\{(0,1,a_0;a_3)$, $(0,2,a_1; a_3)$, $(0,3,a_2;a_3)$, $(2,3,a_0;a_4)$, $(1,3,a_1;a_4)$, $(1,2,a_3;3)$, $(0,a_3,a_4;3)$, $(1,a_2,a_4;2)$, $(a_0,a_1,a_2;2)\}$.
Small constructions in the proof of Theorem \[42o\] {#ap:3}
===================================================
$\mathscr{ON}(8,4)$ with 4 triangles: $\cB=\{(1,2,0;4)$, $(0,3,6;7)$, $(0,7,5;2)$, $(4,5,3;1)$, $(1,4,7)$, $(1,5,6)$, $(2,3,7)$, $(2,4,6)\}$.
$\mathscr{MON}(7+4,7;4,2)$: $\cB=\{ (a_0,4,2;3)$, $(a_0,3,6;0)$, $(a_0,0,5;1)$, $(a_1,5,3;4)$, $(a_1,4,6;1)$, $(a_1,1,0;2)$, $(a_2,0,4;5)$, $(a_2,6,5;a_3)$, $(a_2,1,2;5)$, $(0,3,a_3;2)$, $(1,a_0,a_2,3)$, $(a_0,a_1,a_2,a_3)$, $(a_1,2,6,a_3)$, $(1,4,a_3)\}$.
Small constructions in the proof of Theorem \[thm:MON43\] {#ap:4}
=========================================================
$\mathscr{MON}(13+3,13;4,3)$: $\cB=\{(5+i,4+i,1+i;a_1)\mid i=0,1,\ldots,9\}\cup
\{(1+i,5+i,4+i;a_0)\mid i=10,11,12\}\cup \{(3+i,1+i,9+i;a_2)\mid i=6,7,\ldots,12\}\cup
\{(9+i,1+i,3+i;a_0)\mid i=1,2,\ldots,5\}\cup \{(9,3,1;a_2), (0,a_1,a_2;12), (12,a_1,a_0;0),(a_0, 9,
a_2, 10), (a_0,a_2,11;a_1), (a_0,9,a_2,10)\}$, where the sums are computed modulo 13.
$\mathscr{MON}(15+4,15;4,3)$: $\cB=\{(1,2,3)$, $(a_0,4,a_1,5),$ $(a_0,10,a_1,11)$, $(5,4,1;a_3)$, $(7,1,6;a_1)$, $(6,4,2;a_3)$, $(7,5,2;a_2)$, $(4,7,3;a_2)$, $(6,5,3;a_3),$ $(9,1,8;a_1)$, $(10,1,14;a_0)$, $(11,1,0;a_2)$, $(13,1,12;a_2)$, $(10,2,8;a_2)$, $(11,2,9;a_0)$, $(12,2,14;a_2),$ $ (0,2,13;a_3)$, $(8,3,11;a_3)$, $(10,3,12;a_0)$, $(13,3,9;a_2)$, $(14,3,0;a_3)$, $(12,8,4;a_2)$, $(11,4,13;a_0),$ $ (0,10,4;a_3)$, $(9,4,14;a_1)$, $(8,5,13;a_2)$, $(0,5,12;a_3)$, $(14,11,5;a_3)$, $(10,9,5;a_2)$, $(8,6,0;a_0),$ $ (14,13,6;a_3)$, $(9,6,12;a_1)$, $(10,6,11;a_2)$, $(14,8,7;a_3)$, $(9,7,0;a_1)$, $(10,7,13;a_1)$, $(12,11,7;a_0),$ $ (6,a_2,a_0;2)$, $(7,a_2,a_1;3)$, $(10,a_3,a_2;1)$, $(1,a_0,a_1;2)$, $(9,a_1,a_3;14)$, $(8,a_3,a_0;3)\}$.
$\mathscr{MON}(17+3,17;4,3)$: $\cB=\{(7,16,0)$, $(a_0,a_2,0)$, $(a_0,1,2;3)$, $(a_0,3,4;1)$, $(4,5,2;a_1)$, $(1,3,5;a_0),$ $(16,a_0,a_1;a_2)$, $(6,10,1;a_1)$, $(9,14,1;a_2)$, $(15,1,7;a_2)$, $(1,8,12;a_2)$, $(1,0,13;a_2)$, $(1,16,11;a_1),$ $(2,11,6;a_1)$, $(2,16,8;a_2)$, $(10,15,2;a_2)$, $(9,2,13;a_1)$, $(0,2,12;a_1)$, $(2,7,14;a_2)$, $(6,13,3;a_1),$ $(11,3,7;a_1)$, $(12,3,16;a_2)$, $(9,0,3;a_2)$, $(3,10,14;a_1)$, $(8,3,15;a_1)$, $(14,6,4;a_2)$, $(4,11,15;a_2),$ $(7,12,4;a_1)$, $(13,4,8;a_1)$, $(4,16,9;a_2)$, $(0,4,10;a_1)$, $(5,12,6;a_2)$, $(7,13,5;a_2)$, $(8,14,5;a_1),$ $(15,5,9;a_1)$, $(5,16,10;a_2)$, $(5,0,11;a_2)$, $(9,7,6;a_0)$, $(10,8,7;a_0)$, $(11,9,8;a_0)$, $(12,10,9;a_0),$ $(13,11,10;a_0)$, $(14,12,11;a_0)$, $(15,13,12;a_0)$, $(16,14,13;a_0)$, $(0,15,14;a_0)$, $(6,16,15;a_0),$ $(8,6,0;a_1)\}$.
$\mathscr{MON}(17+4,17;4,3)$: $\cB=\{(2,9,11)$, $(9,12,16),$ $(a_0,13,14;15)$, $(a_0,15,16;13)$, $(16,0,14;a_1)$, $(13,15,0;a_0)$, $(13,2,1;a_3)$, $(13,12,3;a_3),$ $(13,11,4;a_3)$, $(5,10,13;a_1)$, $(6,9,13;a_2)$, $(7,8,13;a_3)$, $(14,4,2;a_3)$, $(14,12,5;a_3)$, $(11,14,6;a_3),$ $(14,10,7;a_3)$, $(1,3,14;a_2)$, $(9,8,14;a_3)$, $(1,4,15;a_1)$, $(3,5,15;a_2)$, $(2,6,15;a_3)$, $(15,7,12;a_3),$ $(15,11,8;a_3)$, $(1,16,5;a_1)$, $(6,4,16;a_2)$, $(3,7,16;a_3)$, $(2,8,16;a_1)$, $(10,16,11;a_3)$, $(1,6,0;a_1),$ $(4,8,0;a_2)$, $(10,15,9,;a_3)$, $(2,10,0;a_3)$, $(5,0,7;a_1)$, $(3,0,9;a_1)$, $(12,0,11;a_1)$, $(1,a_0,7;6),$ $(8,6,a_0;a_3)$, $(9,a_0,5;11)$, $(10,a_0,4;9)$, $(11,a_0,3;10)$, $(2,a_0,12;8)$, $(8,a_1,1;11)$, $(10,a_1,6;3),$ $ (12,4,a_1;a_3)$, $(3,a_1,2;7)$, $(1,a_2,9;7)$, $(10,a_2,8;3)$, $(11,a_2,7;4)$, $(12,a_2,6;5)$, $(2,a_2,5;8),$ $
(3,a_2,4;5)$, $(a_1,a_0,a_2;a_3)$, $(12,1,10;a_3)\}$.
---
abstract: 'Recently, Wadler presented a continuation-passing translation from a session-typed functional language, GV, to a process calculus based on classical linear logic, CP. However, this translation is one-way: CP is more expressive than GV. We propose an extension of GV, called [HGV]{}, and give translations showing that it is as expressive as CP. The new translations shed light both on the original translation from GV to CP, and on the limitations in expressiveness of GV.'
author:
- 'Sam Lindley J. Garrett Morris'
bibliography:
- 'cpgv.bib'
title: Sessions as Propositions
---
Introduction {#sect:introduction}
============
Linear logic has long been regarded as a potential typing discipline for concurrency. Girard [@Girard87] observes that the connectives of linear logic can be interpreted as parallel computation. Abramsky [@Abramsky92] and Bellin and Scott [@BellinScott94] interpret linear logic proofs as $\pi$-calculus processes. While they provide $\pi$-calculus interpretations of all linear logic proofs, they do not provide a proof-theoretic interpretation for arbitrary $\pi$-calculus terms. Caires and Pfenning [@CairesPfenning10] give a propositions-as-types correspondence between intuitionistic linear logic and session types, interpreting linear logic propositions as session types for a restricted $\pi$-calculus, $\pi$DILL. Of particular importance to this work, they interpret the multiplicative connectives as prefixing, and the exponentials as replicated processes.
Wadler [@Wadler12] adapts Caires and Pfenning’s work to classical linear logic, interpreting proofs as processes in a restricted $\pi$-calculus, CP. Additionally, Wadler shows that a core session-typed linear functional language, GV, patterned after a similar language due to Gay and Vasconcelos [@GayVasconcelos10], may be translated into CP. However, GV is less expressive than CP: there are proofs which do not correspond to any GV program.
Our primary contribution is [HGV]{}(Harmonious GV), a version of GV extended with constructs for session forwarding, replication, and polymorphism. We identify [HGV$\pi$]{}, the session-typed fragment of [HGV]{}, and give a type-preserving translation from [HGV]{}to [HGV$\pi$]{}(${({-})^\star}$); this translation depends crucially on the new constructs of [HGV]{}. We show that [HGV]{}is sufficient to express all linear logic proofs by giving type-preserving translations from [HGV$\pi$]{}to CP (${\llbracket{-}\rrbracket}$), and from CP to [HGV$\pi$]{}(${\llparenthesis{-}\rrparenthesis}$). Factoring the translation of [HGV]{}into CP through ${({-})^\star}$ simplifies the presentation, and illuminates regularities that are not apparent in Wadler’s original translation of GV into CP. Finally, we show that [HGV]{}, [HGV$\pi$]{}, and CP are all equally expressive.
The [HGV]{}Language {#sect:hgv}
===================
This section describes our session-typed language [HGV]{}, contrasting it with Gay and Vasconcelos’s functional language for asynchronous session types [@GayVasconcelos10], which we call [LAST]{}, and Wadler’s GV [@Wadler12]. In designing [HGV]{}, we have opted for programming convenience over uniformity, while insisting on a tight correspondence with linear logic. The session types of [HGV]{}are given by the following grammar:
$$S ::= {\mathord{!}{T}.{S}} \mid {\mathord{?}{T}.{S}} \mid {\oplus {{\{ {l}_i:S_i \}}_{i}}} \mid {\binampersand {{\{ {l}_i:S_i \}}_{i}}} \mid {{\mathsf}{end}_!} \mid {{\mathsf}{end}_?} \mid X \mid {\overline{X}} \mid {\mathord{!}[{X}].{S}} \mid {\mathord{?}[{X}].{S}} \mid {\sharp {S}} \mid {\flat {S}}$$
Types for input (${\mathord{?}{T}.{S}}$), output (${\mathord{!}{T}.{S}}$), selection (${\oplus {{\{ {l}_i:S_i \}}_{i}}}$) and choice (${\binampersand {{\{ {l}_i:S_i \}}_{i}}}$) are standard. Like GV, but unlike [LAST]{}, we distinguish output (${{\mathsf}{end}_!}$) and input (${{\mathsf}{end}_?}$) session ends; this matches the situation in linear logic, where there is no conveniently self-dual proposition to represent the end of a session. Variables and their duals ($X,{\overline{X}}$) and type input (${\mathord{?}[{X}].{S}}$) and output (${\mathord{!}[{X}].{S}}$), permit definition of polymorphic sessions. We include a notion of replicated sessions, corresponding to exponentials in linear logic: a channel of type ${\sharp {S}}$ is a “service”, providing any number of channels of type $S$; a channel of type ${\flat {S}}$ is the “server” providing such a service. Each session type $S$ has a dual ${\overline{S}}$ (with the obvious dual for variables $X$): $${{\begin{array}}{@{}c@{}}}{\overline{{\mathord{!}{T}.{S}}}} = {\mathord{?}{T}.{{\overline{S}}}}
\qquad
{\overline{{\oplus {{\{ {l}_i:S_i \}}_{i}}}}} = {\binampersand {{\{ {l}_i:{\overline{S_i}} \}}_{i}}}
\qquad
{\overline{{{\mathsf}{end}_!}}} = {{\mathsf}{end}_?}\qquad
{\overline{{\mathord{!}[{X}].{S}}}} = {\mathord{?}[{X}].{{\overline{S}}}}
\qquad
{\overline{{\sharp {S}}}} = {\flat {{\overline{S}}}}
\\
{\overline{{\mathord{?}{T}.{S}}}} = {\mathord{!}{T}.{{\overline{S}}}}
\qquad
{\overline{{\binampersand {{\{ {l}_i:S_i \}}_{i}}}}} = {\oplus {{\{ {l}_i:{\overline{S_i}} \}}_{i}}}
\qquad
{\overline{{{\mathsf}{end}_?}}} = {{\mathsf}{end}_!}\qquad
{\overline{{\mathord{?}[{X}].{S}}}} = {\mathord{!}[{X}].{{\overline{S}}}}
\qquad
{\overline{{\flat {S}}}} = {\sharp {{\overline{S}}}}
\\
{{\end{array}}}$$ Note that dualisation leaves input and output types unchanged. In addition to sessions, [HGV]{}’s types include linear pairs, and linear and unlimited functions: $$T,U,V ::= S \mid {{T} \otimes {U}} \mid {{T} {\multimap}{U}} \mid {{T} \to {U}}$$ Every type $T$ is either linear (${\mathit{lin}(T)}$) or unlimited (${\mathit{un}(T)}$); the only unlimited types are services (${\mathit{un}({\sharp {S}})}$), unlimited functions (${\mathit{un}({{T} \to {U}})}$), and end input session types (${\mathit{un}({{\mathsf}{end}_?})}$). In GV, ${{\mathsf}{end}_?}$ is linear. We choose to make it unlimited in [HGV]{}because then we can dispense with GV’s explicit ${\mathsf}{terminate}$ construct while maintaining a strong correspondence with CP—${{\mathsf}{end}_?}$ corresponds to ${\bot}$ in CP, for which weakening and contraction are derivable.
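For example, unfolding the duality equations above on a concrete type gives $\overline{{\mathord{!}{T}.{\mathord{?}{U}.{{\mathsf}{end}_!}}}} = {\mathord{?}{T}.{\mathord{!}{U}.{{\mathsf}{end}_?}}}$: every action flips between output and input, while the payload types $T$ and $U$ are left untouched.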
(Figure \[fig:hgv-typing\]: terms and typing rules for [HGV]{}, grouped into structural rules, lambda rules, and session rules.)
Figure \[fig:hgv-typing\] gives the terms and typing rules for [HGV]{}; the first block contains the structural rules, the second contains the (standard) rules for lambda terms, and the third contains the session-typed fragment. The ${\mathsf}{fork}$ construct provides session initiation, filling the role of GV’s ${\mathsf}{with}\dots{\mathsf}{connect}\dots{\mathsf}{to}\dots$ structure, but without the asymmetry of the latter. The two are interdefinable, as follows: $${{\mathsf}{fork}~{x}.{M}} \equiv {{\mathsf}{with}~{x}~{\mathsf}{connect}~{M}~{\mathsf}{to}~{x}} \qquad {{\mathsf}{with}~{x}~{\mathsf}{connect}~{M}~{\mathsf}{to}~{N}} \equiv {{\mathsf}{let}~{x}={{{\mathsf}{fork}~{x}.{M}}}~{\mathsf}{in}~{N}}$$ We add a construct ${{\mathsf}{link}~{M}~{N}}$ to implement channel forwarding; this form is provided in neither GV nor [LAST]{}, but is necessary to match the expressive power of CP. (Note that while we could define session forwarding in GV or LAST for any particular session type, it is not possible to do so in a generic fashion.) We add terms ${{\mathsf}{sendType}~{S}~{M}}$ and ${{\mathsf}{receiveType}~{X}.{M}}$ to provide session polymorphism, and ${{\mathsf}{serve}~{x}.{M}}$ and ${{\mathsf}{request}~{M}}$ for replicated sessions. Note that, as the body $M$ of ${{\mathsf}{serve}~{x}.{M}}$ may be arbitrarily replicated, it can only refer to the unlimited portion of the environment. Channels of type ${\sharp {S}}$ offer arbitrarily many sessions of type $S$; correspondingly, channels of type ${\flat {S}}$ must consume arbitrarily many $S$ sessions. The rule for ${{\mathsf}{serve}~{x}.{M}}$ parallels that for ${\mathsf}{fork}$: it defines the server (which replicates $M$) and returns the channel by which it may be used (of type ${\overline{{\flat {S}}}} =
{\sharp {{\overline{S}}}}$). As a consequence, there is no rule involving type ${\flat {S}}$. We experimented with having such a rule, but found that it was always used immediately inside a ${\mathsf}{fork}$, while providing no extra expressive power. Hence we opted for the rule presented here.
From [HGV]{}to [HGV$\pi$]{} {#sect:hgv-to-hgvpi}
===========================
The language [HGV$\pi$]{}is the restriction of [HGV]{}to session types, that is, [HGV]{}without ${\multimap}$, $\to$, or $\otimes$. In order to avoid $\otimes$, we disallow plain ${{\mathsf}{receive}~{M}}$, but do permit it to be fused with a pair elimination ${{{\mathsf}{let}~{{({x},{y})}}={{{\mathsf}{receive}~{M}}}~{\mathsf}{in}~{N}}}$. We can simulate all non-session types as session types via a translation from [HGV]{}to [HGV$\pi$]{}. The translation on types is given by the homomorphic extension of the following equations: $${({{{T} {\multimap}{U}}})^\star} = {\mathord{!}{{({T})^\star}}.{{({U})^\star}}} \qquad
{({{{T} \to {U}}})^\star} = {\sharp {({\mathord{!}{{({T})^\star}}.{{({U})^\star}}})}} \qquad
{({{{T} \otimes {U}}})^\star} = {\mathord{?}{{({T})^\star}}.{{({U})^\star}}}$$Each target type is the *interface* to the simulated source type. A linear function is simulated by input on a channel; its interface is output on the other end of the channel. An unlimited function is simulated by a server; its interface is the service on the other end of that channel. A tensor is simulated by output on a channel; its interface is input on the other end of that channel. This duality between implementation and interface explains the flipping of types in Wadler’s original CPS translation from GV to CP. The translation on terms is given by the homomorphic extension of the following equations:
$${\begin{array}}{@{}r@{~}c@{~}l@{}}
{({{\lambda {x}.{M}}})^\star} &=& {{\mathsf}{fork}~{z}.{{{\mathsf}{let}~{{({x},{z})}}={{{\mathsf}{receive}~{z}}}~{\mathsf}{in}~{{{\mathsf}{link}~{{({M})^\star}}~{z}}}}}} \\
{({{{L}~{M}}})^\star} &=& {{\mathsf}{send}~{{({M})^\star}}~{{({L})^\star}}} \\
{({{({M},{N})}})^\star} &=& {{\mathsf}{fork}~{z}.{{{\mathsf}{link}~{({{\mathsf}{send}~{{({M})^\star}}~{z}})}~{{({N})^\star}}}}} \\
{({{{\mathsf}{let}~{{({x},{y})}}={M}~{\mathsf}{in}~{N}}})^\star} &=& {{\mathsf}{let}~{{({x},{y})}}={{{\mathsf}{receive}~{{({M})^\star}}}}~{\mathsf}{in}~{{({N})^\star}}} \\
{({L : {{T} \to {U}}})^\star} &=& {{\mathsf}{serve}~{z}.{{{\mathsf}{link}~{{({L})^\star}}~{z}}}} \\
{({L : {{T} {\multimap}{U}}})^\star} &=& {{\mathsf}{request}~{{({L})^\star}}} \\
{({{{\mathsf}{receive}~{M}}})^\star} &=& {({M})^\star}
{\end{array}}$$
Formally, this is a translation on derivations. We write type annotations to indicate $\to$ introduction and elimination. For all other cases, it is unambiguous to give the translation on plain term syntax. Each introduction form translates to an interface ${{\mathsf}{fork}~{z}.{M}}$ of type ${\overline{S}}$, where $M : {{\mathsf}{end}_!}$ provides the implementation, with $z : S$ bound in $M$. We can extend the translation on types to a translation on contexts:
$${({x_1:T_1, \ldots, x_n:T_n})^\star} = x_1:{({T_1})^\star}, \ldots, x_n:{({T_n})^\star}$$
It is straightforward to verify that our translation preserves typing.
If ${{\Phi} \vdash {M} : {T}}$ then ${{{({\Phi})^\star}} \vdash {{({M})^\star}} : {{({T})^\star}}}$.
From [HGV$\pi$]{}to CP {#sect:hgvpi-to-cp}
======================
We present the typing rules of CP in Figure \[fig:cp-typing\]. Note that the propositions of CP are exactly those of classical linear logic, as are the cut rules (if we ignore the terms). Thus, CP enjoys all of the standard meta theoretic properties of classical linear logic, including confluence and weak normalisation. A minor syntactic difference between our presentation and Wadler’s is that our sum ($\oplus$) and choice ($\binampersand$) types are $n$-ary, matching the corresponding session types in [HGV]{}, whereas he presents binary and nullary versions of sum and choice. Duality on CP types (${{(-)}^\bot}$) is standard: $${{\begin{array}}{@{}c@{}}}{{({{A} \otimes {B}})}^\bot} \!=\! {{{{A}^\bot}} \mathbin{\bindnasrepma} {{{B}^\bot}}}
~~
{{({\oplus {{\{ {l}_i:A_i \}}_{i}}})}^\bot} \!=\! {\binampersand {{\{ {l}_i:{{A_i}^\bot} \}}_{i}}}
~~
{{{1}}^\bot} \!=\! {\bot}~~
{{({\exists {X}.{B}})}^\bot} \!=\! {\forall {X}.{{{B}^\bot}}}
~~
{{({\mathord{!}{A}})}^\bot} \!=\! {\mathord{?}{{{A}^\bot}}}
\\
{{({{A} \mathbin{\bindnasrepma} {B}})}^\bot} \!=\! {{{{A}^\bot}} \otimes {{{B}^\bot}}}
~~
{{({\binampersand {{\{ {l}_i:A_i \}}_{i}}})}^\bot} \!=\! {\oplus {{\{ {l}_i:{{A_i}^\bot} \}}_{i}}}
~~
{{{\bot}}^\bot} \!=\! {1}~~
{{({\forall {X}.{B}})}^\bot} \!=\! {\exists {X}.{{{B}^\bot}}}
~~
{{({\mathord{?}{A}})}^\bot} \!=\! {\mathord{!}{{{A}^\bot}}}
\\
{{\end{array}}}$$
The semantics of CP terms follows the cut elimination rules in classical linear logic. We interpret the cut relation ${\longrightarrow}$ modulo $\alpha$-equivalence and structural cut equivalence:
$${\begin{array}}{@{}r@{~}c@{~}l@{}}
{{x} \leftrightarrow {y}} &\equiv& {{y} \leftrightarrow {x}} \\
{\nu {x}.({P} \mid {Q})} &\equiv& {\nu {x}.({Q} \mid {P})} \\
{\nu {x}.({{\nu {y}.({P} \mid {Q})}} \mid {R})} &\equiv& {\nu {y}.({P} \mid {{\nu {x}.({Q} \mid {R})}})}
{\end{array}}$$
The principal cut elimination rules correspond to communication between processes.
$${\begin{array}}{@{}r@{~}c@{~}l@{}}
{\nu {x}.({{{w} \leftrightarrow {x}}} \mid {P})} &{\longrightarrow}& P[{w}/{x}] \\
{\nu {x}.({{{x}[{y}].({P} \mid {Q})}} \mid {{{x}({y}).{R}}})} &{\longrightarrow}& {\nu {y}.({P} \mid {{\nu {x}.({Q} \mid {R})}})} \\
{\nu {x}.({{{x}[{{l}_j}].{P}}} \mid {{{x}.{\mathsf}{case}~{{{\{ {l}_i.Q_i \}}_{i}}}}})} &{\longrightarrow}& {\nu {x}.({P} \mid {Q_j})} \\
{\nu {x}.({{!{x}({y}).{P}}} \mid {{?{x}[{y}].{Q}}})} &{\longrightarrow}& {\nu {y}.({P} \mid {Q})} \\
{\nu {x}.({{!{x}({y}).{P}}} \mid {Q})} &{\longrightarrow}& Q, \quad x \notin \mathit{fv}({Q}) \\
{\nu {x}.({{!{x}({y}).{P}}} \mid {Q})} &{\longrightarrow}& {\nu {x}.({{!{x}({y}).{P}}} \mid {{\nu {x'}.({{!{x'}({y}).{P}}} \mid {Q[{x'}/{x}]})}})} \\
{\nu {x}.({{{x}[{A}].{P}}} \mid {{{x}({X}).{Q}}})} &{\longrightarrow}& {\nu {x}.({P} \mid {Q[{A}/{X}]})} \\
{\nu {x}.({{{x}[].0}} \mid {{{x}().{P}}})} &{\longrightarrow}& P
{\end{array}}$$
Finally, we provide commuting conversions, moving communication under unrelated prefixes.
$${\begin{array}}{@{}r@{~}c@{~}l@{}}
{\nu {z}.({{{x}[{y}].({P} \mid {Q})}} \mid {R})} &{\longrightarrow}& {{x}[{y}].({{\nu {z}.({P} \mid {R})}} \mid {Q})}, \quad z \in \mathit{fv}({P}) \\
{\nu {z}.({{{x}[{y}].({P} \mid {Q})}} \mid {R})} &{\longrightarrow}& {{x}[{y}].({P} \mid {{\nu {z}.({Q} \mid {R})}})}, \quad z \in \mathit{fv}({Q}) \\
{\nu {z}.({{{x}({y}).{P}}} \mid {Q})} &{\longrightarrow}& {{x}({y}).{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{{x}[{{l}}].{P}}} \mid {Q})} &{\longrightarrow}& {{x}[{{l}}].{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{{x}.{\mathsf}{case}~{{{\{ {l}_i.Q_i \}}_{i}}}}} \mid {R})} &{\longrightarrow}& {{x}.{\mathsf}{case}~{{{\{ {l}_i.{\nu {z}.({Q_i} \mid {R})} \}}_{i}}}} \\
{\nu {z}.({{!{x}({y}).{P}}} \mid {Q})} &{\longrightarrow}& {!{x}({y}).{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{?{x}[{y}].{P}}} \mid {Q})} &{\longrightarrow}& {?{x}[{y}].{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{{x}[{A}].{P}}} \mid {Q})} &{\longrightarrow}& {{x}[{A}].{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{{x}({X}).{P}}} \mid {Q})} &{\longrightarrow}& {{x}({X}).{{\nu {z}.({P} \mid {Q})}}} \\
{\nu {z}.({{{x}().{P}}} \mid {Q})} &{\longrightarrow}& {{x}().{{\nu {z}.({P} \mid {Q})}}}
{\end{array}}$$
A fuller account of CP can be found in Wadler’s work [@Wadler12].
We now give a translation from [HGV$\pi$]{}to CP. Post composing this with the embedding of [HGV]{}in [HGV$\pi$]{}yields a semantics for [HGV]{}. The translation on session types is as follows: $${\begin{array}}{@{}c@{\qquad}c@{\qquad}c@{\qquad}c@{}}
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llbracket{{\mathord{!}{T}.{S}}}\rrbracket} &=& {{{{{\llbracket{T}\rrbracket}}^\bot}} \otimes {{\llbracket{S}\rrbracket}}} \\
{\llbracket{{\mathord{?}{T}.{S}}}\rrbracket} &=& {{{\llbracket{T}\rrbracket}} \mathbin{\bindnasrepma} {{\llbracket{S}\rrbracket}}} \\
{\llbracket{{{\mathsf}{end}_!}}\rrbracket} &=& {1}\\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llbracket{{\oplus {{\{ {l}_i:S_i \}}_{i}}}}\rrbracket} &=& {\oplus {{\{ {l}_i:{\llbracket{S_i}\rrbracket} \}}_{i}}} \\
{\llbracket{{\binampersand {{\{ {l}_i:S_i \}}_{i}}}}\rrbracket} &=& {\binampersand {{\{ {l}_i:{\llbracket{S_i}\rrbracket} \}}_{i}}} \\
{\llbracket{{{\mathsf}{end}_?}}\rrbracket} &=& {\bot}\\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llbracket{{\flat {S}}}\rrbracket} &=& {\mathord{!}{{\llbracket{S}\rrbracket}}} \\
{\llbracket{{\sharp {S}}}\rrbracket} &=& {\mathord{?}{{\llbracket{S}\rrbracket}}} \\
{\llbracket{X}\rrbracket} &=& X \\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llbracket{{\mathord{!}[{X}].{S}}}\rrbracket} &=& {\exists {X}.{{\llbracket{S}\rrbracket}}} \\
{\llbracket{{\mathord{?}[{X}].{S}}}\rrbracket} &=& {\forall {X}.{{\llbracket{S}\rrbracket}}} \\
{\llbracket{{\overline{X}}}\rrbracket} &=& {{X}^\bot} \\
{\end{array}}{\end{array}}$$ The translation is homomorphic except for output, where the output type is dualised. This accounts for the discrepancy between ${\overline{{\mathord{!}{T}.{S}}}} = {\mathord{?}{T}.{{\overline{S}}}}$ and ${{({{A} \otimes {B}})}^\bot} = {{{{A}^\bot}} \mathbin{\bindnasrepma} {{{B}^\bot}}}$.
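In particular, a straightforward induction over session types shows that ${\llbracket{{\overline{S}}}\rrbracket} = {{{\llbracket{S}\rrbracket}}^\bot}$ for every session type $S$; dualising the payload in the output case is precisely what makes the translation commute with duality.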
The translation on terms is formally specified as a CPS translation on derivations as in Wadler’s presentation. We provide the full translations of weakening and contraction for ${{\mathsf}{end}_?}$, as these steps are implicit in the syntax of [HGV]{}terms. The other constructs depend only on the immediate syntactic structure, so we abbreviate their translations as mappings on plain terms:
z &=&\
z &=&\
z &=& z\
z &=& [.([[[x]{}\[[y]{}\].([y]{} )]{}]{} )]{}\
z &=& [.([y]{} )]{}\
z &=& [.([x]{} )]{}\
z &=& [.([x]{} )]{}\
z &=& [.([[.([y]{} )]{}]{} )]{}\
z &=& [[z]{}().[[.([x]{} )]{}]{}]{}\
z &=& [.([x]{} )]{}\
z &=& [.([x]{} )]{}\
z &=& [![z]{}([y]{}).[[.([x]{} )]{}]{}]{}\
z &=& [.([x]{} )]{}\
Channel $z$ provides a continuation, consuming the output of the process representing the original [HGV$\pi$]{}term. The translation on contexts is pointwise.
$${\llbracket{x_1:T_1, \ldots, x_n:T_n}\rrbracket} = x_1:{\llbracket{T_1}\rrbracket}, \ldots, x_n:{\llbracket{T_n}\rrbracket}$$
As with the translation from [HGV]{}to [HGV$\pi$]{}, we can show that this translation preserves typing.
If ${{\Phi} \vdash {M} : {S}}$ then ${{{\llbracket{M}\rrbracket}z} \vdash {{\llbracket{\Phi}\rrbracket},z:{{{\llbracket{S}\rrbracket}}^\bot}}}$.
From CP to [HGV$\pi$]{} {#sect:cp-to-hgvpi}
=======================
We now present the translation ${\llparenthesis{-}\rrparenthesis}$ from CP to [HGV$\pi$]{}. The translation on types is as follows: $${\begin{array}}{@{}c@{\qquad}c@{\qquad}c@{\qquad}c@{}}
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llparenthesis{{{A} \otimes {B}}}\rrparenthesis} &=& {\mathord{!}{{\overline{{\llparenthesis{A}\rrparenthesis}}}}.{{\llparenthesis{B}\rrparenthesis}}} \\
{\llparenthesis{{{A} \mathbin{\bindnasrepma} {B}}}\rrparenthesis} &=& {\mathord{?}{{\llparenthesis{A}\rrparenthesis}}.{{\llparenthesis{B}\rrparenthesis}}} \\
{\llparenthesis{{1}}\rrparenthesis} &=& {{\mathsf}{end}_!}\\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llparenthesis{{\oplus {{\{ {l}_i:A_i \}}_{i}}}}\rrparenthesis} &=& {\oplus {{\{ {l}_i:{\llparenthesis{A_i}\rrparenthesis} \}}_{i}}} \\
{\llparenthesis{{\binampersand {{\{ {l}_i:A_i \}}_{i}}}}\rrparenthesis} &=& {\binampersand {{\{ {l}_i:{\llparenthesis{A_i}\rrparenthesis} \}}_{i}}} \\
{\llparenthesis{{\bot}}\rrparenthesis} &=& {{\mathsf}{end}_?}\\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llparenthesis{{\exists {X}.{A}}}\rrparenthesis} &=& {\mathord{!}[{X}].{{\llparenthesis{A}\rrparenthesis}}} \\
{\llparenthesis{{\forall {X}.{A}}}\rrparenthesis} &=& {\mathord{?}[{X}].{{\llparenthesis{A}\rrparenthesis}}} \\
{\llparenthesis{X}\rrparenthesis} &=& X \\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llparenthesis{{\mathord{?}{A}}}\rrparenthesis} &=& {\sharp {{\llparenthesis{A}\rrparenthesis}}} \\
{\llparenthesis{{\mathord{!}{A}}}\rrparenthesis} &=& {\flat {{\llparenthesis{A}\rrparenthesis}}} \\
{\llparenthesis{{{X}^\bot}}\rrparenthesis} &=& {\overline{X}} \\
{\end{array}}{\end{array}}$$ The translation on terms makes use of ${\mathsf}{let}$ expressions to simplify the presentation; these are expanded to [HGV$\pi$]{}as follows: $${{\mathsf}{let}~{x}={M}~{\mathsf}{in}~{N}} \equiv {({(\lambda x.N) M})^\star} \equiv {{\mathsf}{send}~{M}~{({{\mathsf}{fork}~{z}.{{{{\mathsf}{let}~{{({x},{z})}}={{{\mathsf}{receive}~{z}}}~{\mathsf}{in}~{{{\mathsf}{link}~{N}~{z}}}}}}})}}.$$ $${\begin{array}}{@{}r@{~}c@{~}l@{}}
{\llparenthesis{{{x}[{y}].({P} \mid {Q})}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{send}~{({{\mathsf}{fork}~{y}.{{\llparenthesis{P}\rrparenthesis}}})}~{x}}}~{\mathsf}{in}~{{\llparenthesis{Q}\rrparenthesis}}} \\
{\llparenthesis{{{x}({y}).{P}}}\rrparenthesis} &=&
{{\mathsf}{let}~{{({y},{x})}}={{{\mathsf}{receive}~{x}}}~{\mathsf}{in}~{{\llparenthesis{P}\rrparenthesis}}} \\
{\llparenthesis{{{x}[{{l}}].{P}}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{select}~{{l}}~{x}}}~{\mathsf}{in}~{{\llparenthesis{P}\rrparenthesis}}} \\
{\llparenthesis{{{x}.{\mathsf}{case}~{{{\{ {l}_i.P_i \}}_{i}}}}}\rrparenthesis} &=&
{{\mathsf}{case}~{x}~{\mathsf}{of}~{{{\{ {l}_i(x).{\llparenthesis{P_i}\rrparenthesis} \}}_{i}}}} \\
{\llparenthesis{{{x}[].0}}\rrparenthesis} &=& x \\
{\llparenthesis{{{x}().{P}}}\rrparenthesis} &=& {\llparenthesis{P}\rrparenthesis} \\
{\llparenthesis{{\nu {x}.({P} \mid {Q})}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{fork}~{x}.{{\llparenthesis{P}\rrparenthesis}}}}~{\mathsf}{in}~{{\llparenthesis{Q}\rrparenthesis}}} \\
{\llparenthesis{{{x} \leftrightarrow {y}}}\rrparenthesis} &=& {{\mathsf}{link}~{x}~{y}} \\
{\llparenthesis{{{x}[{A}].{P}}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{sendType}~{{\llparenthesis{A}\rrparenthesis}}~{x}}}~{\mathsf}{in}~{{\llparenthesis{P}\rrparenthesis}}} \\
{\llparenthesis{{{x}({X}).{P}}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{receiveType}~{X}.{x}}}~{\mathsf}{in}~{{\llparenthesis{P}\rrparenthesis}}} \\
{\llparenthesis{{!{s}({x}).{P}}}\rrparenthesis} &=&
{{\mathsf}{link}~{s}~{({{\mathsf}{serve}~{x}.{{\llparenthesis{P}\rrparenthesis}}})}} \\
{\llparenthesis{{?{s}[{x}].{P}}}\rrparenthesis} &=&
{{\mathsf}{let}~{x}={{{\mathsf}{request}~{s}}}~{\mathsf}{in}~{{\llparenthesis{P}\rrparenthesis}}} \\
{\end{array}}$$ Again, we can extend the translation on types to a translation on contexts, and show that the translation preserves typing.
If ${{P} \vdash {{\Gamma}}}$ then ${{{\llparenthesis{{\Gamma}}\rrparenthesis}} \vdash {{\llparenthesis{P}\rrparenthesis}} : {{{\mathsf}{end}_!}}}$.
Correctness {#sect:correctness}
===========
If we extend ${\llbracket{-}\rrbracket}$ to non-session types, as in Wadler’s original presentation (Figure \[fig:hgvcp-ext\]), then it is straightforward to show that this monolithic translation factors through ${({-})^\star}$.
\[th:factor\] ${\llbracket{{({M})^\star}}\rrbracket}z {\longrightarrow}^* {\llbracket{M}\rrbracket}z$ (where ${\longrightarrow}^*$ is the reflexive transitive closure of $\equiv{\longrightarrow}\equiv$).
The key soundness property of our translations is that if we translate a term from CP to [HGV$\pi$]{}and back, then we obtain a term equivalent to the one we started with.
\[th:soundness\] If ${{P} \vdash {{\Gamma}}}$ then ${\nu {z}.({{{z}[].0}} \mid {{\llbracket{{\llparenthesis{P}\rrparenthesis}}\rrbracket}z})} {\longrightarrow}^* P$.
Together, Theorem \[th:factor\] and \[th:soundness\] tell us that [HGV]{}, [HGV$\pi$]{}, and CP are equally expressive, in the sense that every $X$ program can always be translated to an equivalent $Y$ program, where $X,Y \in \{$[HGV]{}, [HGV$\pi$]{}, CP$\}$.
Here our notion of expressivity is agnostic to the nature of the translations. It is instructive also to consider Felleisen’s more refined notion of expressivity [@Felleisen91]. Both ${({-})^\star}$ and ${\llparenthesis{-}\rrparenthesis}$ are local translations, thus both [HGV]{}and CP are *macro-expressible* [@Felleisen91] in [HGV$\pi$]{}. However, the need for a global CPS translation from [HGV$\pi$]{}to CP illustrates that [HGV$\pi$]{}is not macro-expressible in CP; hence [HGV$\pi$]{}is more expressive, in the Felleisen sense, than CP.
$${\begin{array}}{@{}cc@{}}
{\begin{array}}{@{}r@{~}c@{~}l@{}}
\multicolumn{3}{@{}l@{}}{\textbf{Types}} \\
\multicolumn{3}{@{}l@{}}{{\llbracket{T}\rrbracket} = {{{\llceil{T}\rrceil}}^\bot}, T \text{ not a session type}} \\
\multicolumn{3}{@{}l@{}}{\text{where}} \\
\quad{\llceil{{{T} {\multimap}{U}}}\rrceil} &=& {{{{{\llceil{T}\rrceil}}^\bot}} \mathbin{\bindnasrepma} {{\llceil{U}\rrceil}}} \\
\quad{\llceil{{{T} \to {U}}}\rrceil} &=& {\mathord{!}{({{{{{\llceil{T}\rrceil}}^\bot}} \mathbin{\bindnasrepma} {{\llceil{U}\rrceil}}})}} \\
\quad{\llceil{{{T} \otimes {U}}}\rrceil} &=& {{{\llceil{T}\rrceil}} \otimes {{\llceil{U}\rrceil}}} \\
\quad{\llceil{S}\rrceil} &=& {\llbracket{S}\rrbracket} \\
{\end{array}}&
{\begin{array}}{@{}r@{~}c@{~}l@{}}
\multicolumn{3}{@{}l@{}}{\textbf{Terms}} \\
{\llbracket{{\lambda {x}.{N}}}\rrbracket}z &=& {{z}({x}).{{\llbracket{N}\rrbracket}z}} \\
{\llbracket{{{L}~{M}}}\rrbracket}z &=& {\nu {y}.({{\llbracket{L}\rrbracket}y} \mid {{{y}[{x}].({{\llbracket{M}\rrbracket}x} \mid {{{y} \leftrightarrow {z}}})}})} \\
{\llbracket{L : {{T} \to {U}}}\rrbracket}z &=& {!{z}({y}).{{\llbracket{L}\rrbracket}y}} \\
{\llbracket{L : {{T} {\multimap}{U}}}\rrbracket}z &=& {\nu {y}.({{\llbracket{L}\rrbracket}y} \mid {{?{y}[{x}].{{{x} \leftrightarrow {z}}}}})} \\
{\llbracket{{({M},{N})}}\rrbracket}z &=& {{z}[{y}].({{\llbracket{M}\rrbracket}y} \mid {{\llbracket{N}\rrbracket}z})} \\
{\llbracket{{{\mathsf}{let}~{{({x},{y})}}={M}~{\mathsf}{in}~{N}}}\rrbracket}z
&=& {\nu {y}.({{\llbracket{M}\rrbracket}y} \mid {{{y}({x}).{{\llbracket{N}\rrbracket}z}}})} \\
{\end{array}}{\end{array}}$$
Conclusions and Future Work {#sect:conclusion}
===========================
We have proposed a session-typed functional language, [HGV]{}, building on similar languages of Wadler [@Wadler12] and of Gay and Vasconcelos [@GayVasconcelos10]. We have shown that [HGV]{}is sufficient to encode arbitrary linear logic proofs, completing the correspondence between linear logic and session types. We have also given an embedding of all of [HGV]{}into its session-typed fragment, simplifying translation from [HGV]{}to CP.
Dardha et al. [@DardhaGS12] offer an alternative foundation for session types through a CPS translation of $\pi$-calculus with session types into a linear $\pi$-calculus. There appear to be strong similarities between their CPS translation and ours. We would like to make the correspondence precise by studying translations between their systems and ours.
In addition we highlight several other areas of future work. First, the semantics of [HGV]{}is given only by cut elimination in CP. We would like to give [HGV]{}a semantics directly, in terms of reductions of configurations of processes, and then prove a formal correspondence with cut elimination in CP. Second, replication has limited expressive power compared to recursion; in particular, it cannot express services whose behaviour changes over time or in response to client requests. We believe that the study of fixed points in linear logic provides a mechanism to support more expressive recursive behaviour without sacrificing the logical interpretation of [HGV]{}. Finally, as classical linear logic proofs, and hence CP processes, enjoy confluence, [HGV]{}programs are deterministic. We hope to identify natural extensions of [HGV]{}that give rise to non-determinism, and thus allow programs to exhibit more interesting concurrent behaviour, while preserving the underlying connection to linear logic.
#### Acknowledgements
We would like to thank Philip Wadler for his suggestions on the direction of this work, and for his helpful feedback on the results. This work was funded by EPSRC grant number EP/K034413/1.
---
abstract: 'This review describes the multiboson algorithm for Monte Carlo simulations of lattice QCD, including its static and dynamical aspects, and presents a comparison with Hybrid Monte Carlo.'
address: 'ETH, CH-8092 Zürich, Switzerland'
author:
- Philippe de Forcrand
title: The MultiBoson method
---
Monte Carlo, fermions, algorithms
---
abstract: 'We present a scaling analysis of electronic and transport properties of metal-semiconducting carbon nanotube interfaces as a function of the nanotube length within the coherent transport regime, which takes fully into account atomic-scale electronic structure and three-dimensional electrostatics of the metal-nanotube interface using a real-space Green’s function based self-consistent tight-binding theory. As the first example, we examine devices formed by attaching finite-size single-wall carbon nanotubes (SWNT) to both high- and low- work function metallic electrodes through the dangling bonds at the end, where the length of the SWNT molecule varies from the molecular limit to the bulk limit and the strength of metal-SWNT coupling varies from the strong coupling to the weak coupling limit. We analyze the nature of Schottky barrier formation at the metal-nanotube interface by examining the electrostatics, the band lineup and the conductance of the metal-SWNT molecule-metal junction as a function of the SWNT molecule length and metal-SWNT coupling strength. We show that the confined cylindrical geometry and the atomistic nature of electronic processes across the metal-SWNT interface leads to a different physical picture of band alignment from that of the planar metal-semiconductor interface. We analyze the temperature and length dependence of the conductance of the SWNT junctions, which shows a transition from tunneling- to thermal activation-dominated transport with increasing nanotube length. The temperature dependence of the conductance is much weaker than that of the planar metal-semiconductor interface due to the finite number of conduction channels within the SWNT junctions. We find that the current-voltage characteristics of the metal-SWNT molecule-metal junctions are sensitive to models of the potential response to the applied source/drain bias voltages. Our analysis applies in general to devices based on quasi-one-dimensional nanostructures including molecules, carbon nanotubes and semiconductor nanowires.'
author:
- 'Yongqiang Xue$^{*}$'
- 'Mark A. Ratner'
title: 'Scaling analysis of electron transport through metal-semiconducting carbon nanotube interfaces: Evolution from the molecular limit to the bulk limit'
---
Introduction
============
It is interesting to note that all the semiconductor devices that have had a sustaining impact on integrated microelectronics were invented before 1974, [@Sze1] the year when Chang, Esaki and Tsu reported the first observation of negative differential resistance (NDR) in semiconductor heterojunction resonant-tunneling diodes (RTD). [@RTD] The operation of such semiconductor devices relies on the (controlled) presence of imperfections in otherwise perfect crystals, [@Shockley] through doping or through interfaces between materials with different electronic and/or lattice structures. Doping introduces electronic impurities (electrons/holes) into the otherwise perfect band structure through introducing atomic impuries (dopants) into the otherwise perfect lattice structure. [@E-H] The presence of interfaces, on the other hand, induces spatial charge and potential inhomogeneities which control the injection and modulate the motion of excess charge carriers within the device. A number of fundamental buiding blocks of microelectronics can therefore be identified according to the interface structures that control the device operation, [@Sze1; @Sze2] including metal-semiconductor (MS) interfaces, semiconductor homo-(p-n) junctions, semiconductor heterojunctions and metal-insulator-semiconductor (MIS) interfaces. [@Sze1; @Sze2] Despite the continuous shrinking of feature size and correspondingly the increasing importance of hot-carrier and quantum mechanical effects, [@HotQu] the design and operation of semiconductor transistors have followed remarkably well the scaling rules for device miniaturization [@Scaling] derived from the semi-classical semiconductor transport equations. [@DD; @SeBook] There are also theoretical arguments that support the use of semiclassical pictures even in high-field transport [@WilkSemi] and nanoscale ballistic silicon transistors. [@Lundstrom]
The discovery of single-wall carbon nanotubes (SWNTs) in the early 1990s [@Iijima] has led to intense world-wide activity exploring their electrical properties and potential applications in nanoelectronic devices. [@Dekker99; @Dress98; @NTDevice] SWNTs are nanometer-diameter all-carbon cylinders with unique structure-property relations: They consist of a single graphene sheet wrapped up to form a tube and their physical properties are controlled by the boundary conditions imposed on the wrapping directions. They provide ideal artifical laboratories for studying transport on the length scale ranging from the molecular limit as all-carbon cylindrical molecules to the bulk limit as quasi-one-dimensional quantum wires with the same lattice configuration and local bonding environment. [@Dekker99; @Dress98; @NTDevice] Many device concepts well known in conventional semiconductor microelectronics have been successfully demonstrated on a single-tube basis, ranging from intramolecular homo(hetero)-junctions, modulation doping to field-effect transitors. [@DekkerNT; @AvNT; @McEuNT; @DaiNT; @JohnNT] This prompts interest in knowing if the physical mechanisms underlying the operation of conventional microelectronic devices remain valid down to such ultra-small scales. Research on SWNT-based nanoelectronic devices therefore presents unique opportunities both for exploring novel device technology functioning at the nano/molecular-scale and for re-examining the physical principles of semiconductor microelectronics from the bottom-up atomistic approach. In addition, the concepts and techniques developed can be readily generalized to investigate other quasi-one-dimensional nanostructures, in particular semiconductor nanowires. [@Wire]
Among the device physics problems arising in this context, the nature of electron transport through a metal-semiconducting SWNT interface [@Xue99NT; @TersoffNT; @Odin00; @De02] stands out due to its simplicity and its role as one of the basic device building blocks. [@Sze1; @Sze2; @MSBook] As the device building block, it is also crucial for understanding the mechanisms and guiding the design of SWNT-based electrochemical sensors, [@NTCH] electromechanical devices, [@NTME] and field-effect transistors (NTFET), [@AvFET; @DaiFET; @McEuFET] where electron transport through the metal-SWNT-metal junction is modulated through molecular adsorption, mechanical strain and electrostatic gate field respectively. Note that in the case of NTFET, metals have been used as the source, drain and gate electrodes, in contrast to silicon-based transistors which use heavily-doped polycrystalline materials. [@AvFET; @DaiFET; @McEuFET]
The nature of charge transport through metal-semiconductor interfaces has been actively investigated for decades due to their importance in microelectronic technology, [@MSBook; @MSMonch; @MSReview] but is still not fully resolved, in particular regarding the mechanism of Schottky barrier formation/height and high-field transport phenomena. [@MSPaper; @MSTran] Compared to their bulk semiconductor counterpart, metal-SWNT interfaces present new challenges in that: (1) Both the contact area and the active device region can have atomic-scale dimensions; (2) The quasi-one-dimensional structure (cylindrical for nanotube materials) makes the screening of electron-electron interaction ineffective and leads to long range correlation between electrons within SWNT-based devices; (3) Last but probably the most important difference lies in the fact that quasi-one-dimensional wires, no matter how long, cannot be treated as electron reservoirs. [@Landauer] This is partly due to the fact that the restricted phase space in such systems prevents rapid relaxation of injected carriers to a pre-defined equilibrium state through electron-electron and/or electron-phonon scatterings. But more importantly, this can be understood from a simple geometrical argument: Since the total current is conserved, there will always be a finite current density flowing along the wire and consequently a non-equilibrium state persists no matter how strong electron-electron and/or electron-phonon scattering is. An equilibrium state can be achieved only through the widened (adiabatic) contact with the (three-dimensional) metallic electrodes (or other macroscopic measurement apparatus) attached to them, where the finite current density can be effectively “diluted” through the larger cross sectional area. [@Landauer] *Correspondingly electron transport through metal-SWNT interfaces can only be studied within the configuration of metal-SWNT-metal junction* (as are other quasi-one-dimensional systems), in contrast to the planar metal-semiconductor interface, where the presence of the second electrode can be implicitly neglected and the analysis of transport characteristics proceeds by analyzing the interface region and the bulk semiconductor region separately. [@MSBook]
The last fact has important implications in the assessment of Schottky barrier effects on the measured transport characteristics, since transport mechanisms both at the interface and inside the active device region have to be considered simultaneously even for a long nanotube. Since the back-scattering of electrons by impurities [@Ando] and the low-energy acoustic phonons [@Kane; @WT98; @NTPhonon] are weak in such quasi-one-dimensional systems, the nature of the electron transport through metal-SWNT interfaces generally depends on the type of the SWNTs (length/diameter/chirality), the type of the contacts, and the temperature and bias voltage range. Experimentally, this matter is further complicated by the different fabrication/contact schemes used and the lack of knowledge of the atomic structure of the SWNT junctions.
Recent works have studied electrical transport through a metal-long carbon nanotube interface using the bulk (infinitely long) band structures and electrostatics of ideal cylinders [@TersoffNT; @Odin00; @KM01; @De02]. For nanoelectronics research, it will be important to explore the device functionality of finite-size carbon nanotubes with lengths ranging from nanometers to tens of nanometers. Since most of the SWNT devices currently investigated are based on SWNTs with length of $100nm$ or longer, an investigation of the finite-size effect will shed light on the scaling limit of carbon nanotube devices, [@Limit; @NTLimit; @GuoNT] as well as establish the validity or viability of using bulk device physics concepts in nanotube device research.
The finite-size SWNT can be either a finite cylindrical all-carbon molecule attached to the metal surfaces through the dangling bonds at the end (end-contact scheme), [@XueNT03] a finite segment of a long carbon nanotube wire whose ends are buried inside the metallic electrodes (embedded-contact scheme), [@DaiNT; @XueNT04] or a finite segment of a long nanotube wire which is deposited on top of predefined metallic electrodes and side-contacted to the surfaces of the electrodes (side-contact sheme). [@Dekker99; @Contact] In the case of finite SWNT molecules, a transition from the molecular limit to the bulk (infinitely long) limit in the electronic structure will occur as the length of the finite SWNT varies from nanometers to tens of nanometers. In the case of long SWNT wires, the electronic structure of the finite SWNT segment remains that of the bulk (which may be perturbed by the coupling to the electrodes), but the electrostatics of the metal-SWNT-metal junction varies with the SWNT length. Due to the nanoscale contact geometry and reduced dimensionality of SWNTs, a correct description of the Schottky barrier formation at the metal-finite SWNT interface generally requires an atomistic description of the electronic processes throughout the metal-SWNT-metal junctions.
The purpose of this work is thus to present a self-consistent atomistic analysis of the electronic and transport properties of the metal-SWNT interfaces within the configuration of metal-SWNT-metal junction as a function of the SWNT length, which is varied from the nanometer to tens of nanometer range. In contrast to previous theoretical works, [@Xue99NT; @TersoffNT; @Odin00; @KM01; @De02] we use a novel Green’s function based self-consistent tight-binding (SCTB) theory in real-space, which takes fully into account the three-dimensional electrostatics and the atomic-scale electronic structure of the SWNT junctions. In accordance with the nanometer length-scale of the SWNT studied, we treat electron transport within the coherent transport regime. [@WT98; @NTPhonon] In this first paper, we consider the device formed by attaching a finite cylindrical SWNT molecule to the electrode surface through the dangling bonds at the end (Fig. \[xueFig1\]). The case of a finite-segment of long SWNT wires in both embedded-contact and side-contact schemes will be treated in the subsequent paper.
The device configuration considered here represents an atomic-scale analogue (both the contact area and the active device region are atomic-scale) to the planar metal-semiconductor interface, where dangling bonds also exist at the semiconductor surface layers and contribute to the Schottky barrier formation. [@MSMonch; @MSReview] Compared to other molecular-scale devices where the individual organic molecule is self-assembled onto the metallic electrode through appropriate end groups [@MEReview; @XueMol1; @XueMol2], the SWNT molecule presents a homogeneous device structure where the only electronic inhomogeneity is introduced at the metal-SWNT interface through the ring of dangling-bond carbon atoms. The device structure considered here thus provides an ideal system for studying the length dependence of device characteristics on an atomic scale. In particular, the effect of the coupling strength can be studied by varing the SWNT end-electrode surface distance.
![\[xueFig1\] (Color online) Schematic illustration of the metal-SWNT molecule-metal junction. We have also shown the coordinate system of the SWNT junction. ](xueFig1.eps){height="3.2in" width="4.0in"}
The rest of the paper is organized as follows: We present the details of the Green’s function based self-consistent tight-binding model in section II. We analyze the evolution of the SWNT electronic structure with the length of the SWNT molecules in section III. We devote section IV to analyzing the nature of Schottky barrier formation at the metal-SWNT molecule interface by examining the electrostatics (charge transfer and electrostatic potential change), the electron transmission characteristics and the “band” lineup. In section V, we present the temperature and length dependence of the SWNT junction conductance. We show in section VI that the current-voltage (I-V) characteristics of the SWNT junction are sensitive to the spatial variation of the voltage drop across the junction. Finally in section VII, we summarize our results and discuss their implications for the functioning of SWNT-based devices. A preliminary report of some of the results presented here has been published elsewhere. [@XueNT03] We use atomic units throughout the paper unless otherwise noted.
Theoretical Model
=================
\[Theory\] Real-space Green’s function based self-consistent tight-binding (SCTB) theory
----------------------------------------------------------------------------------------
Modeling electron transport in nanoscale devices is much more difficult than in bulk and mesoscopic semiconductor devices due to the necessity of including microscopic treatment of the electronic structure and the contacts to the measurement electrodes, which requires combining the non-equilibrium statistical mechanics of a open quantum system [@MesoPhy; @Buttiker86; @LB88] with an atomistic modeling of the electronic structure. [@Xue02; @GuoAB]. For small molecular-scale devices where the inelastic carrier scattering can be neglected, this has been done using a self-consistent Matrix Green’s function (SCMGF) method, [@XueMol1; @XueMol2; @Xue02] which combines the Non-Equilibrium Green’s Function (NEGF) theory of quantum transport [@MesoEE; @NEGF] with an effective single-particle description of the electronic structure using density-functional theory (DFT). [@DFT] To treat larger nanoscale systems, e.g., carbon nanotubes or semiconductor nanowires containing thousands or tens of thousands of atoms, a simpler tight-binding-type theory is more appropriate. [@TB; @Lake97; @Xue99Mol; @JoachimTB; @DattaMol; @RatnerMol; @Frauenheim] Correspondingly, we have developed a real-space self-consistent tight-binding (SCTB) method which includes atomic-scale description of the electronic structure and the three-dimensional electrostatics of the metal-SWNT-metal junction. The method is essentially the semi-empirical version of the SCMGF method for treating molecular electronic devices and is applicable to arbitrary nanostructured devices. The details and applications of the SCMGF method have been described extensively elsewhere [@XueMol1; @XueMol2; @Xue02], here we give a brief summary of the self-consistent tight-binding implementation.
The method starts from the Hamiltonian $H_{0}$ describing the isolated nanostructure and the bare metallic electrodes, which can be obtained using either *ab initio* or empirical approaches as appropriate. The effect of the coupling to the electrodes is included as self-energy operators. [@MesoEE; @NEGF] The coupling to the external contacts leads to charge transfer between the electrodes and the nanostructure. Applying a finite bias voltage also leads to charge redistribution (screening) within the nanostructure. Both the effect of the coupling to the contact and the screening of the applied field thus introduced will need to be treated self-consistently. The Hamiltonian describing the coupled metal-nanostructure-metal junction is thus $H=H_{0}+V_{ext}+\delta V[\delta \rho]$, where an external potential of the type $V_{ext}(\vec r)=-e\vec E \cdot \vec r$ should be added in the case of a nonzero source-drain or gate voltage and $\delta \rho$ is the change in the charge density distribution. Given the Hamiltonian matrix, the density matrix $\rho_{ij}$ and therefore the electron density are calculated using the Non-Equilibrium Green’s Function (NEGF) method [@Xue02; @GuoAB; @MesoEE; @NEGF] from either $$\begin{aligned}
\label{GE}
G^{r}
&=& \{ E^{+}S-H-\Sigma_{L}(E)-\Sigma_{R}(E) \}^{-1}, \\
\rho &=& \int \frac{dE}{2\pi }Imag[G^{r}](E).\end{aligned}$$ for device at equilibrium or $$\begin{aligned}
\label{GNE}
G^{<}
&=& i[G^{r}(E)\Gamma_{L}(E)G^{a}(E)]f(E-\mu_{L}) \\ \nonumber
&+& i[G^{r}(E)\Gamma_{R}(E)G^{a}(E)]f(E-\mu_{R}), \\
\rho &=& \int \frac{dE}{2\pi i}G^{<}(E).\end{aligned}$$ for device at non-equilibrium. Here $S$ is the overlap matrix and $f(E-\mu_{L(R)})$ is the Fermi-Dirac distribution function at the left (right) electrode. The Green’s functions $G^{r}$ and $G^{<}$ are defined in the standard manner. [@NEGF; @MesoEE] $\Sigma_{L(R)}$ is the self-energy operator due to the coupling to the left (right) electrode which is calculated from the metal surface Green’s function, while $\Gamma_{L(R)}=i(\Sigma_{L(R)}-\Sigma^{\dagger}_{L(R)})$ (See Refs. for details).
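For concreteness, the retarded Green’s function and the broadening matrices $\Gamma_{L(R)}$ can be assembled as in the following minimal NumPy sketch. This is an illustrative toy example rather than the code used in this work: the two-orbital Hamiltonian, the identity overlap matrix and the energy-independent (wide-band) self-energies are placeholder assumptions chosen only to keep the snippet self-contained.

```python
import numpy as np

# Toy two-orbital "device" with wide-band leads (placeholder values only).
H = np.array([[0.0, -0.5],
              [-0.5, 0.5]])               # model Hamiltonian (eV)
S = np.eye(2)                             # overlap matrix

def sigma_L(E):
    # Energy-independent lead self-energy: an assumption, not the metal
    # surface Green's function used in the actual calculation.
    return -0.05j * np.diag([1.0, 0.0])

def sigma_R(E):
    return -0.05j * np.diag([0.0, 1.0])

def G_retarded(E, eta=1e-6):
    """Retarded Green's function of the coupled metal-device-metal junction."""
    return np.linalg.inv((E + 1j * eta) * S - H - sigma_L(E) - sigma_R(E))

def gamma_L(E):
    """Broadening matrix Gamma_L = i (Sigma_L - Sigma_L^dagger)."""
    s = sigma_L(E)
    return 1j * (s - s.conj().T)

def gamma_R(E):
    s = sigma_R(E)
    return 1j * (s - s.conj().T)
```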
Within the local-density-approximation of density-functional theory, [@DFT] the long range part of $\delta V$ is just the coulomb potential $\delta V(\vec r)=\int \frac{\delta \rho (\vec r')}{|\vec r- \vec r'|}d\vec r'$. For self-consistent treatment of the charging effect within the tight-binding formulation, we follow the density-functional tight-binding (DFTB) theory developed by Frauenheim and coworkers [@Frauenheim] by approximating the charge distribution as a superposition of normalized atomic-centered charge distributions $\delta \rho(\vec r)=\sum_{i} \delta N_{i} \rho_{i}(\vec r-\vec r_{i})$ where $\delta N_{i}$ is the net number of electrons on atomic-site $i$ and $\rho_{i}$ is taken as a normalized Slater-type function $\rho_{i}(\vec r)=\frac{1}{N_{\zeta_{i}}} e^{-\zeta_{i} r}$ and $\int d\vec r \rho_{i}(\vec r)=1$. The exponent $\zeta_{i}$ is chosen such that the electron-electron repulsion energy due to two such charge distributions on atomic-site $i$ equals the difference between the atomic electronic affinity and ionization potential $\int d\vec r d\vec r' \rho_{i}(\vec r) \rho_{i}(\vec r')/|\vec r- \vec r'|
=I_{i}-A_{i}$ [@Frauenheim], which incorporates implicitly the short-range on-site electron-electron interaction effect. In this way, the change in the electrostatic potential can be written as superposition of atomic-centered potentials $\delta V(\vec r)=\sum_{i} \delta N_{i} V_{i}(\vec r -\vec r_{i})$. The advantage of the present approximation is that $V_{i}(\vec r -\vec r_{i})
=\int d\vec r' \rho_{i}(\vec r'-\vec r_{i})/|\vec r- \vec r'|$ can be evaluated analytically, [@Frauenheim; @XueMRS03] $$\label{Vi}
V_{i}=(1-e^{-\zeta_{i}|\vec r-\vec r_{i}|}
(1+\zeta_{i}|\vec r-\vec r_{i}|/2))/|\vec r-\vec r_{i}|.$$
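The atom-centred potential above has a simple closed form and can be evaluated as in the short sketch below; the function name and the explicit on-site limit $\zeta_{i}/2$ (obtained by expanding the expression for small separations) are our own illustrative additions, and distances are in atomic units.

```python
import numpy as np

def v_slater(r, r_i, zeta_i):
    """Potential of a normalized Slater-type charge centred at r_i."""
    d = np.linalg.norm(np.asarray(r, dtype=float) - np.asarray(r_i, dtype=float))
    if d < 1e-10:
        return zeta_i / 2.0   # small-distance limit of the expression above
    return (1.0 - np.exp(-zeta_i * d) * (1.0 + zeta_i * d / 2.0)) / d
```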
For the metal-SWNT-metal junction considered here, we take into account the image-potential effect by including within $\delta V$ contributions from both atom-centered charges and their image charges (centered around the image positions), rather than imposing an image-type potential correction on $\delta V$. The charge transfer-induced electrostatic potential change is thus: $$\label{VES}
\delta V(\vec r)=\sum_{i} [\delta N_{i} V_{i}(\vec r -\vec r_{i})
+ \delta N_{i;image} V_{i}(\vec r -\vec r_{i;image})]$$ where the image charges $\delta N_{i;image}$ and their positions $\vec r_{i;image}$ are determined from standard electrostatics considerations. [@CED; @NoteImag] The self-consistent cycle proceeds by evaluating the matrix elements of the potential $\delta V_{mn}=\int d\vec r \phi_{m}^{*}(\vec r) \delta V(\vec r)
\phi_{n}(\vec r)$ using two types of scheme: (1) If $m,n$ belong to the same atomic site $i$, we calculate it by direct numerical integration; (2) If $m,n$ belong to different atomic sites, we calculate it from the corresponding on-site element using the approximation $\delta V_{mn}=1/2S_{mn}(\delta V_{mm}+\delta V_{nn})$ where $S_{mn}$ is the corresponding overlap matrix element. We also calculate the matrix elements of the external potential $V_{ext}$ by direct numerical integration whenever applicable. Given the Hamiltonian matrix $H=H_{0}+V_{ext}+\delta V$, the self-consistent calulation then proceeds by calculating the density matrix $\rho$ from the Green’s function by integrating over a complex energy contour [@Xue02; @XueMol1; @XueMol2; @Contour] and evaluating the net charge on atomic-site $i$ from $\delta N_{i}=(\rho S)_{ii}-N_{i}^{0}$ where $N_{i}^{0}$ is the number of valence electrons on atomic-site $i$ of the bare SWNT. Note that the advantage of the present self-consistent tight-binding treatment is that *no adjustable parameters have been introduced* besides those that may be present in the initial Hamiltonian $H_{0}$.
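The self-consistent cycle just described can be summarised by the following skeleton. It is a schematic sketch rather than the implementation used here: the callables `build_dV` (assembling $V_{ext}+\delta V$ in the basis) and `density_matrix` (the contour-integrated NEGF density matrix), the orbital-to-site map used to sum $(\rho S)_{ii}$ over the orbitals of each atom, and the simple linear mixing are all assumptions introduced for illustration.

```python
import numpy as np

def scf_cycle(H0, S, N0, site_of_orbital, build_dV, density_matrix,
              tol=1e-4, mix=0.3, max_iter=200):
    """Iterate H = H0 + dV[dN] -> rho -> dN until the net site charges converge."""
    dN = np.zeros_like(np.asarray(N0, dtype=float))   # net electrons per atomic site
    for _ in range(max_iter):
        H = H0 + build_dV(dN)                  # external + charge-transfer potential
        rho = density_matrix(H, S)             # NEGF density matrix (complex contour)
        mulliken = np.real(np.diag(rho @ S))   # orbital-resolved populations (rho S)_ii
        dN_new = np.bincount(site_of_orbital, weights=mulliken,
                             minlength=len(N0)) - N0
        if np.max(np.abs(dN_new - dN)) < tol:  # site charges converged
            return H, rho, dN_new
        dN = (1.0 - mix) * dN + mix * dN_new   # linear mixing for stability
    return H, rho, dN
```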
Once the self-consistent calculation converges, we can calculate the transmission coefficient through the SWNT junction from $$\label{TEV}
T(E,V)=
Tr[\Gamma_{L}(E,V)G^{r}(E,V)\Gamma_{R}(E,V)[G^{r}]^{\dagger}(E,V)],$$ and the spatially-resolved local density of states (LDOS) from $$\label{SLDOS}
n(\vec r,E)=-\frac{1}{\pi} \lim_{\delta \to 0^{+}} \sum_{ij}
Imag[G_{ij}^{r}(E+i\delta)] \phi_{i}(\vec r) \phi_{j}^{*}(\vec r),$$ The spatial integration of LDOS gives the density of states, $$\label{DOS}
n^{\sigma}(E)=\int d\vec r n^{\sigma}(\vec r,E)
= -\frac{1}{\pi} \lim_{\delta \to 0^{+}}
Tr\{Imag[G^{r}(E+i\delta)] S\}=\sum_{i}n_{i}(E)$$ where the atomic site-resolved density of states is $n_{i}(E)= -\frac{1}{\pi} \lim_{\delta \to 0^{+}}
[Imag[G^{r}(E+i\delta)] S]_{ii}$. Within the coherent transport regime, the terminal current is related to the transmission coefficient through the Landauer formula [@MesoPhy; @Buttiker86; @LB88] $$\label{IV}
I=\frac{2e}{h} \int dE T(E,V) [ f(E-\mu_{L})-f(E-\mu_{R}) ]$$ where we can separate the current into two components, the “tunneling” component $I_{tun}$ and the “thermionic emission” component $I_{th}$ as follows, $$I = I_{tun}+I_{th} \nonumber
= \frac{2e}{h} [\int_{\mu_{L}}^{\mu_{R}}+
(\int_{-\infty }^{\mu_{L}}+\int^{+\infty }_{\mu_{R}})]
dE T(E,V)[f(E-\mu_{L})-f(E-\mu_{R})]$$ Similarly, we can separate the zero-bias conductance $$\label{GT}
G=\frac{2e^{2}}{h}\int dE T(E)[-\frac{df}{dE}(E-E_{F})]=G_{tun}+G_{th}$$ into the tunneling contribution $G_{tun}=\frac{2e^{2}}{h}T(E_{F})$ and thermal-activation contribution $G_{th}=G-G_{tun}$.
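To make the decomposition concrete, the sketch below evaluates the transmission and splits the zero-bias conductance into its tunneling and thermal-activation parts (in units of $2e^{2}/h$). It assumes that `G_retarded`, `gamma_L` and `gamma_R` are available from the self-consistent calculation (for instance the toy versions sketched earlier); the integration window and energy grid are arbitrary illustrative choices.

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant (eV/K)

def transmission(E, G_retarded, gamma_L, gamma_R):
    """T(E) = Tr[Gamma_L G^r Gamma_R (G^r)^dagger]."""
    Gr = G_retarded(E)
    return float(np.real(np.trace(gamma_L(E) @ Gr @ gamma_R(E) @ Gr.conj().T)))

def zero_bias_conductance(E_F, T_kelvin, G_retarded, gamma_L, gamma_R,
                          window=1.0, n_grid=2001):
    """Return (G, G_tun, G_th) in units of 2e^2/h."""
    E = np.linspace(E_F - window, E_F + window, n_grid)
    x = (E - E_F) / (kB * T_kelvin)
    minus_dfdE = np.exp(x) / (kB * T_kelvin * (1.0 + np.exp(x)) ** 2)  # -df/dE
    TE = np.array([transmission(e, G_retarded, gamma_L, gamma_R) for e in E])
    G = np.trapz(TE * minus_dfdE, E)            # thermally averaged transmission
    G_tun = transmission(E_F, G_retarded, gamma_L, gamma_R)   # tunneling part, T(E_F)
    return G, G_tun, G - G_tun
```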
Device model
------------
In this work, we take $(10,0)$ SWNT as the prototype semiconducting SWNT. Since for metal-semiconductor contacts, high- and low- work function metals are used for electron injection and hole-injection respectively, we consider both gold (Au) and titanium (Ti) electrodes as examples of high- and low- work function metals (with work functions of $5.1$ and $4.33$ eV respectively [@Note; @CRC]). The work function of the (10,0) SWNT is taken the same as that of graphite ($4.5$ eV) [@DekkerNT; @AvNT]. In this paper, the Hamiltonian $H_{0}$ describing the bare SWNT is obtained using the semi-empirical Extended Huckel Theory (EHT) with corresponding non-orthogonal Slater-type basis sets $\phi_{m}(\vec r)$ [@Hoffmann88; @AvEHT] describing the valence ($sp$) electrons of carbon, while the self-energy due to the contact to the metallic electrodes is evaluated using tight-binding parameters obtained from fitting accurate bulk band structure. [@Xue02; @Papa86] The calculation is performed at room temperature.
The $(10,0)$ SWNT has a diameter of $7.8(\AA)$ and unit cell length of $4.1(\AA)$. The unit cell consists of 4 carbon rings with 10 carbon atoms each. The calculated bulk band gap using EHT is $\approx 0.9(eV)$. Since the contacts involved in most transport measurement are not well characterized, a microscopic study as presented here necessarily requires a simplified model of the interface, which is illustrated schematically in Fig. \[xueFig1\]. Here the finite SWNT molecules are attached to the electrode surface through the ring of dangling-bond carbon atoms at the ends. We neglect the possible distortion of the SWNT atomic structure induced by the open-end and its subsequent adsorption onto the electrode surface. [@Contact] We assume that the axis of the SWNT molecule (the Z-axis) lies perpendicular to the electrode surface (the XY-plane). Only nearest-neighbor metal atoms on the surface layer of the electrode are coupled to the SWNT end, the surface Green’s functions of which are calculated using the tight-binding parameter of Ref. assuming a semi-inifinte substrate corresponding to the $\langle 111 \rangle$ and hcp surface for the gold and titanium electrodes respectively.
![\[xueFig2\] (a) shows net electron distribution in the isolated SWNT molecule as a function of SWNT length for seven different lengths. (b) shows magnified view at the left end of all SWNT molecules studied. Here the solid lines show the results obtained using EHT with self-consistent correction, while the dotted lines show the results obtained using EHT without self-consistent correction. ](xueFig2.eps){height="3.2in" width="4.0in"}
![\[xueFig3\] Local density of states in the middle of the $(10,0)$ SWNT molecule calculated using the self-consistent EHT for SWNT lengths of $2.0,8.4,16.9$ and $25.4(nm)$ respectively. The vertical line at $E=-4.5(eV)$ denotes the Fermi-level position of the bulk SWNT. The dotted line is the LDOS of the bulk $(10,0)$ SWNT. For clarity, the figures have been cut off at the top where necessary. ](xueFig3.eps){height="3.2in" width="4.0in"}
The lengths of the $(10,0)$ SWNT molecule investigated are $L=2.0,4.1,
8.4,12.6,16.9,21.2$ and $25.4$ (nm), corresponding to $5,10,20,30,40,50$ and $60$ unit cells respectively. As discussed in the following sections, the variation of SWNT length from $5$ to $60$ unit cells spans the entire range from the molecular limit to the bulk limit. To evaluate the dependence of Schottky barrier formation on the strength of metal-SWNT interface coupling, we consider three SWNT end-metal surface distances of $\Delta L = 2.0,2.5$ and $3.0 (\AA)$. Note that the average of the nearest-neighbor atom distance in the SWNT and Au/Ti electrode is around $2.1 (\AA)$. From our previous work on first-principles based modeling of molecular electronic devices, [@XueMol2] we find that increasing metal-molecule distance by $1.0(\AA)$ is sufficient to reach the weak interfacial coupling limit. Therefore the three choices of metal-SWNT distance are sufficient to demonstrate the trend of Schottky barrier formation as the strength of interface coupling varies from the strong coupling to the weak coupling limit.
Evolution of the electronic structure of the SWNT molecule with length
======================================================================
![\[xueFig4\] (Color online) Local density of states as a function of position along the axis of the $5$-unit cell (a) and $60$-unit cell (b) SWNT molecule. The LDOS is obtained by summing over the 10 carbon atoms of each ring of the $(10,0)$ SWNT. Each cut along the energy axis for a given position along the NT axis gives the LDOS at the corresponding carbon ring. Each cut along the postion axis for a given energy illustrates the spatial extension of the corresponding electron state. For the $5$-unit cell SWNT(a), localized dangling bond state exists around $-5.0(eV)$, whose wavefunction decays into the interior of the SWNT molecule. For the $60$-unit cell SWNT which has approached the bulk limit, the localized dangling bond state is located instead around $-4.5(eV)$, .i.e., the middle of the conduction/valence band gap. ](xueFig4-1.eps "fig:"){height="3.2in" width="5.0in"} ![\[xueFig4\] (Color online) Local density of states as a function of position along the axis of the $5$-unit cell (a) and $60$-unit cell (b) SWNT molecule. The LDOS is obtained by summing over the 10 carbon atoms of each ring of the $(10,0)$ SWNT. Each cut along the energy axis for a given position along the NT axis gives the LDOS at the corresponding carbon ring. Each cut along the postion axis for a given energy illustrates the spatial extension of the corresponding electron state. For the $5$-unit cell SWNT(a), localized dangling bond state exists around $-5.0(eV)$, whose wavefunction decays into the interior of the SWNT molecule. For the $60$-unit cell SWNT which has approached the bulk limit, the localized dangling bond state is located instead around $-4.5(eV)$, .i.e., the middle of the conduction/valence band gap. ](xueFig4-2.eps "fig:"){height="3.2in" width="5.0in"}
The dangling $\sigma$ bonds at the open end of the SWNT molecule lead to charge transfer between carbon atoms at the end and carbon atoms in the interior of the SWNT. This should be corrected self-consistently first and gives the initial charge configuration $N_{i}^{0}$ for determining the charge transfer within the metal-SWNT-metal junction in later sections. The self-consistent calculation proceeds as described in the previous section, except that there is no self-energy operator associated with the contact in the case of the bare SWNT molecule. The result is shown in Fig. \[xueFig2\], where we plot the net electrons per atom as a function of position along the $(10,0)$ SWNT axis obtained from both EHT and the self-consistent EHT calculations. The self-consistent treatment suppresses both the magnitude and the range of the charge transfer, which are approximately the same for all the SWNT molecules investigated, reflecting the localized nature of the perturbation induced by the end dangling bonds (Fig. \[xueFig2\](b)).
To evaluate the evolution of the SWNT electronic structure with molecule length, we calculated the local density of states in the middle unit cell of the SWNT molecule using the self-consistent EHT and compared it with that of the bulk (infinitely long) SWNT. The results for SWNT lengths of $2.0,8.4,16.9$ and $25.4(nm)$ are shown in Fig. \[xueFig3\]. Here the LDOS of the isolated finite SWNT molecule is artificially broadened by inserting a small but finite imaginary number ($\delta =10^{-6} (eV)$) into the retarded Green’s function $G^{r}(E+i\delta )$. Therefore only the band edge location, but not the exact value of the LDOS, should be examined when evaluating the approach to the bulk limit with increasing nanotube length. The LDOS of the shortest SWNT molecule ($2.0$ nm) shows a completely different structure from that of the bulk. In particular, there are peaks located within the conduction-valence band gap of the bulk SWNT caused by the localized dangling bond states at the end, which decay into the interior of the short SWNT molecule. This is illustrated by the position-dependent LDOS along the NT axis in Fig. \[xueFig4\]. We can therefore characterize the $5$-unit cell SWNT as being in the molecular limit. The magnitude of the localized dangling-bond states in the middle is suppressed exponentially with increasing SWNT length and is negligible for all other SWNT molecules studied.
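The broadening procedure described above can be made concrete with a short sketch. The snippet below is a minimal illustration of how a position-resolved LDOS follows from the retarded Green’s function of a tight-binding Hamiltonian; it assumes an orthogonal basis for simplicity, whereas the actual EHT calculation uses a nonorthogonal basis in which the overlap matrix also enters.

```python
import numpy as np

def local_dos(H, energies, delta=1e-6):
    """LDOS per site from G^r(E + i*delta) for a Hermitian tight-binding H (eV).

    Returns an array of shape (len(energies), N) in states/eV; the small
    imaginary part delta plays the role of the artificial broadening above.
    """
    N = H.shape[0]
    identity = np.eye(N)
    ldos = np.empty((len(energies), N))
    for k, E in enumerate(energies):
        Gr = np.linalg.inv((E + 1j * delta) * identity - H)
        ldos[k] = -Gr.diagonal().imag / np.pi   # LDOS_i(E) = -Im G^r_ii / pi
    return ldos

# Summing ldos over the 10 atoms of each carbon ring gives the ring-resolved
# LDOS plotted along the nanotube axis in Fig. [xueFig4].
```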
The development of the SWNT valence bands with molecule length is clear from Fig. \[xueFig3\]. The development of the SWNT conduction bands is less regular since tight-binding theory constructed for valence electrons generally describes the valence bands better than the conduction bands. [@Chadi75] The approach to the bulk band structure is obvious for SWNTs longer than $40$ unit cells and complete for the $60$-unit cell SWNT molecules. Therefore the variation of SWNT length from $5$ to $60$ unit cells spans the entire range from the molecular limit to the bulk limit. Note that as the length of the SWNT molecule changes, the energy of the localized dangling bond states also changes, saturating as the SWNT approaches the bulk limit. For the $60$-unit cell SWNT, it is located around $-4.5(eV)$, i.e., the Fermi-level of the bulk SWNT. This is consistent with previous observations at semiconductor interfaces, where it has been argued that the dangling bond level plays the role of the “charge-neutrality-level” (CNL) in band lineups involving semiconductors, which is located around the midgap for semiconductors with approximately symmetric conduction and valence band structures. [@CNL; @TersoffMS]
Scaling analysis of Schottky barrier formation at metal-SWNT molecule interfaces
================================================================================
Schottky barrier formation at planar metal-semiconductor interfaces
-------------------------------------------------------------------
We start with a brief summary of Schottky barrier formation at an ideal planar metal-semiconductor interface [@MSMonch; @MSReview; @TersoffMS] to motivate our discussion of metal-SWNT interface in later sections. An ideal metal-semiconductor interface is formed by reducing the distance between a metal and a semi-infinite semiconductor until an intimate and abrupt interface forms, [@MSMonch] as illustrated in Fig. \[xueFig5\].
![\[xueFig5\] Schematic illustration of the formation of Schottky barrier at the planar metal-semiconductor interfaces. (a) n-type semiconductor; (b) p-type semiconductor. $W_{m},W_{s}$ are the work functions of the metal and semiconductor respectively. $V_{b}$ is the Schottky barrier height for electron (hole) injection at the n-type (p-type) semiconductor interface. $V_{s}$ is the additional potential shift inside the semiconductor due to the depleted dopant charges. ](xueFig5.eps){height="3.2in" width="4.5in"}
The open end of the semi-infinite semiconductor leads to localized surface states whose wavefunctions decay exponentially into the vacuum and inside the semiconductor, the nature of which can be understood qualitatively from the complex band structure of the bulk semiconductor by extrapolating the energy band into the band gap region. Upon contact with the metal electrodes, the intrinsic semiconductor surface states are replaced by Metal-Induced Gap States (MIGS), which are the tails of the metal wavefunction decaying into the semiconductor within the band gap since the wavefunctions there are now matched to the continuum of states around the metal Fermi-level. [@BH; @Louie] The corresponding charge transfer induces an interface dipole layer due to the planar structure, the electrostatic potential drop across which rigidly shifts the semiconductor bands relative to the metal Fermi-level $E_{F}$. Additional electrostatic potential change can also occur if the semiconductor is doped and a space-charge layer forms due to the depleted dopant charges, as illustrated in Fig. \[xueFig5\]. The total potential shift must be such that the two Fermi-levels across the interface line up. The potential variations away from the interface dipole layer introduced by the space charge layer are slow (on the order of magnitude of $\sim 0.5 (V)$ within hundreds of nm or longer) due to the small percentage of dopant atoms. [@MSBook] This leads to the picture of band shift following electrostatic potential change since such potential variation occurs on a length scale much longer than the semiconductor unit cell size.
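The slow potential variation within the space-charge layer can be illustrated with the textbook depletion approximation; the sketch below is only an illustration, and the numerical values are chosen arbitrarily rather than taken from any specific interface discussed in this paper.

```python
import numpy as np

q, eps0 = 1.602e-19, 8.854e-12   # elementary charge (C), vacuum permittivity (F/m)

def depletion_width(V_s, N_d_cm3, eps_r):
    """Depletion-approximation width W = sqrt(2*eps*V_s/(q*N_d)) of the space-charge layer."""
    N_d = N_d_cm3 * 1e6            # convert cm^-3 -> m^-3
    return np.sqrt(2.0 * eps_r * eps0 * V_s / (q * N_d))

# Illustrative values only: V_s = 0.5 V, N_d = 1e16 cm^-3, eps_r = 12
print(depletion_width(0.5, 1e16, 12.0) * 1e9, "nm")   # roughly a few hundred nm
```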
The band lineup at the planar metal-semiconductor interface is determined by the overall charge neutrality condition and the corresponding one-dimensional electrostatic considerations: $Q_{m}+Q_{is}+Q_{sc}=0$, where $Q_{m}$, $Q_{is}$ and $Q_{sc}$ are the surface charge densities within the metal (m) surface layer, the semiconductor surface layer due to the interface states (is) and the semiconductor space-charge (sc) layer respectively, which are obtained by averaging the three-dimensional charge density over the plane parallel to the interface. For an n(p)-type semiconductor, the Schottky barrier height $V_{b}$ for electron (hole) injection is determined by $E_{F}$ and the conduction (valence) band edge. Since electrons can easily tunnel through the interface dipole layer, current transport occurs by charge carriers injected into the bulk conduction/valence band states by tunneling through or thermionic emission over the interface barrier. So the Schottky barrier height alone can be used for characterizing the transport characteristics. [@MSBook]
Two key concepts thus underlie the analysis of Schottky barrier formation at the planar metal-semiconductor interface: (1) the separation into the interface region (dipole layer) and the bulk semiconductor region (including the space-charge layer) with a well-defined Fermi-level; (2) the rigid band shift following the local electrostatic potential change due to the planar interface structure. Neither concept remains valid in analyzing Schottky barrier formation at metal-SWNT interfaces.
Electrostatics of the metal-SWNT molecule interface
---------------------------------------------------
The calculated charge transfer and electrostatic potential change at the gold-SWNT-gold and titanium-SWNT-titanium junctions are shown in Figs. \[xueFig6\]-\[xueFig8\] for metal-SWNT distances of $\Delta L=2.0,2.5,3.0(\AA)$ respectively. The electrostatic potential change is obtained as the difference between the electrostatic potentials within the metal-SWNT-metal junction and the bare SWNT molecule, which is calculated from the transferred charge throughout the SWNT using Eq. \[VES\]. Due to the molecular-scale dimension of both the SWNTs and the contact area, the transferred charge across the interface is confined in a finite region. Unlike the dipole *layer* at the bulk metal-semiconductor interface which induces a step-wise change in the electrostatic potential, the transferred charge across the metal-SWNT interface takes the form of a molecular-size dipole, the electrostatic potential of which *decays to zero in regions far away from the interface*. [@TersoffNT; @Odin00] In addition, the SWNT molecule is undoped. The occupation of the electron states within the SWNT is determined by the Fermi-level of the electrodes, even for a long SWNT that has reached the bulk limit, where a Fermi-level can be defined from the bulk band structure.
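The logic of obtaining the potential change from the transferred charge can be sketched with a bare Coulomb sum over the atomic sites. This is only a schematic stand-in for Eq. \[VES\]: the actual calculation also accounts for the metallic electrodes (e.g., through image contributions), which the sketch below omits.

```python
import numpy as np

K_E = 14.3996   # e^2/(4*pi*eps0) in eV*Angstrom, so V is in volts for charges in units of e

def potential_change(r_eval, r_atoms, dq):
    """Electrostatic potential change at points r_eval (M,3) produced by the
    transferred charges dq (N,), in units of e, located at atomic positions
    r_atoms (N,3); all coordinates in Angstrom. Evaluation points are assumed
    not to coincide with atomic sites (bare Coulomb sum, no image charges)."""
    V = np.zeros(len(r_eval))
    for r_j, q_j in zip(r_atoms, dq):
        V += K_E * q_j / np.linalg.norm(r_eval - r_j, axis=1)
    return V
```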
![\[xueFig6\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=2.0(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig6-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig6\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=2.0(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig6-2.eps "fig:"){height="3.2in" width="4.0in"}
![\[xueFig7\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=2.5(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig7-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig7\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=2.5(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig7-2.eps "fig:"){height="3.2in" width="4.0in"}
![\[xueFig8\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=3.0(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig8-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig8\] Charge transfer (1) and electrostatic potential change (2) at the Au-finite SWNT-Au and Ti-finite SWNT-Ti junctions as a function of SWNT length for seven different lengths at SWNT-metal distance of $\Delta L=3.0(\AA)$. For each junction, we have also shown the magnified view both at the metal-SWNT interface (b) and in the middle of the longest (25.4 nm) SWNT molecule (c). The horizontal lines in the potential plot (2) denote the work function differences between the electrodes and the bulk SWNT. ](xueFig8-2.eps "fig:"){height="3.2in" width="4.0in"}
Note that despite the delocalized nature of SWNT electron states in the conduction/valence band, for a given metal-SWNT distance $\Delta L$, both the magnitude and the range of the charge transfer at the metal-SWNT molecule interface are approximately independent of the SWNT length, reflecting the localized nature of the interfacial charge transfer process. [@XueMol1; @XueMol2] The charge transfer adjacent to the metal-SWNT interface shows Friedel-like oscillation. [@Friedel] Such Friedel-like oscillations of transferred charge have also been observed in planar metal-semiconductor interfaces, [@Louie] finite atomic chains [@LangAv] and molecular tunnel junctions. [@XueMol1; @XueMol2] The oscillation of the interface-induced charge transfer dies out quickly inside the SWNTs as the length of the SWNT molecule increases. The oscillations in both the transferred charge and the electrostatic potential change in the middle of the SWNT are due to the intrinsic two-sublattice structure of the zigzag tube, and persist in an infinitely long zigzag tube. [@XueNT04; @TersoffNT02]
As $\Delta L$ increases from $2.0 \AA$ to $3.0 \AA$, the magnitude of the charge transfer oscillation at the interface decreases with the decreasing interface coupling strength, but the magnitude of the charge transfer inside the SWNT molecule is almost independent of the coupling strength across the interface. For the Au-SWNT-Au junction, there is a small positive charge transfer of $4.9 \times 10^{-4}$ per atom in the middle of the $60$-unit cell SWNT, while for the Ti-SWNT-Ti junction, there is instead a small negative charge transfer of $-6.5\times 10^{-5}$ per atom. [@Note2]
Due to the long-range Coulomb interaction, the electrostatic potential change is determined by the transferred charge throughout the metal-SWNT-metal junction (Eq. \[VES\]). For a given metal-SWNT distance $\Delta L$, its magnitude in the middle of the SWNT increases with increasing SWNT size, although the charge transfer is small except at the several layers immediately adjacent to the electrodes. The magnitude of the potential change in the interior of the SWNT saturates at the same length where the finite SWNT approaches the bulk limit, i.e., $50$ unit cells corresponding to a length of $21.2(nm)$, for both the Au-NT-Au and Ti-NT-Ti junctions. For a given metal-SWNT distance $\Delta L$, the magnitude of the potential shift at the metal-SWNT interface is approximately constant for all the finite SWNTs studied.
The contact-induced charge transfer processes are often characterized as “charge-transfer doping”. If we follow the common practice in the literature, the SWNT is “hole-doped” by contacting to the gold (high work function) electrode and “electron-doped” by contacting to the titanium (low work function) electrode. Here it is important to recognize the difference in the physical processes governing the short-range and long-range electrostatics of the metal-SWNT interface. The charge transfer close to the metal-SWNT interface reflects the bonding configuration change upon contact to the metallic surfaces, which cannot contribute directly to transport since the corresponding charge distribution is localized. [@XueMol1; @XueMol2] Moving away from the interface, the effect due to the metal-SWNT coupling is reduced. For the longer SWNT molecule which has approached the bulk limit, the effect of the interface coupling on the electron states in the middle of the SWNT can be essentially neglected. However, since the electron occupation is determined by the Fermi-Dirac distribution of the metallic electrodes, the charging state in the interior of the SWNT which has approached the bulk limit is determined by the lineup of the SWNT bands relative to the metal Fermi-level, which in turn is determined by the self-consistent potential shift across the metal-SWNT-metal junction. Within the coherent transport regime, the transferred charge in the interior affects the current indirectly by modulating the potential landscape across the metal-SWNT-metal junction, which determines the electron transmission coefficient through Eq. \[TEV\].
![\[xueFig9\] (Color online) Cross sectional view of electrostatic potential change at the Au-SWNT-Au (upper figure) and Ti-SWNT-Ti junction (lower figure) for SWNT molecule length of $8.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$. The SWNT diameter is $0.78(nm)$. The electrostatic potential change shown here is induced by the charge transfer across the interface and calculated using Eq. \[VES\]. ](xueFig9-1.eps "fig:"){height="3.2in" width="5.0in"} ![\[xueFig9\] (Color online) Cross sectional view of electrostatic potential change at the Au-SWNT-Au (upper figure) and Ti-SWNT-Ti junction (lower figure) for SWNT molecule length of $8.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$. The SWNT diameter is $0.78(nm)$. The electrostatic potential change shown here is induced by the charge transfer across the interface and calculated using Eq. \[VES\]. ](xueFig9-2.eps "fig:"){height="3.2in" width="5.0in"}
A common feature of previous theoretical work on carbon nanotube devices is the use of the electrostatics of an ideal cylinder, [@Xue99NT; @TersoffNT; @Odin00; @De02; @NTLimit] which neglects the electrostatic potential variation across the narrow region around the cylindrical surface where the $\pi$-electron density is non-negligible. However, the electrostatics of *any nanostructure is three-dimensional*. For the cylindrical SWNT, this means that the electrostatic potential across the SWNT junction varies both parallel and perpendicular to the NT axis and on the atomic scale. This is clearly seen from the three-dimensional plot of the electrostatic potential change in Fig. \[xueFig9\]. For the $(10,0)$ SWNT with a diameter of $\approx 0.8(nm)$, the charge transfer-induced electrostatic potential change varies little inside the SWNT cylinder, but decays to about $1/4$ of its value at the cylindrical center within $1(nm)$ of the SWNT surface for both the Au-SWNT-Au and Ti-SWNT-Ti junctions.
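As a quick consistency check on the quoted tube dimensions, the diameter of the $(10,0)$ tube follows from the standard chirality formula; the short sketch below assumes the ideal graphene C-C bond length of $0.142$ nm, which is an input assumption rather than a value taken from this paper.

```python
import numpy as np

a_cc = 0.142                  # ideal graphene C-C bond length (nm), assumed here
n, m = 10, 0                  # chiral indices of the zigzag tube
d = np.sqrt(3.0) * a_cc * np.sqrt(n**2 + n*m + m**2) / np.pi
print(f"(10,0) diameter: {d:.2f} nm")   # ~0.78 nm, consistent with the value quoted above
```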
The confined cylindrical geometry and three-dimensional electrostatics of the metal-SWNT interface lead to a profound change in the physical picture of the band shift, which applies to both finite SWNT molecules and long SWNT wires. [@XueNT03; @XueNT04] In particular, the shift of the local density of states along the nanotube axis *does not follow* the change in the electrostatic potential along the nanotube axis, although this is commonly assumed in the literature. This is illustrated in the three-dimensional plot of the LDOS as a function of position along NT axis in Fig. \[xueFig10\]. Note that although the electrostatic potential varies by an amount $\geq 0.5(eV)$ going from the metal-SWNT interface to the middle of the $60$-unit cell SWNT molecule for both junctions (Fig. \[xueFig7\]), there is almost no shift of the conduction and valence band edge going from the interface to the middle of the SWNT molecule. This is in contrast with the planar metal-semiconductor interface, where the band shift away from the interface dipole layer follows the electrostatic potential change since it varies only in one direction *and* on a length scale large compared to the corresponding unit cell size.
![\[xueFig10\] (Color online) Three-dimensional plot of the local density of states at the Au-SWNT-Au (a) and Ti-SWNT-Ti (b) junctions as a function of position along the NT axis for SWNT length of $25.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$. Note that the sharp peaks around $ -4.5(eV)$ due to the dangling bond state at the ends of the isolated SWNT (Fig. \[xueFig4\](b)) have been replaced by broadened peaks within the band gap due to the MIGS at the metal-SWNT molecule interface. ](xueFig10-1.eps "fig:"){height="3.2in" width="5.0in"} ![\[xueFig10\] (Color online) Three-dimensional plot of the local density of states at the Au-SWNT-Au (a) and Ti-SWNT-Ti (b) junctions as a function of position along the NT axis for SWNT length of $25.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$. Note that the sharp peaks around $ -4.5(eV)$ due to the dangling bond state at the ends of the isolated SWNT (Fig. \[xueFig4\](b)) have been replaced by broadened peaks within the band gap due to the MIGS at the metal-SWNT molecule interface. ](xueFig10-2.eps "fig:"){height="3.2in" width="5.0in"}
The lack of connection between the band shift and the electrostatic potential change along the SWNT axis is obvious considering the three-dimensional nature of the electrostatics: since the electrostatic potential change varies strongly in the direction perpendicular to the SWNT axis where the carbon $\pi$-electron density is significant, there is no simple connection between the band shift and the electrostatic potential change at the cylindrical surface of the SWNT or at any other distance away from the SWNT axis. The relevant physics can be understood as follows: For the nanoscale SWNT considered here, the molecular-size interface dipole induces a long-range three-dimensional electrostatic potential change of $\sim 0.5 (eV)$ within $\sim 5(nm)$ of the interface, which is much weaker than the atomic-scale electrostatic potential variation within the bare SWNT. Since the LDOS of the SWNT junction is obtained from the Hamiltonian corrected by the charge transfer-induced electrostatic potential change, we can expect that the effect of such a correction on the spatial variation of the LDOS away from the interface is small compared to the strong atomic-scale potential variations included implicitly in the initial Hamiltonian $H_{0}$. The effect of the electrostatic potential change on the LDOS in regions within $\sim 5(nm)$ of the metal-SWNT interface is thus similar to that of small molecules in molecular tunnel junctions, where detailed studies in Ref. have shown that the charge transfer-induced electrostatic potential change in the molecular junction does not lead to a rigid shift of the molecular energy levels (or band edges), but can have different effects on different molecular states (or band structure modification) depending on their charge distributions.
“Band” lineup and electron transmission across the metal-finite SWNT molecule interface
---------------------------------------------------------------------------------------
![\[xueFig11\] Local density of states at the middle of the Au-SWNT-Au junction (a) and Ti-SWNT-Ti junction (b) for SWNT length of $2.0,16.9$ and $25.4(nm)$ respectively. Solid line: $\Delta L=2.0(\AA)$. Dotted line: $\Delta L=2.5(\AA)$. Dashed line: $\Delta L=3.0(\AA)$. The vertical lines show the position of the metal Fermi-level. ](xueFig11-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig11\] Local density of states at the middle of the Au-SWNT-Au junction (a) and Ti-SWNT-Ti junction (b) for SWNT length of $2.0,16.9$ and $25.4(nm)$ respectively. Solid line: $\Delta L=2.0(\AA)$. Dotted line: $\Delta L=2.5(\AA)$. Dashed line: $\Delta L=3.0(\AA)$. The vertical lines show the position of the metal Fermi-level. ](xueFig11-2.eps "fig:"){height="3.2in" width="4.0in"}
For a planar metal-semiconductor interface, the band lineup is determined once the electrostatic potential drop across the interface is known. The horizontal lines in the potential plots of Figs. \[xueFig6\]-\[xueFig8\](b) denote the work function differences between the electrodes and the bulk SWNT. For a bulk metal-semiconductor interface, this would have given the magnitude of the potential shift which aligns the Fermi-level across the interface. But for the metal-finite SWNT interface considered here, the band lineup should be determined from the local density of states (LDOS) in the middle of the SWNT. This is shown in Fig. \[xueFig11\] for both Au-SWNT-Au and Ti-SWNT-Ti junctions.
![\[xueFig12\] Surface density of states of the gold and titanium electrodes. The vertical lines show the position of metal Fermi-level. ](xueFig12.eps){height="3.2in" width="4.0in"}
![\[xueFig13\] Electron transmission characteristics of the Au-SWNT-Au (upper figure) junction and Ti-SWNT-Ti (lower figure) junction for SWNT length of $2.0,16.9$ and $25.4(nm)$ and metal-SWNT distance of $2.0,2.5$ and $3.0 (\AA )$ respectively. The vertical lines show the position of the metal Fermi-level at each junction. ](xueFig13-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig13\] Electron transmission characteristics of the Au-SWNT-Au (upper figure) junction and Ti-SWNT-Ti (lower figure) junction for SWNT length of $2.0,16.9$ and $25.4(nm)$ and metal-SWNT distance of $2.0,2.5$ and $3.0 (\AA )$ respectively. The vertical lines show the position of the metal Fermi-level at each junction. ](xueFig13-2.eps "fig:"){height="3.2in" width="4.0in"}
The “band” lineup relevant to the transport characteristics can also be determined equivalently from the electron transmission characteristics of the equilibrium metal-SWNT-metal junction, which is calculated using Eq. \[TEV\] and depends on the surface electronic structure, the coupling across the interface and the electronic structure of the SWNT molecule. The surface density of states of the bare gold and titanium electrodes calculated using tight-binding parameters [@Papa86] are shown in Fig. \[xueFig12\], while the transmission characteristics of the metal-SWNT-metal junctions are shown in Fig. \[xueFig13\]. For the shortest SWNT in the molecular limit ($2.0 nm$), there is significant transmission around the metal Fermi-level $E_{f}$ which is suppressed rapidly with increasing SWNT length. The difference in the electron transmission through the SWNT conduction band region in the Au-SWNT-Au and Ti-SWNT-Ti junctions is mostly due to the difference in the electrode band structures above $E_{f}$ ($sp$-band for Au and $d$-band for Ti).
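A generic sketch of a coherent transmission calculation of this type is given below; it is not necessarily identical in detail to Eq. \[TEV\], and it assumes an orthogonal basis and that the retarded contact self-energies are already available, whereas the actual calculation uses the nonorthogonal EHT basis and the tight-binding electrode parameters cited above.

```python
import numpy as np

def transmission(E, H, sigma_L, sigma_R, delta=1e-6):
    """Coherent transmission T(E) = Tr[Gamma_L G^r Gamma_R (G^r)^dagger] for a
    device Hamiltonian H coupled to two contacts via the retarded self-energies
    sigma_L and sigma_R (all square matrices of the same size, energies in eV)."""
    N = H.shape[0]
    Gr = np.linalg.inv((E + 1j * delta) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # contact broadening matrices
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gamma_L @ Gr @ gamma_R @ Gr.conj().T).real
```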
From both the LDOS and the transmission characteristics of the $60$-unit cell SWNT, we can determine that for the Au-SWNT-Au junction the Fermi-level location goes from slightly below (by $\sim 0.1$ eV) the mid-gap of the $60$-unit cell SWNT to the mid-gap as the gold-SWNT distance increases from $2.0(\AA)$ to $3.0(\AA)$. For the Ti-SWNT-Ti junction, the Fermi-level location goes from above (by $\sim 0.25 $ eV) the mid-gap of the $60$-unit cell SWNT molecule to the midgap as the titanium-SWNT distance increases from $2.0(\AA)$ to $3.0(\AA)$. Note that this value is approximately the same for SWNTs longer than $40$ unit cells ($16.9$ nm), i.e., the same length where the magnitude of the electrostatic potential change in the middle of the SWNT begins to saturate (Figs. \[xueFig6\]-\[xueFig8\](b)).
The physical principles of Schottky barrier formation at the metal-SWNT molecule interface can thus be summarized as follows: Since the effect of the interface perturbation on the electron states inside the SWNT molecule is small, for the SWNTs that are long enough to approach the bulk limit, the metal Fermi-level position should be close to the middle of the gap since otherwise extensive charge transfer would occur inside the SWNT junction. Since the screening of the work function difference inside the SWNT junction is weak, the metal Fermi-level should be below (above) the middle of the gap for a high (low) work function metal so that the net decrease (increase) of electrons inside the SWNT molecule shifts the SWNT band edge down (up) relative to the metal Fermi-level. Exactly how this is achieved from the interface to the middle of the channel will depend on the details of the contact (type of metal and strength of interface coupling). In the weak coupling limit, the lineup of the Fermi-level for the SWNT molecules which have reached the bulk limit is such that the perturbation of the electron states inside the SWNT molecule is minimal, i.e., at mid-gap. Note that since the LDOS around the midgap is negligible inside the SWNT, the magnitude of the transferred charge in the middle of the SWNT molecule is approximately independent of the interface coupling strength despite the different band lineup schemes at the three different metal-SWNT distances (Figs. \[xueFig6\]-\[xueFig8\](a)).
Length and temperature dependence of the conductance of the metal-SWNT molecule interface
=========================================================================================
Given the electrostatic potential change $\Delta V$ across the metal-SWNT interface, we can calculate the length and temperature dependence of the metal-SWNT-metal junction conductance using Eq. \[GT\]. The length dependence of the junction conductance at room temperature is shown in Fig. \[xueFig14\] for both the Au-SWNT-Au and Ti-SWNT-Ti junctions at the three metal-SWNT distances. We have separated the junction conductance into the tunneling and thermal-activation contributions as discussed in Sec. \[Theory\].
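A minimal sketch of such a separation is given below, based on the thermally weighted Landauer integral. The split criterion used here (gap energies counted as tunneling, band energies as thermal activation) is only one plausible choice and is not necessarily identical to the procedure of Sec. \[Theory\].

```python
import numpy as np

G0, kB = 7.748e-5, 8.617e-5   # conductance quantum 2e^2/h (S), Boltzmann constant (eV/K)

def conductance_split(E, T_of_E, E_f, temp, E_v, E_c):
    """Thermally weighted Landauer conductance, split into contributions from
    gap energies (E_v < E < E_c, 'tunneling') and band energies ('thermal')."""
    x = np.clip((E - E_f) / (2.0 * kB * temp), -300, 300)
    weight = 1.0 / (4.0 * kB * temp * np.cosh(x) ** 2)     # -df/dE
    integrand = G0 * T_of_E * weight
    gap = (E > E_v) & (E < E_c)
    g_tun = np.trapz(integrand[gap], E[gap])
    g_th = (np.trapz(integrand[E <= E_v], E[E <= E_v]) +
            np.trapz(integrand[E >= E_c], E[E >= E_c]))
    return g_tun, g_th
```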
![\[xueFig14\] Room temperature conductance of the Au-SWNT-Au (upper figure) junction and Ti-SWNT-Ti (lower figure) junction as a function of the SWNT length at three different metal-SWNT distances. ](xueFig14-1.eps "fig:"){height="3.2in" width="3.6in"} ![\[xueFig14\] Room temperature conductance of the Au-SWNT-Au (upper figure) junction and Ti-SWNT-Ti (lower figure) junction as a function of the SWNT length at three different metal-SWNT distances. ](xueFig14-2.eps "fig:"){height="3.2in" width="3.6in"}
![\[xueFig15\] Temperature dependence of the conductance of the Au-SWNT-Au (left figure) junction and Ti-SWNT-Ti (right figure) junction as a function of the SWNT length at metal-SWNT distance of $\Delta L=2.0 (\AA)$. ](xueFig15.eps){height="3.2in" width="4.0in"}
The tunneling conductance (also the zero-temperature conductance) for both junctions decreases exponentially with the SWNT length for SWNTs longer than $4.1(nm)$ (Fig. \[xueFig14\]), where the perturbation of the electron states inside the SWNT due to the interface coupling can be neglected. The exponential decay with length for tunneling across a finite molecular wire in contact with two metal electrodes has been analyzed in detail in the recent literature using either simple tight-binding theory [@Joachim] or complex band structures calculated from first-principles theory. [@Sankey] But the essential physics can be captured from the simple WKB picture of tunneling through potential barriers with constant barrier height. A separation of the contact and molecule core effects on the tunneling resistance can thus be achieved using the functional relation $R=R_{0}e^{dL}$, where $R_{0}$ is the contact resistance and $d$ is the inverse decay length for tunneling across the SWNT molecule. We find that the Au-SWNT-Au junction has the contact resistance $R_{0}=0.115, 1.88, 2.59 (M\Omega)$ and inverse decay length of $d=1.68,1.68,1.68(1/nm)$ for the Au-SWNT distance of $\Delta L=2.0,2.5,3.0 (\AA)$ respectively. The Ti-SWNT-Ti junction has the contact resistance $R_{0}=0.023, 3.14, 4.95 (M\Omega)$ and inverse decay length of $d=1.51,1.52,1.53(1/nm)$ for the Ti-SWNT distance of $\Delta L=2.0,2.5,3.0 (\AA)$ respectively. Note that the contact resistance increases rapidly with the increasing metal-SWNT distance due to the reduced interface coupling, but the inverse decay length (which is a bulk-related parameter) remains approximately constant. [@Sankey] The total conductance of the metal-SWNT-metal junction at room temperature saturates with increasing SWNT length. This is due to the fact that the potential shift extends over a range comparable to half of the SWNT length until the SWNT reaches the bulk limit. For longer SWNTs, the tunneling is exponentially suppressed while the transport becomes dominated by thermal activation over the potential barrier, whose height is approximately constant for all the SWNTs investigated.
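The extraction of $R_{0}$ and $d$ amounts to a linear fit of $\ln R$ versus $L$. The snippet below demonstrates this with synthetic data generated from the Au-junction fit parameters quoted above; it is only an illustration of the fitting procedure and not a re-analysis of the actual computed resistances.

```python
import numpy as np

def fit_tunneling_resistance(L_nm, R_ohm):
    """Fit R = R0 * exp(d * L): returns (R0 in ohm, d in 1/nm)."""
    d, ln_R0 = np.polyfit(L_nm, np.log(R_ohm), 1)
    return np.exp(ln_R0), d

L = np.array([8.4, 12.6, 16.9, 21.2, 25.4])      # nm, tunneling regime only
R = 0.115e6 * np.exp(1.68 * L)                   # synthetic resistances (ohm)
print(fit_tunneling_resistance(L, R))            # -> (~1.15e5 ohm, ~1.68 /nm)
```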
The length and temperature dependence of the metal-finite SWNT-metal junction conductance can also be seen more clearly from Fig. \[xueFig15\], where we show the conductance of the SWNT junction as a function of temperature for lengths of $2.0$, $8.4$ and $16.9$ (nm) in both the Au-SWNT-Au and Ti-SWNT-Ti junctions and in the strong coupling limit ($\Delta L=2.0\AA$). For the shortest SWNT molecule ($2.0$ nm) studied, both the tunneling and thermal contributions to the conductance at room temperature are significant. So the conductance increases only by a factor of 2 going from $100(K)$ to $250(K)$ for the Ti-SWNT-Ti junction and is almost temperature independent for the Au-SWNT-Au junction. The thermionic-emission contribution begins to dominate over the tunneling contribution at SWNT lengths of $8.4$ (nm) and longer, and correspondingly the increase of conductance with temperature is faster. But overall the temperature dependence is much weaker than the exponential dependence in, e.g., electron transport through planar metal-semiconductor interfaces. [@MS]
The length and temperature dependence of the SWNT molecule junction can be understood rather straightforwardly using the Breit-Wigner formula, [@Landau] first introduced by B[ü]{}ttiker [@ButtikerRTD] for electron transmission through double-barrier tunneling structures. For electron transmission within the energy gap between the highest-occupied-molecular-orbital (HOMO) and lowest-unoccupied-molecular-orbital (LUMO) of the SWNT molecule, we can approximate the energy dependence of the transmission coefficient as $$\label{BW}
T(E) \approx \sum_{i=HOMO,LUMO} \frac{\Gamma_{i;L}\Gamma_{i;R}}
{(E-E_{i})^{2}+1/4(\Gamma_{i;L}+\Gamma_{i;R})^{2}}$$ where $\Gamma_{i;L(R)}$ (i=HOMO,LUMO) is the partial width of resonant transmission through the HOMO (LUMO) level due to elastic tunneling into the left (right) electrode respectively. Note that as the SWNT molecule reaches the bulk limit, the HOMO and LUMO levels give the valence band and conduction band edges respectively. For a given SWNT molecule and metallic electrodes, $\Gamma_{HOMO(LUMO);L(R)}$ is constant. The increase of the transmission coefficient with energy from the Fermi-level $E_{f}$ towards the relevant band edge is thus of Lorentzian form, which is also generally true for nanostructures with only a finite number of conduction channels. From Eq. \[GT\], the temperature dependence of the conductance is thus determined by the tail of the Lorentzian around $E_{f}$ averaged over a range $\sim kT$ due to the thermal broadening, with the corresponding weight $-\frac{df}{dE}=\frac{\exp[(E-E_{f})/kT]}{kT\left\{\exp[(E-E_{f})/kT]+1\right\}^{2}}$. This leads to a much weaker-than-exponential dependence of the junction conductance on temperature, as compared to the metal-semiconductor interface, where the exponential dependence of conductance on temperature is due to the exponential decrease of carrier densities with energy large enough to overcome the interface barrier. [@MSBook] As the length of the SWNT molecule increases, the partial width $\Gamma_{HOMO(LUMO);L(R)}$ due to tunneling into the electrodes decreases exponentially (from the WKB approximation [@ButtikerRTD]), leading to the exponential dependence of the tunneling conductance on junction length.
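The weak temperature dependence can be reproduced in a few lines by averaging the Lorentzian tails of Eq. \[BW\] over the thermal window. The parameters used below (midgap Fermi level, a $\sim 0.9$ eV gap, symmetric level widths) are purely illustrative assumptions and are not taken from the self-consistent calculation.

```python
import numpy as np

G0, kB = 7.748e-5, 8.617e-5          # 2e^2/h (S), Boltzmann constant (eV/K)

def breit_wigner(E, levels, gam_L, gam_R):
    """Two-level (HOMO/LUMO) Breit-Wigner transmission of Eq. [BW]."""
    T = np.zeros_like(E)
    for E_i, gL, gR in zip(levels, gam_L, gam_R):
        T += gL * gR / ((E - E_i) ** 2 + 0.25 * (gL + gR) ** 2)
    return T

def conductance(E, T_of_E, E_f, temp):
    """Landauer conductance with the thermal weight -df/dE at temperature temp (K)."""
    x = np.clip((E - E_f) / (2.0 * kB * temp), -300, 300)
    return G0 * np.trapz(T_of_E / (4.0 * kB * temp * np.cosh(x) ** 2), E)

E = np.linspace(-1.5, 1.5, 4001)     # energies relative to the Fermi level (eV)
T_E = breit_wigner(E, [-0.45, 0.45], [1e-3, 1e-3], [1e-3, 1e-3])
print([conductance(E, T_E, 0.0, T) for T in (100, 200, 300)])
```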
Current-voltage characteristics of the metal-finite SWNT interface
==================================================================
![\[xueFig16\] Current-voltage characteristics of the Au-SWNT-Au (1) and the Ti-SWNT-Ti (2) junction for SWNT lengths of $2.0,8.4,16.9(nm)$ and metal-SWNT distance of $2.5(\AA)$. We consider three different models of electrostatic potential profile within the SWNT junction. ](xueFig16-1.eps "fig:"){height="3.2in" width="4.0in"} ![\[xueFig16\] Current-voltage characteristics of the Au-SWNT-Au (1) and the Ti-SWNT-Ti (2) junction for SWNT lengths of $2.0,8.4,16.9(nm)$ and metal-SWNT distance of $2.5(\AA)$. We consider three different models of electrostatic potential profile within the SWNT junction. ](xueFig16-2.eps "fig:"){height="3.2in" width="4.0in"}
In principle, to calculate the current-voltage characteristics of the metal-SWNT-metal junction, a self-consistent calculation of the charge and potential response will be needed at each bias voltage to take into account the screening of the applied electric field within the junctions. [@XueMol2; @Buttiker93] This is computationally demanding even for the self-consistent tight-binding method due to the large size of the SWNT molecule. Therefore, in this section we calculate the current-voltage characteristics using three different models of the electrostatic potential profiles in the metal-finite SWNT-metal junction in order to illustrate qualitatively the importance of the proper modeling of the self-consistent screening of the applied source/drain bias voltage. [@XueMol2; @Xue99Mol; @DattaMol; @RatnerMol] The fully self-consistent current transport is under investigation and will be reported in future publications.
The three potential response models we choose are: (1) we assume all of the voltage drop occurs at the metal-SWNT interfaces, with the two interfaces contributing equally (Model 1); (2) we assume the voltage drop across the metal-SWNT-metal junction is piece-wise linear (Model 2); (3) we assume the voltage drops linearly across the entire metal-SWNT-metal junction (Model 3). The three potential models chosen here represent the source/drain field configuration in three different limits: In the absence of the SWNT molecule, we are left with the bare (planar) source/drain tunnel junction. For ideal infinitely conducting electrodes, the voltage drop will be linear with constant electric field across the source/drain junction. In general, sandwiching the SWNT molecule between the two electrodes leads to a screening effect. If we neglect entirely the screening of the applied source/drain field by the SWNT molecule, we arrive at potential model 3. If the nanotube is infinitely conducting, we arrive at potential model 1. In practice, neither the electrodes nor the SWNT is infinitely conducting, and the voltage drop can occur both across the metal-SWNT interface and inside the SWNT. Since the potential variation will be largest close to the interface for the homogeneous SWNT assumed here, for model 2 we assume the potential profile is such that the magnitude of the field across the first unit cell of the SWNT at the two ends is $10$ times that in the interior of the SWNT molecule. Note that we have neglected the electrostatic potential variation in the direction perpendicular to the source/drain field. For SWNTs with cylindrical structure, this can be important in a fully self-consistent analysis of the nonlinear current-voltage characteristics, as we have seen in the previous sections. The three potential models chosen here are merely used to demonstrate the importance of the fully self-consistent study.
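The three profiles can be written down explicitly; a sketch is given below, with the electrodes held at $\pm V/2$. The width of the high-field end segments (one unit cell, taken here as $0.43$ nm) and the overall sign convention are assumptions of the sketch rather than details of the self-consistent calculation.

```python
import numpy as np

def bias_profile(z, z_L, z_R, V, model, ramp=0.43):
    """Applied-bias potential along the junction axis z (nm) between the SWNT ends z_L, z_R.

    model 1: the full drop occurs at the two interfaces (half at each)
    model 2: piecewise linear, field in the first/last `ramp` nm 10x the interior field
    model 3: linear drop across the entire junction
    """
    z = np.asarray(z, dtype=float)
    if model == 1:
        return np.where(z <= z_L, 0.5 * V, np.where(z >= z_R, -0.5 * V, 0.0))
    if model == 3:
        return 0.5 * V - V * np.clip((z - z_L) / (z_R - z_L), 0.0, 1.0)
    L_mid = (z_R - z_L) - 2.0 * ramp
    F_mid = V / (20.0 * ramp + L_mid)          # interior field, chosen so the total drop is V
    F_end = 10.0 * F_mid                       # field within the two end unit cells
    zc = np.clip(z, z_L, z_R)
    v = 0.5 * V - F_end * ramp - F_mid * (zc - z_L - ramp)          # interior branch
    v = np.where(zc < z_L + ramp, 0.5 * V - F_end * (zc - z_L), v)  # left end segment
    v = np.where(zc > z_R - ramp, -0.5 * V + F_end * (z_R - zc), v) # right end segment
    return v
```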
![\[xueFig17\] (Color online) Three-dimensional plot of the local density of states at the Au-SWNT-Au (a) and Ti-SWNT-Ti (b) junctions as a function of position along the NT axis for SWNT length of $25.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$ at source/drain bias voltage of $0.5 (V)$. We assume the voltage drops linearly across the SWNT junction (potential model 3). ](xueFig17-1.eps "fig:"){height="3.2in" width="5.0in"} ![\[xueFig17\] (Color online) Three-dimensional plot of the local density of states at the Au-SWNT-Au (a) and Ti-SWNT-Ti (b) junctions as a function of position along the NT axis for SWNT length of $25.4(nm)$ and metal-SWNT distance of $\Delta L=2.5(\AA)$ at source/drain bias voltage of $0.5 (V)$. We assume the voltage drops linearly across the SWNT junction (potential model 3). ](xueFig17-2.eps "fig:"){height="3.2in" width="5.0in"}
The calculated current-voltage (I-V) characteristics of the metal-SWNT-metal junctions for SWNT lengths of $2.0,8.4,16.9(nm)$ and metal-SWNT distance of $2.5(\AA)$ are plotted in Fig. \[xueFig16\] for both junctions. For electrostatic potential models 2 and 3, the I-V characteristics are obtained by superposing the assumed electrostatic potential profile onto the Hamiltonian of the equilibrium junction and evaluating its matrix elements by direct numerical integration. We find that as the length of the SWNT increases, the three different models of the electrostatic potential response lead to qualitatively different current-voltage characteristics in both the magnitude of the current and its voltage dependence. This is because current transport is dominated by the thermal-activation contribution for all the SWNT molecules investigated except the shortest ones. For the Au-SWNT-Au junction, we find that potential models 2 and 3 give qualitatively similar I-V characteristics, indicating that the potential drop within the SWNT bulk is important. But for the Ti-SWNT-Ti junction, we find that potential models 1 and 3 give qualitatively similar I-V characteristics for SWNTs longer than $2.0(nm)$, indicating instead that the potential drop across the metal-SWNT interface is important.
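For completeness, once the bias-dependent transmission has been recomputed with the chosen potential model superposed on the Hamiltonian, the current at each bias point follows from the standard Landauer expression sketched below. The symmetric splitting of the bias between the two electrochemical potentials is an assumption of the sketch.

```python
import numpy as np

kB, PREF = 8.617e-5, 7.748e-5   # Boltzmann constant (eV/K); 2e/h in A/eV (equals 2e^2/h in S)

def fermi(E, mu, temp):
    return 1.0 / (np.exp(np.clip((E - mu) / (kB * temp), -300, 300)) + 1.0)

def current(E, T_of_E_at_V, E_f, V, temp=300.0):
    """I(V) = (2e/h) Int T(E,V) [f_L(E) - f_R(E)] dE, with mu_{L,R} = E_f +/- V/2."""
    f_L = fermi(E, E_f + 0.5 * V, temp)
    f_R = fermi(E, E_f - 0.5 * V, temp)
    return PREF * np.trapz(T_of_E_at_V * (f_L - f_R), E)
```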
The contact dependence of the source/drain field effect can also be seen more clearly by analyzing its effect on the SWNT electronic structure from Fig. \[xueFig17\], where we show the three-dimensional plot of the LDOS of the SWNT within the Au-SWNT-Au and Ti-SWNT-Ti junctions at applied bias voltage of $0.5 (V)$ and assuming potential model 3. Since for the equilibrium SWNT junction, the potential variation is appreciable over a length scale comparable to half of the SWNT length and up to $\sim 10 (nm)$, both the magnitude and the voltage-dependence of the current will be sensitive to the spatial variation of the potential response to the applied voltage over the same length scale, which may have different effects on the SWNT band structure depending on the metallic electrodes used (Fig. \[xueFig17\]). Therefore accurate modeling of this long-range potential variation at the metal-SWNT interface will be critical for evaluating the current-transport mechanism of the nanoscale SWNT devices.
Conclusion
==========
The rapid development of single-wall carbon nanotube-based device technology presents opportunities both for exploring novel device concepts based on atomic-scale nanoengineering techniques and for examining the physical principles of nanoelectronics from the bottom-up atomistic approach. As the first example of the device physics problems raised in this context, we examine electron transport through metal-SWNT interface when the finite SWNT is contacted to the metal surfaces through the dangling bonds at the end, which presents an atomic-scale analogue to the planar metal-semiconductor interface. Due to the quasi-one-dimensional geometry of the SWNTs, a correct understanding of the physical mechanisms involved requires an atomistic analysis of the electronic processes in the configuration of the metal-SWNT-metal junctions.
We have presented in this paper such a microscopic study of the electronic and transport properties of metal-SWNT interfaces, as the length of the finite SWNT varies from the molecular limit to the bulk limit and the strength of the interface coupling varies from the strong coupling to the weak coupling limit. Our models are based on a self-consistent tight-binding implementation of the recently developed self-consistent matrix Green’s function (SCMGF) approach for modeling molecular electronic devices, which includes an atomistic description of the SWNT electronic structure and the three-dimensional electrostatics of the metal-SWNT interface, and is applicable to arbitrary nanostructured devices within the coherent transport regime. We present a bottom-up analysis of the nature of the Schottky barrier formation and of the length and temperature dependence of electron transport through the metal-SWNT interfaces, which show quite different behavior compared to planar metal-semiconductor interfaces, due to the confined cylindrical geometry and the finite number of conduction channels within the SWNT junctions. We find that the current-voltage characteristics of the metal-SWNT-metal junctions depend sensitively on the electrostatic potential profile across the SWNT junction, which indicates the importance of self-consistent modeling of the long-range potential variation at the metal-SWNT interface for quantitative evaluation of device characteristics.
Much of the current interest in the Schottky barrier effect at the metal-SWNT interface is stimulated by the controversial role it plays in the operation of carbon nanotube field-effect transistors (CNTFET), [@AvFET; @DaiFET; @McEuFET] where different contact schemes and metallic electrodes have been used. In general, the operation of a CNTFET will be determined by the combined gate and source/drain voltage effect on the Schottky barrier shape at the metal-SWNT interface, which may depend on the details of the metal-SWNT contact geometry, nanotube diameter/chirality and temperature/voltage range. Correspondingly, an atomic-scale understanding of the gate modulation effect within the metal-insulator-SWNT capacitor configuration will also be needed, similar to the planar metal-oxide-semiconductor structure. [@MOSBook] We believe that detailed knowledge of the electronic processes within both the metal-SWNT-metal junction and the metal-insulator-SWNT capacitor is needed before a clear and unambiguous picture of the physical principles governing the operation of the CNTFET can emerge. In particular, preliminary theoretical results on carbon-nanotube field-effect transistors show that for a SWNT molecule end-contacted to the electrodes, the nanotube transistor functions through the gate modulation of the Schottky barrier at the metal-SWNT interface (in agreement with recent experiments [@AvFET]), which becomes more effective as the length of the SWNT molecule increases. Further analysis that treats both the gate and source/drain fields self-consistently within the SWNT junctions is thus needed to achieve a thorough understanding of SWNT-based nanoelectronic devices.
This work was supported by the DARPA Moletronics program, the NASA URETI program, and the NSF Nanotechnology Initiative.
Author to whom correspondence should be addressed. S.M. Sze, *Semiconductor Devices: Physics and Technology*, 2nd edition (Wiley, New York, 2002). L.L. Chang, L. Esaki and R. Tsu, Appl. Phys. Lett. [**24**]{}, 593 (1974). W. Shockley, Proc. IRE [**40**]{}, 1289 (1952). W. Shockley, *Electrons and Holes in Semiconductors* (Van Nostrand, New York, 1950); J.M. Luttinger and W. Kohn, Phys. Rev. [**97**]{}, 869 (1955). S.M. Sze, *Physics of Semiconductor Devices*, 2nd edition (Wiley, New York, 1981). *Hot Carrier in Semiconductor Nanostructures: Physics and Applications*, edited by J. Shah (Academic Press, San Diego, 1992); *Quantum Transport in Semiconductors*, edited by D.K. Ferry and C. Jacoboni (Plenum, New York, 1992). R.H. Dennard, F.H. Gaensslen, H.-N. Yu, V.L. Rideout, E. Bassous, and A.R. Leblanc, IEEE J. Solid-State Circuits [**SC-9**]{}, 256 (1974); D.J. Frank, R.H. Dennard, E. Nowalk, P.M. Solomon, Y. Taur, and H.-S.P. Wong, Proc. IEEE [**89**]{}, 259 (2001). W. Kohn and J.M. Luttinger, Phys. Rev. [**108**]{}, 590 (1957); W. Shockley, Bell Syst. Tech. J. [**28**]{}, 435 (1949); W. van Roosbroeck, *ibid.* [**29**]{}, 560 (1950); R. Stratton, Phys. Rev. [**126**]{}, 2002 (1962). W. H[ä]{}nsch, *The Drift Diffusion Equation and Its Applications in MOSFET Modeling* (Springer, Wien, 1991); M. Lundstrom, *Fundamentals of Carrier Transport*, 2nd edition (Cambridge University Press, Cambridge, 2000). F.S. Khan, J.H. Davies, and J.W. Wilkins, Phys. Rev. B [**36**]{}, 2578 (1987); L. Reggiani, P. Lugli, and A.P. Jauho, *ibid.* [**36**]{}, 6602 (1987); P. Lipavský, F.S. Khan, F. Abdolsalami, J.W. Wilkins, *ibid.* [**43**]{}, 4885 (1991). M. Lundstrom, IEEE Electron Device Lett. [**18**]{}, 361 (1997); S. Datta, F. Assad, and M.S. Lundstrom, Superlatt. Microsctruc. [**23**]{}, 771 (1997). S. Iijima and T. Ichihashi, Nature [**363**]{}, 603 (1993). C. Dekker, Phys. Today [**52**]{}(5), 22 (1999). R. Saito, G. Dresselhaus and M.S. Dresselhaus, *Physical Properties of Carbon Nanotubes* (Imperial College Press, London, 1998); C.T. White and J.W. Mintmire, Nature [**394**]{}, 29 (1998). P.L. McEuen, M.S. Fuhrer, and H. Park, IEEE Trans. Nanotech. [**1**]{}, 78 (2002); Ph. Avouris, J. Appenzeller, R. Martel, and S.J. Wind, Proc. IEEE [**91**]{}, 1772 (2003). S.J. Tans, A.R.M. Verschuen and C. Dekker, Nature [**393**]{}, 49 (1998); A. Bachtold, P. Hadley, T. Nakanishi and C. Dekker, Science [**294**]{}, 1317 (2001); S.G. Lemay, J.W. Janssen, M. van den Hout, M. Mooij, M.J. Broikowski, P.A. Willis, R.E. Smalley, L.P. Kouwenhoven and C. Dekker, Nature [**412**]{}, 617 (2001). R. Martel, T. Schmidt, H.R. Shea, T. Hertel and Ph. Avouris, Appl. Phys. Lett. [**73**]{}, 2447 (1998); R. Martel, V. Derycke, C. Lavoie, J. Appenzeller, K.K. Chan, J. Tersoff and Ph. Avouris, Phys. Rev. Lett. [**87**]{}, 256805 (2001). M. Bockrath, D.H. Cobden, J. Lu, A.G. Rinzler, R.E. Smalley, L. Balents and P.L. McEuen, Nature [**397**]{}, 598 (1999); J. Park and P. McEuen, Appl. Phys. Lett. [**79**]{}, 1363 (2001). J. Kong, H.T. Soh, A. Cassell, C.F. Quate, and H. Dai, Nature [**395**]{}, 878 (1998); C. Zhou, J. Kong, and H. Dai, Appl. Phys. Lett. [**76**]{}, 1597 (2000); C. Zhou, J. Kong, E. Yenilmez, and H. Dai, Science [**290**]{}, 1552 (2000). J. Hone, B. Batlogg, Z. Benes, A.T. Johson, and J.E. Fischer, Science [**289**]{}, 1730 (2000); M. Freitag, M. Radosavljevic, Y. Zhou, A.T. Johnson and W.F. Smith, Appl. Phys. Lett. [**79**]{}, 3326 (2001). M. S. Gudiksen, L. J. Lauhon, J. Wang, D. Smith, and C. M. 
Lieber, Nature [**415**]{}, 617 (2002); Y. Xia, P. Yang, Y. Sun, Y. Wu, B. Mayers, B. Gates, Y. Yin, F. Kim, and H. Yan, Adv. Mater. [**15**]{}, 353 (2003). Y. Xue and S. Datta, Phys. Rev. Lett. [83]{}, 4844 (1999); Y. Xue and S. Datta, in *Science and Applications of Nanotubes*, edited by D. Tománek and R. Enbody (Kluwer, New York, 2000). F. Le[ó]{}nard and J. Tersoff, Phys. Rev. Lett. [**83**]{}, 5174 (1999); [**84**]{}, 4693 (2000). A.A. Odintsov, Phys. Rev. Lett. [85]{}, 150 (2000). C.L. Kane and E.J. Mele, Appl. Phys. Lett. [**78**]{}, 114 (2001). T. Nakanishi, A. Bachtold and C. Dekker, Phys. Rev. B [**66**]{}, 73307 (2002). E.H. Rhoderick and R.H. Williams, *Metal-Semiconductor Contacts*, 2nd edition (Clarendon Press, Oxford, 1988); H.K. Henisch, *Semiconductor Contacts* (Clarendon Press, Oxford, 1984). P.-W. Chiu, M. Kaempgen, and S. Roth, Phys. Rev. Lett. [**92**]{}, 246802 (2004); G. Chen, S. Bandow, E.R. Margine, C. Nisoli, A.N. Kolmogorov, V.H. Crespi, R. Gupta, G.U. Sumanasekera, S. Iijima, and P.C. Eklund, *ibid.* [**90**]{}, 257403 (2003). E. D. Minot, Y. Yaish, V. Sazonova, J.-Y. Park, M. Brink, and P.L. McEuen, Phys. Rev. Lett. [**90**]{}, 156401 (2003); J. Cao, Q. Wang, and H. Dai, *ibid.* [**90**]{}, 157601 (2003). V. Derycke, R. Martel, J. Appenzeller and Ph. Avouris, Appl. Phys. Lett. [**80**]{}, 2773 (2002); S. Heinze, J. Tersoff, R. Martel, V. Derycke, J. Appenzeller, and Ph. Avouris, Phys. Rev. Lett. [**89**]{}, 106801 (2002); S.J. Wind, J. Appenzeller, and Ph. Avouris, *ibid.* [**91**]{}, 58301 (2003). A. Javey, J. Guo, Q. Wang, M. Lundstrom, and H. Dai, Nature [**424**]{}, 654 (2003); J. Tersoff, *ibid.* [**424**]{}, 622 (2003). Y. Yaish, J.-Y. Park, S. Rosenblatt, V. Sazonova, M. Brink, and P.L. McEuen, Phys. Rev. Lett. [**92**]{}, 46401 (2004). W. M[ö]{}nch, Rep. Prog. Phys. [**53**]{}, 221 (1990); See also W. M[ö]{}nch, *Semiconductor surfaces and interfaces*, 2nd edition (Springer, Berlin, 1995). G. Margaritondo, Rep. Prog. Phys. [**62**]{}, 765 (1999); M. Peressi, N. Binggeli, and A. Baldereschi, J. Phys. D [**31**]{}, 1273 (1998). W. M[ö]{}nch, J. Vac. Sci. Technol. B [**17**]{}, 1867 (1999); R.T. Tung, Phys. Rev. Lett. [**84**]{}, 6078 (2000). F. Berz, Solid-State Electron. [**28**]{}, 1007 (1985); C.M. Maziar and M.S. Lundstrom, Electron. Lett. [**23**]{}, 61 (1987). R. Landauer, Physica Scripta [**T42**]{}, 110 (1992); J. Phys.: Condens. Matter [**1**]{}, 8099 (1989). T. Ando and T. Nakanishi, J. Phys. Soc. Jpn. [**67**]{}, 1704 (1998); T. Ando, T. Nakanishi, and R. Saito, *ibid.* [**67**]{}, 2857 (1998). C.L. Kane and E.J. Mele, Phys. Rev. Lett. [**78**]{}, 1932 (1997); Z. Yao, C.L. Kane, and C. Dekker, *ibid.* [**84**]{}, 2941 (2000). C.T. White and T.N. Todorov, Nature [**393**]{}, 240 (1998). J.-Y. Park, S. Rosenblatt, Y. Yaish, V.Sazonova, H. Ustunel, S. Braig, T. A. Arias, P. Brouwer and P.L. McEuen, Nano Lett. [**4**]{}, 517 (2004); A. Javey, J. Guo, M. Paulsson, Q. Wang, D. Mann, M. Lundstrom and H. Dai, Phys. Rev. Lett. [**92**]{}, 106804 (2004). R.W. Keyes, Proc. IEEE [**89**]{}, 227 (2001); [**63**]{}, 740 (1975). J. Guo, S. Datta, M. Lundstrom, M. Brink, P. McEuen, A. Javey, H. Dai, H. Kim, and P. McIntyre, in *IEDM Tech. Dig.*, Dec. 2002, p. 29.3. D. Orlikowski, H. Mehrez, J. Taylor, H. Guo, J. Wang, and C. Roland, Phys. Rev. B [**63**]{}, 155412 (2001). Y. Xue and M.A. Ratner, Appl. Phys. Lett. [**83**]{}, 2429 (2003). Y. Xue and M.A. Ratner, Phys. Rev. B. [**69**]{}, 161402(R) (2004). S. Dag, O. G[ü]{}lseren, S. Ciraci, and T. 
Yildirim, Appl. Phys. Lett. [**83**]{}, 3180 (2003). For recent reviews, see M.A. Reed, Proc. IEEE [**97**]{}, 652 (1999); C. Joachim, J.K. Gimzewski and A. Aviram, Nature [**408**]{}, 541 (2000); A. Nitzan and M.A. Ratner, Science [**300**]{}, 1384 (2003); J.R. Heath and M.A. Ratner, Phys. Today [**56**]{}(5), 43 (2003) and references thererein. Y. Xue, S. Datta and M. A. Ratner, J. Chem. Phys. [**115**]{}, 4292 (2001). Y. Xue and M.A. Ratner, Phys. Rev. B [**68**]{}, 115406 (2003); [**68**]{}, 115407 (2003); [**69**]{}, 85403 (2004). C.W.J. Beenakker and H. van Houten, Solid State Phys. [**44**]{}, 1 (1991); Y. Imry and R. Landauer, Rev. Mod. Phys. [**71**]{}, S306 (1999); Y. Imry, *Introduction to Mesoscopic Physics*, 2nd edition (Oxford University Press, Oxford, 2002). M. B[ü]{}ttiker, Phys. Rev. Lett. 57, 1761-1764 (1986). R. Landauer, IBM J. Res. Develop. [**32**]{}, 306 (1988); M. B[ü]{}ttiker, *ibid.* [**32**]{}, 317 (1988). Y. Xue, S. Datta and M. A. Ratner, J. Chem. Phys. [**115**]{}, 4292 (2001); Chem. Phys. [**281**]{}, 151 (2002); Y. Xue, Ph.D. thesis, School of Electrical and Computer Engineering, Purdue University (2000). J. Taylor, H. Guo, and J. Wang, Phys. Rev. B [**63**]{}, 245407 (2001); C.-C. Kaun, B. Larade, H. Mehrez, J. Taylor, and H. Guo, Phys. Rev. B [**65**]{}, 205416 (2002). S. Datta, *Electron Transport in Mesoscopic Systems* (Cambridge University Press, Cambridge, 1995); D.K. Ferry and S.M. Goodnick, *Transport in Nanostructures* (Cambridge University Press, Cambridge, 1997). Y. Meir and N. S. Wingreen, Phys. Rev. Lett. [**68**]{}, 2512 (1992); A.P. Jauho, N.S. Wingreen and Y. Meir, Phys. Rev. B [**50**]{}, 5528 (1994); H. Haug and A-P. Jauho, *Quantum Kinetics in Transport and Optics of Semiconductors* (Springer-Verlag, Berlin, 1996). *Theory of The Inhomogeneous Electron Gas*, edited by S. Lundqvist and N.H. March (Plenum Press, New York, 1983); R.M. Dreizler and E.K.U. Gross, *Density Functional Theory: An Approach to the Quantum Many-Body Problem* (Springer-Verlag, Berlin, 1990). C. M. Goringe, D. R. Bowler and E. Hern[á]{}ndez, Rep. Prog. Phys. [**60**]{}, 1447 (1997). R. Lake, G. Klimeck, R. C. Bowen and D. Jovanovic, J. Appl. Phys. [**81**]{}, 7845 (1997). Y. Xue, S. Datta, S. Hong, R. Reifenberger, J.I. Henderson, and C.P. Kubiak, Phys. Rev. B [**59**]{}, 7852 (1999). C. Joachim and J.F. Vinuesa, Europhys. Lett. [**33**]{}, 635 (1996); M. Magoga and C. Joachim, Phys. Rev. B [**59**]{}, 16011 (1999). W. Tian, S. Datta, S. Hong, R. Reifenberger, J.J. Henderson and C.P. Kubiak, J. Chem. Phys. [**109**]{}, 2874 (1998). V. Mujica, A.E. Roitberg, and M.A. Ratner, J. Chem. Phys. [**112**]{}, 6834 (2000). M. Elstner, D. Porezag, G. Jungnickel, J. Elsner, M. Haugk, T. Frauenheim, S. Suhai and G. Seifert, Phys. Rev. B [**58**]{}, 7260 (1998); T. Frauenheim, G. Seifert, M. Elstner, T. Niehaus, C. K[ö]{}hler,M. Amkreutz, M. Sternberg, Z. Hajnal, A.D. Carlo, and S. Suhai, J. Phys.: Condens. Matter [**14**]{}, 3015 (2002). Y. Xue and M.A. Ratner, Mater. Res. Soc. Symp. Proc. [**734**]{}, B6.8 (2003). J.D. Jackson, *Classical Electrodynamics*, 2nd edition (Wiley, New York, 1975). The image plane is chosen to lie at the second atomic layer under the metal surface to avoid the divergence very close to the surface, as suggested by Lang and Kohn in their classical treatment of metal surfaces. See N.D. Lang and W. Kohn, Phys. Rev. B [**1**]{}, 4555 (1970); *ibid.* [**7**]{}, 3541 (1973). A. R. Williams, P. J. Feibelman and N. D. Lang, Phys. Rev. 
B [**26**]{}, 5433 (1982); R. Zeller, J. Deuta and P. H. Dederichs, Solid State Commun. [**44**]{}, 993 (1982). The work functions of gold and titanium used here are those of polycrystalline materials since the metallic electrodes used in most experiments are almost never close to being single-crystalline. *CRC Handbook of Chemistry and Physics* (CRC Press, Boca Raton, 1994). R. Hoffmann, Rev. Mod. Phys. [**60**]{}, 601 (1988) and references thererein. It has been shown by A. Rochefort, D.R. Salahub, and Ph. Avouris (J. Phys. Chem. B [**103**]{}, 641 (1999)) that EHT gives a good description of SWNT band structure as compared to the local-density-functional theory with similar minimal valence basis sets. D. A. Papaconstantopoulos, *Handbook of the Band Structure of Elemental Solids* (Plenum Press, New York, 1986). D.J. Chadi and M.L. Cohen, Phys. Stat. Sol. (b) [**68**]{}, 405 (1975). J. Tersoff and W.A. Harrison, Phys. Rev. Lett. [**58**]{}, 2367 (1987); I. Lefebvre, M. Lannoo, C. Priester, G. Allan, and C. Delerue, Phys. Rev. B [**36**]{}, 1336 (1987). J. Tersoff, Phys. Rev. Lett. [**52**]{}, 465 (1984); J. Tersoff, in *Heterojunction Band Structure Discontinuities: Physics and Device Applications*, edited by F. Capasso and G. Margaritondo (North-Holland, Amsterdam, 1987). J. Friedel, Phil. Mag. [**43**]{}, 153 (1952). J. Bardeen, Phys. Rev. [**71**]{}, 717 (1947); V. Heine, Phys. Rev. [**138**]{}, A1689 (1965); C. Tejedor, F. Flores, and E. Louis, J. Phys. C [**10**]{}, 2163 (1977). S.G. Louie and M.L. Cohen, Phys. Rev. B [**13**]{}, 2461 (1976); S.G. Louie, J.R. Chelikowsky, and M.L. Cohen, *ibid.* [**15**]{}, 2154 (1977). N.D. Lang and Ph. Avouris, Phys. Rev. Lett. [**84**]{}, 358 (2000). Note that for the metal-SWNT distance of $\Delta L=2.5(\AA)$, there is a small difference between the calculated transfered charge and potential change reported here and those in Ref. because we use a refined integration grid here. None of the conclusions reached there is affected by this difference, which also provides a numerical test for the SCTB model. F. Le[ó]{}nard and J. Tersoff, Appl. Phys. Lett. [**81**]{}, 4835 (2002). M. Magoga and C. Joachim, Phys. Rev. B [**57**]{}, 1820 (1998). J.K. Tomfohr and O.F. Sankey, Phys. Rev. B [**65**]{}, 245105 (2002). Although this paper was intended for devices based on short organic molecules, it is more appropriate for the SWNT junctions considered here, which are composed of repeating units. See also G. Fagas and A. Kambili, cond-mat/0403694. C.R. Crowell and S.M. Sze, Solid-St. Electron. [**9**]{}, 1035 (1966); F.A. Padovani and R. Stratton, *ibid.* [**9**]{}, 695 (1966); K. Shenai and R.W. Dutton, IEEE Trans. Electron Devices [**35**]{}, 468 (1988). L.D. Landau and E.M. Lifshitz, *Quantum Mechanics: Non-Relativistic Theory* (Pergamon Press, Oxford, 1977). M. B[ü]{}ttiker, IBM J. Res. Develop. [**32**]{}, 63 (1988). M. B[ü]{}ttiker, J. Phys.: Condens. Matter [**5**]{}, 9361 (1993); T. Christen and M. Büttiker, Europhys. Lett. [**35**]{}, 523 (1996). E.H. Nicollian and J.R. Brews, *MOS (Metal Oxide Semiconductor) Physics and Technology* (Wiley, New York, 1982).
---
abstract: 'The accuracy of different transfer matrix approaches, widely used to solve the stationary effective mass Schrödinger equation for arbitrary one-dimensional potentials, is investigated analytically and numerically. Both the case of a constant and that of a position dependent effective mass are considered. Comparisons with a finite difference method are also performed. Based on analytical model potentials as well as self-consistent Schrödinger-Poisson simulations of a heterostructure device, it is shown that a symmetrized transfer matrix approach yields an accuracy similar to that of the Airy function method at a significantly reduced numerical cost, moreover avoiding the numerical problems associated with Airy functions.'
author:
- 'Christian Jirauschek (Dated: 16 June 2011, published as IEEE J. Quantum Electron. 45, 1059–1067 (2009)) ©2009 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [^1] [^2]'
title: Accuracy of Transfer Matrix Approaches for Solving the Effective Mass Schrödinger Equation
---
Quantum effect semiconductor devices, Quantum well devices, Quantum theory, Semiconductor heterojunctions, Eigenvalues and eigenfunctions, Numerical analysis, Tunneling, MOS devices
Introduction {#Intro}
============
Transfer matrix methods provide an important tool for investigating bound and scattering states in quantum structures. They are mainly used to solve the one-dimensional Schrödinger or effective mass equation, e.g., to obtain the quantized eigenenergies in quantum well heterostructures and metal-oxide-semiconductor structures or the transmission coefficient of potential barriers [@1987JAP....61.1497A; @1990IJQE...26.2025J; @2000JAP....87.7931C; @1997plds.book.....D]. Analytical expressions for the transfer matrices are only available in certain cases, as for constant or linear potential sections and potential steps [@1997plds.book.....D]. An arbitrary potential can then be treated by approximating it, for example, in terms of piecewise constant or linear segments, for which analytical transfer matrices exist. For constant potential segments, the matrices are based on complex exponentials [@1987JAP....61.1497A; @1990IJQE...26.2025J], while the linear potential approximation requires the evaluation of Airy functions [@1990IJQE...26.2025J].
Many applications call for highly accurate methods, e.g., quantum cascade laser structures where layer thickness changes by a few Å already lead to significantly modified wavefunctions, resulting in altered device properties [@2005ApPhL..86k1115V; @2007JAP...101h6109J]. Also numerical efficiency is crucial, especially in cases where the Schrödinger equation has to be solved repeatedly. Examples are the shooting method where the eigenenergies of bound states are found by energy scans, or Schrödinger-Poisson solvers working in an iterative manner [@2000JAP....87.7931C]. Besides providing accurate results at moderate computational cost, an algorithm is expected to be numerically robust, and a straightforward implementation is also advantageous.
Besides transfer matrices, other methods are also frequently used, in particular finite difference or finite element schemes [@1998hqd.book.....F; @1990PhRvB..4112047J]. For scattering state calculations, they are complemented by suitable transparent boundary conditions, resulting in the Quantum Transmitting Boundary Method (QTBM) [@1998hqd.book.....F; @1990JAP....67.6353L]. The transfer matrix method tends to be less numerically stable than the QTBM, since for multiple or extended barriers, numerical instabilities can arise due to an exponential blowup caused by roundoff errors [@1998hqd.book.....F]. This issue can, however, be overcome, for example by using a somewhat modified matrix approach, the scattering matrix method [@1988PhRvB..38.9945K]. In this case, the transfer matrices of the individual segments are not used to compute the overall transfer matrix, but rather the scattering matrix of the structure. In addition, transfer matrices have many practical properties, such as their intuitiveness particularly for scattering states, the intrinsic current conservation, and the exact treatment of potential steps, which arise at the interfaces of differing materials. This makes them especially suitable and popular for 1-D heterostructures or metal-oxide-semiconductor structures, providing a simple, accurate and efficient simulation method [@1990IJQE...26.2025J].
As mentioned above, transfer matrices are usually based on a piecewise constant or piecewise linear approximation of an arbitrary potential, giving rise to exponential and Airy function solutions, respectively. The main strength of the Airy function approach is that it provides an exact solution for structures consisting of piecewise linear potentials, and hence only requires few segments for approximating almost linear potentials with sufficient accuracy. On the other hand, Airy functions are much more computationally demanding than exponentials, and also prone to numerical overflow for regions with nearly flat potential [@1996IJQE...32.1093V]. Thus, great care has to be taken to avoid these problems, and to evaluate the Airy functions in an efficient way [@2001JAP....90.6120D].
It would be desirable to combine the advantages of both methods, namely the accuracy of the piecewise linear approximation and the computational convenience of the exponential transfer matrix scheme. In this paper, we evaluate the accuracy and efficiency of the different transfer matrix approaches, taking into account both bound and scattering states. In this context, analytical expressions for the corresponding local discretization error are derived. We furthermore evaluate the different approaches numerically on the basis of an analytically solvable model potential, and also draw comparisons to the QTBM. In particular, we demonstrate that a symmetrized exponential matrix approach is able to provide an accuracy comparable to that of the Airy function method, without having its problems and drawbacks. In our investigation, we will consider both the case of a constant effective mass and the more general case of a position dependent effective mass.
Transfer matrix approach {#sec:tra}
========================
In a single-band approximation, the wavefunction $\psi$ of an electron with energy $E$ in a one-dimensional quantum structure can be described by the effective mass equation $$\left[ -\frac{\hbar^{2}}{2}\partial_{z}\frac{1}{m^{\ast}\left( z\right)
}\partial_{z}+V(z)-E\right] \psi(z)=0. \label{effm}%$$ Here, the effective mass $m^{\ast}$ and the potential $V$ generally depend on the position $z$ in the structure. For applying the transfer matrix scheme, we divide the structure into segments, see Fig. \[transfer\], which can vary in length. Potential and effective mass discontinuities can be treated exactly in transfer matrix approaches by applying corresponding matching conditions. To take advantage of this fact and obtain optimum accuracy, the segments should be chosen so that band edge discontinuities, as introduced by heterostructure interfaces, do not lie within a segment, but rather at the border between two segments.
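The segmentation itself is straightforward to set up in code. The following minimal NumPy sketch (the function name `segment_grid` and the choice of a target segment length are our own, not part of any published implementation) builds a list of segment boundaries on an interval and snaps every heterostructure interface to a boundary, so that potential and mass steps are later handled by the exact matching conditions rather than falling inside a segment:

```python
import numpy as np

def segment_grid(z_interfaces, z_min, z_max, dz_target):
    """Segment boundaries z_0 < z_1 < ... < z_N on [z_min, z_max].

    Every interface position in z_interfaces becomes a segment boundary;
    between interfaces, segments of roughly dz_target length are used."""
    edges = [z_min] + sorted(z for z in z_interfaces if z_min < z < z_max) + [z_max]
    pieces = []
    for a, b in zip(edges[:-1], edges[1:]):
        n = max(1, int(np.ceil((b - a) / dz_target)))
        pieces.append(np.linspace(a, b, n + 1)[:-1])   # keep left edge of each segment
    pieces.append(np.array([z_max]))
    return np.concatenate(pieces)

# e.g. a 10 nm structure with interfaces at 2 nm and 6 nm, ~0.2 nm segments:
# z = segment_grid([2.0, 6.0], 0.0, 10.0, 0.2)
```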
![Various transfer matrix schemes applied to a segmented potential. Shown are the exact (solid line) and approximated (dashed line) potentials. (a) Piecewise constant potential approximation. (b) Piecewise linear approximation. (c) Piecewise constant approximation for the symmetrized transfer matrix.[]{data-label="transfer"}](jirau1.eps){width="3.5in"}
Conventional transfer matrices {#sub:tra1}
------------------------------
For the piecewise constant potential approach (Fig. \[transfer\](a)), the potential and effective mass in each segment $j$ are approximated by constant values, e.g., $V_{j}=V\left( z_{j}\right) $, $m_{j}^{\ast}=m^{\ast}\left(
z_{j}\right) $ for $z_{j}\leq z<z_{j}+\Delta_{j}=z_{j+1}$, and a jump $V_{j}\rightarrow V_{j+1}$, $m_{j}^{\ast}\rightarrow m_{j+1}^{\ast}$ at the end of the segment [@1990IJQE...26.2025J]. The solution of (\[effm\]) is for $z_{j}\leq z<z_{j+1}$ then given by$$\psi\left( z\right) =A_{j}\exp\left[ \mathrm{i}k_{j}\left( z-z_{j}\right)
\right] +B_{j}\exp\left[ -\mathrm{i}k_{j}\left( z-z_{j}\right) \right] ,
\label{psi}%$$ where $k_{j}=\sqrt{2m_{j}^{\ast}\left( E-V_{j}\right) }/\hbar$ is the wavenumber (for $E<V_{j}$, we obtain $k_{j}=\mathrm{i}\kappa_{j}%
=\mathrm{i}\sqrt{2m_{j}^{\ast}\left( V_{j}-E\right) }/\hbar$) [@1990IJQE...26.2025J]. The matching conditions for the wavefunction at the potential step read$$\begin{aligned}
\psi\left( z_{0}+\right) & =\psi\left( z_{0}-\right) ,\nonumber\\
\left[ \partial_{z}\psi\left( z_{0}+\right) \right] /m^{\ast}\left(
z_{0}+\right) & =\left[ \partial_{z}\psi\left( z_{0}-\right) \right]
/m^{\ast}\left( z_{0}-\right) , \label{match}%\end{aligned}$$ where $z_{0}+$ and $z_{0}-$ denote the positions directly to the right and left of the step, here located at $z_{0}=z_{j+1}$ [@1997plds.book.....D]. The amplitudes $A_{j+1}$ and $B_{j+1}$ are related to $A_{j}$ and $B_{j}$ by$$\left(
\begin{array}
[c]{c}%
A_{j+1}\\
B_{j+1}%
\end{array}
\right) =T_{j,j+1}\left(
\begin{array}
[c]{c}%
A_{j}\\
B_{j}%
\end{array}
\right) , \label{mat}%$$ with the transfer matrix $$\begin{aligned}
T_{j,j+1} & =T_{j\rightarrow j+1}T_{j}\left( \Delta_{j}\right) \nonumber\\
& =\left(
\begin{array}
[c]{cc}%
\frac{\beta_{j+1}+\beta_{j}}{2\beta_{j+1}}e^{\mathrm{i}k_{j}\Delta_{j}} &
\frac{\beta_{j+1}-\beta_{j}}{2\beta_{j+1}}e^{-\mathrm{i}k_{j}\Delta_{j}}\\
\frac{\beta_{j+1}-\beta_{j}}{2\beta_{j+1}}e^{\mathrm{i}k_{j}\Delta_{j}} &
\frac{\beta_{j+1}+\beta_{j}}{2\beta_{j+1}}e^{-\mathrm{i}k_{j}\Delta_{j}}%
\end{array}
\right) . \label{T1}%\end{aligned}$$ Equation (\[T1\]) is the product of the transfer matrix for a flat potential$$T_{j}\left( \Delta_{j}\right) =\left(
\begin{array}
[c]{cc}%
e^{\mathrm{i}k_{j}\Delta_{j}} & 0\\
0 & e^{-\mathrm{i}k_{j}\Delta_{j}}%
\end{array}
\right) ,$$ obtained from (\[psi\]), and the potential step matrix$$T_{j\rightarrow j+1}=\frac{1}{2\beta_{j+1}}\left(
\begin{array}
[c]{cc}%
\beta_{j+1}+\beta_{j} & \beta_{j+1}-\beta_{j}\\
\beta_{j+1}-\beta_{j} & \beta_{j+1}+\beta_{j}%
\end{array}
\right) \label{Tst}%$$ with $\beta_{j}=k_{j}/m_{j}^{\ast}$, derived from (\[match\]) [@1997plds.book.....D]. The relation between the amplitudes at the left and right boundaries of the structure, $A_{0},B_{0}$ and $A_{N},B_{N}$, can be obtained from$$\begin{aligned}
\left(
\begin{array}
[c]{c}%
A_{N}\\
B_{N}%
\end{array}
\right) & =T_{N-1,N}T_{N-2,N-1}\dots T_{0,1}\left(
\begin{array}
[c]{c}%
A_{0}\\
B_{0}%
\end{array}
\right) \nonumber\\
& =\left(
\begin{array}
[c]{cc}%
T_{11} & T_{12}\\
T_{21} & T_{22}%
\end{array}
\right) \left(
\begin{array}
[c]{c}%
A_{0}\\
B_{0}%
\end{array}
\right) , \label{mat2}%\end{aligned}$$ where $N$ is the total number of segments. For bound states, this equation must be complemented by suitable boundary conditions. One possibility is to enforce decaying solutions at the boundaries, $A_{0}=B_{N}=0$, corresponding to $T_{22}=0$ in (\[mat2\]), which is satisfied only for specific energies $E$, the eigenenergies of the bound states [@1990IJQE...26.2025J].
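As an illustration of how the piecewise constant scheme is typically assembled, the following Python/NumPy sketch builds the elementary matrices of (\[T1\]), accumulates the total matrix of (\[mat2\]), and scans the energy for zeros of $T_{22}$ to locate bound states. It is only a sketch: the unit convention ($\hbar^{2}/2m_{e}\approx 0.0381\,$eV$\,$nm$^{2}$), the sampling of $V$ and $m^{\ast}$ at the left edge of each segment, and all function names are our own choices, and no care is taken here to avoid the exponential blowup mentioned in the introduction for extended barriers.

```python
import numpy as np

HBAR2_2ME = 0.0381          # hbar^2/(2 m_e) in eV nm^2 (approximate value)

def k_of(E, V, m_rel):      # wavenumber in nm^-1, purely imaginary below the barrier
    return np.sqrt((E - np.asarray(V)) * m_rel / HBAR2_2ME + 0j)

def T_flat(k, d):           # propagation across a flat segment of length d
    return np.array([[np.exp(1j * k * d), 0.0],
                     [0.0, np.exp(-1j * k * d)]])

def T_step(beta_l, beta_r): # potential/mass step matrix, Eq. (Tst)
    return np.array([[beta_r + beta_l, beta_r - beta_l],
                     [beta_r - beta_l, beta_r + beta_l]]) / (2.0 * beta_r)

def total_matrix(E, z, V, m_rel):
    """Unsymmetrized total transfer matrix of Eq. (mat2) for boundary grid z
    (length N+1) with potential V and relative mass m_rel sampled at z."""
    k = k_of(E, V, m_rel)
    beta = k / np.asarray(m_rel)
    T = np.eye(2, dtype=complex)
    for j in range(len(z) - 1):
        T = T_step(beta[j], beta[j + 1]) @ T_flat(k[j], z[j + 1] - z[j]) @ T
    return T

def bound_states(z, V, m_rel, E_grid):
    """Locate zeros of T_22(E) by scanning and bisection.  Valid for energies
    below the potential at both ends, where T_22 is real up to roundoff."""
    f = lambda E: total_matrix(E, z, V, m_rel)[1, 1].real
    E_grid = np.asarray(E_grid)
    vals = np.array([f(E) for E in E_grid])
    roots = []
    for a, b, fa, fb in zip(E_grid[:-1], E_grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            for _ in range(60):                 # plain bisection
                m = 0.5 * (a + b)
                if fa * f(m) <= 0.0:
                    b = m
                else:
                    a, fa = m, f(m)
            roots.append(0.5 * (a + b))
    return roots
```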
For the piecewise linear potential approach (Fig. 1(b)), the potential in each segment $j$ is linearly interpolated, $V\left( z\right) =V_{j}%
+V_{z,j}\left( z-z_{j}\right) $ for $z_{j}\leq z\leq z_{j}+\Delta
_{j}=z_{j+1}$, with $V_{z,j}=\left( V_{j+1}-V_{j}\right) /\Delta_{j}$. Equation (\[effm\]) can then be solved analytically in terms of the Airy functions $\mathrm{Ai}$ and $\mathrm{Bi}$ [@1990IJQE...26.2025J],$$\psi\left( z\right) =\mathcal{A}_{j}\mathrm{Ai}\left( s_{j}+\frac{z-z_{j}%
}{\ell_{j}}\right) +\mathcal{B}_{j}\mathrm{Bi}\left( s_{j}+\frac{z-z_{j}%
}{\ell_{j}}\right) \label{airy}%$$ for $z_{j}\leq z\leq z_{j+1}$, with $s_{j}=\left( V_{j}-E\right)
/\varepsilon_{j}$ and $\ell_{j}=\varepsilon_{j}/V_{z,j}$, where $\varepsilon
_{j}=\sqrt[3]{\hbar^{2}V_{z,j}^{2}/\left( 2m_{j}^{\ast}\right) }$. We obtain$$\begin{aligned}
\psi_{j+1} & =\mathcal{A}_{j}\mathrm{Ai}(s_{j}+\frac{\Delta_{j}}{\ell_{j}%
})+\mathcal{B}_{j}\mathrm{Bi}(s_{j}+\frac{\Delta_{j}}{\ell_{j}}),\nonumber\\
\psi_{j+1}^{\prime} & =\ell_{j}^{-1}\mathcal{A}_{j}\mathrm{Ai}^{\prime
}(s_{j}+\frac{\Delta_{j}}{\ell_{j}})+\ell_{j}^{-1}\mathcal{B}_{j}%
\mathrm{Bi}^{\prime}(s_{j}+\frac{\Delta_{j}}{\ell_{j}}), \label{airy1}%\end{aligned}$$ and$$\begin{aligned}
\mathcal{A}_{j} & =D_{j}^{-1}\mathrm{Bi}^{\prime}(s_{j})\psi_{j}-D_{j}%
^{-1}\ell_{j}\mathrm{Bi}(s_{j})\psi_{j}^{\prime},\nonumber\\
\mathcal{B}_{j} & =-D_{j}^{-1}\mathrm{Ai}^{\prime}(s_{j})\psi_{j}+D_{j}%
^{-1}\ell_{j}\mathrm{Ai}(s_{j})\psi_{j}^{\prime}, \label{airy2}%\end{aligned}$$ with $D_{j}=\mathrm{Ai}(s_{j})\mathrm{Bi}^{\prime}(s_{j})-\mathrm{Ai}^{\prime
}(s_{j})\mathrm{Bi}(s_{j})$. Here a prime denotes a derivative with respect to the argument of the Airy function (for $\mathrm{Ai}^{\prime}$, $\mathrm{Bi}%
^{\prime}$) or the position $z$ (in all other cases). A position dependent effective mass is treated by assigning a constant value to each segment $j$, for example $m^{\ast}\left( z_{j}\right) $ or preferably $\left[ m^{\ast
}\left( z_{j}\right) +m^{\ast}\left( z_{j+1}\right) \right] /2$ (see appendix), and using the matching conditions (\[match\]) at the boundary between two adjacent segments [@1990IJQE...26.2025J]. A piecewise linear interpolation of $m^{\ast}$ as for the potential is not feasible, since then the solutions of (\[effm\]) cannot be expressed in terms of Airy functions anymore. Equations (\[airy1\]), (\[airy2\]) can again be rewritten as a matrix equation of the form (\[mat\]), allowing us to treat the quantum structure using (\[mat2\]) in a similar manner as described above [@1990IJQE...26.2025J]. Interfaces introducing abrupt potential changes in the quantum structure must be taken into account explicitly in the Airy function approach by employing the matching conditions (\[match\]).
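A single Airy-function propagation step, i.e., Eqs. (\[airy1\])-(\[airy2\]) applied to given values $\psi_{j}$, $\psi_{j}^{\prime}$, might look as follows in Python (using `scipy.special.airy`, which returns Ai, Ai$'$, Bi, Bi$'$). The unit convention and the function name are again our own; the sketch assumes $V_{j+1}\neq V_{j}$ (for a flat segment one falls back to the exponential step) and does nothing to guard against the Bi overflow for nearly flat potentials discussed above.

```python
import numpy as np
from scipy.special import airy

HBAR2_2ME = 0.0381   # hbar^2/(2 m_e) in eV nm^2 (approximate value)

def airy_step(psi, dpsi, V_j, V_jp1, m_rel, dz, E):
    """Propagate (psi, dpsi/dz) across one segment of length dz (nm) on which
    V is interpolated linearly from V_j to V_jp1 (eV) and the effective mass
    m_rel*m_e is held constant; implements Eqs. (airy1)-(airy2)."""
    Vz  = (V_jp1 - V_j) / dz                              # slope in eV/nm
    eps = (HBAR2_2ME / m_rel * Vz ** 2) ** (1.0 / 3.0)    # eV
    ell = eps / Vz                                        # nm, sign follows the slope
    s   = (V_j - E) / eps
    Ai0, Aip0, Bi0, Bip0 = airy(s)
    Ai1, Aip1, Bi1, Bip1 = airy(s + dz / ell)
    D  = Ai0 * Bip0 - Aip0 * Bi0                          # Wronskian (equals 1/pi)
    cA = ( Bip0 * psi - ell * Bi0 * dpsi) / D
    cB = (-Aip0 * psi + ell * Ai0 * dpsi) / D
    return cA * Ai1 + cB * Bi1, (cA * Aip1 + cB * Bip1) / ell
```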
Symmetrized matrix {#sub:tra2}
------------------
In the transfer matrix approach, the amplitudes $A_{N}$ and $B_{N}$ at the right boundary of the structure are related to the values $A_{0}$ and $B_{0}$ at the left boundary by repeatedly applying the transfer matrix. Due to the segmentation of the potential, an error is introduced in (\[mat2\]) for every propagation step from a position $z_{j}$ to $z_{j+1}$, which is typically characterized in terms of the local discretization error (LDE). The LDE is defined as the difference between the exact and computed solution at a position $z_{j+1}$ obtained from a given function value at $z_{j}$. In the appendix, the LDE with respect to the amplitudes $A_{j}$ and $B_{j}$ for the transfer matrix (\[T1\]) is found to be $\mathcal{O}\left( \Delta_{j}%
^{2}\right) $. It can be improved to $\mathcal{O}\left( \Delta_{j}%
^{3}\right) $ by symmetrizing the matrix, i.e., placing the potential step in the middle of the segment, see Fig. 1(c). The resulting transfer matrix is then with $k_{j}^{\pm}=\left( k_{j}\pm k_{j+1}\right) /2$ given by $$\begin{aligned}
T_{j,j+1} & =T_{j+1}\left( \frac{\Delta_{j}}{2}\right) T_{j\rightarrow
j+1}T_{j}\left( \frac{\Delta_{j}}{2}\right) \nonumber\\
& =\left(
\begin{array}
[c]{cc}%
\frac{\beta_{j+1}+\beta_{j}}{2\beta_{j+1}}e^{\mathrm{i}k_{j}^{+}\Delta_{j}} &
\frac{\beta_{j+1}-\beta_{j}}{2\beta_{j+1}}e^{-\mathrm{i}k_{j}^{-}\Delta_{j}}\\
\frac{\beta_{j+1}-\beta_{j}}{2\beta_{j+1}}e^{\mathrm{i}k_{j}^{-}\Delta_{j}} &
\frac{\beta_{j+1}+\beta_{j}}{2\beta_{j+1}}e^{-\mathrm{i}k_{j}^{+}\Delta_{j}}%
\end{array}
\right) ,\label{T2}%\end{aligned}$$ where again $k_{j}=\sqrt{2m_{j}^{\ast}\left( E-V_{j}\right) }/\hbar$, $\beta_{j}=k_{j}/m_{j}^{\ast}$. As in the Airy function approach, interfaces introducing abrupt potential changes in the quantum structure must be dealt with separately by applying the matching conditions; here, the corresponding transfer matrix (\[Tst\]) can be used.
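For completeness, here is a sketch of the symmetrized segment matrix of (\[T2\]); it is a drop-in replacement for the unsymmetrized product $T_{j\rightarrow j+1}T_{j}\left( \Delta_{j}\right)$ in the loop of the earlier sketch (variable names and the complex-wavenumber convention $k=\mathrm{i}\kappa$ for evanescent segments are ours):

```python
import numpy as np

def T_segment_sym(k_j, k_jp1, m_j, m_jp1, dz):
    """Symmetrized transfer matrix of one segment, Eq. (T2): half a flat
    propagation with k_j, the step matrix, then half a flat propagation
    with k_{j+1}.  Same cost as the unsymmetrized matrix, but with a
    third-order local discretization error."""
    beta_j, beta_jp1 = k_j / m_j, k_jp1 / m_jp1
    kp = 0.5 * (k_j + k_jp1)                  # k^+
    km = 0.5 * (k_j - k_jp1)                  # k^-
    return np.array([
        [(beta_jp1 + beta_j) * np.exp( 1j * kp * dz),
         (beta_jp1 - beta_j) * np.exp(-1j * km * dz)],
        [(beta_jp1 - beta_j) * np.exp( 1j * km * dz),
         (beta_jp1 + beta_j) * np.exp(-1j * kp * dz)]]) / (2.0 * beta_jp1)
```

Abrupt band edge discontinuities would still be handled with the separate step matrix (\[Tst\]), as stated above.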
Comparison {#sec:com}
==========
The improved transfer matrix (\[T2\]) can be evaluated at a computational cost comparable to that of the matrix (\[T1\]), but exhibits superior accuracy. As shown in the appendix, the local discretization error with respect to the amplitudes $A_{j}$ and $B_{j}$ is improved from $\mathcal{O}\left( \Delta
_{j}^{2}\right) $ to $\mathcal{O}\left( \Delta_{j}^{3}\right) $ for arbitrary potentials and effective masses, i.e., the same order as for the Airy function approach, which however involves a significantly higher computational effort.
![Exponential model potential with $d=10\,\mathrm{nm}$ and $K=-1/d$, used for evaluating the accuracy of various methods. (a) Barrier. (b) Quantum well.[]{data-label="pot"}](jirau2.eps){width="3.5in"}
In the following, we compare the accuracy of the different methods for an analytically solvable model potential. Here, polynomial test potentials are not suitable for a general discussion since their higher order derivatives identically vanish, which can lead to an increased accuracy in such special cases. Especially triangular or other piecewise linear potentials are obviously inadequate since the Airy function approach then becomes exact. Instead, we choose the exponential ansatz $$V(z)=V_{0}+V_{1}\exp\left( Kz\right) , \label{exppot}%$$ $0\leq z\leq d$ (see Fig. \[pot\]), approaching a linear function for $K\rightarrow0$. Such a potential can for example serve as a model for the effective potential profile in the presence of space charges [@2002IEDL...23..348S; @2005JAP....97f4107N].
Position independent effective mass
-----------------------------------
For now, we assume a constant effective mass $m^{\ast}$. Then, analytical solutions of the form $$\psi=c_{1}\mathrm{J}_{\mu}\left( a\right) +c_{2}\mathrm{Y}_{\mu}\left(
a\right) \label{bess}%$$ exist for the potential (\[exppot\]), with constants $c_{1}$ and $c_{2}$. Here, $\mathrm{J}_{\mu}$ and $\mathrm{Y}_{\mu}$ are Bessel functions of the first and second kind, and the parameters are given by$$\begin{aligned}
\mu & =2\frac{\sqrt{2m^{\ast}\left( V_{0}-E\right) }}{\hbar K},\nonumber\\
a(z) & =2\frac{\sqrt{-2m^{\ast}V_{1}}}{\hbar K}\exp\left( \frac{1}%
{2}Kz\right) .\end{aligned}$$
![Relative error $\varepsilon_{T}=\left| 1-T_{\mathrm{num}}/T\right| $ of the numerically obtained transmission coefficient $T_{\mathrm{num}}$ as a function of the number of segments $N$. The corresponding barrier is shown in Fig. \[pot\](a), the effective mass is assumed to be constant.[]{data-label="Tran"}](jirau3.eps){width="3.5in"}
For our simulations, the different transfer matrix approaches discussed in Section \[sec:tra\] are used to compute an overall matrix based on (\[mat2\]), from which the required quantities can be extracted. First, we investigate the barrier structure shown in Fig. \[pot\](a), which can be characterized in terms of a transmission coefficient $T$, giving the tunneling probability of an electron [@1997plds.book.....D]. The unsymmetrized, symmetrized and Airy function transfer matrix approaches are evaluated, based on the expressions (\[T1\]), (\[T2\]) and (\[airy1\]), respectively; for comparison, also the QTBM result is computed. Assuming an electron energy of $E=0$ and a constant effective mass of $m^{\ast}=0.067\,m_{e}$ corresponding to GaAs, where $m_{e}$ is the electron mass, the exact value obtained by evaluating (\[bess\]) is $T=1.749\times10^{-4}$. Fig. \[Tran\] shows the relative error $\varepsilon_{T}\left( N\right) =\left| 1-T_{\mathrm{num}%
}\left( N\right) /T\right| $ as a function of $N\propto\Delta^{-1}$. Here, $T_{\mathrm{num}}\left( N\right) $ is the numerical result for the transmission coefficient, as obtained by the different methods for a subdivision of the structure into $N$ segments of equal length $\Delta
=d/N\propto N^{-1}$. As can be seen from Fig. \[Tran\], the error scales with $N^{-1}\propto\Delta$ for the unsymmetrized transfer matrix approach and with $N^{-2}\propto\Delta^{2}$ for the other methods. This can easily be understood by means of the local discretization error, which is $\mathcal{O}%
\left( \Delta^{3}\right) $ for the Airy function approach and the symmetrized transfer matrix, and $\mathcal{O}\left( \Delta^{2}\right) $ for the unsymmetrized matrix, as discussed above and in the appendix. When the overall transfer matrix of the structure is computed from (\[mat2\]), the individual LDEs arising for each of the $N$ segments accumulate, thus resulting in a total error $N\mathcal{O}\left( \Delta^{2}\right)
=\mathcal{O}\left( \Delta\right) $ for the unsymmetrized approach and $N\mathcal{O}\left( \Delta^{3}\right) =\mathcal{O}\left( \Delta^{2}\right)
$ for the other schemes.
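The scaling behaviour discussed here is easy to reproduce with a self-convergence test; the sketch below (reusing the helpers from the earlier sketches) extracts the transmission probability from the total transfer matrix with $B_{N}=0$ imposed, and estimates the error order from the slope of $\log\left|T_{\mathrm{num}}(N)-T_{\mathrm{num}}(N_{\mathrm{ref}})\right|$ versus $\log N$. The example barrier in the comments is purely illustrative and is not the potential of Fig. \[pot\]; it merely has propagating states at both ends so that a transmission coefficient is defined.

```python
import numpy as np
# reuses HBAR2_2ME, k_of, T_flat, T_step, T_segment_sym from the sketches above

def transmission(E, z, V, m_rel, symmetrized=True):
    """Transmission probability of a state incident from the left (B_N = 0)."""
    k = k_of(E, V, m_rel)
    beta = k / np.asarray(m_rel)
    T = np.eye(2, dtype=complex)
    for j in range(len(z) - 1):
        d = z[j + 1] - z[j]
        if symmetrized:
            mj  = m_rel[j]     if np.ndim(m_rel) else m_rel
            mj1 = m_rel[j + 1] if np.ndim(m_rel) else m_rel
            T = T_segment_sym(k[j], k[j + 1], mj, mj1, d) @ T
        else:
            T = T_step(beta[j], beta[j + 1]) @ T_flat(k[j], d) @ T
    t = np.linalg.det(T) / T[1, 1]        # A_N for A_0 = 1, B_N = 0
    return float(np.real(beta[-1] / beta[0]) * abs(t) ** 2)

def error_order(E, V_of, m_of, d, Ns, N_ref=6400, **kw):
    """Slope of log(error) vs log(N), with the N_ref result as reference."""
    def T_of(N):
        z = np.linspace(0.0, d, N + 1)
        return transmission(E, z, V_of(z), m_of(z), **kw)
    T_ref = T_of(N_ref)
    err = np.array([abs(T_of(N) - T_ref) for N in Ns])
    return -np.polyfit(np.log(Ns), np.log(err), 1)[0]   # ~1 unsym., ~2 sym.

# illustrative barrier (made-up numbers, not the paper's parameters):
# V_of = lambda z: 0.2 * np.exp(-(z - 5.0) ** 2 / 4.0) - 0.05   # eV
# m_of = lambda z: np.full_like(z, 0.067)
# print(error_order(0.0, V_of, m_of, 10.0, [25, 50, 100, 200, 400]))
```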
The symmetrized transfer matrix approach and the Airy function method are the most accurate, both exhibiting a comparable error $\varepsilon_{T}\left(
N\right) $. However, the symmetrized matrix approach is much more computationally efficient, being over $20$ times faster than the Airy function method in our MATLAB implementation. For a given $N$, the QTBM is even three times faster than the symmetrized matrix approach, but also 40 times less accurate, meaning that it requires $\sqrt{40}\approx6$ times as many grid points as the symmetrized matrix approach to achieve the same accuracy.
![Relative error $\varepsilon_{T}=\left| 1-T_{\mathrm{num}}/T\right| $ of the numerically obtained transmission coefficient $T_{\mathrm{num}}$ as a function of $Kd$. Here, $N=1000$ segments are used. The corresponding barrier is shown in Fig. \[pot\](a), the effective mass is assumed to be constant.[]{data-label="Kd"}](jirau4.eps){width="3.5in"}
Fig. (\[Kd\]) shows again the relative error $\varepsilon_{T}$, but now for a fixed number of segments $N=1000$. Instead, the shape of the potential is modified by varying $K$ in (\[exppot\]), and also adapting $V_{0}$ and $V_{1}$ so that $V\left( z\right) $ remains constant at $z=0$ and $z=d$ and only the curvature of the potential changes. The symmetrized matrix approach and the Airy function method exhibit a superior accuracy especially for small $K$, corresponding to a weak curvature of the potential. While the error of the unsymmetrized matrix approach and the QTBM show only a weak dependence on $K$, the Airy function method becomes exact for $K\rightarrow0$, where the potential becomes piecewise linear. Interestingly, also the symmetrized transfer matrix approach has a vanishing error $\varepsilon_{T}$ for a specific value of $K$, at $Kd\approx0.167$.
![Relative error $\varepsilon_{E}=\left| 1-E_{\mathrm{num}}/E\right| $ of the numerically obtained eigenenergy $E_{\mathrm{num}}$ for the (a) first and (b) second bound state as a function of the number of segments $N$. The corresponding well is shown in Fig. \[pot\](b), the effective mass is assumed to be constant.[]{data-label="E"}](jirau5.eps){width="3.5in"}
Now we apply the different numerical methods to the bound states of the potential well shown in Fig. \[pot\](b). Again assuming a constant effective mass of $m^{\ast}=0.067\,m_{e}$, evaluation of (\[bess\]) yields two bound states with eigenvalues $E_{1}=-0.1343\,\mathrm{eV}$ and $E_{2}=-0.0129\,\mathrm{eV}$, respectively. In the following, we compare the accuracy of the numerically found eigenenergies $E_{\mathrm{num}}$, as obtained by the unsymmetrized and the symmetrized transfer matrix approach and the Airy function method, corresponding to the expressions (\[T1\]), (\[T2\]) and (\[airy1\]), respectively. Here, we again divide the structure into $N$ segments of equal length $\Delta=d/N\propto N^{-1}$. Fig. \[E\] shows the relative error $\varepsilon_{E}\left( N\right) =\left|
1-E_{\mathrm{num}}\left( N\right) /E\right| $ for the first and the second bound state as a function of $N$. As for the transmission coefficient in Fig. \[Tran\], the error scales with $N^{-1}\propto\Delta$ for the unsymmetrized matrix approach and with $N^{-2}\propto\Delta^{2}$ for the other methods. Again, the symmetrized matrix approach and the Airy function method exhibit a comparable value of $\varepsilon_{E}\left( N\right) $, being far superior to the unsymmetrized matrix approach.
Position dependent effective mass
---------------------------------
Now we compare the accuracy of the different methods for a position dependent effective mass $m^{\ast}(z)$. Here, we choose the same exponential ansatz for the potential as above, see (\[exppot\]) and Fig. \[pot\]. For an effective mass of the form $m^{\ast}=m_{0}^{\ast}\exp\left( -Kz\right) $, again an analytical solution exists: $$\psi=c_{1}\mathrm{J}_{\mu}\left( a\right) \exp\left( -Kz/2\right)
+c_{2}\mathrm{Y}_{\mu}\left( a\right) \exp\left( -Kz/2\right) ,
\label{bess2}%$$ with$$\begin{aligned}
\mu & =-\sqrt{1+8\frac{m_{0}V_{1}}{K^{2}\hbar^{2}}},\nonumber\\
a(z) & =2\frac{\sqrt{-2m_{0}\left( V_{0}-E\right) }}{\hbar K}\exp\left(
-\frac{1}{2}Kz\right) .\end{aligned}$$ The transfer matrix definitions (\[T1\]), (\[T2\]) are also valid for position dependent effective masses. In the Airy function approach (\[airy1\]), a position dependent effective mass can be accounted for by assuming a constant value within each segment, as discussed at the end of Section \[sub:tra1\]. Here, we assign the averaged mass $\left( m_{j}%
^{\ast}+m_{j+1}^{\ast}\right) /2$ rather than $m_{j}^{\ast}$ to each segment, since then the third order LDE, found for the amplitudes $\mathcal{A}$ and $\mathcal{B}$ in the case of position independent masses, is also preserved for the position dependent case, see the appendix.
![Relative error $\varepsilon_{T}=\left| 1-T_{\mathrm{num}}/T\right| $ of the numerically obtained transmission coefficient $T_{\mathrm{num}}$ as a function of the number of segments $N$. The corresponding barrier is shown in Fig. \[pot\](a), the effective mass is assumed to be position dependent.[]{data-label="Tranm"}](jirau6.eps){width="3.5in"}
Fig. \[Tranm\] corresponds to Fig. \[Tran\], but now for a position dependent effective mass with $m^{\ast}=0.2\,m_{e}\exp\left( -Kz\right)
$ for $0\leq z\leq d$ and $m^{\ast}=0.067\,m_{e}$ otherwise. The exact transmission coefficient for an electron with energy $E=0$, as obtained by evaluating (\[bess2\]), is now $T=5.376\times10^{-10}$. From Fig. \[Tranm\] we can see that also here the error scales with $N^{-1}%
\propto\Delta$ for the unsymmetrized matrix approach and with $N^{-2}%
\propto\Delta^{2}$ for the other methods, compare Fig. \[Tran\]. Again, the symmetrized matrix approach and the Airy function method are the most accurate, with the symmetrized matrix approach being numerically much more efficient.
![Relative error $\varepsilon_{T}=\left| 1-T_{\mathrm{num}}/T\right| $ of the numerically obtained transmission coefficient $T_{\mathrm{num}}$ as a function of $Kd$. Here, $N=1000$ segments are used. The corresponding barrier is shown in Fig. \[pot\](a), the effective mass is assumed to be position dependent.[]{data-label="Kd_m"}](jirau7.eps){width="3.5in"}
For the sake of completeness, Fig. (\[Kd\_m\]) is shown as the counterpart of Fig. \[Kd\], but now taking into account a position dependent effective mass as above. Again, the symmetrized matrix approach and the Airy function method have a superior accuracy especially for small values of $K$, corresponding to a weak curvature of the potential.
Example: Schrödinger-Poisson solver {#sec:ex}
===================================
In the following, we apply the transfer matrices discussed above to a real-world example, namely finding the wavefunctions and eigenenergies of the quantum cascade laser (QCL) structure described in [@2001ApPhL..78.3529P]. The goal is to evaluate and compare the performance of the different approaches for a practical problem, and to discuss the inclusion of additional important effects. Specifically, we here also account for energy-band nonparabolicity, and complement the Schrödinger equation by the Poisson equation to take into account space charge effects. In practice, extensive parameter scans have to be performed for QCL design optimization. Thus, the simulation of QCLs calls for especially efficient methods, the more so as the self-consistent solution of the Schrödinger-Poisson system results in a further increase of the numerical effort.
In simulations, the QCL structure is defined by an infinitely repeated elementary sequence of multiple wells and barriers (called a period). For such a structure under bias, it is sufficient to compute the eigenenergies and corresponding wave functions for a single energy interval given by the bias across one period; the solutions of the other periods are then obtained by appropriate shifts in position and energy. We solve the Schrödinger equation using the approaches defined by (\[T1\]), (\[T2\]) and (\[airy1\]), respectively. For all three methods, we treat band edge discontinuities at the barrier-well interfaces explicitly using the matching conditions (\[match\]), to obtain an optimum accuracy. We use a simulation window of four periods to keep the influence of the boundaries negligible, and determine the bound states similarly to Section \[sec:com\]. To combine reasonable numerical efficiency with good accuracy, we choose a segment length of $\Delta=2\,\mathrm{nm}$ (the last segment of each well or barrier is $\Delta\leq2\,\mathrm{nm}$).
Various models are available for including nonparabolicity [@1987PhRvB..35.7770N]; here, we use an energy dependent effective mass $m_{j}^{\ast}\left( E\right) =m_{j}^{\ast}\left[ 1+\left( E-V_{j}\right)
/E_{g,j}\right] $ (with band gap energy $E_{g,j}$ at position $z_{j}$), which can straightforwardly be implemented into the transfer matrices. The Poisson equation is given by [@2000JAP....87.7931C; @2008SeScT..23l5040L]$$-\partial_{z}\left[ \epsilon\left( z\right) \partial_{z}\varphi\left( z\right)
\right] =e\left[ N\left( z\right) -\sum_{n}n_{\mathrm{2D},n}\left|
\psi_{n}\left( z\right) \right| ^{2}\right] ,\label{poisson}%$$ leading to an additional potential -e$\varphi$ in (\[effm\]). Here, $\epsilon\left( z\right) $ is the permittivity, $e$ is the elementary charge, $N\left( z\right) $ is the doping concentration, and $n_{\mathrm{2D}%
,n}$ is the electron sheet density of level $n$ with wave function $\psi
_{n}\left( z\right) $. While for an operating QCL, $n_{\mathrm{2D},n}$ can only be exactly determined by detailed carrier transport simulations [@2007JAP...101h6109J], this is prohibitive for design optimizations of experimental QCL structures over an extended parameter range. Thus, for solving the Schrödinger-Poisson system, simpler and much faster approaches are commonly adopted, such as applying Fermi-Dirac statistics [@2000JAP....87.7931C; @2008SeScT..23l5040L] $$n_{\mathrm{2D},n}=\frac{m^{\ast}}{\pi\hbar^{2}}k_{\mathrm{B}}T\ln\left(
1+\exp\left[ \left( \mu-\tilde{E}_{n}\right) /\left( k_{\mathrm{B}%
}T\right) \right] \right) ,\label{fermi}%$$ where $\mu$ is the chemical potential, $k_{\mathrm{B}}$ is the Boltzmann constant, $T$ is the lattice temperature, and $m^{\ast}$ is the effective mass, here taken to be the value of the well material. In (\[fermi\]), we use the energy of a state relative to the conduction band edge $\tilde{E}%
_{n}=E_{n}-\int V\left| \psi_{n}\right| ^{2}\mathrm{d}z$ rather than $E_{n}$ itself to correctly reflect the invariance properties of the biased structure. Especially, this ensures that the simulation results do not depend on the choice of the elementary period in the structure. The chemical potential $\mu$ is found from the charge neutrality condition within one period. The Schrödinger and Poisson equations are iteratively solved until self-consistency is achieved. For the Poisson equation (\[poisson\]), we employ a finite difference scheme on a $1$-grid, where we use (\[psi\]) and (\[airy\]) to appropriately interpolate the eigenfunctions obtained from the Schrödinger solver.
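The statistical part of the Schrödinger-Poisson iteration, Eq. (\[fermi\]) together with the charge neutrality condition, reduces to a few lines of code; the sketch below is schematic only (the constants are approximate, all function names are ours, and `solve_schroedinger`/`solve_poisson` stand as placeholders for the transfer matrix and finite difference solvers described above):

```python
import numpy as np

HBAR2_2ME = 0.0381      # hbar^2/(2 m_e) in eV nm^2 (approximate value)
KB = 8.617e-5           # Boltzmann constant in eV/K (approximate value)

def sheet_densities(E_tilde, mu, T, m_rel):
    """n_2D,n of Eq. (fermi) in nm^-2 for level energies E_tilde (eV)."""
    dos = m_rel / (2.0 * np.pi * HBAR2_2ME)          # m*/(pi hbar^2) in (eV nm^2)^-1
    x = (mu - np.asarray(E_tilde)) / (KB * T)
    return dos * KB * T * np.logaddexp(0.0, x)       # stable ln(1 + exp(x))

def chemical_potential(E_tilde, T, m_rel, n2D_total, mu_lo=-1.0, mu_hi=1.0):
    """Bisect the monotone charge-neutrality condition sum_n n_2D,n = n2D_total."""
    f = lambda mu: sheet_densities(E_tilde, mu, T, m_rel).sum() - n2D_total
    for _ in range(80):
        mu = 0.5 * (mu_lo + mu_hi)
        if f(mu) > 0.0:
            mu_hi = mu
        else:
            mu_lo = mu
    return 0.5 * (mu_lo + mu_hi)

# schematic self-consistency loop with under-relaxation:
# for it in range(max_iter):
#     E_n, psi_n, E_tilde = solve_schroedinger(V_band - phi)   # phi in eV here
#     mu = chemical_potential(E_tilde, T, m_rel, n2D_doping)
#     n2D = sheet_densities(E_tilde, mu, T, m_rel)
#     phi_new = solve_poisson(doping, n2D, psi_n)
#     if np.max(np.abs(phi_new - phi)) < tol:
#         break
#     phi = (1.0 - alpha) * phi + alpha * phi_new
```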
![Self-consistent band profile (grey line), energy levels and wave functions squared for the QCL in [@2001ApPhL..78.3529P] at a bias of $48\,\mathrm{kV/cm}$ and a temperature of $300\,\mathrm{K}$. Shown are the results as obtained with the three transfer matrix methods for $\Delta=2\,\mathrm{nm}$, and also the symmetrized matrix result for $\Delta=0.1\,\mathrm{nm}$, which practically coincides with the symmetrized matrix and Airy function results for $\Delta=2\,\mathrm{nm}$.[]{data-label="QCL"}](jirau8.eps){width="3.5in"}
Simulations of the QCL in [@2001ApPhL..78.3529P] have been performed at various temperatures, considering the seven lowest levels (i.e., with the lowest energies $\tilde{E}_{n}$) within each period. In Fig. \[QCL\], the obtained energy levels and wave functions squared of a single period are shown for the unsymmetrized, symmetrized and Airy function matrix approach at $T=300\,\mathrm{K}$, using a segment length of $\Delta=2\,\mathrm{nm}$. The symmetrized transfer matrix result for $\Delta=0.1\,\mathrm{nm}$ is also plotted for reference. The symmetrized matrix and Airy function results exhibit a similar accuracy, with deviations in eigenenergies of around $0.1\,\mathrm{meV}$ from the high-accuracy result obtained with $\Delta=0.1\,\mathrm{nm}$. In Fig. \[QCL\], those three curves are practically indistinguishable. On the other hand, the unsymmetrized matrix method produces deviations of around $5\,\mathrm{meV}$. The unsymmetrized and symmetrized matrix approaches require approximately the same computation time for obtaining the self-consistent result in Fig. \[QCL\], while the Schrödinger-Poisson solver based on the Airy functions is about $10$ times slower. This confirms that the symmetrized transfer matrix method combines high numerical efficiency with excellent accuracy for practical applications.
Conclusion
==========
In conclusion, we have compared the accuracy of different transfer matrix approaches, as used for solving the effective mass Schrödinger equation with an arbitrary one-dimensional potential and a constant or position dependent effective mass. In particular, the local discretization error has been derived for the Airy function approach resulting from a piecewise linear approximation of the potential, and for unsymmetrized and symmetrized transfer matrices based on a piecewise constant potential approximation. Furthermore, numerical simulations have been performed to evaluate the numerical accuracy of the different approaches for scattering and bound states, employing exponential test potentials. Comparisons to the finite difference method, specifically the QTBM, have also been carried out. Additionally, self-consistent Schrödinger-Poisson device simulations are presented.
The symmetrized transfer matrix approach and the Airy function method exhibit a comparable accuracy, being superior to the other methods investigated. However, the symmetrized matrix approach achieves this at a significantly reduced numerical cost, moreover avoiding the numerical problems associated with Airy functions. All in all, the symmetrized transfer matrix approach is shown to combine the numerical efficiency and straightforwardness of its unsymmetrized counterpart with the superior accuracy of the Airy function method.
Local discretization error
==========================
In the following, we derive the local discretization error (LDE) for the different types of transfer matrices. In this context, we investigate the piecewise constant potential approximation based on matrix (\[T1\]) and its symmetrized version (\[T2\]), as well as the piecewise linear potential scheme (\[airy1\]). As mentioned in Section \[sec:tra\], the segments are chosen so that band edge discontinuities in the structure coincide with the borders between two segments, enabling an exact treatment in terms of the matching conditions (\[match\]). Thus, for our error analysis we imply that the potential and effective mass vary smoothly within each segment, i.e., have a sufficient degree of differentiability. Otherwise, no further assumptions about the potential shape and effective mass are made. The local discretization error for $\psi$ at $z=z_{j+1}$ is$$\tau_{j+1}^{\psi}=\psi_{j+1}-\psi\left( z_{j+1}\right) , \label{tau}%$$ where $\psi_{j+1}$ is the approximate wavefunction value at $z_{j+1}$ obtained by the transfer matrix approach from a given value $\psi\left( z_{j}\right)
=\psi_{j}$ at $z_{j}$, while $\psi\left( z_{j+1}\right) $ is the exact solution. For evaluating the LDE, it is helpful to express $\psi\left(
z_{j+1}\right) $ in terms of a Taylor series, $$\begin{aligned}
\psi\left( z_{j+1}\right) & =\psi_{j}+\psi_{j}^{\prime}\Delta_{j}+\frac
{1}{2}\psi_{j}^{\prime\prime}\Delta_{j}^{2}+\frac{1}{6}\psi_{j}^{\left(
3\right) }\Delta_{j}^{3}\nonumber\\
& +\frac{1}{24}\psi_{j}^{\left( 4\right) }\Delta_{j}^{4}+\mathcal{O}\left(
\Delta_{j}^{5}\right) . \label{taylor}%\end{aligned}$$ Analogously, we can define an LDE for the derivative $\psi^{\prime}$,$$\tau_{j+1}^{\psi^{\prime}}=\psi_{j+1}^{\prime}-\psi^{\prime}\left(
z_{j+1}\right) , \label{taus}%$$ and express $\psi^{\prime}\left( z_{j+1}\right) $ as$$\psi^{\prime}\left( z_{j+1}\right) =\psi_{j}^{\prime}+\psi_{j}^{\prime
\prime}\Delta_{j}+\frac{1}{2}\psi_{j}^{\left( 3\right) }\Delta_{j}^{2}%
+\frac{1}{6}\psi_{j}^{\left( 4\right) }\Delta_{j}^{3}+\mathcal{O}\left(
\Delta_{j}^{4}\right) . \label{taylors}%$$
Piecewise constant potential approximation
------------------------------------------
Using (\[psi\]), we can relate $A_{j}$ and $B_{j}$ to the wavefunction at position $z_{j}$, $$\begin{aligned}
A_{j} & =\frac{1}{2}\left( \psi_{j}-\mathrm{i}\frac{1}{k_{j}}\psi
_{j}^{\prime}\right) ,\nonumber\\
B_{j} & =\frac{1}{2}\left( \psi_{j}+\mathrm{i}\frac{1}{k_{j}}\psi
_{j}^{\prime}\right) , \label{AB}%\end{aligned}$$ and express the LDEs for the amplitudes $A$ and $B$ in terms of $\tau
_{j+1}^{\psi}$ and $\tau_{j+1}^{\psi^{\prime}}$,$$\begin{aligned}
\tau_{j+1}^{A} & =A_{j+1}-A\left( z_{j+1}\right) =\frac{1}{2}\left(
\tau_{j+1}^{\psi}-\mathrm{i}\frac{1}{k_{j+1}}\tau_{j+1}^{\psi^{\prime}%
}\right) ,\nonumber\\
\tau_{j+1}^{B} & =B_{j+1}-B\left( z_{j+1}\right) =\frac{1}{2}\left(
\tau_{j+1}^{\psi}+\mathrm{i}\frac{1}{k_{j+1}}\tau_{j+1}^{\psi^{\prime}%
}\right) . \label{tauAB}%\end{aligned}$$ For the unsymmetrized transfer matrix, we obtain from (\[psi\]) with the expressions (\[mat\]) and (\[T1\])$$\begin{aligned}
\psi_{j+1} & =A_{j+1}+B_{j+1}=\exp\left( \mathrm{i}k_{j}\Delta_{j}\right)
A_{j}+\exp\left( -\mathrm{i}k_{j}\Delta_{j}\right) B_{j},\nonumber\\
\psi_{j+1}^{\prime} & =\mathrm{i}k_{j+1}\left( A_{j+1}-B_{j+1}\right)
\nonumber\\
& =\mathrm{i}k_{j+1}\frac{\beta_{j}}{\beta_{j+1}}\left[ A_{j}\exp\left(
\mathrm{i}k_{j}\Delta_{j}\right) -B_{j}\exp\left( -\mathrm{i}k_{j}\Delta
_{j}\right) \right] . \label{psip}%\end{aligned}$$ For calculating the LDE, we insert the expressions (\[psip\]) and (\[taylor\]) into (\[tau\]), where we express $A_{j}$ and $B_{j}$ by (\[AB\]) and use (\[effm\]) to rewrite the derivatives $\psi_{j}^{\left(
n\right) }$ in (\[taylor\]) as $$\begin{aligned}
\psi_{j}^{\prime\prime} & =-k_{j}^{2}\psi_{j}+\frac{m_{j}^{\ast\prime}%
}{m_{j}^{\ast}}\psi_{j}^{\prime},\nonumber\\
\psi_{j}^{\left( 3\right) } & =-\frac{\left( m_{j}^{\ast}k_{j}%
^{2}\right) ^{\prime}}{m_{j}^{\ast}}\psi_{j}-k_{j}^{2}\psi_{j}^{\prime}%
+\frac{m_{j}^{\ast\prime\prime}}{m_{j}^{\ast}}\psi_{j}^{\prime},\end{aligned}$$ with $k_{j}=\sqrt{2m_{j}^{\ast}\left( E-V_{j}\right) }/\hbar$. A Taylor expansion then yields$$\begin{aligned}
\tau_{j+1}^{\psi} & =\cos\left( k_{j}\Delta_{j}\right) \psi_{j}%
+\sin\left( k_{j}\Delta_{j}\right) k_{j}^{-1}\psi_{j}^{\prime}\nonumber\\
& -\psi_{j}-\psi_{j}^{\prime}\Delta_{j}-\frac{1}{2}\psi_{j}^{\prime\prime
}\Delta_{j}^{2}-\frac{1}{6}\psi_{j}^{\left( 3\right) }\Delta_{j}%
^{3}+\mathcal{O}\left( \Delta_{j}^{4}\right) \nonumber\\
& =-\frac{1}{2}\frac{m_{j}^{\ast\prime}}{m_{j}^{\ast}}\psi_{j}^{\prime}%
\Delta_{j}^{2}+\frac{1}{6}\left( \frac{\left( m_{j}^{\ast}k_{j}^{2}\right)
^{\prime}}{m_{j}^{\ast}}\psi_{j}-\frac{m_{j}^{\ast\prime\prime}}{m_{j}^{\ast}%
}\psi_{j}^{\prime}\right) \Delta_{j}^{3}\nonumber\\
& +\mathcal{O}\left( \Delta_{j}^{4}\right) .\end{aligned}$$ Analogously, by inserting the expressions (\[psip\]) and (\[taylors\]) into (\[taus\]) we obtain$$\begin{aligned}
\tau_{j+1}^{\psi^{\prime}} & =\frac{m_{j+1}^{\ast}}{m_{j}^{\ast}}\left(
\psi_{j}^{\prime}\cos\left( k_{j}\Delta_{j}\right) -k_{j}\sin\left(
k_{j}\Delta_{j}\right) \psi_{j}\right) \nonumber\\
& -\psi_{j}^{\prime}-\psi_{j}^{\prime\prime}\Delta_{j}-\frac{1}{2}\psi
_{j}^{\left( 3\right) }\Delta_{j}^{2}+\mathcal{O}\left( \Delta_{j}%
^{3}\right) \nonumber\\
& =\left( \frac{m_{j+1}^{\ast}}{m_{j}^{\ast}}-1-\frac{m_{j}^{\ast\prime}%
}{m_{j}^{\ast}}\Delta_{j}\right) \psi_{j}^{\prime}+\frac{m_{j}^{\ast}%
-m_{j+1}^{\ast}}{m_{j}^{\ast}}k_{j}^{2}\psi_{j}\Delta_{j}\nonumber\\
& +\frac{1}{2}\left( \frac{\left( m_{j}^{\ast}k_{j}^{2}\right) ^{\prime}%
}{m_{j}^{\ast}}\psi_{j}-\frac{m_{j}^{\ast\prime\prime}}{m_{j}^{\ast}}\psi
_{j}^{\prime}+\frac{m_{j}^{\ast}-m_{j+1}^{\ast}}{m_{j}^{\ast}}k_{j}^{2}%
\psi_{j}^{\prime}\right) \Delta_{j}^{2}\nonumber\\
& +\mathcal{O}\left( \Delta_{j}^{3}\right) \nonumber\\
& =\left( \frac{\left( m_{j}^{\ast}k_{j}^{2}\right) ^{\prime}}%
{2m_{j}^{\ast}}-\frac{m_{j}^{\ast\prime}}{m_{j}^{\ast}}k_{j}^{2}\right)
\psi_{j}\Delta_{j}^{2}+\mathcal{O}\left( \Delta_{j}^{3}\right) ,
\label{tau1s}%\end{aligned}$$ where we use $m_{j+1}^{\ast}=m_{j}^{\ast}+m_{j}^{\ast\prime}\Delta_{j}%
+\frac{1}{2}m_{j}^{\ast\prime\prime}\Delta_{j}^{2}+\mathcal{O}\left(
\Delta_{j}^{3}\right) $ to obtain the last line of (\[tau1s\]). Thus, $\tau_{j+1}^{\psi}\ $is $\mathcal{O}\left( \Delta_{j}^{2}\right) $ ($\mathcal{O}\left( \Delta_{j}^{3}\right) $ for a constant effective mass), and $\tau_{j+1}^{\psi^{\prime}}=\mathcal{O}\left( \Delta_{j}^{2}\right) $. With (\[tauAB\]), we see that both $\tau_{j+1}^{A}$ and $\tau_{j+1}^{B}$ are $\mathcal{O}\left( \Delta_{j}^{2}\right) $.
In a similar manner, we obtain for the symmetrized matrix (\[T2\]) $\tau_{j+1}^{\psi}=\mathcal{O}\left( \Delta_{j}^{3}\right) $ and $\tau
_{j+1}^{\psi^{\prime}}=\mathcal{O}\left( \Delta_{j}^{3}\right) $. More precisely, the calculation yields for a constant $m^{\ast}$ $$\begin{aligned}
\tau_{j+1}^{\psi} & =\frac{1}{24}\left( k_{j}^{2}\right) ^{\prime}\psi
_{j}\Delta_{j}^{3}+\mathcal{O}\left( \Delta_{j}^{4}\right) ,\nonumber\\
\tau_{j+1}^{\psi^{\prime}} & =-\frac{1}{12}\left( k_{j}^{2}\right)
^{\prime\prime}\psi_{j}\Delta_{j}^{3}-\frac{1}{24}\left( k_{j}^{2}\right)
^{\prime}\psi_{j}^{\prime}\Delta_{j}^{3}+\mathcal{O}\left( \Delta_{j}%
^{4}\right)\end{aligned}$$ (and a somewhat more complicated expression for a position dependent effective mass). This means that $\tau_{j+1}^{A}$ and $\tau_{j+1}^{B}$ are now $\mathcal{O}\left( \Delta_{j}^{3}\right) $.
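These orders are easy to verify numerically. The short script below (illustrative numbers, units with $\hbar^{2}/(2m^{\ast})=1$) propagates the exact Airy solution of a linear potential across a single segment with the symmetrized matrix (\[T2\]) and prints the one-step error in $\psi$; halving $\Delta$ should reduce the error by roughly a factor of $8$, consistent with the $\Delta_{j}^{3}$ behaviour derived above.

```python
import numpy as np
from scipy.special import airy

# linear potential V = V0 + F*z with hbar^2/(2m*) = 1; exact solution Ai(s + z/ell)
V0, F, E = 0.0, 0.05, 0.10        # illustrative numbers
eps = F ** (2.0 / 3.0)
ell = eps / F

def psi_exact(z):                  # returns (psi, dpsi/dz)
    a, ap, _, _ = airy((V0 - E) / eps + z / ell)
    return a, ap / ell

def k_loc(z):                      # k = sqrt(E - V) in these units
    return np.sqrt(E - (V0 + F * z) + 0j)

def T_sym(kj, kj1, dz):            # Eq. (T2) with constant (unit) mass
    kp, km = 0.5 * (kj + kj1), 0.5 * (kj - kj1)
    return np.array([[(kj1 + kj) * np.exp( 1j * kp * dz),
                      (kj1 - kj) * np.exp(-1j * km * dz)],
                     [(kj1 - kj) * np.exp( 1j * km * dz),
                      (kj1 + kj) * np.exp(-1j * kp * dz)]]) / (2.0 * kj1)

for dz in [0.4, 0.2, 0.1, 0.05]:
    p0, dp0 = psi_exact(0.0)
    kj, kj1 = k_loc(0.0), k_loc(dz)
    A0 = 0.5 * (p0 - 1j * dp0 / kj)           # Eq. (AB)
    B0 = 0.5 * (p0 + 1j * dp0 / kj)
    A1, B1 = T_sym(kj, kj1, dz) @ np.array([A0, B0])
    print(dz, abs((A1 + B1) - psi_exact(dz)[0]))   # error shrinks roughly as dz^3
```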
Piecewise linear potential approximation
----------------------------------------
For computing the LDEs (\[tau\]) and (\[taus\]) of the Airy function approach, we proceed in a manner similar as above. Equations (\[airy1\]) and (\[airy2\]) give the relation between values $\psi_{j}$, $\psi_{j}^{\prime}$ at $z_{j}$ and the numerical result $\psi_{j+1}$, $\psi_{j+1}^{\prime}$ obtained at $z_{j+1}$ from the Airy function approach: $$\begin{aligned}
\psi_{j+1} & =\frac{1}{D_{j}}\left[ \mathrm{Ai}(s_{j}+\frac{\Delta_{j}%
}{\ell_{j}})\mathrm{Bi}^{\prime}(s_{j})-\mathrm{Ai}^{\prime}(s_{j}%
)\mathrm{Bi}(s_{j}+\frac{\Delta_{j}}{\ell_{j}})\right] \psi_{j}\nonumber\\
& +\frac{\ell_{j}}{D_{j}}\left[ \mathrm{Ai}(s_{j})\mathrm{Bi}(s_{j}%
+\frac{\Delta_{j}}{\ell_{j}})-\mathrm{Ai}(s_{j}+\frac{\Delta_{j}}{\ell_{j}%
})\mathrm{Bi}(s_{j})\right] \psi_{j}^{\prime}\nonumber\\
& =\left( 1+\frac{1}{2}\frac{s_{j}}{\ell_{j}^{2}}\Delta_{j}^{2}+\frac{1}%
{6}\frac{\Delta_{j}^{3}}{\ell_{j}^{3}}+\frac{1}{24}\frac{s_{j}^{2}}{\ell
_{j}^{4}}\Delta_{j}^{4}\right) \psi_{j}\nonumber\\
& +\left( \Delta_{j}+\frac{1}{6}\frac{s_{j}}{\ell_{j}^{2}}\Delta_{j}%
^{3}+\frac{1}{12}\frac{\Delta_{j}^{4}}{\ell_{j}^{3}}\right) \psi_{j}^{\prime
}+\mathcal{O}\left( \Delta_{j}^{5}\right) , \label{airy3}%\end{aligned}$$$$\begin{aligned}
\psi_{j+1}^{\prime} & =\frac{1}{D_{j}\ell_{j}}\left[ \mathrm{Ai}^{\prime
}(s_{j}+\frac{\Delta_{j}}{\ell_{j}})\mathrm{Bi}^{\prime}(s_{j})\right.
\nonumber\\
& \left. -\mathrm{Bi}^{\prime}(s_{j}+\frac{\Delta_{j}}{\ell_{j}}%
)\mathrm{Ai}^{\prime}(s_{j})\right] \psi_{j}\nonumber\\
& +\frac{1}{D_{j}}\left[ \mathrm{Ai}(s_{j})\mathrm{Bi}^{\prime}(s_{j}%
+\frac{\Delta_{j}}{\ell_{j}})-\mathrm{Ai}^{\prime}(s_{j}+\frac{\Delta_{j}%
}{\ell_{j}})\mathrm{Bi}(s_{j})\right] \psi_{j}^{\prime}\nonumber\\
& =\left( s_{j}\frac{\Delta_{j}}{\ell_{j}^{2}}+\frac{1}{2}\frac{1}{\ell
_{j}^{3}}\Delta_{j}^{2}+\frac{1}{6}\frac{s_{j}^{2}}{\ell_{j}^{4}}\Delta
_{j}^{3}\right) \psi_{j}\nonumber\\
& +\left( 1+\frac{1}{2}\frac{s_{j}}{\ell_{j}^{2}}\Delta_{j}^{2}+\frac{1}%
{3}\frac{\Delta_{j}^{3}}{\ell_{j}^{3}}\right) \psi_{j}^{\prime}%
+\mathcal{O}\left( \Delta_{j}^{4}\right) , \label{airy4}%\end{aligned}$$ with $D_{j}=\mathrm{Ai}(s_{j})\mathrm{Bi}^{\prime}(s_{j})-\mathrm{Ai}^{\prime
}(s_{j})\mathrm{Bi}(s_{j})$. The exact results $\psi\left( z_{j+1}\right) $ and $\psi^{\prime}\left( z_{j+1}\right) $ are again expressed by the Taylor series expansions (\[taylor\]) and (\[taylors\]), respectively, where we rewrite the derivatives $\psi_{j}^{\left( n\right) }$ in terms of $\psi_{j}$ and $\psi_{j}^{\prime}$. For a constant effective mass, we have $$\begin{aligned}
\psi_{j}^{\prime\prime} & =\ell_{j}^{-2}s_{j}\psi_{j},\nonumber\\
\psi_{j}^{\left( 3\right) } & =\ell_{j}^{-2}s_{j}\psi_{j}^{\prime}%
+\ell_{j}^{-3}\frac{V_{j}^{\prime}}{V_{z,j}}\psi_{j},\nonumber\\
\psi_{j}^{\left( 4\right) } & =\ell_{j}^{-4}s_{j}^{2}\psi_{j}+2\ell
_{j}^{-3}\frac{V_{j}^{\prime}}{V_{z,j}}\psi_{j}^{\prime}+\ell_{j}^{-3}%
\frac{V_{j}^{\prime\prime}}{V_{z,j}}\psi_{j},\end{aligned}$$ with $V_{z,j}=\left( V_{j+1}-V_{j}\right) /\Delta_{j}$, and obtain with the expressions (\[tau\]), (\[taylor\]) and (\[airy3\])$$\tau_{j+1}^{\psi}=\psi_{j+1}-\psi\left( z_{j+1}\right) =-\frac{1}{24}\left(
k_{j}^{2}\right) ^{\prime\prime}\psi_{j}\Delta_{j}^{4}+\mathcal{O}\left(
\Delta_{j}^{5}\right) ,$$ and with (\[taus\]), (\[taylors\]) and (\[airy4\])$$\tau_{j+1}^{\psi^{\prime}}=\psi_{j+1}^{\prime}-\psi^{\prime}\left(
z_{j+1}\right) =-\frac{1}{12}\left( k_{j}^{2}\right) ^{\prime\prime}%
\psi_{j}\Delta_{j}^{3}+\mathcal{O}\left( \Delta_{j}^{4}\right) ,$$ where $k_{j}=\sqrt{2m_{j}^{\ast}\left( E-V_{j}\right) }/\hbar$. Using (\[airy2\]), we can express the LDEs for the amplitudes $\mathcal{A}$ and $\mathcal{B}$ in terms of $\tau_{j+1}^{\psi}$ and $\tau_{j+1}^{\psi^{\prime}%
}$, and obtain $\tau_{j+1}^{\mathcal{A}}=\mathcal{O}\left( \Delta_{j}%
^{3}\right) $, $\tau_{j+1}^{\mathcal{B}}=\mathcal{O}\left( \Delta_{j}%
^{3}\right) $.
As described in Section \[sub:tra1\], a position dependent effective mass can in the Airy function approach be treated by assuming a constant value within each segment $j$, e.g., $m_{j}^{\ast}=m^{\ast}\left( z_{j}\right) $, and applying the matching conditions (\[match\]) at the section boundaries [@1990IJQE...26.2025J]. The result for $\psi_{j+1}^{\prime}$ in (\[airy4\]) has thus to be multiplied by $m_{j+1}^{\ast}/m_{j}^{\ast}$ before inserting it into (\[taus\]). While $\tau_{j+1}^{\psi^{\prime}}$ is still $\mathcal{O}\left( \Delta_{j}^{3}\right) $, $\tau_{j+1}^{\psi}$ drops to $\mathcal{O}\left( \Delta_{j}^{2}\right) $, now yielding $\tau
_{j+1}^{\mathcal{A}}=\mathcal{O}\left( \Delta_{j}^{2}\right) $, $\tau
_{j+1}^{\mathcal{B}}=\mathcal{O}\left( \Delta_{j}^{2}\right) $. The error analysis also shows that $\tau_{j+1}^{\psi}$ and thus $\tau_{j+1}%
^{\mathcal{A}},\tau_{j+1}^{\mathcal{B}}$ can be improved to $\mathcal{O}%
\left( \Delta_{j}^{3}\right) $ by assigning an averaged mass $\left(
m_{j}^{\ast}+m_{j+1}^{\ast}\right) /2$ rather than $m_{j}^{\ast}$ to each segment, and applying the matching conditions correspondingly.
Y. [Ando]{} and T. [Itoh]{}, “[Calculation of transmission tunneling current across arbitrary potential barriers]{},” *J. Appl. Phys.*, vol. 61, pp. 1497–1502, Feb. 1987.
B. [Jonsson]{} and S. T. [Eng]{}, “[Solving the Schr[ö]{}dinger equation in arbitrary quantum-well potential profiles using the transfer matrix method]{},” *IEEE J. Quantum Electron.*, vol. 26, pp. 2025–2035, 1990.
E. [Cassan]{}, “[On the reduction of direct tunneling leakage through ultrathin gate oxides by a one-dimensional Schr[ö]{}dinger-Poisson solver]{},” *J. Appl. Phys.*, vol. 87, pp. 7931–7939, Jun. 2000.
J. H. [Davies]{}, *[The Physics of Low-dimensional Semiconductors]{}*.1em plus 0.5em minus 0.4emCambridge University Press, Dec. 1997.
M. S. [Vitiello]{}, G. [Scamarcio]{}, V. [Spagnolo]{}, B. S. [Williams]{}, S. [Kumar]{}, Q. [Hu]{}, and J. L. [Reno]{}, “[Measurement of subband electronic temperatures and population inversion in THz quantum-cascade lasers]{},” *Appl. Phys. Lett.*, vol. 86, no. 11, pp. 111115–1–3, Mar. 2005.
C. [Jirauschek]{}, G. [Scarpa]{}, P. [Lugli]{}, M. S. [Vitiello]{}, and G. [Scamarcio]{}, “[Comparative analysis of resonant phonon THz quantum cascade lasers]{},” *J. Appl. Phys.*, vol. 101, no. 8, pp. 086109–1–3, Apr. 2007.
W. R. [Frensley]{}, “[Quantum transport]{},” in *Heterostructures and Quantum Devices*, ser. VLSI Electronics: Microstructure Science, W. R. [Frensley]{} and N. G. [Einspruch]{}, Eds.1em plus 0.5em minus 0.4emAcademic Press, 1994.
C. [Juang]{}, K. J. [Kuhn]{}, and R. B. [Darling]{}, “[Stark shift and field-induced tunneling in Al$_{x}$Ga$_{1-x}$As/GaAs quantum-well structures]{},” *Phys. Rev. B*, vol. 41, pp. 12047–12053, Jun. 1990.
C. S. [Lent]{} and D. J. [Kirkner]{}, “[The quantum transmitting boundary method]{},” *J. Appl. Phys.*, vol. 67, pp. 6353–6359, May 1990.
D. Y. [Ko]{} and J. C. [Inkson]{}, “[Matrix method for tunneling in heterostructures: Resonant tunneling in multilayer systems]{},” *Phys. Rev. B*, vol. 38, pp. 9945–9951, Nov. 1988.
S. [Vatannia]{} and G. [Gildenblat]{}, “[Airy’s functions implementation of the transfer-matrix method for resonant tunneling in variably spaced finite superlattices.]{}” *IEEE J. Quantum Electron.*, vol. 32, pp. 1093–1105, Jun. 1996.
J.-G. S. [Demers]{} and R. [Maciejko]{}, “[Propagation matrix formalism and efficient linear potential solution to Schr[ö]{}dinger’s equation]{},” *J. Appl. Phys.*, vol. 90, pp. 6120–6129, Dec. 2001.
S. [Saito]{}, K. [Torii]{}, M. [Hiratani]{}, and T. [Onai]{}, “[Analytical quantum mechanical model for accumulation capacitance of MOS structures]{},” *IEEE Electron Device Lett.*, vol. 23, pp. 348–350, Jun. 2002.
E. P. [Nakhmedov]{}, C. [Radehaus]{}, and K. [Wieczorek]{}, “[Study of direct tunneling current oscillations in ultrathin gate dielectrics]{},” *J. Appl. Phys.*, vol. 97, no. 6, pp. 064107–1–7, Mar. 2005.
H. [Page]{}, C. [Becker]{}, A. [Robertson]{}, G. [Glastre]{}, V. [Ortiz]{}, and C. [Sirtori]{}, “[300 K operation of a GaAs-based quantum-cascade laser at $\lambda\approx9\,\mu$m]{},” *Appl. Phys. Lett.*, vol. 78, pp. 3529–3531, May 2001.
D. F. [Nelson]{}, R. C. [Miller]{}, and D. A. [Kleinman]{}, “[Band nonparabolicity effects in semiconductor quantum wells]{},” *Phys. Rev. B*, vol. 35, pp. 7770–7773, May 1987.
H. [Li]{}, J. C. [Cao]{}, and H. C. [Liu]{}, “[Effects of design parameters on the performance of terahertz quantum-cascade lasers]{},” *Semicond. Sci. Technol.*, vol. 23, no. 12, pp. 125040–1–6, Dec. 2008.
**Christian Jirauschek** was born in Karlsruhe, Germany, in 1974. He received his Dipl.-Ing. and doctoral degrees in electrical engineering in 2000 and 2004, respectively, from the Universität Karlsruhe (TH), Germany.
From 2002 to 2005, he was a Visiting Scientist at the Massachusetts Institute of Technology (MIT), Cambridge, MA. Since 2005, he has been with the TU München in Germany, first as a Postdoctoral Fellow and since 2007 as the Head of an Independent Junior Research Group (Emmy-Noether Program of the DFG). His research interests include modeling in the areas of optics and device physics, especially the simulation of quantum devices and mode-locked laser theory.
Dr. Jirauschek is a member of the IEEE, the German Physical Society (DPG), and the Optical Society of America. Between 1997 and 2000, he held a scholarship from the German National Merit Foundation (Studienstiftung des Deutschen Volkes).
[^1]: This work is supported by the Emmy Noether program of the German Research Foundation (DFG, JI115/1-1).
[^2]: C. Jirauschek is with the Institute for Nanoelectronics, TU München, Arcisstr. 21, D-80333 München, Germany; e-mail: jirauschek@tum.de.
---
abstract: 'This article explains and extends semialgebraic homotopy theory (developed by H. Delfs and M. Knebusch) to o-minimal homotopy theory (over a field). The homotopy category of definable CW-complexes is equivalent to the homotopy category of topological CW-complexes (with continuous mappings). If the theory of the o-minimal expansion of a field is bounded, then these categories are equivalent to the homotopy category of weakly definable spaces. Similar facts hold for decreasing systems of spaces. As a result, generalized homology and cohomology theories on pointed weak polytopes uniquely correspond (up to an isomorphism) to the known topological generalized homology and cohomology theories on pointed CW-complexes.'
---
**O-minimal homotopy and generalized (co)homology**
**Artur Piękosz**
[2000 *Mathematics Subject Classification:* 03C64, 55N20, 55Q05.\
*Key words and phrases:* o-minimal structure, generalized topology, locally definable space, weakly definable space, CW-complex, homotopy sets, generalized homology, generalized cohomology. ]{}
Introduction
============
In the 1980’s, H. Delfs, M. Knebusch and others developed “semialgebraic topology” in locally semialgebraic and weakly semialgebraic spaces (see [@DK2; @DK5; @DK6; @LSS; @WSS]). In the survey paper [@K91], M. Knebusch suggested that this theory may be generalized to the o-minimal context. This programme was partially undertaken first by A. Woerheide, who constructed the o-minimal singular homology theory in [@W], and later by M. Edmundo, who developed and applied the singular homology and cohomology theories over o-minimal structures (see for example [@E]). For homotopy theory, A. Berarducci and M. Otero worked with the o-minimal fundamental group and transfer methods in o-minimal geometry ([@BO; @BO2]). During the period this paper was written, several authors wrote about different types of homology and cohomology (see [@EJP; @EP], for example).
Still the semialgebraic homotopy theory contained in [@LSS] and [@WSS] was not extended to the case of spaces over o-minimal expansions of fields. For the question why, the author may only guess that people in the field wanted to avoid generalized topology. (Notice the failure of E.Baro and M. Otero [@ldh] to give precise definitions and to present the theory clearly, see below.)
The aim of extending a whole theory, not a single theorem or even tens or hundreds of facts, may sometimes be achieved by a careful choice of the definitions and by explaining the differences that appear. This can be done in the case of the semialgebraic homotopy theory of H. Delfs and M. Knebusch.
First, the spaces of our interest (with their morphisms) over each of the considered structures form several categories that are best described as full subcategories of some ambient category. The choice of a good ambient category is very important. In [@LSS] the task was done using sheaf theory, but M. Knebusch in [@WSS] has already simplified the definitions by using what is called “function sheaves” (involving a simple set-theoretic definition). Notice that the usual sheaf theory is not necessary to understand locally semialgebraic spaces. Thus the extension of the theory should be done through extension of the basic definitions from [@WSS]. Another argument for this is the fact that locally definable spaces do not suffice; we need to speak about weakly definable spaces to get a satisfactory homotopy theory.
Second, some proofs of [@LSS] need modification. The mapping spaces from III.3 are specific to the semialgebraic case. This is modified in the present paper. Moreover, Lemma II.4.3 from [@LSS] (and related facts) needs to be modified since one needs to add the third Comparison Theorem (the o-minimal *expansion case*). This was done in [@BaOt] by the use of “normal triangulations” from [@Ba] (the problem appears on the level of definable sets).
And third, we need to distinguish between theories that are *bounded* (definition in the present paper) and other that are not bounded. The theory RCF itself is bounded, and some proofs of [@WSS] (related to IV.9-10) do not work in the general setting of an o-minimal expansion of a (real closed) field. The question arises if the corresponding facts are true.
After considering these remarks, one can see that the two volumes [@LSS] and [@WSS] are a source of thousands of facts and their proofs about locally definable spaces and weakly definable spaces. It is usually done just by changing the word “semialgebraic” into the word “definable”. The intention of the author of the present paper is not to re-write about 600 pages with this simple change, but to give enough understanding of the theory to the reader. Some examples and facts from [@LSS] and [@WSS] are restated to make this understanding easy. (The above remarks apply to so-called “geometric” theory. The so-called “abstract” theory, contained in Appendix A of [@LSS], is not considered in the present paper.)
It is convenient to understand that the semialgebraic homotopy theory of H. Delfs and M. Knebusch is basically the usual homotopy theory re-done in the presence of the generalized topology. The constructions of homotopy theory may be carried out in the semialgebraic context. Thus it is not surprising that these constructions may be also done in the context of o-minimal expansions of fields. The use of the generalized topology may be extended far beyond the above context (see [@ap2] for details).
The author considers the main result of this paper to be the following: the semialgebraic homotopy theory of H. Delfs and M. Knebusch is now explained and extended to the o-minimal homotopy theory (over a field). The extension part includes the Comparison Theorems (especially Theorems \[3compld\] and \[3compwd\]), a definable version of the Whitehead theorem (Theorem \[cwwhitehead\]) and equivalence of the homotopy categories (Corollaries \[concld\], \[equi\], \[concwd\], \[conc\]). The majority of the examples and Theorems \[realline\] and \[nonreal\] contribute to the explanation part. Of independent interest are: a characterization of real analytic manifolds as locally definable manifolds (Theorem \[manifold\]) and the definable version of the Bertini-Lefschetz Theorem (Theorem \[bertini\]).
As a result of the homotopy approach, which goes deeper than the homology and cohomology one, we get the generalized homology and cohomology theories (including the standard singular theories) for so-called pointed weak polytopes, and these theories appear, if the theory $T$ of $R$ is bounded, to be “the same” as their topological counterparts.
The categories of locally and weakly definable spaces over o-minimal expansions of real closed fields, introduced here, with their subspaces (locally definable subsets and weakly definable subsets), are far-reaching generalizations of the *analytic-geometric categories* of van den Dries and Miller ([@DM]). In particular, paracompact locally definable manifolds are generalizations of both definable manifolds over o-minimal expansions of fields and real analytic manifolds.
For basic properties of o-minimal structures, see the book [@Dries] and the survey paper [@DM]. **Assume that $R$ is an o-minimal expansion of a real closed field.**
Spaces over o-minimal structures
================================
As o-minimal structures carry a natural topology, it is quite natural that algebraic topology for such structures should be developed. (This paper deals only with the case of o-minimal expansions of fields.) Unfortunately, there are obstacles to this when one works with traditional topology: if $R$ is not (an expansion of) the (ordered) field of real numbers $\mathbb{R}$, then $R$ is not locally compact and is totally disconnected. Moreover, even for ${\mathbb}{R}$, not every family of open definable sets has a definable union, and continuous definable functions do not form a sheaf.
A good idea for overcoming this in the case of o-minimal pure (ordered) fields was given by H. Delfs and M. Knebusch in [@LSS]: the concept of a generalized topological space. This idea serves well also in our setting.
A **generalized topological space** is a set $M$ together with a family of subsets $\stackrel{\circ}{\mathcal{T}} (M)$ of $M$, called **open sets**, and a family of open families $\mathrm{Cov}_M $, called **admissible (open coverings)**, such that:
(A1)
: $\emptyset , M\in {\stackrel{\circ}{\mathcal{T}}}(M)$ (the empty set and the whole space are open),
(A2)
: if $U_1 ,U_2 \in {\stackrel{\circ}{\mathcal{T}}}(M)$ then $U_1 \cup U_2 ,U_1
\cap U_2 \in {\stackrel{\circ}{\mathcal{T}}}(M)$ (finite unions and finite intersections of open sets are open),
(A3)
: if $\{ U_i \}_{i\in I} \subseteq {\stackrel{\circ}{\mathcal{T}}}(M)$ and $I$ is finite, then $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M $ (finite families of open sets are admissible),
(A4)
: if $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M$ then $\bigcup_{i\in I}
U_i \in {\stackrel{\circ}{\mathcal{T}}}(M)$ (the union of an admissible family is open),
(A5)
: if $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M$, $V\subseteq
\bigcup_{i\in I} U_i $, and $V\in {\stackrel{\circ}{\mathcal{T}}}(M)$, then $\{ V\cap U_i \}_{i\in I}
\in
\mathrm{Cov}_M$ (the traces of an admissible family on an open subset of the union of the family form an admissible family),
(A6)
: if $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M $ and for each $i\in I$ there is $\{ V_{ij} \}_{j\in J_i} \in \mathrm{Cov}_M$ such that $\bigcup_{j\in J_i}
V_{ij} = U_i $, then $\{ V_{ij} \}_{\stackrel{i\in I}{ j\in J_i}}
\in \mathrm{Cov}_M$ (members of all admissible coverings of members of an admissible family form together an admissible family),
(A7)
: if $\{ U_i \}_{i\in I} \subseteq {\stackrel{\circ}{\mathcal{T}}}(M)$, $\{ V_j \}_{j\in J} \in
\mathrm{Cov}_M$, $\bigcup_{j\in J} V_j =\bigcup_{i\in I} U_i$, and $\forall j\in J \: \exists i\in I : V_j
\subseteq U_i$, then $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M$ (a coarsening, with the same union, of an admissible family is admissible),
(A8)
: if $\{ U_i \}_{i\in I} \in \mathrm{Cov}_M $, $V\subseteq
\bigcup_{i\in I} U_i$ and $V\cap U_i \in {\stackrel{\circ}{\mathcal{T}}}(M)$ for each $i$, then $V\in {\stackrel{\circ}{\mathcal{T}}}(M) $ (if a subset of the union of an admissible family has open traces with members of the family, then the subset is open).
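For orientation, note the standard observation (recorded here only as an illustration) that every topological space $(M,\tau )$ becomes a generalized topological space when all families of open sets are declared admissible: $$\stackrel{\circ}{\mathcal{T}}(M)=\tau , \qquad \mathrm{Cov}_M =\{ \{ U_i \}_{i\in I} \subseteq \tau \} .$$ Indeed, (A1), (A2), (A3), (A5), (A6) and (A7) are immediate, (A4) holds because arbitrary unions of open sets are open, and (A8) holds because $V=\bigcup_{i\in I} (V\cap U_i )$ is then open. The generalized topologies of interest below are much more restrictive about admissibility.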
Generalized topological spaces may be identified with certain Grothendieck sites, where the underlying category is a full subcategory of the category ${\mathcal}{P}(M)$ of subsets of a given set $M$ (with inclusions as morphisms) that is closed under finite (in particular, empty) products and coproducts, and the Grothendieck topology is subcanonical, contains all finite jointly surjective families and satisfies some regularity condition. (See [@sheaves] for the definition of a Grothendieck site. When considering such an identification, we should remember the ambient category ${\mathcal}{P}(M)$.) More precisely: the axioms (A1), (A2) and (A3) contain a stronger version of the identity axiom of the Grothendieck topology. This is natural, since in model theory and in geometry we love finite unions, finite intersections and finite coverings. The axiom (A4) may be called *co-subcanonicality*. Together with subcanonicality, it ensures that admissible coverings are coverings in the traditional sense. (Subcanonicality is imposed by the notation of [@LSS]. The axiom (A4), weaker than (A8), justifies the notation $\mathrm{Cov}_M (U)$ of [@LSS].) Next come (A5), the stability axiom of the Grothendieck topology, and (A6), the transitivity axiom. Finally, (A7) is the saturation property of the Grothendieck topology (usually the Grothendieck topology of a site is required to be saturated), and the last axiom (A8) may be called the *regularity axiom*. Both saturation and regularity have a smoothing character. Saturation may be achieved by modifying any generalized topological space, and regularity by modifying a locally definable space (see I.1, pages 3 and 9 of [@LSS]). The reader should be warned that (in general) the closure operator does not exist for the generalized topology.
A **strictly continuous mapping** between generalized topological spaces is such a mapping that the preimage of an (open) admissible covering is admissible, which implies that the preimage of an open set is open. (So strictly continuous mappings may be seen as morphisms of sites.) Inductive limits exist in the category ${\mathbf{GTS}}$ of generalized topological spaces and their strictly continuous mappings (see I.2 in [@LSS]).
Generalized topological spaces help to introduce further notions of interest that are generalizations of corresponding semialgebraic notions (we follow here [@WSS]).
A **function sheaf of rings over $R$** on a generalized topological space $M$ is a sheaf $F$ of rings on $M$ (here the sheaf property is assumed only for admissible coverings) such that for each $U$ open in $M$ the ring $F(U)$ is a subring of the ring of all functions from $U$ into $R$, and the restrictions of the sheaf are the set-theoretical restrictions of mappings. A **function ringed space over $R$** is a pair $(M, O_M )$, where $M$ is a generalized topological space and $O_M $ is a function sheaf of rings over $R$. We will speak of **spaces** (over $R$) for short. An **open subspace** of a space over $R$ is an open subset of its generalized topological space together with the function sheaf of the space restricted to this open set. A **morphism** $f :(M, O_M )\rightarrow (N, O_N )$ of function ringed spaces over $R$ is a strictly continuous mapping $f: M\rightarrow N$ such that for each open subset $V$ of $N$ the set-theoretical substitution $h \mapsto h\circ f$ gives a morphism of rings $f^{\#}_V: O_N (V) \rightarrow O_M (f^{-1} (V))$. (We could express this by saying that $f^{\#} : O_N \rightarrow f_{*}O_M$ is the morphism of sheaves of rings on $N$ over $R$ induced by $f$. However, if we define for function sheaves $$(f_{*}O_M) (V)=\{ h:V\to R |\: h\circ f\in O_M(f^{-1}(V))\},$$ then each $f^{\#}_V:O_N(V)\to f_{*}O_M (V)$ becomes just an inclusion.) Inductive limits exist in the category ${\mathbf{Space}}(R)$ of spaces over any $R$ and their morphisms (cf. I.2 of [@LSS] and [@ap2]). Notice that our category of spaces over $R$, being a generalization (by passing from the semialgebraic to the general o-minimal case) of the category of spaces from [@WSS], does not use the general sheaf theory for generalized topological spaces (as [@LSS] does), but only a bit of the simpler “function sheaf theory”.
The following basic example is a special case of a definable space defined in [@Dries].
Each definable subset $D$ of $R^n$ has a natural structure of a function ringed space over $R$. Its open sets in the sense of the generalized topology are the (relatively) open definable subsets, admissible coverings are those open coverings in which finitely many open sets already cover the union, and on each open definable subset $O\subseteq D$ we take the ring $\mathcal{DC}_D (O)$ of all continuous definable $R$-valued functions on $O$. *Definable sets will be identified with such function ringed spaces.* Notice that the topological closure of a definable set is definable, so the topological closure operator restricted to the class of definable subsets of a definable set $D$ can be treated as the closure operator in the generalized topological sense.
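For illustration, consider the affine definable space $R$ itself. The family of open definable sets $$\{ (-n,n)\}_{n\in {\mathbb}{N}}$$ is not an admissible covering of its union in this space, because no finite subfamily covers that union. Declaring this family admissible produces a genuinely different, merely locally definable space, namely the space $\textrm{Fin}(R)$ considered in the next sections.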
We start to re-introduce the theory of locally definable spaces by generalizing the definitions from [@LSS].
An **affine definable space** over $R$ is a space over $R$ isomorphic to a definable subset of some $R^n$. (Notice that morphisms of affine definable spaces are given by continuous definable maps between definable subsets of affine spaces.)
The following example, not explicitly studied before, shows that it is important to consider affine definable spaces as definable sets “embedded” into their ambient affine spaces.
Consider the semialgebraic (that is definable in the ordered field structure) space $S^1_{angle}$ over ${\mathbb}{R}$ on the underlying set $S^1$ of ${\mathbb}{R}^2$ obtained by taking the generalized topology from the usual affine definable circle $S^1\subseteq {\mathbb}{R}^2$ and declaring the structure sheaf to contain the continuous semialgebraic functions of the angle $\theta$ having period $2\pi$. The two semialgebraic spaces are different. The usual circle $S^1$ is an “affine model” of $S^1_{angle}$: there exists an isomorphism of semialgebraic spaces over ${\mathbb}{R}$ (whose formula is not semialgebraic, since it involves a trigonometric function) transforming the “non-embedded circle” $S^1_{angle}$ into the “embedded circle” $S^1$.
A **definable space** over $R$ is a space over $R$ that has a finite open covering by affine definable spaces. Definable spaces were introduced by van den Dries in [@Dries]. They admit clear notions of a definable subset and of an open subset. The definable subsets of a definable space form a Boolean algebra generated by the open definable subsets; “definable” here means “constructible from the generalized topology”. A **locally definable space** over $R$ is a space over $R$ that has an admissible covering by affine definable open subspaces. (So definable spaces are examples of locally definable spaces.) Each locally definable space is an inductive limit of a directed system of definable spaces in the category of spaces over a given $R$ (cf. [@LSS I.2.3]). The **dimension** of a locally definable space is defined as usual (cf. p. 37 of [@LSS]), and may be infinite. **Morphisms** of affine definable spaces, definable spaces and locally definable spaces over $R$ are their morphisms as spaces over $R$. So affine definable spaces, definable spaces and locally definable spaces form full subcategories ${\mathbf{ADS}}(R)$, ${\mathbf{DS}}(R)$, and ${\mathbf{LDS}}(R)$ of the category ${\mathbf{Space}}(R)$ of spaces (over $R$).
A **locally definable subset** of a locally definable space is a subset having definable intersections with all open definable subspaces. Such subsets are also considered as **subspaces**: the locally definable space structure on such a set is formed as an inductive limit of definable subspaces of the open definable subspaces forming the ambient space (cf. I.3, p. 28 in [@LSS]). A locally definable subset of a locally definable space is called **definable** if as a subspace it is a definable space. (The definable subsets of a definable space are exactly its definable subsets as a locally definable space.)
On locally definable spaces we often consider a topology in the traditional sense, called the **strong topology** (cf. p. 31 of [@LSS]), taking the open sets from the generalized topology as the basis of the topology. Nevertheless, we will usually work in the generalized topology. This allows us, in many cases, to omit the word “definably” applied to topological notions (as in “definably connected”). On a definable space the generalized topology generates both the strong topology and the definable (i.e. “constructible”) subsets. Similarly, the locally definable subsets of a locally definable space are exactly the sets “locally constructible” from the generalized topology, where “locally” means “when restricted to an open definable subspace”. The closure operator of the strong topology restricted to the class of locally definable subsets may be treated as the closure operator of the generalized topology.
The following new example gives some understanding of the variety of locally definable spaces even in the semialgebraic case. They are obtained by “partial localization”, which generalizes passing to the “localization” $M_{loc}$ of a locally complete locally semialgebraic space $M$ (see I.2.6 in [@LSS]).
\[halfloc\] Consider any o-minimal expansion ${\mathbb}{R}_{{\mathcal}{S}}$ of the field ${\mathbb}{R}$. Take the admissible union (see [@ap2]) of real line open intervals $(-\infty ,n)$ over all natural $n$, which implies that this family is assumed to be admissible. Then this space is definable “on the left-hand side”, but only locally definable “on the right-hand side”. The definable subsets are the finite unions of intervals (of any kind) that are bounded from above. The locally definable subsets are locally finite unions of intervals that have only finitely many connected components on the negative half-line. The structure sheaf consists of functions that are continuous definable on each of the intervals $(-\infty,n)$. This space will be called $({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}$. Analogously we define the space $({\mathbb}{R}_{{\mathcal}{S}})_{loc,-}$ as the admissible union of the family $(-n,+\infty)$, for $n\in {\mathbb}{N}$. (By taking the admissible union of the family $(-n,n)$ for $n\in {\mathbb}{N}$, we would get the usual “localization” $({\mathbb}{R}_{{\mathcal}{S}})_{loc}$ of the real line ${\mathbb}{R}_{{\mathcal}{S}}$.)
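To see that such a partial localization genuinely enlarges the supply of morphisms, here is a simple ad hoc illustration. Define $f:({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}\to {\mathbb}{R}_{{\mathcal}{S}}$ by $$f(x)=\left\{ \begin{array}{ll}
0, & x\leq 0,\\
\mathrm{dist}(x,2{\mathbb}{Z}), & x\geq 0,
\end{array}\right.$$ where $\mathrm{dist}(x,2{\mathbb}{Z})$ is the distance from $x$ to the nearest even integer. On each interval $(-\infty ,n)$ this function is piecewise linear with finitely many pieces, hence continuous and definable there, so $f$ belongs to the structure sheaf of $({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}$ and is a morphism into ${\mathbb}{R}_{{\mathcal}{S}}$; globally $f$ is not definable, since it has infinitely many local maxima.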
As in [@LSS], we have
Any “direct (generalized) topological sum” of definable spaces (in the category of spaces over a given $R$) is a locally definable space.
We call a subset $K$ of a generalized topological space $M$ **small** if for each admissible covering ${\mathcal}{U}$ of any open $U$, the set $K\cap U$ is covered by finitely many members of ${\mathcal}{U}$. (We say that ${\mathcal}{U}$ is **essentially finite** on $K$ in such a situation.) Just from the definitions, we get (as in the semialgebraic case):
Each definable space is small. Each subset of a definable space is also small. Every small open subspace of a locally definable space is definable. Each small set of a locally definable space is contained in a small open set. In particular “small open” means exactly “definable open”, but “small” does not imply “definable”.
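For instance, in the space $({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}$ of Example \[halfloc\], the small subsets are exactly the subsets bounded from above: a small set $K$ must be covered by finitely many members of the admissible covering $\{ (-\infty ,n)\}_{n\in {\mathbb}{N}}$, hence $K\subseteq (-\infty ,n)$ for some $n$, and conversely every subset of the definable open subspace $(-\infty ,n)$ is small.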
One can easily check that: any locally definable space is topologically Hausdorff iff it is **Hausdorff** in the generalized topological sense. Similarly, a locally definable space is topologically regular iff it is **regular** in the generalized topological sense: any single point (points are always closed) and any closed subspace not containing the point can be separated by disjoint open subspaces.
Clearly, each affine definable space is regular. Of great importance for the theory of definable spaces is the following
Each regular definable space is affine.
\[6\] Even if we define locally definable spaces with the use of structure sheaves, a locally definable space is determined by its generalized topology if we tacitly assume that the structure of each affine subspace is understood, since it has an admissible covering by regular small open subspaces, which are affine definable spaces. The main purpose of introducing function ringed spaces was to define morphisms.
A practical way of defining and denoting a locally definable space is to write it as the *admissible union* ($\stackrel{a}{\bigcup}$) of its admissible covering by open definable (often affine) subspaces, not just the union set (even if considered with a topology). Such a notation is explored in [@ap2]. One can also just specify an admissible covering of the space by known open subspaces.
The author considers an attempt to encode the generalized topology under the notion of “equivalent atlases” to be a little risky. We have the following important example, which is again obtained by the “localization” process known from [@LSS].
Take an archimedean $R$. Consider three locally definable spaces $X_1,X_2,X_3$ on the same open interval $(0,1)$ given, respectively, by admissible families of open definable sets ${\mathcal}{U}_1=\{ (\frac{1}{n}, 1-\frac{1}{n}): n\geq 3\} $, ${\mathcal}{U}_2=\{ (0,1)\} $, ${\mathcal}{U}_3={\mathcal}{U}_1 \cup {\mathcal}{U}_2$. Then $X_1\neq X_2=X_3$. Such a space $X_1$ is the “localized” unit interval $(0,1)_{loc}$.
We would have two non-equivalent atlases ${\mathcal}{U}_1,{\mathcal}{U}_2$ that combine to a third atlas ${\mathcal}{U}_3$, and the combined atlas ${\mathcal}{U}_3$ would be equivalent to ${\mathcal}{U}_2$, but not equivalent to ${\mathcal}{U}_1$.
Notice that the recent paper [@ldh] by E. Baro and M. Otero can easily mislead the reader. They define a locally definable space as a set with a concrete atlas, call some atlases equivalent (which is not studied later), and in Theorems 3.9 and 3.10 say that a set with only a topology is a locally definable space. The reader gets the impression that they consider only the usual topology, and does not see the essential use of the generalized topology (see the proof of (iii) of their Proposition 2.9). Their notion of an “ld-homeomorphism” is never defined, and the reader may wrongly guess that an ld-homeomorphism is just a locally definable homeomorphism (see Remark 2.11). Their “locally finite generalized simplicial complex” is given a locally definable space structure “star by star”, so it is not necessarily “embedded” into the ambient affine space. This may mislead the reader when reading their version of the Triangulation Theorem (Fact 2.10) and some proofs. Their Example 3.1 is highly imprecise, since it depends on the choice of the covering of $M$ by definable subsets $M_i$. The same symbol $M$ denotes both a locally definable space and just a subset of $R^n$ (and this is continued in their Example 3.3). The formula $Fin({\mathbb}{R})={\mathbb}{R}$ (see p. 492) again suggests to the reader the nonexistence of the generalized topology (never mentioned explicitly). It is worth noting that if $R$ does not have any saturation (as in the important case of the field of real numbers ${\mathbb}{R}$), then the usual topology does not determine the generalized topology.
We will say that an object $N$ of ${\mathbf{LDS}}(R)$ **comes from** $R^k$ if the underlying topological space of $N$ is equal to the standard topological space of $R^k$ and for each $x\in R^k$ both $N$ and the affine space $R^k$ induce on an open box $B$ containing $x$ the same definable open subspace. The following two original theorems show the variety of locally definable spaces “living” on the same topological space.
\[realline\] For each o-minimal expansion ${\mathbb}{R}_{{\mathcal}{S}}$ of the field of real numbers ${\mathbb}{R}$:
1\) there are exactly four different objects of ${\mathbf{LDS}}({\mathbb}{R}_{{\mathcal}{S}})$ that come from ${\mathbb}{R}_{{\mathcal}{S}}^1$, namely: ${\mathbb}{R}_{{\mathcal}{S}}$, $({\mathbb}{R}_{{\mathcal}{S}})_{loc}$, $({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}$, $({\mathbb}{R}_{{\mathcal}{S}})_{loc, -}$.
2\) there are uncountably many different objects of ${\mathbf{LDS}}({\mathbb}{R}_{{\mathcal}{S}})$ that come from ${\mathbb}{R}^2_{{\mathcal}{S}} $.
1\) Assume $N$ is an object of ${\mathbf{LDS}}({\mathbb}{R}_{{\mathcal}{S}})$ coming from ${\mathbb}{R}_{{\mathcal}{S}}^1$. Each open subset of $N$ is a countable union of open intervals. Each open definable set is a finite union of open intervals, since it has a finite number of connected components. There is an admissible covering ${\mathcal}{U}$ of the real line by open intervals that are affine definable spaces. If such an interval is bounded, then it is relatively compact, and a standard interval (that is an interval as an “embedded” subspace of ${\mathbb}{R}_{{\mathcal}{S}}$). If such an interval is not bounded, then each bounded subinterval is standard, hence again the whole interval is standard, since it is an affine definable space. If there are no infinite intervals in ${\mathcal}{U}$, then the open family $\{ (-n,n)\}_{n\in {\mathbb}{N}}$ is admissible, and $N=({\mathbb}{R}_{{\mathcal}{S}})_{loc}$. If both $+\infty$ and $-\infty$ are ends of intervals from ${\mathcal}{U}$ , then $N$ is a finite union of standard intervals, thus it is isomorphic to the affine space ${\mathbb}{R}_{{\mathcal}{S}}$. Similarly, the two other cases give the spaces $({\mathbb}{R}_{{\mathcal}{S}})_{loc,+}$, $({\mathbb}{R}_{{\mathcal}{S}})_{loc, -}$.
2\) Choose a slope $a\in {\mathbb}{R}$ and consider the space $N_a$ defined by the admissible covering $ \{ U_{a,n} \}_{n\in {\mathbb}{N}}$, where $U_{a,n}=\{ (x,y)\in {\mathbb}{R}^2:\: y<ax+n\}$ are definable sets. All $N_a$, for $a\in {\mathbb}{R}$, are different objects of ${\mathbf{LDS}}({\mathbb}{R}_{{\mathcal}{S}})$.
Recall (from non-standard analysis) that each non-archimedean $R$ is partitioned into many **galaxies** (two elements $x,y\in R$ are in the same galaxy if their “distance” $|x-y|$ is bounded from above by a natural number).
\[nonreal\] For any o-minimal expansion $R$ of a field not isomorphic to ${\mathbb}{R}$ there are already uncountably many different objects of ${\mathbf{LDS}}(R)$ that come from the line $R^1$.
*Case 1: $R$ contains ${\mathbb}{R}$.*
The set of galaxies of $R$ is uncountable. For any galaxy $G$ of $R$ take $x\in G$ and consider the space $N_G$ defined as the disjoint generalized topological union of the following: all the galaxies $G' >G$, treated each one as a locally definable space (see Remark \[galaxy\]), and the space $N^{'}_G$ given by the admissible covering $ \{(-\infty, x+n)\}_{n\in {\mathbb}{N}}$, which is the union of all galaxies $G''\leq G$ “partially localized” (only) at the end of $G$.
All of $N_G$ are different objects of ${\mathbf{LDS}}(R)$ and come from $R^1$.
*Case 2: $R$ does not contain ${\mathbb}{R}$.*
The field $R\cap {\mathbb}{R}$ has uncountably many irrational cuts, determined by elements $r\in {\mathbb}{R}\setminus R$. For each such $r$, consider the space $N_r$ over $R$ defined by the admissible covering $$\{ (-\infty, s) \}_{s<r} \cup \{ (s,+\infty)\}_{s>r},$$ where $s\in R \cap {\mathbb}{R}$. This space consists of two connected components given by the conditions $x<r$ and $x>r$. All of $N_r$, $r\in {\mathbb}{R}\setminus R$, are different objects of ${\mathbf{LDS}}(R)$ and come from $R^1$.
There are more general sets that are called in [@Fischer], Definition 7.1(a), “locally definable”. We will call them local subsets. (A subset $Y$ of a space $X$ is a **local subset** if for each point $y$ of $Y$ (only points of $Y$ are considered) there is an open definable neighborhood $U$ of $y$ in $X$ such that $U\cap Y$ is definable.) They can be given a locally definable space structure, but their properties are not nice: they are closed only under finite intersections, and are not closed under complement or even finite union.
The locally definable space on such $Y\subseteq X$ may be introduced by the following admissible covering $${\mathcal}{U}_Y = \{ Y\cap U_i \mid U_i\mbox{ is a definable open subset of $X$ and }Y\cap U_i \mbox{ is definable}\} .$$ (The above definition does not depend on any arbitrary choice of an admissible covering, contrary to Example 3.1 and Example 3.3 of [@ldh].)
Local subsets are (as such) called subspaces! Their use often does not recognize the space structure given above (even if they are definable sets), since we mainly want to study “locally definable functions” on them (see Definition 7.1 (b) of [@Fischer]). (Consider a function “locally definable” if its domain and codomain are local subsets of some objects of ${\mathbf{LDS}}(R)$, and all function germs of this function at points of its domain are definable. A function germ $f_x$ at $x$ is called definable if some definable neighborhood of $x$ is mapped by $f$ into a definable neighborhood of $f(x)$ and the obtained restriction of $f$ is a definable mapping.)
The following examples make the above considerations more clear.
The semialgebraic set $(-1,1)_{{\mathbb}{R}}$ inherits an affine semialgebraic space structure from ${\mathbb}{R}$. Nevertheless, when speaking about “locally semialgebraic functions” into ${\mathbb}{R}$ (in the sense of Definition 7.1 (b) of [@Fischer]) we want to treat it as the ”localized” open interval $(-1,1)_{loc}$, which is not a semialgebraic space. Define, for example, functions $w:{\mathbb}{R}\to {\mathbb}{R}$ and $u:(-1,1)\to {\mathbb}{R}$ by formulas $$w(x)=\left\{ \begin{array}{ll}
x-4k, & x\in [4k-1,4k+1), k\in {\mathbb}{Z},\\
2+4k-x, & x\in [4k+1,4k+3), k\in {\mathbb}{Z},
\end{array} \right.$$ and $$u(t)=w(\frac{t}{\sqrt{1-t^2}}).$$ Then $u$ is “locally semialgebraic” (and not semialgebraic).
Consider the semialgebraic set $S=(-1,1)^2\cup \{ (1,1)\}$ in ${\mathbb}{R}^2$. The fact of being a “locally semialgebraic function” (in the sense of Definition 7.1 (b) of [@Fischer]) on $S$ (into ${\mathbb}{R}$) does not reduce to being a morphism of any locally (and even weakly) semialgebraic space that can be formed by redefining the notion of an admissible covering of the space $S$. In particular, each of the functions $F_n:S\to {\mathbb}{R}$ $(n=1,2,3,...)$, where $$F_n (x,y)=\left\{ \begin{array}{ll}
0, & y\geq 1-\frac{1}{n},\\
w(\frac{1-\frac{1}{n}-y}{1-x}), & y<1-\frac{1}{n},
\end{array}\right.$$ is “locally semialgebraic” (function $w$ is defined as in the previous example).
In general, definable spaces and locally definable spaces do not behave well enough to be used in homotopy theory. The right choice of assumptions (as in the semialgebraic case of [@LSS]) is: regularity and a new one called “paracompactness”, which is only an analogue of the topological notion.
Regular paracompact locally definable spaces
============================================
One of the reasons why we pass to the locally definable spaces is the need for covering mappings with infinite (for example, countable) fibers.
The following example is a generalization of an example from [@DK6].
*The space $\textrm{Fin} (R)$.* We look for the *universal covering* of the unit circle $S^1 \subseteq R^2$. We will see soon that (as in topology) $\pi_1 (S^1 )=\mathbb{Z} $, so the universal covering should have countable fibers. Let $\textrm{Fin} (R)$ be the locally definable space introduced by the admissible covering by open intervals $\{(-n,n)\}_{n\in \mathbb{N}}$ in $R$. There is a surjective semialgebraic (so definable) morphism $e:[0,1]\rightarrow S^1$ that maps 0 and 1 to the distinguished point on $S^1$ and is injective elsewhere. Then the universal covering mapping $p: \textrm{Fin}(R)
\rightarrow S^1$ defined by $p(m+x)=e(x)$, where $m\in \mathbb{Z}, x\in [0,1]$, is a morphism of locally definable spaces.
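For illustration, the covering structure of $p$ can be made explicit. Write $x_0 =e(0)=e(1)$ for the distinguished point. Then $$p^{-1} (S^1 \setminus \{ x_0 \} )=\bigcup\limits^{a}_{m\in \mathbb{Z}} \: (m,m+1),$$ and $p$ maps each interval $(m,m+1)$ bijectively onto $S^1 \setminus \{ x_0 \}$ (isomorphically for the standard choice of $e$), so $S^1 \setminus \{ x_0 \}$ is well covered. Moreover, for every $k\in \mathbb{Z}$ the translation $x\mapsto x+k$ is an automorphism of $\textrm{Fin}(R)$ satisfying $p(x+k)=p(x)$, which reflects the expected group $\mathbb{Z}$ of deck transformations.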
A family of subsets of a locally definable space is **locally finite** if each open definable subset of the space meets only finitely many members of the family.
A locally definable space is called **paracompact** if there is a locally finite covering of the whole space by open definable subsets. (A locally finite covering must be admissible, since “admissible” means: when restricted to an open definable subspace, there is a finite subcovering. In short: “admissible” means exactly “locally *essentially* finite”.)
\[galaxy\] The locally definable space $\textrm{Fin} (R)$ given by the admissible covering $\{ (-n,n): n\in \mathbb{N}\} $ is paracompact for each $R$, since there exists a locally finite covering giving the same space. (Notice that if $R$ contains ${\mathbb}{R}$, then $\bigcup\limits^a_{r\in {\mathbb}{R}_{+}} (-r,r) =\bigcup\limits^a_{n\in {\mathbb}{N}} (-n,n)$.) In the language of nonstandard analysis, we can say that each galaxy may be considered as a regular paracompact locally definable space.
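To make the last claim concrete: the family $\{ (m-1,m+1): m\in \mathbb{Z}\} $ is a locally finite covering of $\textrm{Fin}(R)$ by open definable sets, since every open definable subset of $\textrm{Fin}(R)$ is contained in some $(-n,n)$ and therefore meets only finitely many of its members; moreover, each of the two families $\{ (-n,n)\}_{n\in \mathbb{N}}$ and $\{ (m-1,m+1)\}_{m\in \mathbb{Z}}$ is essentially finite on every member of the other, so both coverings define the same locally definable space $\textrm{Fin}(R)$.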
Direct (i. e. cartesian) products preserve regularity and paracompactness of locally definable spaces (cf. I.4.2c) and I.4.4e) in [@LSS]). We will denote the category of regular paracompact locally definable spaces over $R$ by ${\mathbf{RPLDS}}(R)$.
The spaces from the proofs of Theorems \[realline\] and \[nonreal\] are objects of ${\mathbf{RPLDS}}(R)$.
A **connected** (in the sense of generalized topology: the space cannot be decomposed into two open disjoint nonempty subspaces) regular paracompact locally definable space has a countable admissible covering by definable open subsets (so called **Lindelöf property** in [@LSS]). If it has finite dimension $k$, then it can be embedded into the cartesian power Fin$(R)^{2k+1}$. This holds by embedding into a partially complete space, triangulation (see Theorems \[embed\], \[triang\] below) and Theorem 3.2.9 from a book of Spanier [@spa] (see also II.3.3 of [@LSS]).
The notion of paracompactness introduced above differs from the topological one. Each definable space is paracompact. There are Hausdorff definable (so paracompact) spaces which are not regular. With the regularity assumption, each paracompact space is normal and admits partition of unity. Paracompactness is inherited by all subspaces and cartesian products. The Lindelöf property gives paracompactness only with the assumption that the closure of a definable set is definable.
Fiber products exist in the category of locally definable spaces over $R$ (cf. I.3.5 of [@LSS]). A morphism $f:M\rightarrow N$ between locally definable spaces is called **proper** if it is universally closed in the sense of the generalized topology. This means that for each morphism of locally definable spaces $g:N' \rightarrow N$ the induced morphism $f' : M\times_{N} N' \rightarrow N'$ in the pullback diagram is a closed mapping in the sense of the generalized topological spaces (it maps closed subspaces onto closed subspaces). If all restrictions of $f$ to closed definable subspaces are proper, then we call $f$ **partially proper**.
A Hausdorff locally definable space $M$ is called **complete** if the morphism from $M$ to the one-point space is proper. Each paracompact complete space is affine definable (compare I.5.10 in [@LSS]). Moreover, $M$ is called **locally complete** if each point has a complete neighborhood. (Each locally complete locally definable space is regular, cf. I.7, p. 75 in [@LSS]). It is **partially complete** if every closed definable subspace is complete. Every partially complete regular space is locally complete (cf. I.7.1 a) in [@LSS]).
This notion of properness is analogous to a notion from algebraic geometry. Partial completeness is the key notion.
Let $M$ be a locally complete paracompact space. Take the family $\stackrel{\circ}{\gamma}_c (M)$ of all open definable subsets $U$ of $M$ such that $\overline{U}$ is complete. Introduce a new locally definable space $M_{loc}$, the **localization** or **partial completization** of $M$, on the same underlying set, taking $\stackrel{\circ}{\gamma}_c (M)$ as an admissible covering by small open subspaces (cf. I.2.6 in [@LSS]). The new space is regular and partially complete (not only locally complete) and the identity mapping from $M_{loc}$ to $M$ is a morphism, but $M_{loc}$ may fail to be paracompact (see Warning-Example \[rloc\]). Notice that localization leaves the strong topology unchanged.
Localization is similar to the process of passing to $k$-spaces (they are exactly the compactly generated spaces if Hausdorffness is assumed) in homotopy theory. (Complete spaces play the role of compact spaces.) But notice that each locally compact topological space is a $k$-space.
Only one of the four locally definable spaces of Theorem \[realline\] for each ${\mathbb}{R}_{{\mathcal}{S}}$ is partially complete, namely $({\mathbb}{R}_{{\mathcal}{S}})_{loc}$.
A **paracompact locally definable manifold** of dimension $n$ over $R$ is a Hausdorff locally definable space over $R$ that has a locally finite covering by definable open subsets that are isomorphic to open balls in $R^n$. (Such a space is paracompact and locally complete, so regular, cf. I.7 p.75 in [@LSS].) If additionally the transition maps are (definable) $C^k$-diffeomorphisms ($k=1,...,\infty$), then we get **paracompact locally definable $C^k$-manifolds**. Notice that the differential structure of such manifolds may be encoded by sheaves (in the sense of the strong topology) of $C^k$ functions. We get the following original result:
\[manifold\] Paracompact (in the topological sense) analytic manifolds of dimension $n$ are in bijective correspondence with partially complete paracompact locally definable $C^{\infty}$-manifolds over $\mathbb{R}_{an}$ of the same dimension.
*A paracompact analytic manifold induces a paracompact locally definable $C^{\infty}$-manifold over $\mathbb{R}_{an}$*: Each paracompact manifold (even a topological one) is regular. We may assume (by shrinking the covering of the manifold by chart domains if necessary) that the analytic structure of the manifold is given by a locally finite atlas consisting of charts whose domains and ranges are relatively compact subanalytic sets, and the charts extend analytically beyond the closures of chart domains. By taking a nice locally finite refinement, we can additionally get the chart domains and chart ranges (analytically and globally subanalytically) isomorphic to open balls in ${\mathbb}{R}^n$. Now the chart domains form a locally finite covering of the analytic manifold that defines a paracompact locally definable manifold over ${\mathbb}{R}_{an}$. The transition maps (being analytic diffeomorphisms) are ${\mathbb}{R}_{an}$-definable $C^{\infty}$-diffeomorphisms of open, relatively compact, subanalytic subsets of some $\mathbb{R}^n$. Thus we get a locally definable $C^{\infty}$-manifold over ${\mathbb}{R}_{an}$.
Notice that the relatively compact subanalytic sets are now the definable sets and the subanalytic sets are now the locally definable sets.
The obtained locally definable space is partially complete.
*Vice versa*: A paracompact locally definable $C^{\infty}$-manifold over $\mathbb{R}_{an}$ induces a Hausdorff (analytic) manifold with analytic, globally subanalytic transition maps and globally subanalytic chart ranges. We may assume that the manifold is connected. Its locally finite atlas is countable (cf. I.4.17 in [@LSS]), so the manifold is a second countable topological space, and finally a paracompact analytic manifold. All locally definable subsets are now subanalytic (they are globally subanalytic in every chart).
*One-to-one correspondence*: If the paracompact locally definable manifold is partially complete, then the closure of a chart domain is a closed definable set (cf. I.4.6 of [@LSS]) and a complete definable set, which means it is a compact subanalytic set. Thus chart domains are relatively compact. “Locally” in the sense of locally definable spaces means exactly “locally” in the topological sense. It follows that: the definable subsets are exactly the relatively compact subanalytic subsets, and the locally definable subsets are exactly the subanalytic subsets of the obtained paracompact analytic manifold. Notice that the strong topology does not change when we pass from one type of a manifold to the other. So the structure of the partially complete locally definable space is uniquely determined (see Remark \[6\]). Both the structures of a $C^{\infty}$ locally definable manifold over ${\mathbb}{R}_{an}$ and the structure of an analytic manifold do not change during the above operations (only a convenient atlas was chosen).
\[analytic\] A real function on a (paracompact) analytic manifold $M_{an}$ is analytic iff it is a $C^{\infty}$ morphism from the corresponding partially complete paracompact locally definable $C^{\infty}$-manifold (call it $M_{ldm}$) into $\mathbb{R}_{an}$ as an affine definable space. (See 5.3 in [@DM].)
Analogously, for each expansion ${\mathbb}{R}_{{\mathcal}{S}}$ of the field ${\mathbb}{R}$ that is a reduct of ${\mathbb}{R}_{an}$, partially complete paracompact locally definable $C^{\infty}$-manifolds over ${\mathbb}{R}_{{\mathcal}{S}}$ correspond uniquely to paracompact analytic manifolds of some special kinds. Then the locally definable subsets in the sense of a given locally definable manifold (as well as in the sense of its “expansions”, see below) form nice “geometric categories”. This in particular generalizes the *analytic-geometric categories* of van den Dries and Miller [@DM].
The above phenomenon may be explained in the following way: the analytic manifolds ${\mathbb}{R}^n$ ($n\geq 1$), which model all analytic manifolds, have a natural notion of smallness. A subset $S\subset {\mathbb}{R}^n$ is **topologically small** if it is *bounded* or, equivalently, *relatively compact*. In the corresponding partially complete paracompact locally definable $C^{\infty}$-manifolds $Fin({\mathbb}{R}_{an})^n=({\mathbb}{R}_{an})_{loc}^n = \bigcup\limits^a_{k\in {\mathbb}{N}} \: (-k,k)^n = Fin(({\mathbb}{R}_{an})^n)$ over ${\mathbb}{R}_{an}$ this means that $S$ is a *small* subset in the sense of the generalized topology (if $S$ is subanalytic, then this means *definable*). One could also use the notion of being *relatively complete* in this context. It is partial completeness that gives analogy between the usual topology and the generalized topology.
The generalized topology of the space $M_{ldm}$ of Remark \[analytic\] is “the subanalytic site” considered by microlocal analysts (see [@KS]). More generally: the generalized topology of each paracompact locally definable manifold may be considered as a “locally definable site”. It is also possible to consider all subanalytic subsets of a real analytic manifold as open sets of a generalized topological space. But then the strong topology becomes discrete.
$($*The space $R_{loc}$.*$)$ \[rloc\] The structure $R$, as an affine definable space, is locally complete but not complete. For such a space, $R_{loc}$ is introduced by the admissible covering $\{ (-r,r): r\in R_{+} \} $. This is a locally (even regular partially) complete space which is not definable. If the cofinality of $R$ is uncountable, then $R_{loc}$ is not paracompact! Here the morphisms from $R$ to $R$ are “the continuous definable functions”, and the morphisms from ${R}_{loc}$ to $R$ are “the continuous locally (in the sense of $R_{loc}$) definable functions”. (The latter case includes some nontrivial periodic functions for an archimedean $R$.)
A series of topological facts have counterparts for regular paracompact locally definable spaces.
\[15\] Let $M$ be an object of ${\mathbf{RPLDS}}(R)$. Then:
a\) **\[tautness\]** the closure of a definable set is definable (cf. I.4.6);
b\) **\[shrinking of coverings lemma\]** for each locally finite covering $(U_{\lambda} )$ of $M$ by open locally definable sets there is a covering $(V_{\lambda} )$ of $M$ by open locally definable sets such that $\overline{V_{\lambda} }\subseteq
U_{\lambda}$ (cf. I.4.11);
c\) **\[partition of unity\]** for every locally finite covering $(U_{\lambda} )$ of $M$ by open locally definable subsets there is a subordinate partition of unity, i. e. there is a family of morphisms $\phi_{\lambda} : M\rightarrow [0,1]$ such that $\mathrm{supp}
\phi_{\lambda} \subseteq U_{\lambda}$ and $\sum_{\lambda} \phi_{\lambda} =1$ on $M$ (cf. I.4.12);
d\) **\[Tietze’s extension theorem\]** if $A$ is a closed subspace of $M$ and $f:A\rightarrow K$ is a morphism into a convex definable subset $K$ of $R$, then there exists a morphism $g:M\rightarrow K$ such that $g|A =f$ (cf. I.4.13);
e\) **\[Urysohn’s lemma\]** if $A,B$ are disjoint closed locally definable subsets of $M$, then there is a morphism $f:M\rightarrow [0,1]$ with $f^{-1} (0)=A$ and $f^{-1} (1)=B$ (cf. I.4.15).
Each locally definable space $M$ over $R$ has a natural “base field extension” $M(S)$ over any elementary extension $S$ of $R$ (cf. I.2.10 in [@LSS]) and an “expansion” $M_{R'}$ to a locally definable space over any o-minimal expansion $R'$ of $R$. Analogously, we may speak about a base field extension of a morphism.
The rules of conservation of the main properties under the base field extension are the same as for the locally semialgebraic case:
- the base field extensions of the family of the connected components of a locally definable space $M$ form the family of connected components of $M(S)$ (cf. I.3.22 i) in [@LSS]);
- if $M$ is Hausdorff then: the space $M$ is definable iff $M(S)$ is definable, $M$ is affine definable iff $M(S)$ is affine definable, $M$ is paracompact iff $M(S)$ is paracompact, $M$ is regular and paracompact iff $M(S)$ is regular and paracompact (cf. B.1 in [@LSS]).
- if $M$ is regular and paracompact, then: $M$ is partially complete iff $M(S)$ is partially complete, $M$ is complete iff $M(S)$ is complete (cf. B.2 in [@LSS]).
If we expand $R$ to an o-minimal $R'$ then:
- any locally definable space $M$ is regular over $R$ iff $M_{R'}$ is a regular space over $R'$, since they have the same strong topologies;
- a locally definable space $M$ is connected over $R$ iff $M_{R'}$ is connected over $R'$ (for an affine space, a clopen subset of a set definable over $R$ is definable over $R$; in general, apply an admissible covering by affine subspaces “over $R$”);
- a locally definable space $M$ is Lindelöf over $R$ iff $M_{R'}$ is Lindelöf over $R'$: if $M$ is Lindelöf, then $M_{R'}$ is obviously Lindelöf; if $M_{R'}$ is Lindelöf, then each member of a countable admissible covering ${\mathcal}{V}$ of $M_{R'}$ by definable open subspaces is covered by a finite union of elements of the admissible covering ${\mathcal}{U}$ of $M$ by definable open subspaces that was used to construct $M_{R'}$. Then ${\mathcal}{U}$ has a countable subcovering ${\mathcal}{U'}$. (Up to this point, our proof goes like the proof of Proposition 2.9 iii) in [@ldh], but admissibility is not taken care of there.) The family ${\mathcal}{U''}$ of finite unions of elements of ${\mathcal}{U'}$ is a countable coarsening of ${\mathcal}{V}$, hence is admissible in $M_{R'}$. Since “admissible” means “locally essentially finite”, ${\mathcal}{U''}$ is in particular admissible in $M$.
- if $M$ is a Hausdorff locally definable space over $R$, then $M$ is paracompact over $R$ iff $M_{R'}$ is paracompact over $R'$: if $M$ is paracompact, then $M_{R'}$ is obviously paracompact; if $M_{R'}$ is paracompact, then we can assume that it is connected. Then $M_{R'}$ is Lindelöf (cf. I.4.17 in [@LSS]) and *taut* (i.e. the closure of a definable set is definable, cf. I.4.6 in [@LSS]). Now, by the previous point, the space $M$ is Lindelöf, and it is taut by the construction of $M_{R'}$ and considerations of Fundamental Example 1, so $M$ is paracompact (see I.4.18 in [@LSS] and Proposition 2.9 iv) in [@ldh]).
Homotopies
==========
Here basic definitions of homotopy theory are re-introduced. The unit interval $[0,1]$ of $R$ will be considered as an affine definable space over $R$.
Let $M,N$ be objects of ${\mathbf{Space}}(R)$ and let $f,g$ be morphisms from $M$ to $N$. A **homotopy** from $f$ to $g$ is a morphism $H:M\times
[0,1]\rightarrow N$ such that $H(\cdot ,0)=f$ and $H(\cdot ,1)=g$. If $H$ exists, then $f$ and $g$ are called **homotopic**. If additionally $H(x,t)$ is independent of $t\in [0,1]$ for each $x$ in a subspace $A$, then we say that $f$ and $g$ are **homotopic relative to** $A$. A subspace $A$ of a space $M$ is called a **retract** of $M$ if there is a morphism $r:M\rightarrow A$ such that $r|A=id_A$. Such $r$ is called a **retraction**. A subspace $A$ of $M$ is called a **strong deformation retract** of $M$ if there is a homotopy $H:M\times [0,1]\rightarrow M$ such that $H_0$ is the identity, $H_1$ is a retraction from $M$ to $A$, and $H(x,t)=x$ for all $x\in A$ and $t\in [0,1]$. Then $H$ is called a **strong deformation retraction**.
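A basic illustration: every definable convex subset $K$ of some $R^n$ strongly deformation retracts onto any of its points $x_0 \in K$ via the straight-line homotopy $$H(x,t)=(1-t)x+tx_0 ,$$ which is a continuous definable (even semialgebraic) map $K\times [0,1]\rightarrow K$, hence a morphism of affine definable spaces, and satisfies $H_0 =id_K$, $H_1 \equiv x_0$ and $H(x_0 ,t)=x_0$ for all $t\in [0,1]$.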
A **system of spaces** over $R$ is any tuple $(M,A_1 ,...,A_k )$ where $M$ is a space over $R$ and $A_1,...,A_k$ are subspaces of $M$. A **closed pair** is a system $(M,A)$ of a space with a closed subspace. A system $(A_0,A_1,...,A_k)$ is **decreasing** if $A_{i+1}$ is a subspace of $A_i$ for $i=0,...,k-1$. A **morphism** of systems of spaces $f:(M,A_1 ,...,A_k )
\rightarrow (N, B_1 ,...,B_k )$ is a morphism of spaces $f:M\rightarrow N$ such that $f(A_i )\subseteq B_i$ for each $i=1,...,k$. A **homotopy** between two morphisms of systems of spaces $f,g$ from $(M,A_1 ,...,A_k )$ to $(N,B_1 ,...,B_k )$ is a morphism $$H:(M\times [0,1],A_1 \times [0,1],...,A_k \times[0,1])\rightarrow
(N,B_1 ,...,B_k )$$ with $H_0 =f$ and $H_1 =g$. The **homotopy class** of such a morphism $f$ will be denoted by $[f]$ and the **set of all homotopy classes** of morphisms from $(M,A_1 ,...,A_k )$ to $(N,B_1 ,...,B_k )$ by $$[ (M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ] .$$ If $C$ is a closed subspace of $M$, and $h:C\rightarrow N$ is a pregiven morphism such that $h(C \cap A_i ) \subseteq B_i$, then we denote the sets of classes of homotopy relative to $C$ of mappings extending $h$ by $$[ (M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h} .$$
Let us adopt the notation: $I=[0,1]$, $\partial I^n =I^n \setminus (0,1)^n$, and $J^{n-1} = \overline{\partial I^n \setminus (I^{n-1} \times \{ 0\} )}$. For every pointed space $(M, x_0 )$ over $R$ and $n\in \mathbb{N}^{*}$ we define the **(absolute) homotopy groups** as sets $$\pi_n (M,x_0 )=[ (I^n ,\partial I^n ), (M,x_0 ) ]$$ where the multiplication $[f]\cdot [g]$, for $n\geq 1$, is the homotopy class of $$(f * g) (t_1 ,t_2 ,...,t_n )=\left\{
\begin{array}{l}
f (2 t_1 ,t_2 ,...,t_n ), 0\le t_1 \le \frac{1}{2} \\
g (2 t_1 -1,t_2 ,...,t_n ), \frac{1}{2} \le t_1 \le 1.
\end{array} \right.$$ For $n=0$ we get (only) a set $\pi_0 (M,x_0 )$ of connected components of $M$ with the base point the connected component of $x_0$. Also, as in topology, we define **relative homotopy groups** $$\pi_n (M,A,x_0 ) = [(I^n ,\partial I^n ,J^{n-1} ),(M,A,x_0 )].$$
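As in the topological case, for $n\geq 1$ the neutral element of $\pi_n (M,x_0 )$ is the class of the constant map $c_{x_0}$, and inverses are obtained by reversing the first coordinate: $$[f]^{-1} =[(t_1 ,t_2 ,...,t_n )\mapsto f(1-t_1 ,t_2 ,...,t_n )].$$ The homotopies verifying the group axioms are the standard ones, given by piecewise linear reparametrizations of the cube; these are semialgebraic, hence morphisms over any $R$.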
A morphism $f:M\rightarrow N$ is a **homotopy equivalence** if there is a morphism $g:N\rightarrow M$ such that $g\circ f$ is homotopic to $id_M$ and $f\circ g$ is homotopic to $id_N$. We call $f:M\rightarrow N$ a **weak homotopy equivalence** if $f$ induces bijections in homotopy sets ($\pi_0 (\cdot)$) and group isomorphisms in all homotopy groups ($\pi_n (\cdot)$, $n\geq 1$). Analogously, we define homotopy equivalences and weak homotopy equivalences for systems of spaces.
The following operations, known from the usual homotopy theory, may not be executable in the category of regular paracompact locally definable spaces over a given $R$: the **smash product** of two pointed spaces $M,N$, which is $M\wedge N = M\times N / M\vee N$, where $M\vee N$ denotes the wedge product of such spaces; the **reduced suspension** $SM$ of $M$, which is $S^1 \wedge M$; the **mapping cylinder** $Z(f)$ of $f:M\rightarrow N$, which is the space obtained as the quotient of $(M\times [0,1]) \cup N$ by the equivalence relation that identifies each point of the form $(x,1)$, $x\in M$, with $f(x)$; the **mapping cone** of $f$, which is the mapping cylinder of $f$ divided by $M\times \{ 0\} $; and the **cofiber** $C(f)$ of $f:M\rightarrow N$, which is the “switched” mapping cylinder $(([0,1]\times M)\cup_{1\times M,f} N) / \{ 0\} \times M$.
Comparison Theorems for locally definable spaces
================================================
In this section the two Comparison Theorems from [@LSS] are extended, and the third is added. The first steps to do this are: embedding in a partially complete space and triangulation.
\[embed\] Each regular paracompact locally definable space over $R$ is isomorphic to a dense locally definable subset of a partially complete regular paracompact space over $R$.
We restate the triangulation theorem, keeping the notation from [@LSS] to avoid confusion.
\[triang\] Let $M$ be a regular paracompact locally definable space over $R$. For a given locally finite family ${\mathcal}{A}$ of locally definable subsets of $M$, there is a simultaneous triangulation $\phi : X\rightarrow M$ of $M$ and ${\mathcal}{A}$ (i. e. an isomorphism from the underlying set $X$, considered as a locally definable space, of a strictly locally finite geometric simplicial complex $(X,\Sigma(X))$ to $M$ such that all members of ${\mathcal}{A}$ are unions of images of open simplices from $\Sigma(X)$).
In particular, each object of ${\mathbf{RPLDS}}(R)$ is locally (pathwise) connected and even locally contractible.
As an illustration of the methods made available by triangulation, the following theorem of Bertini-Lefschetz type (known from complex algebraic and analytic geometry) is proven. (See [@ap] for a topological version. Here the difficulty lies in the possibility that two different points may be at an infinitesimal distance from each other, and that a curve may have infinitely large velocity.)
A subspace $\Delta$ of a locally definable space $Y$ **nowhere disconnects** $Y$ if for each connected open neighborhood $W$ of any $y\in Y$ there is an open neighborhood $U\subseteq W$ of $y$ such that $U\setminus \Delta$ is connected.
A morphism $p:E\rightarrow B$ in **LDS($R$)** is a **branched covering** if there is a closed, nowhere dense **exceptional subspace** $\Delta\subseteq B$ such that $p|_{p^{-1}(B\setminus \Delta)}:p^{-1}(B\setminus \Delta)
\rightarrow B\setminus \Delta$ is a **covering mapping** (this means: there is an admissible covering of $B\setminus \Delta$ by open subspaces, each of them well covered, analogously to the topological setting). If all the **regular points** $b\in B\setminus \Delta$ of the branched covering $p:E\to B$ have fibers of the same cardinality, then this cardinality is called the **degree** of the branched covering $p:E\to B$.
\[bertini\] Let $Y$ be a simply connected (this assumes connected) object of ${\mathbf{RPLDS}}(R)$, $Z$ be a connected, paracompact locally definable manifold over $R$ of dimension at least 2, and $\pi :Y\times Z\to Y$ the canonical projection.
Assume that $V\subset Y\times Z$ is a closed subspace such that the restriction $\pi_V :V\to Y$ is a branched covering of finite degree and an exceptional set $\Delta$ of this branched covering nowhere disconnects $Y$. Put $X=(Y\times Z)\setminus V$, and $L=\{ p\} \times Z$, for some $p\in Y\setminus \Delta$.
If there is a morphism of locally definable spaces $h:Y\to Z$ over $R$ with the graph contained in $X$, then the inclusion $i:L\setminus V \to X$ induces an epimorphism in the fundamental groups $i_{*}: \pi_1 (L\setminus V)\to \pi_1 (X)$.
\[straight\] Every paracompact locally definable manifold $M$ over $R$ has the following straightening property:
For each set $J \subset [0,1]\times M$ such that the natural projection $\beta :[0,1]\times M\rightarrow [0,1]$ restricted to $J$ is a covering mapping of finite degree, there exists an isomorphism, called the **straightening isomorphism**, $\tau :[0,1]\times M\rightarrow [0,1]\times M$ which satisfies the following three conditions:
2.1) $\beta \circ \tau = \beta,$
2.2) $\tau \mid \{ 0\} \times M = id,$
2.3) $\tau (J) = [0,1]\times (\alpha (J\cap (\{ 0\} \times M))),$ where $\alpha :[0,1]\times M\rightarrow M$ is the natural projection.
*Special case.* Assume $M$ is a unit open ball in $R^m$. The set $J$ is a finite union of graphs of definable continuous mappings $\gamma_i :[0,1]_R\to M$ $(i=1,...,n)$. We apply induction on the number $n$ of these graphs.
If $n=1$ then obviously the straightening exists (compare Lemma 2 in [@ap]), and the isomorphism may be chosen to extend continuously to the identity on the unit sphere.
If $n>1$ and the lemma is true for $n-1$, then we can assume that the first $n-1$ graphs (of the functions $\gamma_1,...,\gamma_{n-1}$) are already straightened and that the distances between images of the corresponding mappings (points $p_1,...,p_{n-1}$) are not infinitesimal. Moreover, since the distance from the value $\gamma_n(t)$ of the last function $\gamma_n$ to any of the distinguished points has a positive lower bound, we can assume $\gamma_n(t)$ is always outside some closed balls centered at the $p_i$’s with radius larger than some rational number. Now, we can cover the rest of the unit ball by finitely many regions that are each isomorphic to the open unit ball. Since the last function is definable, there are only finitely many transitions from one region to another when $t\in [0,1]_R$. We have the straightening inside each of the regions. By gluing such straightenings as in the proof of Lemma 3 of [@ap], we get the straightening of the whole $n$-th mapping. Again the straightening extends continuously to the identity on the unit sphere.
*General case.* Again $J$ is a finite union of graphs of definable functions on $[0,1]_R$ (by arguments similar to those of the usual topological context). Since $J$ is definable, it is contained in a finite union of open sets, each isomorphic to the open unit ball in $R^m$. The conclusion of the lemma extends by arguments similar to those of the special case.
Clearly, $X$ is a connected and locally simply connected space. Let $ j:L\setminus V\hookrightarrow X\setminus
(\Delta \times Z)$ and $k:X\setminus (\Delta \times Z)\hookrightarrow X$ be the inclusions. Then the proof falls naturally into two parts.
*Step 1.* [*The induced mapping $j_{*} :\pi_{1} (L\setminus V)\rightarrow \pi_{1}
(X\setminus (\Delta \times Z))$ is an epimorphism.*]{} This step is analogous to Part 1 of the proof of Theorem 1 in [@ap]. Here Lemma \[straight\] is used.
*Step 2.* [*The mapping $k_{*} :\pi_{1} (X\setminus (\Delta \times Z))\rightarrow
\pi_{1} (X)$ induced by $k$ is an epimorphism.*]{}
Notice that $(\Delta \times Z)\cap X$ nowhere disconnects $X$. Let $u=(f,g)$ be a loop in $X$ at $(p,h(p))$. The set $\operatorname{im}(u)$ has an affine open neighborhood $W$.
We use a (locally finite) triangulation “over ${\mathbb}{Q}$” of $Y\times Z$ (that is, an isomorphism $\phi : K\to Y\times Z$ for some strictly locally finite, not necessarily closed, simplicial complex $(K,\Sigma(K))$, following the notation of [@LSS]), compatible with $\operatorname{im}(u),\Delta\times Z,V,L,h,W$.
There is $\varepsilon \in {\mathbb}{Q}$ such that the “distance” from $\phi^{-1}(\operatorname{im}(u))$ to $\phi^{-1}(V\cap W)$ in some ambient affine space is at least $\varepsilon$. Moreover, the “velocity” of $\phi^{-1}\circ u$ (existing almost everywhere) is bounded from above by some rational number. Since now all the considered sets and functions (appearing in the context of $K$) are piecewise linear over ${\mathbb}{Q}$, the Lebesgue number argument is available. By the use of the “distance” function in the ambient affine space and the barycentric coordinates for the chosen triangulation, we find a loop $\tilde{u}=(\tilde{f},\tilde{g})$ homotopic to $u$ rel $\{ 0,1\}$ with image in $X\setminus (\Delta \times Z)$.
The following facts and theorems, whose proofs use the machinery of *good triangulations*, are straightforward generalizations of the corresponding semialgebraic versions from [@LSS]:
Let $M$ be an object of ${\mathbf{RPLDS}}(R)$ and $A$ a closed subspace. There is an open neighborhood $U$ (in particular a subspace) of $A$ and a strong deformation retraction $$H:\overline{U} \times [0,1]\rightarrow \overline{U}$$ from $\overline{U}$ to $A$ such that the restriction $H|U\times [0,1]$ is a strong deformation retraction from $U$ to $A$.
Let $M$ be an object of ${\mathbf{RPLDS}}(R)$, $A$ a closed subspace, and $U$ a neighborhood of $A$ from the previous theorem. Any morphism $f:A\rightarrow Z$ into a regular paracompact locally definable space extends to a morphism $\tilde{f} :\overline {U} \rightarrow Z$. Moreover, if $\tilde{f}_1$, $\tilde{f}_2$ are extensions of $f$ to $\overline{U}$, then they are homotopic in $\overline{U}$ relative to $A$.
Let $M$ be an object of ${\mathbf{RPLDS}}(R)$. If $A$ is a closed subspace of $M$, then $(A\times [0,1])\cup (M\times \{ 0\}
)$ is a strong deformation retract of $M\times [0,1]$. In particular, the pair $(M,A)$ has the following Homotopy Extension Property:
for each morphism $g:M\rightarrow Z$ into a regular paracompact locally definable space $Z$ and a homotopy $F: A\times [0,1] \rightarrow Z$ with $F_0 =g|A$ there exists a homotopy $G:M\times [0,1] \rightarrow Z$ with $G_0 =g$ and $G|A\times [0,1] =F$.
Since our spaces may be triangulated, the method of *simplicial approximations* (III.2.5 of [@LSS]) works well. In particular, the method of *well cored systems* and *canonical retractions* from III.2 in [@LSS] gives the following
\[cores\] Each object of ${\mathbf{RPLDS}}(R)$ is homotopy equivalent to a partially complete one. A system $(M,A_1,...,A_k)$ of a regular paracompact locally definable space with closed subspaces is homotopy equivalent to an analogous system of partially complete spaces.
The following two main theorems from [@LSS] generalize; however, the *mapping spaces* from III.3, which depend on the degrees of polynomials, should be replaced with similar mapping spaces depending on concrete formulas $\Psi (\overline{x}, \overline{y}, \overline{z})$, with parameters $\overline{z}$, of the language of the structure $R$ (one “mapping space” per formula $\Psi$).
Let $(M,A_1 ,...,A_k )$ and $(N,B_1 ,...,B_k )$ be systems of regular paracompact locally definable spaces over $R$, where each $A_i$ $(i=1,...,k)$ is closed in $M$. Let $h: C\rightarrow N $ be a given morphism from a closed subspace $C$ of $M$ such that $h(C\cap A_i )\subseteq B_i$ for each $i=1,...,k$. Then we have
Let $R\prec S$ be an elementary extension. Then the “base field extension” functor from $R$ to $S$ induces a bijection between the homotopy sets: $$\kappa :
[(M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h} \rightarrow
[( M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h} (S).$$
Let $R$ be an o-minimal expansion of $\mathbb{R}$. Then the “forgetful” functor ${\mathbf{RPLDS}}(R)\rightarrow \mathbf{Top}$ to the topological category induces a bijection between the homotopy sets $$\lambda : [(M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h} \rightarrow
[(M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h}_{top}.$$
Moreover, a version of the proof of the first Comparison Theorem gives
\[3compld\] If $R'$ is an o-minimal expansion of $R$, then the “expansion” functor induces a bijection between the homotopy sets $$\mu :
[(M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h}_{R} \rightarrow
[(M,A_1 ,...,A_k )_{R'}, (N,B_1 ,...,B_k )_{R'} ]^{h}_{R'}.$$
E. Baro and M. Otero [@BaOt] have written a detailed proof of this theorem in the case of systems of definable sets. They use a natural tool of “normal triangulations” from [@Ba] to get an applicable version of II.4.3 from [@LSS]. The theorem extends to the general case as in [@LSS].
Because of the locally finite character of the regular paracompact locally definable spaces, by inspection of the proof of the triangulation theorem (II.4.4 in [@LSS]), each such space has an isomorphic copy that is built from sets definable without parameters glued together along sets that are definable without parameters. It is possible to triangulate even “over the field of real algebraic numbers $\overline{{\mathbb}{Q}}$” or “over the field of rational numbers ${\mathbb}{Q}$”. Moreover, if two 0-definable subsets of $R^n$ are isomorphic as definable spaces (i. e. definably homeomorphic), then there is a 0-definable isomorphism between them (we may change arbitrary parameters into 0-definable parameters in the defining formula of an isomorphism).
By the *(noncompact) o-minimal version of Hauptvermutung* for the structure $R$, we understand the following statement, which is a version of Question 1.3 in [@BO]:
*Given two semialgebraic (definable in the field structure of $R$) sets in some $R^n$, if they are definably homeomorphic, then they are semialgebraically homeomorphic.*
In other words: *if two affine semialgebraic spaces are isomorphic as definable spaces, then they are isomorphic as semialgebraic spaces.*
It follows from Theorem 2.5 in [@Shiota] that this statement is true for every $R$. Thus the category of regular paracompact locally semialgebraic spaces ${\mathbf{RPLSS}}(R)$ over (the underlying field of) $R$ may be viewed as a subcategory of ${\mathbf{RPLDS}}(R)$, but not as a full subcategory. Moreover, by triangulation with vertices having coordinates in the field of real algebraic numbers $\overline{\mathbb{Q}}$, we have the following fact:
*Each regular paracompact locally definable space over $R$ is *isomorphic* to a regular paracompact locally semialgebraic space over (the underlying field of) $R$.*
Thus, by the third Comparison Theorem, the homotopy categories $H{\mathbf{RPLSS}}(R)$ and $H{\mathbf{RPLDS}}(R)$ are equivalent. Analogously, we get
\[concld\] The homotopy categories of: systems $(M,A_1,...,A_k)$ of regular paracompact locally definable spaces with finitely many closed subspaces and systems $(M,A_1,...,A_k)$ of regular paracompact locally semialgebraic spaces with finitely many closed subspaces (over the “same” $R$) are equivalent.
By the triangulation theorem \[triang\], every object of the former category is isomorphic to an object of the latter category. Thus the “expansion” functor is essentially surjective. By the Comparison Theorem \[3compld\], it is also full and faithful. This implies that this functor is an equivalence of categories.
It follows that the homotopy theory for regular paracompact locally definable spaces can to a large extent be transferred from the semialgebraic homotopy theory and, eventually, from the topological homotopy theory, as in [@LSS].
Other important facts about regular paracompact locally definable spaces will be developed in a more general setting of definable CW-complexes and weakly definable spaces.
Weakly definable spaces
=======================
In homotopy theory one needs to use quotient spaces (e. g. mapping cylinders, mapping cones, cofibers, smash products, reduced suspensions, CW-complexes), and this operation is not always executable in the category of locally definable spaces (as in the semialgebraic case). That is why weakly definable spaces, which are analogues of arbitrary Hausdorff topological spaces, need to be introduced. We start here to re-develop the theory of M. Knebusch from [@WSS].
Let $(M, O_M )$ be a space over $R$, and let $K$ be a small subset of $M$. We can induce a space on $K$ in the following way:
i\) *open* sets in $K$ are the intersections of open sets on $M$ with $K$,
ii\) *admissible* coverings in $K$ are such open coverings that some finite subcovering already covers the union,
iii\) a function $h:V\rightarrow R$ is a *section* of $O_K (V)$ if it is a finite open union of restrictions to $K$ of sections of the sheaf $O_M$.\
We call $(K, O_K )$ a **small subspace** of $(M, O_M )$.
A subset $K$ of $M$ is called **closed definable** in $M$ if $K$ is closed, small, and the space $(K, O_K)$ is a definable space. The collection of closed definable subsets of $M$ is denoted by $\overline{\gamma}(M)$. The set $K$ is called a **polytope** if it is a closed definable complete space. We denote the collection of polytopes of $M$ by $\gamma_c (M)$.
A **weakly definable space** (over $R$) is a space $M$ (over $R$) having a family, indexed by a partially ordered set $A$, of regular closed definable subsets $(M_{\alpha} )_{\alpha \in A}$ such that the following conditions hold:
WD1) $M$ is the union of all $M_{\alpha}$,
WD2) if $\alpha \le \beta$ then $M_{\alpha}$ is a (closed) subspace of $M_{\beta}$,
WD3) for each $\alpha$ there are only finitely many $\beta$ such that $\beta \le \alpha$,
WD4) the family $(M_{\alpha} )$ is strongly inverse directed, i. e. for each $\alpha, \beta$ there is some $\gamma$ such that $\gamma \le \alpha$, $\gamma \le \beta$ and $M_{\gamma}
= M_{\alpha} \cap M_{\beta} $,
WD5) the set of indices is directed: for each $\alpha, \beta$ there is $\gamma$ with $\gamma \ge \alpha$, $\gamma \ge \beta$,
WD6) the space $M$ is the *inductive limit* of the spaces $(M_{\alpha} )$, which means the following:
a\) a subset $U$ of $M$ is *open* iff each $U\cap M_{\alpha}$ is open in $M_{\alpha}$,
b\) an open family $(U_{\lambda} )$ is *admissible* iff for each $\alpha$ the restricted family $(M_{\alpha} \cap U_{\lambda} )$ is admissible in $M_{\alpha}$,
c\) a function $h:U\rightarrow R$ on some open $U$ is a *section* of $O_M$ iff all the restrictions $h|U\cap M_{\alpha}$ are sections of respective sheaves $O_{M_{\alpha}}$.
Such a family $(M_{\alpha} )$ is called an **exhaustion** of $M$.
A space $M$ is called a **weak polytope** if $M$ has an exhaustion composed of polytopes. **Morphisms** and **isomorphisms** of weakly definable spaces are their morphisms and isomorphisms as spaces (we get the full subcategory ${\mathbf{WDS}}(R)$ of ${\mathbf{Space}}(R)$).
A **weakly definable subset** is a subset $X\subseteq M$ that has definable intersections with all members of some exhaustion $(M_{\alpha} )$; considered with the exhaustion $(X\cap M_{\alpha} )$, it may be regarded as a **subspace** of $M$ (cf. IV.3 in [@WSS]).
A subset $X$ of $M$ is **definable** if it is weakly definable and the space $(X,O_X)$ is definable. A subset $X$ of $M$ is definable iff it is weakly definable and is contained in a member of an exhaustion $M_{\alpha}$ (cf. IV.3.4 of [@WSS]).
The **strong topology** on $M$ is the topology that makes the topological space $M$ the respective inductive limit of the topological spaces $M_{\alpha}$. An unpleasant fact about weakly definable spaces (as compared with locally definable spaces) is that points may not have small neighborhoods (see Example \[26\]). Moreover, the open sets from the generalized topology may not form a basis of the strong topology (cf. Appendix C in [@WSS]).
The closure of a definable subset of $M$ is always definable (cf. IV.3.6 of [@WSS]), so the topological closure operator restricted to the class $\gamma(M)$ of definable subsets of $M$ may be treated as the closure operator of the generalized topology. The weakly definable subsets are “piecewise constructible” from the generalized topology (compare [@ap2]).
All weakly definable spaces are Hausdorff, actually even “normal”, see IV.3.12 in [@WSS]. We can consider “expansions” and “base field extensions” of weakly definable spaces (compare considerations in IV.2) or morphisms (in the case of a base field extension) similar to the operations defined for locally definable spaces. They do not depend on the chosen exhaustion and preserve connectedness (cf. IV.2 and IV.3 of [@WSS]).
Assume that a weakly definable space $M$ is also locally definable. Then $\overline{\gamma}(M)$ is the family of all closed small subsets (as in [@LSS], p. 57), since closed small subsets are definable as subspaces. We can speak about complete subspaces of $M$. It is easy to see that complete subspaces are always closed. Thus the family $\gamma_c (M)$ contains exactly the definable complete subspaces (as in [@LSS], p. 81).
Fiber products exist in ${\mathbf{WDS}}(R)$ (cf. IV.3.20 of [@WSS]). So we (analogously to the case of locally definable spaces) define **proper** and **partially proper** mappings between weakly definable spaces as well as **complete** and **partially complete** spaces. It appears that the complete spaces are the polytopes, and the partially complete spaces are the weak polytopes (cf. IV.5 in [@WSS]).
The following examples from [@WSS] remain relevant in the case of an o-minimal expansion of a real closed field.
The category ${\mathbf{RPLDS}}(R)$ is a full subcategory of ${\mathbf{WDS}}(R)$. An exhaustion of an object $M$ of ${\mathbf{RPLDS}}(R)$ is given by all finite subcomplexes $Y$ in $X$ that are closed in $X$ for some triangulation $\phi :X\to M$.
\[26\] An infinite wedge of circles is a weak polytope but not a locally definable space. A “countable comb” or “uncountable comb” is a weak polytope which is not a locally definable space.
Consider the “countable comb” from IV.4.7 in [@WSS]. This example shows that the topological closure of a weakly definable subset may not be weakly definable. Moreover, the naive “Arc Selecting Lemma for weakly definable spaces” does not hold.
On the other hand, the following examples did not appear explicitly in [@WSS].
Consider an uncountable proper subfield $F$ of ${\mathbb}{R}$. Let $X$ be a subset of the unit square $[0,1]^2$ consisting of points that have at least one coordinate in $F$. This set has a natural exhaustion making $X$ into a weak polytope over ${\mathbb}{R}$. This weak polytope is not locally simply connected.
An open interval of $R$ is a definable space but not a weak polytope, an infinite comb with such a “hand” is a weakly definable space but not a weak polytope.
Glueing weakly definable spaces is possible: for a closed pair $(M,A)$ and a partially proper morphism $f:A\rightarrow N$ the quotient space of $M \sqcup N$ by an equivalence relation identifying each $a\in A$ with $f(a)$ is a weakly definable space $M\cup_{f} N$ called the **space obtained by glueing $M$ to $N$ along $A$ by $f$**. Then the projection $\pi :M\sqcup N \to M\cup_{f} N$ is partially proper and strongly surjective, cf. IV.8.6 in [@WSS]. (A morphism $f:M\rightarrow N$ is **strongly surjective** if each definable subset of $N$ is covered by the image of a definable subset of $M$.)
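A standard special case, included here only as an illustration of this glueing operation (it is the construction behind the mapping cylinders mentioned at the beginning of this section): for a partially proper morphism $f:M\rightarrow N$ the pair $(M\times [0,1], M\times \{ 1\} )$ is a closed pair, the map $M\times \{ 1\} \ni (m,1)\mapsto f(m)\in N$ is again partially proper, and glueing along it yields the weakly definable mapping cylinder $$Z_f =(M\times [0,1])\cup_{f} N .$$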
A family ${\mathcal}{A}$ of subsets of a weakly definable space $M$ will be called **piecewise finite** if for each $D\in \gamma(M)$, the set $D$ meets only finitely many members of ${\mathcal}{A}$. (Such families are called “partially finite” in [@WSS].)
A **definable partition** of a weakly definable space $M$ is a piecewise finite partition $\Sigma$ of $M$ into definable subsets of $M$ (so $\Sigma$ is a subset of the family $\gamma (M)$). An element $\tau$ of $\Sigma$ is an **immediate face** of $\sigma$ if $\tau \cap (\overline{\sigma} \setminus \sigma )\neq\emptyset$. Then we write $\tau \prec \sigma $. A **face** of $\sigma$ is an element of some finite chain of immediate faces finishing with $\sigma$. (Each $\sigma$ has only finitely many immediate faces, even a finite number of faces, cf. V.1.7 in [@WSS]).
A **patch decomposition** of $M$ is a definable partition $\Sigma$ of $M$ such that: for each $\sigma \in \Sigma$ there is a number $n \in {\ensuremath{\mathbb{N}}}$ such that any chain $\tau_r \prec \tau_{r-1} \prec ... \prec \tau_0 =\sigma$ in $\Sigma$ has length $r\le n$. The smallest such $n$ is called the **height** of $\sigma$ and denoted by $h(\sigma )$. A **patch complex** is a pair $(M, \Sigma (M))$ consisting of a space $M$ and a patch decomposition $\Sigma (M)$ of $M$. Elements of the patch decomposition are called **patches**.
Each exhaustion gives a patch decomposition of $M$.
Instead of triangulations for ${\mathbf{RPLDS}}(R)$, we have available for ${\mathbf{WDS}}(R)$ so called special patch decompositions. A **special patch decomposition** is a patch decomposition such that, for each $\sigma \in \Sigma$, the pair $(\overline{\sigma},\sigma)$ is isomorphic to a pair whose second element is a standard open simplex and whose first element is that open simplex together with some of its open proper faces.
Let $M$ be an object of ${\mathbf{WDS}}(R)$ and let ${\mathcal}{A}$ be a piecewise finite family of subspaces. Then there is a simultaneous special patch decomposition of $M$ and the family ${\mathcal}{A}$.
A **relative patch decomposition** of a closed pair $(M,A)$ is a patch decomposition $\Sigma$ of the space $M\setminus A$. Then we denote by $\Sigma (n)$ the union of all patches of height $n$, by $M_n$ the union of $A$ and all $\Sigma (m)$ with $m\le n$, $M(n)$ the “direct (generalized) topological sum” of all closures $\overline{\sigma}$ where $\sigma \in \Sigma(n)$, and $\partial M(n)$ the direct sum of all frontiers $\partial \sigma
=\overline{\sigma} \setminus \sigma $ of $\sigma \in \Sigma (n)$.
By $\psi_n :M(n) \rightarrow M_n$ we denote the union of all inclusions $\overline{\sigma} \rightarrow M_n$ with $\sigma \in \Sigma (n)$, and by $\phi_n : \partial M(n) \rightarrow M_{n-1}$ the restriction of $\psi_n$, which is called the **attaching map**. Then, since $\phi_n $ is partially proper (cf. VI.2 in [@WSS]), we can express $M_n$ as $M(n) \cup_{\phi_n} M_{n-1}$. The space $M_n$ is called **n-chunk** and $M(n)$ is called **n-belt**. So each weakly definable space is built up by glueing direct (generalized) topological sums of definable spaces to the earlier constructed spaces in countably many steps. In particular, definable versions of CW-complexes are among weakly definable spaces (see below).
A family $(X_{\lambda} )_{\lambda \in \Lambda}$ from $\mathcal{T} (M)$, the class of weakly definable subsets of $M$, is called **admissible** if each definable subspace $B$ of $M$ is contained in the union of finitely many elements of the family. (One could call such families “piecewise essentially finite” or “partially essentially finite”.) Thus definable partitions are exactly the admissible partitions into definable subsets.
An **admissible filtration** of a space $X$ is an admissible increasing sequence of closed subspaces $(X_n )_{n\in {\ensuremath{\mathbb{N}}}}$ covering $X$. For example: the sequence $(M_n )_{ n\in {\ensuremath{\mathbb{N}}}}$ of chunks of $M$ (for a given patch decomposition) is an admissible filtration of $M$ (cf. VI.2 in [@WSS]).
The next fact is very important in homotopy-theoretic considerations.
\[comphom\] Let $(C_n)_{n\in {\ensuremath{\mathbb{N}}}}$ be an admissible filtration of a space $M$. Assume $(G_n :M\times [0,1]\rightarrow N)_{n\in {\ensuremath{\mathbb{N}}}}$ is a family of homotopies such that $G_{n+1} (\cdot ,0)=G_{n} (\cdot ,1)$ and $G_{n} $ is constant on $C_n$ . For any given strictly increasing sequence $0=s_0 <s_1 <s_2 <...$ with all $s_m$ less than 1 there is a homotopy $F:M\times [0,1]\rightarrow N$ such that $$F(x,t) = G_{k+1} (x,\frac{t - s_k}{s_{k+1} - s_k}), \mbox{ for }(x,t)\in C_n\times [s_k ,s_{k+1} ],
0\le k\le n-2,$$ and $F(x,t)=G_n (x,0)$ for $(x,t)\in C_n \times [s_{n-1},1] $.
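To see that this prescription is self-consistent (a routine check, recorded here for completeness): for $x\in C_n$ the consecutive pieces match at the junction times, since $$G_{k+1} (x,1)=G_{k+2} (x,0) \qquad (0\le k\le n-3),$$ and the last reparametrized piece ends at $t=s_{n-1}$ with the value $G_{n-1} (x,1)=G_n (x,0)$, which is exactly the constant value prescribed on $C_n \times [s_{n-1} ,1]$. Since $G_n$ is constant on $C_n$, the prescriptions for $C_n$ and $C_{n+1}$ agree on $C_n \subseteq C_{n+1}$, and admissibility of the filtration ensures that over each definable subset only finitely many of the $G_n$ are involved.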
Comparison Theorems for weakly definable spaces
===============================================
Now, with [patch decompositions]{} playing the role of triangulations we get the Comparison Theorems for weakly definable spaces as in [@WSS].
\[hep\] Let $(M,A)$ be a closed pair of weakly definable spaces over $R$. Then $(A\times [0,1])\cup (M\times \{ 0\} )$ is a strong deformation retract of $M\times [0,1]$. In particular, the pair $(M,A)$ has the following Homotopy Extension Property:
for each morphism $g:M\rightarrow Z$ into a weakly definable space $Z$ and a homotopy $F: A\times [0,1] \rightarrow Z$ with $F_0 =g|A$ there exists a homotopy $G:M\times [0,1] \rightarrow Z$ with $G_0 =g$ and $G|A\times [0,1] =F$.
Let $(M,A_1 ,...,A_r )$ and $(N,B_1 ,...,B_r )$ be systems of weakly definable spaces over $R$ where each $A_i$ is closed in $M$. Let $h: C\rightarrow N $ be a given morphism from a closed subspace $C$ of $M$ such that $h(C\cap A_i )\subseteq B_i$ for each $i=1,...,r$. Then we have
For an elementary extension $R\prec S$ the following map, induced by the “base field extension” functor, is a bijection $$\kappa : [(M,A_1 ,...,A_r ),(N,B_1 ,...,B_r )]^{h} \rightarrow
[(M,A_1 ,...,A_r ),(N,B_1 ,...,B_r )]^{h} (S) .$$
If $R={\ensuremath{\mathbb{R}}}$ as fields, then the following map to the topological homotopy sets, induced by the “forgetful” functor, is a bijection $$\lambda :[(M,A_1 ,...,A_r ),(N,B_1 ,...,B_r )]^{h} \rightarrow
[(M,A_1 ,...,A_r ),(N,B_1 ,...,B_r )]^{h}_{top} .$$
Again, a version of the proof of the first Comparison Theorem (thus a version of the proof of V.5.2 i); we present the proof for the convenience of the reader) gives:
\[3compwd\] If $R'$ is an o-minimal expansion of $R$, then the following map, induced by the “expansion” functor, is a bijection $$\mu : [(M,A_1 ,...,A_k ), (N,B_1 ,...,B_k ) ]^{h}_{R} \rightarrow
[(M,A_1 ,...,A_k )_{R'}, (N,B_1 ,...,B_k )_{R'} ]^{h}_{R'}.$$
It suffices to prove the surjectivity, and only the case $k=0$. We have a map $f:M\to N$ (over $R'$) extending $h:C\to N$ (over $R$), and we seek a mapping $g:M\to N$ (over $R$) such that $g$ is homotopic to $f$ relative to $C$ (the homotopies appearing in this proof are allowed to be over $R'$).
We choose a relative patch decomposition (over $R$) of $(M,C)$, and will construct maps $h_n:M_n\to N$ (over $R$), $f_n:M\to N$ (over $R'$) for $n\geq -1$, and a homotopy $H_n:M\times [0,1]\to N$ relative $M_{n-1}$ such that: $h_{-1}=h$, $h_n|_{M_{n-1}}=h_{n-1}$, $f_{-1}=f$, $f_n|_{M_n}=h_n$, $H_n(\cdot,0)=f_{n-1}$, $H_n(\cdot,1)=f_n$. If we do this, we are done: we have a map $g:M\to N$ with $g|_{M_n}=h_n$ for each $n$. Composing, by Fact \[comphom\], the homotopies $(H_n)_{n\geq 0}$ along a sequence $s_n\in [0,1)$ with $s_{-1}=0$, we obtain a homotopy $G:M\times [0,1]\to N$ relative $C$ from $f$ to $g$ as desired.
We start with $h_{-1}=h$, $f_{-1}=f$. Assume that $h_i,f_i,H_i$ are given for $i<n$. Then we get a pushout diagram over $R$ (see page 149 of [@WSS]) and we define: $$k_n =h_{n-1}\circ \phi_n :\partial M(n)\to N \mbox{ (over $R$)},$$ $$u_n= (f_{n-1}|_{M_n})\circ \psi_n :M(n)\to N \mbox{ (over $R'$)}.$$ Notice that $u_n$ extends $k_n$. By the Comparison Theorem for locally definable spaces (Theorem \[3compld\]) there is a map $v_n:M(n)\to N$ over $R$ extending $k_n$ and a homotopy $F_n:M(n)\times [0,1]\to N$ relative $\partial M(n)$ from $u_n$ to $v_n$. The maps $v_n$ and $h_{n-1}$ combine to a map $h_n:M_n\to N$, with $h_n\circ\psi_n=v_n$ and $h_n|_{M_{n-1}}=h_{n-1}$. The map $F_n$ and $M_{n-1}\times [0,1]\ni (x,t)\mapsto h_{n-1}(x)\in N$ combine (cf. IV.8.7.ii in [@WSS]) to the homotopy $\tilde{H}_n:M_n\times [0,1]\to N$ relative $M_{n-1}$ from $f_{n-1}|_{M_n}$ to $h_n$. It can be extended (by Fact \[hep\]) to the homotopy $H_n:M\times [0,1]\to N$ with $H_n(\cdot,0)=f_{n-1}$. Put $f_n=H_n(\cdot,1)$. This finishes the induction step and the proof of the theorem.
Again, the category of weakly semialgebraic spaces over (the underlying field of) $R$ may be considered a (not full in general) subcategory of ${\mathbf{WDS}}(R)$. But see the following important new example:
Let $Q$ be the square $[0,1]^2_{R}$. Now form $\widetilde{Q}$ in the following way: for each definable subset $A$ of $Q$ glue $A\times S^1$ to $Q$ by identifying $A\times \{ 1\} $ with $A$. If there are definable non-semialgebraic sets in $R^2$, then $\widetilde{Q}$ as a weakly definable space is not isomorphic to (an expansion of) a weakly semialgebraic space over $R$.
Definable CW-complexes
======================
A **relative definable CW-complex** $(M,A)$ over $R$ is a relative patch complex $(M,A)$ satisfying the conditions:
(CW1) immediate faces of patches have smaller dimensions than the original patches in the patch decomposition of $M\setminus A$,
(CW2) for each patch $\sigma \in \Sigma (M,A)$ there is a morphism $\chi_{\sigma} : E_n \rightarrow \overline{\sigma}$ ($E_n$ denotes the unit closed ball of dimension $n$) that maps the open ball isomorphically onto $\sigma$ and the sphere onto $\partial \sigma$.
For $A=\emptyset$, we have an **absolute definable CW-complex** over $R$. All definable CW-complexes are weak polytopes (absolute or relative, see V.7, p. 165, in [@WSS]).
A **system of definable CW-complexes** is a system of spaces $(M,A_1,...,A_k)$ such that each $A_i$ is a closed subcomplex of the definable CW-complex $M$ (cf. V.7, p. 178, of [@WSS]). Such a system is **decreasing** if $A_i$ is a (closed) subcomplex of $A_{i-1}$ for $i=1,...,k$, where $A_0=M$. As in the semialgebraic case, we have the following.
\[fact\] Each partially complete object of ${\mathbf{RPLDS}}(R)$ admits a definable CW-complex structure over $R$, since it is isomorphic to a closed (geometric) locally finite simplicial complex. (Compare considerations of II.4 and ii) in Examples V.7.1.)
Fact \[cores\] and Example \[fact\] give
\[locdefred\] Each object of ${\mathbf{RPLDS}}(R)$ is homotopy equivalent to a definable CW-complex over $R$. Each system $(M,A_1,...,A_k)$ of a regular paracompact locally definable space with closed subspaces is homotopy equivalent to a system of definable CW-complexes.
The following version of the Whitehead theorem for definable CW-complexes may be proved like its topological analogue (see Theorem 7.5.4 in [@maunder]).
\[cwwhitehead\] Each weak homotopy equivalence between definable CW-complexes is a homotopy equivalence. Similar facts hold for any decreasing systems of definable CW-complexes.
The proof is analogous to the proofs of 7.5.2, 7.5.3 and 7.5.4 in [@maunder]. The argument from the long exact homotopy sequence may be proved like in [@Hu] (compare III.6.1 in [@LSS] and V.6.6 in [@WSS]). The second part of the thesis follows from the definable analogue of V.2.13 in [@WSS].
Using the above instead of Theorem V.6.10 of [@WSS], we can both pass to a reduct and eliminate parameters.
\[elimination\] Each definable CW-complex is homotopy equivalent to an expansion of a base field extension of a semialgebraic CW-complex over $\overline{{\mathbb}{Q}}$. Analogous facts hold for decreasing systems of definable CW-complexes.
This follows from the reasoning with relative CW-complexes analogous to the proof of V.7.10 in [@WSS] (instead of the case of an elementary extension of real closed fields, we have the case of an o-minimal expansion of a real closed field). The construction of the desired relative CW-complex “skeleton by skeleton” is similar. Since we are dealing only with decreasing systems of definable CW-complexes, the use of V.6.10 of [@WSS] (whose role is the transition from finite unions to any unions) may be replaced with the use of Theorem \[cwwhitehead\].
Moreover, combining the above with the Comparison Theorems gives an extension of Remarks VI.1.3 of [@WSS].
\[equi\] The homotopy categories of: topological CW-complexes, semialgebraic CW-complexes over (the underlying field of) $R$, and definable CW-complexes over $R$ are equivalent. Similar facts hold for decreasing systems of CW-complexes.
The case of bounded o-minimal theories
======================================
Let $T$ be an o-minimal complete theory extending RCF. We may assume that the theory is already Skolemized, so every 0-definable function is in the language and $T$ has quantifier elimination. We can build models of $T$ using the *definable closure* operation in some huge model (or, equivalently, using the notion of a *generated substructure* of a huge model for the chosen rich language). Taking a “primitive extension” generated by a single element $t$ over a model $R$ gives a new model $R\langle t\rangle$ of $T$ determined up to isomorphism by the type this single element realizes over the former model $R$.
Such a $T$ will be called **bounded** if the model $P\langle t\rangle$ has countable cofinality, where $P$ is the prime model of $T$ and $t$ realizes $+\infty$ over $P$. This condition can be expressed in the following words: there is a (countable) sequence of 0-definable unary functions that is cofinal in the set of all 0-definable unary functions at $+\infty$ (this property does not depend on a model of $T$). In particular, polynomially bounded theories are bounded. Notice that $P\langle t\rangle$ is cofinal in $R\langle t\rangle$, for any model $R$ of $T$, if $t$ realizes $+\infty$ over $R$.
Each bounded theory $T$ has the following property: each model $R$ has an elementary extension $S$ such that both $S$ and its “primitive extension” $S\langle t\rangle$, with $t$ realizing $+\infty $ over $S$, have countable cofinality. (Take $S=R\langle t_1 \rangle$, with $t_1$ realizing $+\infty$ over $R$). This allows, by the first Comparison Theorem, to extend many facts about weakly definable spaces over “nice” models to spaces over any model of $T$.
The following example may be extracted from the proof of Theorem IV.9.2 in [@WSS]. It shows the importance of the boundedness assumption. (The role of the boundedness assumption may be also seen by considering Example IV.9.12 in [@WSS].)
Consider the closed $m$-dimensional simplex with one open proper face removed ($m\geq 2$), call this set $A$, as a definable subset of $R^{m+1}$. We want to introduce a partially complete space on the same set $A$. If $R$ and $R\langle t\rangle$ have countable cofinality, then we can find a sequence of interior points tending to the barycenter of the removed face, and we can use a “cofinal at $0_{+}$” sequence of unary functions tending (even uniformly) to the zero function to produce an increasing sequence $(P_n)_{n\in {\mathbb}{N}}$ of polytopes covering our set $A$ and such that any polytope contained in $A$ is contained in some $P_n$. Then $(P_n)_{n\in {\mathbb}{N}}$ is an exhaustion of a weak polytope with the underlying set $A$. The old space and the new space on $A$ have the same polytopes. (Compare the proof of Theorem IV.9.2.) A similar construction can be made if several open proper faces are removed.
By the reasoning similar to that of V.7.8, we get
\[CWappr\] If $T$ is bounded, then each decreasing system of weakly definable spaces $(M_0 ,...,M_r )$ over $R$ has a CW-approximation (that is a morphism $\phi : (P_0 ,...,P_r ) \rightarrow (M_0 ,...,M_r )$ from a decreasing system of definable CW-complexes over $R$ that is a homotopy equivalence of systems of spaces).
The methods to obtain this theorem include the use (as in IV.9-10 of [@WSS]) of a so called **partially complete core** $P(M)$ **of a weakly definable space** $M$, which is an analogue and generalization of the localization $M_{loc}$ for locally complete paracompact locally definable spaces $M$, and a **partially proper core** $p_{f}$ **of a morphism** $f:M\rightarrow N$ of weakly definable spaces. (Note that it is sensible to ask for a partially complete core only if $R$ has countable cofinality.) In particular, the Strong Whitehead Theorem (cf. V.6.10), proved by methods of IV.9-10 and V.4.7, V.4.13 in [@WSS], guarantees the extension of relevant results to weakly definable spaces. Thus the homotopy category of decreasing systems of weakly definable spaces over $R$ is equivalent to its full (homotopy) subcategory of decreasing systems of definable CW-complexes over $R$ (one uses an analogue of Theorem V.2.13 in [@WSS]).
The following corollary is an extension of Corollary \[equi\] in the bounded case.
\[concwd\] If $T$ is bounded, then the homotopy categories of weakly definable spaces (over any model $R$ of $T$) and of topological, semialgebraic and definable CW-complexes are all equivalent. Similarly for decreasing systems of spaces.
Still, the homotopy category of ${\mathbf{WDS}}(R)$ may possibly be richer in the non-bounded case.
Generalized homology and cohomology theories
============================================
Now we have the operation of taking the (reduced) suspension $SM=S^1 \wedge M$ on the category of pointed weak polytopes $\mathcal{P}^{*}(R)$ over $R$, and on its homotopy category $H\mathcal{P}^{*}(R)$ (cf. VI.1 in [@WSS]). This allows us to define analogues of so called *complete generalized homology and cohomology theories*, known from the usual homotopy theory, just as in VI.2 and VI.3 of [@WSS]. (Such theories do not necessarily satisfy the *dimension axiom*.) Denote the category of abelian groups by $Ab$. For a pair $(M,A)$ of pointed weak polytopes, $M/A$ will denote the quotient space of $M$ by a closed space $A$, with the distinguished point being the point obtained from $A$.
A **reduced cohomology theory** $k^{*}$ over $R$ is a sequence $(k^n)_{n\in \mathbb{Z}}$ of contravariant functors $k^n :H\mathcal{P}^{*} (R)\rightarrow Ab$ together with natural equivalences $s^n : k^{n+1}\circ S \leftrightsquigarrow k^n$ such that the following hold:
**Exactness axiom**
For each $n\in \mathbb{Z}$ and each pair of pointed weak polytopes $(M,A)$ the sequence $$k^n (M/A) \stackrel{p^{*}}{\rightarrow } k^n (M) \stackrel{i^{*}}{\rightarrow}
k^n (A)$$ is exact.
**Wedge Axiom**
For each $n\in \mathbb{Z}$ and each family $(M_{\lambda})_{\lambda \in \Lambda}$ of pointed weak polytopes the mapping $$(i_{\lambda} )^{*} : k^n (\bigvee_{\lambda} M_{\lambda}) \rightarrow
\prod_{\lambda} k^n (M_{\lambda})$$ is an isomorphism.
A **reduced homology theory** $h_*$ over $R$ is a sequence $(h_n)_{n\in \mathbb{Z}}$ of covariant functors $h_n :H\mathcal{P}^* (R)\rightarrow Ab$ together with natural equivalences $s_n : h_n \leftrightsquigarrow h_{n+1} \circ S$ such that the following hold:
**Exactness axiom**
For each $n\in \mathbb{Z}$ and each pair of pointed weak polytopes $(M,A)$ the sequence $$h_n (A) \stackrel{i_{*}}{\rightarrow } h_n (M) \stackrel{p_{*}}{\rightarrow}
h_n (M/A)$$ is exact.
**Wedge Axiom**
For each $n\in \mathbb{Z}$ and each family $(M_{\lambda})_{\lambda \in\Lambda}$ of pointed weak polytopes the mapping $$(i_{\lambda} )_{*} : \bigoplus_{\lambda} h_n (M_{\lambda} )\rightarrow h_n (\bigvee_{\lambda} M_{\lambda})$$ is an isomorphism.
If $T$ is bounded, then these theories correspond uniquely (up to an isomorphism) to topological theories (cf. VI.2.12 and VI.3 in [@WSS]). All these generalized homology and cohomology functors can be built by using spectra for homology theories, or $\Omega$-spectra for cohomology theories as in VI.8 of [@WSS].
Similarly, *unreduced* generalized *homology* and *cohomology* theories may be considered on the category $H{\mathcal}{P}(2,R)$ of pairs of weak polytopes. If $T$ is bounded, then these theories are equivalent to respective reduced theories, cf. VI.4 in [@WSS]; homology theories are extendable to $H{\mathbf{WDS}}(2,R)$, and some difficulties appear for cohomology theories, cf. VI.5-6 in [@WSS]. We get the following extension of Corollaries \[equi\] and \[concwd\].
\[conc\] If $T$ is bounded, then, by the equivalence of respective homotopy categories of topological pointed CW-complexes (with continuous mappings) and of pointed weak polytopes, we get “the same” generalized homology and cohomology theories as the classical ones, known from the usual topological homotopy theory.
Open problems
=============
The following problems are still open:
1\) Can the assumption of boundedness of $T$ in Theorem \[CWappr\] and later be omitted? Is there a way of proving the Strong Whitehead Theorem (the analogue of V.6.10) without methods of IV.9-10 of [@WSS]?
2\) Do the above considerations lead to a “(closed) model category” (see [@h], page 109, for the definition)? Such categories are desired in (abstract) homotopy theory.
**Acknowledgements.** I acknowledge support from the European Research Training Network RAAG (HPNR-CT-2001-00271) that gave me the opportunity to speak about locally and weakly definable spaces in Perugia (Italy) in 2004 and in Passau (Germany) in 2005. The organizers of the semester “Model Theory and Applications to Algebra and Analysis” held in the Isaac Newton Institute in Cambridge (UK) gave me the opportunity to present a poster during the workshop “An Introduction to Recent Applications of Model Theory”, 29 March – 8 April 2005, supported by the European Commission (MSCF-CT-2003-503674). I thank Alessandro Berarducci, Margarita Otero, and Kobi Peterzil for pointing out some mistakes in the previous version of this paper during the workshop “Around o-minimality” organized by Anand Pillay, held in Leeds, in March 2006. Some part of the work on this paper was done during my stay at the Fields Institute during the Thematic Program on o-minimal Structures and Real Analytic Geometry in 2009. I would like to thank the Fields Institute (primarily the Organizers of the semester) for warm hospitality.
[ww]{}
E. Baro, *Normal triangulations in o-minimal structures*, Journal of Symbolic Logic Volume 75, Issue 1 (2010), 275–288.
E. Baro, M. Otero, *On o-minimal homotopy groups*, Quart. J. Math. 61 no. 3 (2010), 275–289.
E. Baro, M. Otero, *Locally definable homotopy*, Annals of Pure and Applied Logic 161 (2010), 488–503.
A. Berarducci, M. Otero, *O-minimal fundamental group, homology, and manifolds*, J. London Math. Soc. 65 (2002), 257–270.
A. Berarducci, M. Otero, *Transfer methods for o-minimal topology*, Journal of Symbolic Logic 68 no. 3 (2003), 785–794.
H. Delfs, M. Knebusch, *Semialgebraic topology over a real closed field II: Basic theory of semialgebraic spaces*, Math. Z. 178 (1981), 175–213.
H. Delfs, M. Knebusch, *Separation, retractions and homotopy extension in semialgebraic spaces*, Pacific J. Math. 114 (1984), 47–71.
H. Delfs, M. Knebusch, *An introduction to locally definable spaces*, Rocky Mountain J. Math. 14 (1984), 945–963.
H. Delfs, M. Knebusch, *Locally Semialgebraic Spaces*, Lecture Notes in Mathematics 1173, Springer-Verlag 1985.
L. van den Dries, C. Miller, *Geometric categories and o-minimal structures*, Duke Mathematical Journal 84 no. 2 (1996), 497–540.
L. van den Dries, *Tame topology and o-minimal structures*, London Math. Soc. Lecture Notes Series 248, CUP 1998.
M. Edmundo, *O-minimal (co)homology and applications*, in: O-minimal Structures, Proceedings of the RAAG Summer School Lisbon 2003, Lecture Notes in Real Algebraic and Analytic Geometry (M. Edmundo, D. Richardson and A. Wilkie eds.), Cuvillier Verlag 2005.
M. Edmundo, G. Jones, N. Peatfield *Sheaf cohomology in o-minimal structures*, J. Math. Logic 6 (2) (2006), 163–179.
M. Edmundo, N. Peatfield, *O-minimal Čech cohomology*, Quart. J. Math. 59 (2) (2008), 213–220.
A. Fischer, *On smooth locally definable functions*, preprint 268 of the RAAG network (http://www.maths.manchester.ac.uk/raag/preprints/0268.pdf), 2008.
Ph. Hirschhorn, *Model Categories and Their Localizations*, Mathematical Surveys and Monographs Vol. 99, American Mathematical Society 2003.
S. T. Hu, *Homotopy theory*, Academic Press 1959.
M. Kashiwara, P. Shapira, *Ind-sheaves*, Astérisque 271, Soc. Math. France 2001.
M. Knebusch, *Weakly Semialgebraic Spaces*, Lecture Notes in Mathematics 1367, Springer-Verlag 1989.
M. Knebusch, *Semialgebraic topology in the recent ten years*, “Real Algebraic Geometry Proceedings, Rennes 1991”, M. Coste, L. Mahé, M.-F. Roy, eds., LNM 1524, Springer 1992, 1–36.
S. Mac Lane, I. Moerdijk, *Sheaves in Geometry and Logic*, Universitext, Springer-Verlag 1992.
C. R. F. Maunder, *Algebraic Topology*, Van Nostrand Reinhold Company 1970.
A. Piękosz, [*A topological version of Bertini’s theorem*]{}, Annales Polonici Mathematici [**LXI.1**]{} (1995), 89–93.
A. Piękosz, [*On generalized topological spaces*]{}, arXiv: 0904.4896 \[math.LO\].
D. Quillen, *Homotopical Algebra*, Lecture Notes in Mathematics 43, Springer-Verlag 1967.
R. Robson, *Embedding semialgebraic spaces*, Math. Z. 183 (1983), 365–370.
M. Shiota, *PL and differential topology in o-minimal structure*, arXiv: 1002.1508v3 \[math.LO\].
E. Spanier, *Algebraic topology*, McGraw-Hill Book Company 1966.
A. Woerheide, *O-minimal homology*, Ph. D. Thesis, University of Illinois at Urbana-Champaign 1996.
Politechnika Krakowska
Instytut Matematyki
Warszawska 24
PL-31-155 Kraków
Poland
E-mail: *pupiekos@cyf-kr.edu.pl*
---
abstract: 'We describe an “active” antenna system for HF/VHF (long wavelength) radio astronomy that has been successfully deployed 256-fold as the first station (LWA1) of the planned Long Wavelength Array. The antenna system, consisting of crossed dipoles, an active balun/preamp, a support structure, and a ground screen has been shown to successfully operate over at least the band from 20 MHz (15 m wavelength) to 80 MHz (3.75 m wavelength) with a noise figure that is at least 6 dB better than the Galactic background emission noise temperature over that band. Thus, the goal to design and construct a compact, inexpensive, rugged, and easily assembled antenna system that can be deployed many-fold to form numerous large individual “stations” for the purpose of building a large, long wavelength synthesis array telescope for radio astronomical and ionospheric observations was met.'
author:
- 'Brian C. Hicks'
- 'Nagini Paravastu-Dalal'
- 'Kenneth P. Stewart'
- 'William C. Erickson'
- 'Paul S. Ray'
- 'Namir E. Kassim'
- Steve Burns
- Tracy Clarke
- Henrique Schmitt
- Joe Craig
- Jake Hartman
- 'Kurt W. Weiler'
title: 'A wide-band, active antenna system for long wavelength radio astronomy'
---
Introduction
============
Background\[background\]
------------------------
Radio astronomy began in 1932 with the discovery of radio emission from the Galactic Center at the relatively long wavelength of 15 m (20 MHz) by Karl Jansky [@Jansky32; @Jansky33a; @Jansky33b; @Jansky33c]. This pioneering work was followed by the innovative research of Grote Reber at frequencies ranging from 10 – 160 MHz (30 – 2 m wavelength) in the 1940s that closely tied radio astronomy to the broader field of astronomy and astrophysics [@Reber40; @Reber44; @Reber47; @Reber49; @Reber50].
However, the requirement for impractically large single radio antennas or dishes to obtain resolution at long wavelengths (resolution $\theta \sim \lambda/\rm D$, where $\theta$ is the angular resolution in radians, $\lambda$ is the observing wavelength in meters, and D is the diameter of the observing instrument in meters) quickly pushed the new field of radio astronomy to higher frequencies (shorter wavelengths). In other words, since increasing D was severely limited by cost and mechanical considerations, the only way to achieve better resolution seemed to be to decrease $\lambda$.
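To make the scaling concrete, the following short calculation (with purely illustrative values of $\lambda$ and D, chosen only to bracket the cases discussed in this section) shows why a single filled aperture is impractical at these wavelengths while interferometric baselines of hundreds of kilometers reach arc-second resolution:

```python
import math

def resolution_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited resolution theta ~ lambda / D, in arcseconds."""
    theta_rad = wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# A single 100 m aperture observing at 15 m wavelength (20 MHz):
print(resolution_arcsec(15.0, 100.0))   # ~3.1e4 arcsec, i.e. ~8.6 degrees

# An interferometer with a ~400 km maximum baseline at 4 m wavelength (75 MHz):
print(resolution_arcsec(4.0, 400.0e3))  # ~2 arcsec
```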
As early as 1946, Ryle and Vonberg [@Ryle46; @Ryle48] and Pawsey and collaborators [@Pawsey46a; @Pawsey46b] began to use interferometric techniques consisting of large arrays of simple dipoles or widely separated individual, small dishes to increase the effective “D” without greatly increasing the cost. Even then, distortions introduced into the incoming radio signals by the Earth’s ionosphere made imaging at long wavelengths difficult and appeared to place a rather short upper size limit to “D” at frequencies $<100$ MHz (wavelengths $>3$ m) of $\sim 5$ km. Thus, the move to higher frequencies, even for interferometry, continued until, by the 1970s, relatively few long wavelength radio astronomy telescopes were still operating at frequencies $<100$ MHz. Exceptions include the Ukrainian UTR-2 [@Braude78], the 38 MHz survey [@Rees90] with the Cambridge Low-Frequency Synthesis Telescope [CLFST; see @Baldwin85; @Hales88] and the Gauribidanur Radio Observatory (GEETEE) in India [@Shankar90; @Dwarakanath95].
Another important long wavelength array in the 1970s and 1980s was the “Tee Pee Tee” (TPT) Clark Lake array built by William C. (Bill) Erickson on a dry lake in the Anza-Borrego desert east of San Diego, CA [@Erickson82]. The TPT was also limited to a maximum baseline “D” of 3 km because of concerns about ionospheric distortion. For a more complete history of the Clark Lake Radio Observatory (CLRO) TPT array, its precedents and follow-on arrays, see [@Kassim05a]. Due to lack of funding, the CLRO TPT was decommissioned in the late 1980s and dismantled in the early 1990s.
Modern long wavelength arrays\[modernarrays\]
---------------------------------------------
A significant development in radio astronomy in the early 1980s known as Self Calibration or “Self-Cal” [@Pearson84] finally showed that the “ionospheric barrier” could be overcome. Self-Cal makes use of one or more sources in the field of view to monitor instrumental, atmospheric, and ionospheric changes on short time scales so that they can be removed during the data reduction processes. This new data handling method allowed astronomers to contemplate high resolution imaging through the ionosphere at frequencies $<100$ MHz and revitalized the field of long wavelength radio astronomy.
The defining paper for the start of this “modern era” of long wavelength arrays is probably the one by [@Perley84] that made a strong scientific and technical case for a long wavelength synthesis array operating at $\sim 75$ MHz located near the Very Large Array (VLA) of the National Radio Astronomy Observatory (NRAO[^1]) and making use of the VLA’s existing infrastructure. Although this proposal was not funded or constructed, it certainly influenced subsequent plans for long wavelength synthesis arrays for radio astronomy.
In particular, Self-Cal, the CLRO TPT array, and the [@Perley84] concept led directly to the proposal and installation, with Naval Research Laboratory (NRL) and NRAO financial and technical support, of a 74 MHz (4 m) feed and receiver system on one of the VLA 25m dishes in 1991, eight VLA dishes by 1994, and all 27 VLA dishes by 1998. When fully implemented, this “4-band” system became a universally available user band [see, [[*e.g.*]{}]{}, @Kassim93; @Kassim05a; @Kassim07].
The international success of this 4-band system demonstrated both the scientific richness of the $<100$ MHz frequencies and the possibilities for using these new data reduction techniques for overcoming the previous limitations of relatively short ($<5$ km) baseline lengths to obtain arc-second resolution from ground-based, long wavelength imaging arrays.
Such resounding success also led to the concept of a dedicated low frequency ($\nu < 100$ MHz) Long Wavelength Array (LWA) in the late 1990s [@Kassim98] and, shortly thereafter, NRL, MIT, and ASTRON (Netherlands) formed the LOFAR Consortium to further develop the plans that are now present in the Dutch-led Low Frequency Array [LOFAR; @Rottgering03; @Wijnholds11], the somewhat higher frequency MIT-led Murchison Widefield Array [MWA; @Lidz08], and the University of New Mexico/NRL-led Long Wavelength Array [LWA; @Kassim98; @Kassim05a; @Kassim05b; @Ellingson09a]. For technical reasons, such long wavelength arrays need a large number ($\ge10,000$) of electromagnetic-wave receptors [see, [[*e.g.*]{}]{}, @Ellingson05; @Ellingson09a; @Ellingson11] so that the development of a cheap, easily deployable antenna system that is Galactic background noise limited[^2] (see Section \[SystemPerform\]) is vital.
Initial prototyping of such an antenna system was carried out while NRL was still part of the LOFAR Consortium and tested on the NRL LOFAR Test Array [NLTA; @Stewart04; @Stewart05]. The NLTA, an 8-element, long wavelength array employing active, droopy-dipole, fat antennas and operating between $\sim 15$ MHz ($\sim20$ m) to $\sim 115$ MHz ($\sim2.6$ m), provided valuable experience leading to next stage of prototyping with the Long Wavelength Demonstrator Array [LWDA; @Lazio10]. The LWDA was a 16-element long wavelength synthesis array operating between 60 MHz (5 m wavelength) and 80 MHz (3.75 m wavelength) constructed by the Applied Research Laboratories of the University of Texas, Austin in collaboration with, and funding from, NRL. Lessons from these two prototyping efforts led to the improved design for the active antenna system for the LWA that is the focus of this paper.
The Long Wavelength Array (LWA)\[LWA\]
--------------------------------------
The LWA is a long wavelength radio astronomy synthesis array now under construction. It is designed to enable new research in the largely unexplored frequency range of $\sim 20$ – 80 MHz (15 m – 3.75 m wavelength) with reduced sensitivity both above and below that range. When completed, the full LWA will require $>10,000$ full polarization, crossed dipole antenna elements organized into $\sim50$ “stations,” each station consisting of 256 antenna elements distributed over an ellipse $\sim100$ m E/W by $\sim110$ m N/S with a quasi-random placement. The entire $\sim50$ station synthesis array will eventually be spread over an area roughly 400 km in diameter centered in the state of New Mexico. A possible antenna concept was already given by [@Hicks02] and a compact description of a concept array can be found in [@Kassim05b]. Specifications from this latter paper are included here in Table \[tbl-LWA-spec\] for easy reference. [@Clarke09] also provides an excellent discussion of requirements and specifications. For an updated description of the actual parameters of the LWA1, see [@Ellingson12] and [@Taylor12].
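As a toy illustration of the quasi-random station layout described above (this is not the algorithm used to lay out LWA1, and the 4 m minimum spacing below is an assumed value introduced only for the example), antenna positions could be drawn at random inside the station ellipse subject to a minimum-separation constraint:

```python
import random

def toy_station_layout(n=256, a=50.0, b=55.0, min_sep=4.0, seed=1, max_tries=10**6):
    """Place n points quasi-randomly inside an ellipse with semi-axes a (E/W)
    and b (N/S) meters, rejecting candidates closer than min_sep to any
    accepted point.  min_sep is an assumed, illustrative value."""
    random.seed(seed)
    pts, tries = [], 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        x, y = random.uniform(-a, a), random.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 > 1.0:
            continue  # outside the station ellipse
        if all((x - u) ** 2 + (y - v) ** 2 >= min_sep ** 2 for u, v in pts):
            pts.append((x, y))
    return pts

print(len(toy_station_layout()), "antenna positions placed")
```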
To meet these stringent, and often conflicting requirements at reasonable cost, and drawing on our prototyping experience described above, we chose an electrically short, relatively fat, droopy-dipole design similar to that shown to be effective on our NLTA and LWDA prototypes with an amplified or “active” balun/pre-amplifier at the apex of the dipole arms. Of course, a support for these droopy dipoles that was simple and easy to install had to be designed and, to stabilize the properties of the ground under the antenna against changes in, [[*e.g.*]{}]{}, moisture content (such as rain), an inexpensive and rugged “ground screen” had to be included. All of these elements of dipole antenna (ANT), front-end electronics (FEE), support stand (STD), and ground screen (GND) work as a coupled system and had to be designed together. In this paper we describe the electrical and mechanical properties of these four components that are already designed, built, and deployed 256-fold as the first LWA1 station (designated for its stand alone use as the LWA1 Radio Observatory). The LWA1 Radio Observatory is currently performing observations resulting from its first call for proposals in addition to carrying out a continuing program of commissioning and characterization observations [see Figure \[arraypix\] and @Ellingson12; @Taylor12].
It should be noted that the ANT/FEE/STD/GND system described here has also proven to be versatile enough to draw interest for other radio astronomy applications for both national observatory \[[[*e.g.*]{}]{}, Nançay Observatory in France [@Girard11]\] and university groups \[[[*e.g.*]{}]{}, the Low Frequency All Sky Monitor, LOFASM, of the University of Texas, Brownsville [@Miller12; @Rivera12] and the Universidad Nacional Autonoma de Mexico, UNAM\].
An antenna system for HF/VHF radio astronomy\[ANT-FEE-STD-GND\]
===============================================================
For simplicity, we break the discussion of this “antenna system” into four parts: the crossed-dipole antenna (ANT), the front-end electronics (FEE), the support stand (STD), and the ground screen (GND). We will first describe each of these separately, and then report on the electrical response of the entire system.
Antenna - ANT\[ANT\]
--------------------
### Geometry\[antgeom\]
Extensive simulations, measurements, and prototyping were carried out to determine a satisfactory antenna design considering cost, weight, mechanical stability, wind resistance, ease of fabrication, and RF performance. These were briefly mentioned above and are discussed further in [@Paravastu07a; @Paravastu08a; @Paravastu08c].
From electrical considerations, it was clear that the antenna elements had to be broad in shape to improve the inherent bandwidth characteristics over those of a simple, thin-wire dipole. Furthermore, the elements had to slope downward at $45\degr$ to improve the sky coverage over that of a simple, straight dipole. Drawing on the NLTA and LWDA experience, initial tests were with broad, flat sheets of aluminum, roughly 1.75 m long $\times$ 0.42 m wide, sharply tapered to a feed point. While the performance of these “Big Blades” was encouraging [see, [[*e.g.*]{}]{}, @Erickson06; @Kerkhoff07a; @Paravastu08b], it was estimated that the cumbersome size, high wind resistance, and large metal content/cost were unsatisfactory [@Paravastu08c].
Next, a series of “frame” antennas using aluminum angle pieces, with and without vertical and horizontal cross pieces, and with and without mesh covering, was considered. Again, the electrical results were satisfactory, but the metal cost remained high. Finally, just a triangular frame of aluminum angle pieces with a single vertical bar (known as the “tied fork”) and a single horizontal crosspiece for increased stiffness, was chosen for the final design. This selection process is described in [@Paravastu07a; @Paravastu08c] and one arm of this “tied fork with crosspiece” is shown in Figure \[ANT-Figure2-2\]. The vertical height of this triangle is 1.50 m and the base of the triangle is 0.8 m. The distance between the feed points on the FEE unit (see Section \[FEE\]) is 9.0 cm and the apexes of the triangular ANT elements are separated by about 13.2 cm. Numerous simulations [@Paravastu07a using the experimental software package Numerical Electromagnetics Code NEC[^3]-4.1 provided by the Lawrence Livermore National Laboratory] were carried out and field tests were performed [@Paravastu08c] on both early and later designs. The simulation and field test results indicated that the “tied fork with crosspiece” yielded the best compromise for low cost, high mechanical stability, and good electrical performance.
### Electrical performance\[ANTperform\]
The simulated E- and H-plane patterns over a range of frequencies are shown in Figures \[ANT-Figure2-4\] and \[ANT-Figure2-5\] and summarized in Table \[tbl-ANTpatt\]. Actual measurements discussed in [@Hartman09a] indicate that the simulated and measured beam patterns agree to better than 1 dB over almost all of the sky. The predicted impedance characteristics are shown in Figure \[ANT-Figure2-6\] with the antenna terminal impedance (Z) shown on the left and the impedance mismatch efficiency (IME; the fraction of the power at the antenna feed point that is transferred to the preamp) shown on the right.
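For any given terminal impedance, an IME of the kind plotted in that figure can be sketched from the usual power-transfer (conjugate mismatch) expression; the example below assumes that definition, purely illustrative antenna impedances, and the nominal $\sim100$ $\Omega$ balun input impedance discussed in Section \[FEE\]:

```python
import math

def impedance_mismatch_efficiency(z_ant, z_in):
    """Fraction of available power at the antenna terminals delivered to the
    preamp: 4*R_ant*R_in / |Z_ant + Z_in|^2 (unity for a conjugate match)."""
    return 4.0 * z_ant.real * z_in.real / abs(z_ant + z_in) ** 2

z_in = complex(100.0, 0.0)  # nominal active-balun input impedance (ohms)
for z_ant in (complex(35, -150), complex(100, 0), complex(250, 300)):
    ime = impedance_mismatch_efficiency(z_ant, z_in)
    print(f"Z_ant = {z_ant}: IME = {ime:.2f} ({10 * math.log10(ime):.1f} dB)")
```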
Front-end electronics - FEE\[FEE\]
----------------------------------
The LWA FEE is an extension of the prototype active-balun design utilized for the Long Wavelength Demonstrator Array [LWDA; @Bradley05; @Lazio10]. The LWDA design already used low-cost Monolithic Microwave Integrated Circuits (MMICs), but improvements for the FEE described here include the use of an additional 12 dB of gain to handle cable losses without affecting noise performance [@Hicks07], a local voltage regulator, an integral $\rm 5^{\rm th}$ order Butterworth filter, transient protection ([[*e.g.*]{}]{}, lightning protection), and direct feed point connections. The block diagram of the LWA FEE is shown in Figure \[FEE-block\] and the circuit diagram of the final unit is shown in Figure \[FEE-Figure2-12\].
Dual polarization FEE units are formed by rotating two identical double-sided FEE circuit boards 90$\degr$ and bolting them together back-to-back with ground planes touching. This geometry was motivated by the need for isolation between polarizations, serviceability, and economy of fabrication.
### Input impedance\[InputImped\]
The input impedance of the active balun ($Z_0$) is an important design parameter of the FEE, as it affects the bandwidth of the antenna system, the efficiency with which power is coupled into the antenna (see Figure \[ANT-Figure2-6\]), and the mutual coupling with nearby antennas. Extensive studies based on optimized models and field measurements were undertaken to determine the optimal balun impedance [@Erickson03; @Gaussiran05; @Paravastu07a]. High impedance baluns were initially considered because of their ability to buffer the widely varying dipole impedances over our relatively wide bandwidth. However, it was determined that raising the input impedance above 1 k$\Omega$ resulted in insufficient current flow into the balun, making it impossible to maintain sky noise dominated operation [@Erickson03]. Based on this early work, we began to optimize antenna topologies for desired beam pattern and a feedpoint impedance of approximately 100 $\Omega$ [@Paravastu07a]. It is possible to obtain a feedpoint impedance of 100 $\Omega$ by directly buffering the individual feedpoint connections with inexpensive commercially available MMIC amplifiers exhibiting high input return loss. A 180$\degr$ hybrid or transformer is then used to convert the amplifier outputs to a single ended 50 $\Omega$ output. This method avoids the loss, and subsequent increase in noise temperature, associated with adding transformers and other matching networks before the first amplification stage, while lowering production costs.
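A small numerical aside on the feedpoint impedance (assuming each MMIC input is matched to the usual 50 $\Omega$ single-ended impedance, which is not stated explicitly above): the balanced dipole feed sees the two amplifier inputs effectively in series, giving the $\sim100$ $\Omega$ target value.

```python
# Assumed single-ended input impedance of each MMIC buffer amplifier (ohms);
# typical for off-the-shelf parts with high input return loss.
z_single_ended = 50.0

# The two inputs appear in series across the balanced feedpoint.
z_feedpoint = 2.0 * z_single_ended
print(z_feedpoint)   # 100.0 ohms
```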
### Filter design\[Filter\]
A $\rm 5^{\rm th}$ order, low-pass Butterworth filter was included before the final 12 dB gain stage to define the bandpass and reject out-of-band interference that could drive the FEE into non-linear operation. The characteristics of the filter can be widely varied within the topology of the filter through component selection. The 3 dB point of the filter is at 150 MHz; at 250 MHz it achieves $\sim21$ dB of attenuation. A high cut-off frequency was chosen to minimize distortion of the working bandpass of 20 – 80 MHz.
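A quick sanity check of the quoted roll-off can be made against an ideal 5$^{\rm th}$ order Butterworth prototype with a 150 MHz corner; the realized LC filter deviates somewhat from this ideal response, so the measured $\sim$21 dB at 250 MHz is slightly below the textbook value.

```python
import numpy as np
from scipy.signal import butter, freqs

order, f3db = 5, 150e6                       # 5th-order low pass, 3 dB corner at 150 MHz
b, a = butter(order, 2 * np.pi * f3db, btype="low", analog=True)

f = np.array([80e6, 150e6, 250e6])           # top of band, corner, out-of-band check point
_, h = freqs(b, a, worN=2 * np.pi * f)
print(20 * np.log10(np.abs(h)))              # ~[-0.0, -3.0, -22.2] dB for the ideal prototype
```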
### Performance\[FEEperform\]
The FEE serves to fix the system noise temperature, match the antenna impedance to the coax signal cables running to the distantly located receiver, provide adequate gain to overcome cable loss, and limit out-of-band RFI presented to the analog receiver module. The performance of a single polarization of the FEE is given in Table \[tbl-FEE2-3\]. A crossed polarization unit will draw twice as much current as a single FEE board for a total of 460 mA at 15 V DC. Total power consumption for a 256 element, crossed dipole station is then $\sim1.8$ kW.
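The station power figure quoted above follows directly from the per-board current draw in Table \[tbl-FEE2-3\]; a minimal arithmetic check:

```python
current_per_board = 0.230    # A per single-polarization FEE at +15 VDC (Table [tbl-FEE2-3])
boards_per_stand  = 2        # a crossed-dipole unit uses two boards (460 mA total)
stands            = 256
supply_voltage    = 15.0     # V

station_power_w = current_per_board * boards_per_stand * stands * supply_voltage
print(f"{station_power_w / 1e3:.2f} kW")   # ~1.77 kW, i.e. the ~1.8 kW quoted in the text
```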
Environmental testing of the final design of the FEE was carried out by [@Hartman09b] between $-20$ and $+40\degr$C, a temperature range not likely to be exceeded for an extended period at the LWA site. The gain dependence on temperature varies between $-0.0042$ dB/$\degr$C and $-0.0054$ dB/$\degr$C, with the magnitude of the slope monotonically increasing with frequency between 20 MHz and 100 MHz. The phase also has a weak dependence on temperature, with a slope between $-0.011$ degrees/$\degr$C and $-0.014$ degrees/$\degr$C.
### Noise figure\[NF\]
We measured the noise figure of the final FEE with an Agilent N9030A signal analyzer and an Agilent 346B noise source. The noise figure ranged from 2.74 dB (255 K) to 2.88 dB (273 K) over the frequency range of 20 – 80 MHz. The N9030A signal analyzer estimates an intrinsic measurement uncertainty of 0.21 dB ($\sim15$ K). We also measured the gain linearity and intermodulation distortion using one and two injected tones (see Hartman and Hicks 2009 for measurement details). The results of these measurements are presented in Table \[tbl-FEE2-3\] and they agree closely with those predicted using an analytic cascade analysis[^4] based on the data sheet values for the components.
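For reference, the noise temperatures quoted in parentheses follow from the standard conversion $T = T_0\,(10^{NF/10}-1)$ with the IEEE reference temperature $T_0=290$ K:

```python
T0 = 290.0  # K, IEEE reference temperature

def nf_db_to_temp(nf_db):
    """Convert a noise figure in dB to an equivalent noise temperature in kelvin."""
    return T0 * (10.0 ** (nf_db / 10.0) - 1.0)

for nf in (2.74, 2.88):
    print(f"NF = {nf:.2f} dB  ->  T = {nf_db_to_temp(nf):.0f} K")
# 2.74 dB -> 255 K and 2.88 dB -> 273 K, matching the values quoted above
```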
### Manufacturing\[FEEmanufact\]
[**Manufacturing quotes:**]{} We obtained manufacturing quotes from a number of companies interested in producing turn-key FEEs. While there was some variation, most quotes, including printed circuit board (PCB) fabrication, assembly, and the administrative overhead associated with ordering all of the requisite parts were $\sim \$200$ per polarization in 2009.
[**Quality control and functional testing:**]{} Test scripts to confirm basic functionality and conduct full characterization of an FEE are detailed in [@Hicks08]. We have discussed the basic functional test (gain, stability, power consumption) described in that document with manufacturers and they agree that it could be readily implemented as an automated test procedure. The FEE also includes a test point to allow proper supply voltage to be safely verified in the field after the FEE has been installed on the support stand.
### PCB layout and mechanical details\[FEElayout\]
The components are all mounted on one side of the circuit board. The opposite side of the board is a solid copper ground plane aperiodically “stitched” to the grounded copper on the component side. The Bill of Materials for the FEE is given in [@Hicks09]. The hard gold plated bolt holes on the FEE PCB directly connect to the stainless steel tabs connecting to the ANT dipole elements. Materials were chosen to avoid galvanic corrosion. The bolt holes were sized for 1/4-20 studs with standard clearance. A related mechanical interface to the STD was developed by Burns Industries of Nashua, NH[^5] and is shown in Figure \[FEE-Figure2-15\].
### Installation\[FEEinstal\]
The FEE is installed on to the STD after the cables have been pulled and are ready to be connected. A keying scheme is incorporated into the FEE and STD hub such that the FEE can only be installed with the N/S polarization in the correct orientation. The connections to the coax cable are color coded for the two polarizations and 7 – 10 in-lbs of torque is required to tighten the SMA connectors.
Antenna stand - STD\[STD\]
--------------------------
After considering several possible designs for the STD, we chose a central mast design, which conferred several advantages:
- The antenna elements are not required to be load bearing structural elements.
- The antenna elements are much easier to assemble than those of self-supporting pyramidal designs.
- Site preparation work is minimized because the STD only touches the ground at one point.
- The footprint of the design is smaller than the self-supporting pyramidal designs so there is more clearance between antenna systems.
We developed the central mast design in collaboration with our manufacturing partner Burns Industries, Inc. The design is shown in Figure \[STD-Figure2-8\]. It consists of four welded “tied fork with crosspiece” (see Section \[antgeom\]) aluminum dipole arms attached to the bottom of a solid plastic hub at the top of the mast. The FEE is mounted to the top of the hub and the solid hub prevents mechanical stresses on the dipole arms from being transmitted through the stainless steel tabs to the FEE PCB. A plastic cap fits over the hub to protect the FEE from the elements. A fiberglass rod “spider” midway down the mast supports the dipole arms so that they do not move significantly in the wind. The mast is a standard 2 3/8 inch ($\sim6$ cm) outer diameter galvanized steel fence post, machined to accept a mount to the junction box where the connection to the RF/Power Distribution (RPD) conduit is made.
### Installation and alignment\[STDinst\]
The components of the STDs were manufactured by Burns Industries, Inc. and shipped to the array site. The pieces were then assembled under a shelter and carried out to the mounting points. They were fitted with a compression collar and set into the Oz-Post[^6] sleeve. After alignment (described below), the collar was hammered into place with the Oz-Post CDT-07 - Cap Driving Tool and the installation was complete.
After field testing and finding that compass alignment of the STD with True North was unsatisfactory (possibly due to intrinsic magnetization in the steel mast), angular alignment of the LWA STDs was accomplished using a sighting telescope permanently fixed to a base that is identical in shape, polarization keying, and mounting holes to an FEE unit. The base provided a stable mount for an inexpensive 4X telescope commonly sold for use with air rifles (see Figure \[STD-telescope\]).
Through surveying, the angular offset at the STD installation point from True North to a distant, geographic reference point was established in advance and the telescope was firmly mounted to the base with that offset. The antenna hub was then rotated until the distant reference appeared in the crosshairs of the sighting telescope, indicating that the hub was properly aligned and ready to be locked in place. For the LWA1, the $\sim 40$ km distant peak of South Baldy Mountain served as the reference point and its azimuth was offset by $102\degr$ from True North. Clearly, the fixture must be site specific, but once manufactured it allowed rapid and precise alignment by untrained personnel. Experience has shown that the procedure can easily produce alignment with True North to a tolerance of $<5\degr$, which is more stringent than the system requirement [see @Janes09].
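For orientation, the required offset can be estimated from the site and reference-point coordinates with the standard great-circle forward-azimuth formula; the coordinates below are only approximate, publicly available values for the LWA1 site (near the VLA center) and South Baldy peak, so the result is merely consistent with, not a substitute for, the surveyed $102\degr$ offset.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Great-circle forward azimuth from point 1 to point 2, in degrees east of North."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

lwa1_site   = (34.07, -107.63)   # approximate latitude, longitude (deg)
south_baldy = (33.99, -107.19)   # approximate peak location (deg)
print(initial_bearing(*lwa1_site, *south_baldy))   # ~102 deg east of North
```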
### Mechanical and environmental survivability\[STDmech\]
The survivability requirements in “The Long Wavelength Array System Technical Requirements” document [@Clarke09; @Janes09] include survival of winds up to 80 mph with gusts to 100 mph (wind speed up to 36 m s$^{-1}$ with gusts up to 45 m s$^{-1}$), UV lifetimes of 15 years, and alighting of a 4 lb ($\sim 2$ kg) bird. Both the fiberglass and plastic in the STD design are UV stabilized materials with long lifetimes. Wind survivability has been verified by both modeling and, so far, three years of testing in the field. No problem with heavy birds has been noted.
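As a very rough, order-of-magnitude cross-check of the wind requirement (not a substitute for the structural modeling mentioned above), the peak gust drag on the assembly can be estimated from the standard drag equation; every input below is an assumption made for illustration only.

```python
rho = 1.0    # kg/m^3, assumed air density at the ~2100 m site elevation
v   = 45.0   # m/s, gust speed from the survivability requirement
Cd  = 1.2    # assumed drag coefficient for the mast and angle sections
A   = 0.5    # m^2, assumed total projected area of mast plus dipole arms

drag_n = 0.5 * rho * Cd * A * v ** 2
print(f"peak gust drag ~ {drag_n:.0f} N")   # of order a few hundred newtons on the structure
```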
### Removal\[STDremove\]
Using Oz-Posts as the ground anchors facilitates removal of the STDs, should we need to return the site to its original condition. The Oz-Post collars are removable, so the masts may be removed, and the Oz-Post company sells a simple device, the “Oz Post Oz Puller with Post Clamp” for pulling the posts out of the ground.
Ground screen - GND\[GND\]
--------------------------
[@Paravastu07b; @Stewart07; @York07] carried out simulations and detailed tests of the effects of deploying large and small ground screens above both wet and dry ground. They concluded that there are significant benefits from deploying a ground screen beneath the antennas, including reduced ground losses and reduced susceptibility to variable soil conditions. Additionally, for an antenna in isolation, [@Kerkhoff07b] demonstrated that a small ground screen provides these benefits without the axial asymmetry and significant sensitivity to RFI coming from the horizon that are caused by using a full-station ground screen [@Paravastu07a; @Schmitt08]. It is difficult to accurately model these effects for a full array in the presence of mutual coupling, but initial studies [@Kerkhoff07b; @Ellingson09b] indicate that the behavior of a random array of antennas should be qualitatively similar to that of an antenna in isolation [@York07].
### Design\[GNDdesign\]
For the above reasons, we chose a 10 $\times$ 10 ft ($\sim3~\rm{x}~3$ m) ground screen under each STD, as detailed in [@Schmitt08] and [@Robbins09]. Simulations [@Robbins09] indicated that the mesh density was not important as long as the lattice spacing was less than 12 inches ($\sim30$ cm). We chose a 4 $\times$ 4 inch ($\sim10~\rm{x}~10$ cm), galvanized welded wire mesh material that is structurally sound and inexpensive, made with wire diameter of 14 gauge ($\sim 2$ mm). A vendor was found that produces rolls of this material with dimensions of 6 $\times$ 200 ft ($\sim2~\rm{x}~60$ m). Considering that we needed two 6 $\times$ 10 ft ($\sim2~\rm{x}~3$ m) sections of mesh, overlapped by 2 ft ($\sim60$ cm), to make a 10 $\times$ 10 ft ($\sim3~\rm{x}~3$ m) ground screen, one of these rolls could be used to produce 10 complete ground screens. Taking into account possible mistakes and losses that can happen while cutting the mesh, we estimated a need for 27 rolls in order to produce 256 ground screens.
For the physical connection of the two ground screen sections, we used split splicing sleeves (Nicopress stock number FS-2-3 FS-3-4), 6 sleeves per ground screen (1,700 for a full 256 antenna station, assuming a 10% loss). Simulations have shown [@Stewart09] that the performance of such a two-part ground screen is negligibly different from a single, unitary ground screen. The anchoring of the ground screens is also an important issue, since this must prevent the buckling of the sides of the mesh. For this purpose we used 12 inch ($\sim30$ cm) plastic tent stakes, 8 per ground screen, which were purchased in buckets containing 180 stakes each (12 buckets needed).
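The material counts above follow from straightforward bookkeeping; a short check of the roll, sleeve, and stake arithmetic (the 10% loss margin is the one assumed in the text):

```python
import math

stations            = 256
sections_per_screen = 2       # two 6 x 10 ft pieces, overlapped by 2 ft, per screen
roll_length_ft      = 200
section_length_ft   = 10

screens_per_roll = roll_length_ft // section_length_ft // sections_per_screen   # 10
rolls_needed     = stations / screens_per_roll                                  # 25.6 before spares
print(screens_per_roll, math.ceil(rolls_needed))      # 10 per roll; 26 rolls minimum (27 ordered)

sleeves       = math.ceil(6 * stations * 1.10)        # 6 sleeves per screen plus 10% loss (~1,700)
stake_buckets = math.ceil(8 * stations / 180)         # 8 stakes per screen, 180 per bucket (12)
print(sleeves, stake_buckets)
```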
### Installation\[GNDinstal\]
The installation procedure of the ground screen was to unroll the mesh on a flat surface; cut it into 10 ft ($\sim3$ m) sections and flip each section upside down to prevent it from rolling back; overlap two 10 ft ($\sim3$ m) sections of mesh by 2 ft ($\sim60$ cm) and connect them using 6 splicing sleeves, spaced by 2 ft ($\sim60$ cm); and move the ground screens to the position of each stand and stake them, aligning the sides in the E/W by N/S direction with the ground screen centered on the Oz-Post position. We then staked each corner of the ground screen and also put one stake in the mid point of each side to improve the stability.
Antenna system performance\[SystemPerform\]
-------------------------------------------
Of course, the ultimate question is how the antenna system performs in the field. In order to determine this, we performed initial tests with a preliminary prototype of the antenna system in July/August 2007 [@Paravastu08c], carried out further tests with an improved prototype in September 2008, and then tested a final prototype of the ANT/FEE/STD/GND antenna system at our site in New Mexico on NRAO property near the center of the VLA in April 2009. A photo of the FEE (with the FEE protecting box removed) mounted on the STD with the crossed dipole ANT arms attached is shown on the left in Figure \[FEE-Figure2-16\]. Shown in the same figure on the right is the assembled ANT/FEE/STD with the ground screen (GND) installed. [@Hartman09a] describes field measurements in April 2009 of two of the prototype antennas operating as an interferometer.
Most exciting, of course, is that the results of the tests of the full antenna system on the sky \[see Figure \[ANT-Figure2-7\], [@Taylor12], and [@Ellingson12]\] show that it meets its requirement of $>6$ dB Sky Noise Dominance (SND) across the 20 – 80 MHz band \[Specification TR-10A listed in the technical requirements documents [see, [[*e.g.*]{}]{}, @Kassim05b; @Clarke09; @Janes09]\] and clearly shows sensitivity to the Galactic background emission (and interference).
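The 6 dB criterion translates into a modest integration-time penalty through the radiometer equation, as discussed in footnote [^2]; a short numerical check (normalizing the Galactic background temperature to unity):

```python
T_sky = 1.0                                  # normalized Galactic background temperature
for snd_db in (3, 6, 10):
    T_rx  = T_sky / 10 ** (snd_db / 10)      # front-end noise snd_db below the sky
    T_sys = T_sky + T_rx
    extra = (T_sys / T_sky) ** 2 - 1.0       # integration time scales as T_sys^2
    print(f"SND = {snd_db:2d} dB -> {100 * extra:.0f}% longer integration")
# SND = 6 dB gives ~57% longer integration than a noiseless front end, as in footnote [^2]
```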
Thus, the ANT/FEE/STD/GND antenna system meets requirements. This led directly to the installation of 256 examples for the LWA1 station described in Section \[LWA\].
Summary and conclusions
=======================
We have presented the mechanical and electrical design and response of an HF/VHF antenna system suitable for economical use in large numbers and currently deployed 256-fold in the Long Wavelength Array first station (LWA1) near the center of NRAO’s VLA. The antenna system consists of crossed-dipole antennas (ANT), an active balun/amplifier front-end electronics (FEE), a supporting stand (STD), and a ground screen (GND). This system permits Galactic-background-limited observations, with at least 6 dB of sky noise dominance over the band from 20 – 80 MHz and full sky coverage.
Because it must be reproduced $>10,000$-fold for a state-of-the-art long wavelength synthesis array in an unprotected desert environment, we have designed and procured an antenna system that is inexpensive, robust, easily deployed, and easily maintained with excellent response properties for radio astronomy synthesis imaging.
We are very grateful to the National Radio Astronomy Observatory for their long-term support for long wavelength radio astronomy. Basic research in radio astronomy at the Naval Research Laboratory is supported by 6.1 base funds.
Baldwin, J. E., Boysen, R. C., Hales, S. E. G., et al. 1985, , 217, 717 Bradley, R. & Parashare, C.R. 2005, “Evaluation of the NRL LWA Active Balun Prototype,” NRAO Electronics Division Technical Note, Series No. 220, [*http://www.gb.nrao.edu/electronics/edtn/edtn220.pdf*]{} Braude, S. Ia., Megn, A. V., Riabov, B. P., Sharykin, N. K., & Zhuk, I. N. 1978, [Astrophys. and Space Sci.]{}, 54, 3 Clarke, T. E. 2009, “Scientific Requirements for the Long Wavelength Array,” LWA Memo \#159, [*http://www.ece.vt.edu/swe/lwa/*]{} Dwarakanath, K. S., Shankar, N., & Shankar, T. S. 1995, Astronomy Data Image Library, 1 Ellingson, S. W. 2005, [IEEE Transactions on Antennas and Propagation]{}, 53, 2480 Ellingson, S. W., Clarke, T. E., Cohen, A., et al. 2009, [IEEE Proceedings]{}, 97, 1421 Ellingson, S. 2009, “Performance of Simple Delay-and-Sum Beamforming Without Per-Stand Polarization Corrections for LWA Stations,” LWA Memo \#149, [*http://www.ece.vt.edu/swe/lwa/*]{} Ellingson, S. W. 2011, [IEEE Transactions on Antennas and Propagation]{}, 59, 6, 1855 Ellingson, S. W., Taylor, G. B., Craig, J., [[*et al.*]{}]{} 2012, arXiv:1204.4816 Erickson, W. C., Mahoney, M. J., & Erb, K. 1982, , 50, 403 Erickson, W. 2003. “A Study of Low and Mid-Range Dipoles for the LWA and LOFAR,” LWA Memo \#0011, [*http://www.ece.vt.edu/swe/lwa/*]{} Erickson, W. C. 2005, “Integration Times vs. Sky Noise Dominance,” LWA Memo \#0023, [*http://www.ece.vt.edu/swe/lwa/*]{} Erickson, W. C. 2006, “Tests on Large Blade Dipoles,” LWA Memo \#0036, [*http://www.ece.vt.edu/swe/lwa/*]{} Gaussiran, T. L., Kerkhoff, A., York, J., & Slack, C. 2005, “From Clark Lake to the Long Wavelength Array: Bill Erickson’s Radio Science,” ASP Conference Series 345, 410 Girard, J. N., Zarka, P., Tagger, M., et al. 2011, Planetary, Solar and Heliospheric Radio Emissions (PRE VII), 495 Hales, S. E. G., Baldwin, J. E., & Warner, P. J. 1988, , 234, 919 Hartman, J. 2009, “Antenna pattern measurements from a two-element interferometer,” LWA Memo \#155, [*http://www.ece.vt.edu/swe/lwa/*]{} Hartman, J. & Hicks, B. 2009, “Environmental Chamber testing of the FEE,” LWA Engineering Memo FEE0010 (see LWA Memo \#0190, [*http://www.ece.vt.edu/swe/lwa/*]{}) Hicks, B., Erickson, W., & Stewart, K. 2002, “NRL Low Frequency Antenna Development,” LWA Memo \#4, [*http://www.ece.vt.edu/swe/lwa/*]{} Hicks, B. & Paravastu, N. 2007, “The Rapid Test Array Balun,” LWA Memo \#120, [*http://www.ece.vt.edu/swe/lwa/*]{} Hicks, B., Paravastu, N., & Ray, P. 2008, “Test Scripts for LWA FEE Units,” LWA Engineering Memo FEE0012 (see LWA Memo \#0190, [*http://www.ece.vt.edu/swe/lwa/*]{}) Hicks, B., Burns, S., Clarke, T., [[*et al.*]{}]{} 2009, “Preliminary Design of the LWA-1 Array, Antenna, Stand, Front End Electronics, and Ground Screen,” LWA Engineering Memo ARR-PDR (see LWA Memo \#0188, [*http://www.ece.vt.edu/swe/lwa/*]{}) Janes, C., Craig, J., & Rickard, L. 2009, “The Long Wavelength Array System Technical Requirements,” LWA Memo \#160, [*http://www.ece.vt.edu/swe/lwa/*]{} Jansky, K. G. 1932, Proc. Inst. Radio Engrs., 20, 1920 Jansky, K. G. 1933a, Popular Astronomy, 41, 548 Jansky, K. G. 1933b, , 132, 66 Jansky, K. G. 1933c, Proc. Inst. Radio Engrs., 21, 1387 Kassim, N. E., Perley, R. A., Erickson, W. C., & Dwarakanath, K. S. 1993, , 106, 2218 Kassim, N. E. & Erickson, W. C. 1998, , 3357, 740 Kassim, N. E. & Polisensky, E. J. 2005, “From Clark Lake to the Long Wavelength Array: Bill Erickson’s Radio Science,” ASP Conference Series 345, 114 Kassim, N. 
E., Polisensky, E., Clarke, T., [[*et al.*]{}]{} 2005, “From Clark Lake to the Long Wavelength Array: Bill Erickson’s Radio Science,” ASP Conference Series 345, 392 Kassim, N. E., Lazio, T. J. W., Erickson, W. C., [[*et al.*]{}]{} 2007, , 172, 686 Kerkhoff, A. 2007a, “Comparison of Dipole Antenna Designs for the LWA,” LWA Memo \#102, [*http://www.ece.vt.edu/swe/lwa/*]{} Kerkhoff, A. 2007b, “The Calculation of Mutual Coupling Between Two Antennas and its Application to the Reduction of Mutual Coupling Effects in a Pseudo-Random Phased Array,” LWA Memo \#103, [*http://www.ece.vt.edu/swe/lwa/*]{} Lazio, T. J. W., Clarke, T. E., Lane, W. M., et al. 2010, , 140, 1995 Lidz, A., Zahn, O., McQuinn, M., Zaldarriaga, M., & Hernquist, L. 2008, , 680, 962 Miller, R. B., Jenet, F. A., Hicks, B., et al. 2012, [American Astronomical Society Meeting Abstracts]{}, 219, \#422.33 Paravastu, N., Erickson, W., & Hicks, B. 2007, “Comparison of Antenna Designs on Groundscreens for the Long Wavelength Array,” LWA Engineering Memo ANT007B (see LWA Memo \#0191, [*http://www.ece.vt.edu/swe/lwa/*]{}) Paravastu, N., Hicks, B., Aguilera, E., Erickson, W., Kassim, N., York, J. 2007, “Impedance Measurements of the Big Blade and Fork Antennas on Ground Screens at the LWDA Site,” LWA Memo \#90, [*http://www.ece.vt.edu/swe/lwa/*]{} Paravastu, N. 2008a, “Antenna Design for LWA-1,” LWA Engineering Memo ANT0007 (see LWA Memo \#0191, [*http://www.ece.vt.edu/swe/lwa/*]{}) Paravastu, N. 2008b, “Big Blade Measurement Plots," LWA Engineering Memo ANT0004 (see LWA Memo \#0191, [*http://www.ece.vt.edu/swe/lwa/*]{}) Paravastu, N., Ray, P., Hicks, B., [[*et al.*]{}]{} 2008, “Comparison of Field Measurements on Active Antenna Prototypes on Ground Screens for the Long Wavelength Array," LWA Engineering Memo ANT007B-2 (see LWA Memo \#0191, [*http://www.ece.vt.edu/swe/lwa/*]{}) Pawsey, J. L. 1946, , 158, 633 Pawsey, J. L., Payne-Scott, R., & McCready, L. L. 1946, , 157, 158 Pearson, T. J., & Readhead, A. C. S. 1984, , 22, 97 Perley, R.A. & Erickson, W.C. 1984, “Proposal for a Low Frequency Array Located at the VLA Site,” VLA Scientific Memorandum 146, [*http://www.vla.nrao.edu/memos/sci/*]{} Reber, G. 1940, , 91, 621 Reber, G. 1944, , 100, 279 Reber, G. & Greenstein, J. L. 1947, [The Observatory]{}, 67, 15 Reber, G. 1949, , 8, 139 Reber, G. 1950, [Leaflet of the Astronomical Society of the Pacific]{}, 6, 67 Rees, N. 1990, , 244, 233 Rivera, J., Ford, A. J., Jenet, F. A., [[*et al.*]{}]{} 2012, [American Astronomical Society Meeting Abstracts]{}, 219, \#422.35 Robbins, W.J., Schmitt, H.R., Ray, P.S., & Paravastu, N. 2009, “Simulations and Final Choice of Ground Screen Material,” LWA Engineering Memo GND0005 (see LWA Memo \#0189, [*http://www.ece.vt.edu/swe/lwa/*]{}) R[ö]{}ttgering, H. 2003, , 47, 405 Ryle, M. & Vonberg, D. D. 1946, , 158, 339 Ryle, M. & Vonberg, D. D. 1948, Royal Society of London Proceedings Series A. 193, 98 Schmitt, H. 2008, “Baseline Design of Station Ground Screen,” LWA Engineering Memo GND0001 (see LWA Memo \#0189, [*http://www.ece.vt.edu/swe/lwa/*]{}) Shankar, N. U. & Shankar, T. S. R. 1990, Journal of Astrophysics and Astronomy, 11, 297 Stewart, K. P., Hicks, B. C., Ray, P. S., [[*et al.*]{}]{} 2004, , 52, 1351 Stewart, K. P., Hicks, B. C., Crane, P. C., [[*et al.*]{}]{}2005, “From Clark Lake to the Long Wavelength Array: Bill Erickson’s Radio Science,” ASP Conference Series 345, 433 Stewart, K. 
2007, “Electromagnetic Performance of a Wire Grid Ground Screen,” LWA Memo \#83, [*http://www.ece.vt.edu/swe/lwa/*]{} Stewart, K. 2009, “Connection requirements for Two-Part Ground Screens,” LWA Engineering Memo GND0008 (see LWA Memo \#0189, [*http://www.ece.vt.edu/swe/lwa/*]{}) Taylor, G., Ellingson, S., Kassim, N., [[*et al.*]{}]{} 2012, [Journal of Astronomical Instrumentation]{}, accepted (see arXiv: 1206.6733) Wijnholds, S. J. & van Cappellen, W. A. 2011, [IEEE Transactions on Antennas and Propagation]{}, 59, 6, 1981 York, J., Kerkhoff, A., Taylor, G., Moats, S., Gonzalez, E., Kuniyoshi, M. 2007, “LWDA Ground Screen Performance Report,” LWA Memo \#95, [*http://www.ece.vt.edu/swe/lwa/*]{}
[ll]{} Parameter & Design Goal\
Frequency range (minimum) & 20 – 80 MHz\
Effective collecting area & $\sim 1$ km$^2$ at 20 MHz\
Number of dipoles & $\sim13,000$\
Number of stations & $\sim50$\
Station diameter & 100 m E/W $\times$ 110 m N/S\
Crossed dipoles stands per station & 256\
Configuration & Core: 17 stations in 5 km\
& Intermed.: 17 sta. in 5-50 km\
& Outliers: 18 sta. in 50-400 km\
Baselines & 0.2 – 400 km\
Point-source sensitivity & $\sim1.1$ mJy at 30 MHz\
(2 pol., 1 hr integ., 4 MHz BW) & $\sim0.7$ mJy at 74 MHz\
Sky Noise Dominance (SND) & $\ge6$ dB from 20 – 80 MHz\
Maximum angular resolution & $\sim5''$ at 30 MHz\
& $\sim2''$ at 74 MHz\
Station Field of View (FoV) & $\sim2\degr$ at 74 MHz\
Number of independent FoV & 2 – 8\
Mapping capability & Full FoV\
Maximum observable bandwidth & 32 MHz\
Spectral resolution & $<1$ kHz\
Time resolution & 1 ms\
Image dynamic range & $>10,000$\
Polarization & Full Stokes\
Digitized bandwidth & Full RF\
[cccccc]{} Frequency & Gain & \multicolumn{2}{c}{E-plane width ($\degr$)} & \multicolumn{2}{c}{H-plane width ($\degr$)}\
 & (dBi) & -3 dB & -6 dB & -3 dB & -6 dB\
20 MHz & 4.0 & 41& 57& 51& 66\
40 MHz & 6.0 & 45& 64& 53& 67\
60 MHz & 5.9 & 48& 71& 55& 68\
80 MHz & 5.6 & 45& 77& 58& 70\
[lc]{} Parameter & Value\
Current Draw (at +15 VDC) & 230 mA\
Voltage Range & $\pm 5$%\
Gain & 35.5 dB\
Noise Temperature & 255 - 273 K\
Input 1 dB Compression Point & $-$18.20 dBm\
Input $3^{\rm rd}$ order intercept (IIP3) & $-$2.3 dBm\
[^1]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
[^2]: At long wavelengths (particularly at frequencies $<100$ MHz), the Galactic background radio emission is the ultimate limit on the effective noise temperature of any radio receiving system. Thus, the active balun/preamp (frontend) should provide enough gain that any noise contributed by components following it is negligible. Also, in order to have the frontend noise itself not raise the total system noise temperature much above that of the fundamental Galactic background limit, it must have a noise temperature significantly below that of the Galactic background at the observing wavelengths of interest. While a cooled, very low noise temperature receiver might be desirable, it is clearly not practical to build such a unit at reasonable cost when it must be reproduced thousands of times in a large, long wavelength array. As a compromise between a “perfect” frontend and an affordable frontend, the specification given in Table \[tbl-LWA-spec\] for Sky Noise Dominance (SND) was chosen such that the frontend should have a noise temperature better than 6 dB below the Galactic background noise temperature over the principal band of interest from 20 – 80 MHz. At 6 dB below the Galactic background, the increased integration time to reach a given sensitivity is only $\sim57$% more than with a perfect, noiseless balun/preamp [@Erickson05]. This was considered to be acceptable.
[^3]: www.nec2.org
[^4]: http://sourceforge.net/projects/rfcascade/
[^5]: http://www.burnsindustriesinc.com
[^6]: http://www.ozcobuildingproducts.com/Oz-Post.html
---
abstract: |
We consider evolution equations of the form $$\label{Abstract equation}
\dot u(t)+{\mathcal{A}}(t)u(t)=0,\ \ t\in[0,T],\ \
u(0)=u_0,$$ where ${\mathcal{A}}(t),\ t\in [0,T],$ are associated with a non-autonomous sesquilinear form ${\mathfrak{a}}(t,\cdot,\cdot)$ on a Hilbert space $H$ with constant domain $V\subset H.$ In this note we continue the study of fundamental operator theoretical properties of the solutions. We give a sufficient condition for norm-continuity of evolution families on each of the spaces $V, H$ and on the dual space $V'$ of $V.$ The abstract results are applied to a class of equations governed by time dependent Robin boundary conditions on exterior domains and by Schrödinger operators with time dependent potentials.
address:
- 'University of Wuppertal School of Mathematics and Natural Sciences Arbeitsgruppe Funktionalanalysis, Gaußstraße 20 D-42119 Wuppertal, Germany'
- ' Ibn Zohr University, Faculty of Sciences Departement of mathematics, Agadir, Morocco'
author:
- 'Omar EL-Mennaoui and Hafida Laasri\'
title: 'On the norm-continuity for evolution family arising from non-autonomous forms$^*$[^1]'
---
Introduction\[s1\] {#introductions1 .unnumbered}
==================
Throughout this paper $H,V$ are two separable Hilbert spaces over $\mathbb C$ such that $V$ is densely and continuously embedded into $H$ (we write $V \underset d \hookrightarrow H$). We denote by $(\cdot {\, \vert \,}\cdot)_V$ the scalar product and $\|\cdot\|_V$ the norm on $V$ and by $(\cdot {\, \vert \,}\cdot)_H, \|\cdot\|_H$ the corresponding quantities in $H.$ Let $V'$ be the antidual of $V$ and denote by $\langle ., . \rangle$ the duality between $V'$ and $V.$ As usual, by identifying $H$ with $H'$ we have $V\underset d\hookrightarrow H\cong H'\underset d\hookrightarrow V'$ see e.g., [@Bre11].
Let ${\mathfrak{a}}:[0,T]\times V \times V \to {\mathbb{C}}$ be a *non-autonomous sesquilinear form*, i.e., ${\mathfrak{a}}(t;\cdot,\cdot)$ is for each $t\in[0,T]$ a sesquilinear form, $$\label{measurability}
{\mathfrak{a}}(\cdot;u,v) \text{ is measurable for all } u,v\in V,$$ such that $$\label{eq:continuity-nonaut}
|{\mathfrak{a}}(t,u,v)| \le M \Vert u\Vert_V \Vert v\Vert_V\ \text{ and } {\operatorname{Re}}~{\mathfrak{a}}(t,u,u)\ge \alpha \|u\|^2_V \quad \ (t,s\in[0,T], u,v\in V),$$ for some constants $M, \alpha>0$ that are independent of $t, u,v.$ Under these assumptions there exists for each $t\in[0,T]$ an isomorphism ${\mathcal{A}}(t):V\to V^\prime$ such that $\langle {\mathcal{A}}(t) u, v \rangle = {\mathfrak{a}}(t,u,v)$ for all $u,v \in V.$ It is well known that $-{\mathcal{A}}(t),$ seen as unbounded operator with domain $V,$ generates an analytic $C_0$-semigroup on $V'$. The operator ${\mathcal{A}}(t)$ is usually called the operator associated with ${\mathfrak{a}}(t,\cdot,\cdot)$ on $V^\prime.$ Moreover, we associate an operator $A(t)$ with ${\mathfrak{a}}(t;\cdot,\cdot)$ on $H$ as follows $$\begin{aligned}
D(A(t))={}&\{u\in V {\, \vert \,}\exists f\in H \text{ such that } {\mathfrak{a}}(t;u,v)=(f{\, \vert \,}v)_H \text{ for all } v\in V \}\\
A(t) u = {}& f.\end{aligned}$$ It is not difficult to see that $A(t)$ is the part of ${\mathcal{A}}(t)$ in $H.$ In fact, we have $D(A(t))= \{ u\in V : {\mathcal{A}}(t) u \in H \}$ and $A(t) u = {\mathcal{A}}(t) u.$ Furthermore, $-A(t)$ with domain $D(A(t))$ generates a holomorphic $C_0$-semigroup on $H$ which is the restriction to $H$ of that generated by $-{\mathcal{A}}(t).$ For all these results see e.g. [@Tan79 Chapter 2] or [@Ar06 Lecture 7].
We now assume that there exist $0< \gamma<1$ and a continuous function $\omega:[0,T]\longrightarrow [0,+\infty)$ such that $$\label{eq 1:Dini-condition}
|{\mathfrak{a}}(t,u,v)-{\mathfrak{a}}(s,u,v)| \le\omega(|t-s|) \Vert u\Vert_{V_\gamma} \Vert v\Vert_{V_\gamma}\quad \ (t,s\in[0,T], u,v\in V),$$ with $$\label{eq 2:Dini-condition}\sup_{t\in[0,T]} \frac{\omega(t)}{t^{\gamma/2}}<\infty \quad \text{ and }
\int_0^T\frac{\omega(t)}{t^{1+\gamma/2}}\,{\rm d}t<\infty$$ where $V_\gamma:=[H,V]_\gamma$ is the complex interpolation space. In addition we assume that ${\mathfrak{a}}(t_0; \cdot,\cdot)$ has the square root property for some $t_0\in[0,T]$ (and then for all $t\in[0,T]$ by [@Ar-Mo15 Proposition 2.5]), i.e.,
$$\label{square property}
D(A^{\frac{1}{2}}(t_0))=V.$$
Recall that symmetric forms, i.e., forms satisfying ${\mathfrak{a}}(t;u,v)=\overline{{\mathfrak{a}}(t;v,u)}$ for all $t,u,v,$ always have the square root property.
Under the assumptions - it is known that for each $x_0\in V$ the non-autonomous homogeneous Cauchy problem $$\label{Abstract Cauchy problem 0}
\left\{
\begin{aligned}
\dot u(t)&+{\mathcal{A}}(t)u(t)= 0\quad \hbox{a.e. on}\ [0,T],\\
u(0)&=x_0, \,\\
\end{aligned}
\right.$$ has a unique solution $u \in {\textit{MR}\,}(V,H):=L^2(0,T;V)\cap H^1(0,T;H)$ such that $u\in C([0,T];V).$ This result has been proved by Arendt and Monniaux [@Ar-Mo15] (see also [@ELLA15]) when the form ${\mathfrak{a}}$ satisfies the weaker condition $$\label{eq 1:Dini-conditionWeaker}
|{\mathfrak{a}}(t,u,v)-{\mathfrak{a}}(s,u,v)| \le\omega(|t-s|) \Vert u\Vert_V \Vert v\Vert_{V_\gamma}\quad \ (t,s\in[0,T], u,v\in V).$$
In this paper we continue to investigate further regularity of the solution of (\[Abstract Cauchy problem 0\]). For this it is necessary to associate to the Cauchy problem an *evolution family* $$\mathcal U:=\Big\{U(t,s):\ 0\leq s\leq t\leq T\Big\}\subset {\mathcal{L}}(H)$$ which means that:
- $U(t,t)=I$ and $U(t,s)= U(t,r)U(r,s)$ for every $0\leq s\leq r\leq t\leq T,$
- for every $x\in H$ the function $(t,s)\mapsto U(t,s)x$ is continuous into $H$ for $0\leq s\leq t\leq T$
- for each $x_0\in H, U(\cdot,s)x_0$ is the unique solution of (\[Abstract Cauchy problem 0\]).
Let $Y\subseteq H$ be a subspace. An evolution family $\mathcal U\subset {\mathcal{L}}(H)$ is said to be *norm continuous in $Y$* if $\mathcal U\subset{\mathcal{L}}(Y)$ and the map $(t,s)\mapsto U(t,s)$ is norm continuous with value in ${\mathcal{L}}(Y)$ for $0\leq s<t\leq T.$
If the non-autonomous form ${\mathfrak{a}}$ satisfies the weaker condition (\[eq 1:Dini-conditionWeaker\]) then it is known that (\[Abstract Cauchy problem 0\]) is governed by an evolution family which is norm continuous in $V$ [@LH17 Theorem 2.6], and norm continuous in $H$ if in addition $V\hookrightarrow H$ is compact [@LH17 Theorem 3.4]. However, for many boundary value problems the compactness of the embedding fails.
In this paper we prove that the compactness assumption can be omitted provided ${\mathfrak{a}}$ satisfies (\[eq 1:Dini-condition\]) instead of (\[eq 1:Dini-conditionWeaker\]). This will allow us to consider a large class of applications. One of the main ingredients used here is the non-autonomous *returned adjoint form* ${\mathfrak{a}}^*_r : [0,T]\times V\times V{\longrightarrow}{\mathbb{C}}$ defined by $$\label{returnd adjoint form}
{\mathfrak{a}}^*_r(t,u,v):=\overline{{\mathfrak{a}}(T-t,v,u)} \quad \ (t\in[0,T], u,v\in V).$$ The concept of returned adjoint forms appeared in the work of D. Daners [@Daners01], although with a different purpose. Furthermore, [@LH17 Theorem 2.6] cited above will also be needed to prove our main result.
We note that regularity properties of the evolution family with respect to $(t,s)$ in general Banach spaces have been investigated in the case of constant domains by Komatsu [@H.Ko61] and Lunardi [@Lu89], and by Acquistapace [@Ac88] for time-dependent domains.
We illustrate our abstract results by two relevant examples. The first one concerns the Laplacian with non-autonomous Robin boundary conditions on an unbounded Lipschitz domain. The second one treats a class of Schrödinger operators with time dependent potentials.
Preliminary results \[Approximation\]
=====================================
Let ${\mathfrak{a}}: [0,T]\times V\times V \to {\mathbb{C}}$ be a non-autonomous sesquilinear form satisfying (\[measurability\]) and (\[eq:continuity-nonaut\]). Then the following well known result regarding *$L^2$-maximal regularity in $V'$* is due to J. L. Lions.
\[wellposedness in V’2\] For each given $s\in[0,T)$ and $x\in H$ the homogeneous Cauchy problem $$\label{evolution equation u(s)=x}
\left\{
\begin{aligned}
\dot u(t)&+{\mathcal{A}}(t)u(t)= 0\quad \hbox{a.e. on}\ [s,T],\\
u(s)&=x, \,\\
\end{aligned}
\right.$$ has a unique solution $u \in {\textit{MR}\,}(V,V'):={\textit{MR}\,}(s,T;V,V'):=L^2(s,T;V)\cap H^1(s,T;V').$
Recall that the maximal regularity space ${\textit{MR}\,}(V,V')$ is continuously embedded into $C([s,T],H)$ [@Sho97 page 106]. A proof of Theorem \[wellposedness in V’2\] using a representation theorem of linear functionals, known in the literature as *Lions’s representation Theorem*, can be found in [@Sho97 page 112] or [@DL88 XVIII, Chapter 3, page 513].
Furthermore, we consider the non-autonomous adjoint form ${\mathfrak{a}}^*:[0,T]\times V\times V{\longrightarrow}{\mathbb{C}}$ of ${\mathfrak{a}}$ defined by $${\mathfrak{a}}^*(t;u,v):=\overline{{\mathfrak{a}}(t;v,u)}$$ for all $t\in[0,T]$ and $u,v\in V.$ Finally, we will need to consider the returned adjoint form ${\mathfrak{a}}^*_r:[0,T]\times V\times V{\longrightarrow}{\mathbb{C}}$ given by $${\mathfrak{a}}^*_r(t,u,v):={\mathfrak{a}}^*(T-t,u,v).$$ Clearly, the adjoint form is a non-autonomous sesquilinear form and satisfies (\[measurability\]) and (\[eq:continuity-nonaut\]) with the same constants $M, \alpha.$ Moreover, the adjoint operators $A^*(t), t\in[0,T]$ of $A(t), t\in[0,T]$ coincide with the operators associated with ${\mathfrak{a}}^*$ on $H.$ Thus applying Theorem \[wellposedness in V’2\] to the returned adjoint form we obtain that the Cauchy problem associated with ${\mathcal{A}}^*_r(t):={\mathcal{A}}^*(T-t)$ $$\label{evolution equation u(s)=x returned}
\left\{
\begin{aligned}
\dot v(t)&+{\mathcal{A}}^*_r(t)v(t)= 0\quad \hbox{a.e. on}\ [s,T],\\
v(s)&=x, \,\\
\end{aligned}
\right.$$ has for each $x\in H$ a unique solution $v\in MR(V,V').$ Accordingly, for every $(t,s)\in\overline{\Delta}:=\{ (t,s)\in[0,T]^2:\ t\leq s\}$ and every $x\in H$ we can define the following family of linear operators $$\label{evolution family}
U(t,s)x:=u(t) \quad \hbox{ and }\quad
U^*_r(t,s)x:=v(t),$$ where $u$ and $v$ are the unique solutions in $MR(V,V')$ respectively of (\[evolution equation u(s)=x\]) and (\[evolution equation u(s)=x returned\]). Thus each of the families $\{{{U}}(t,s):\ (t,s)\in\overline{\Delta}\}$ and $\{{{U}}^*_r(t,s):\ (t,s)\in\overline{\Delta}\}$ is a contractive, strongly continuous evolution family on $H$ [@LH17 Proposition].
In the autonomous case, i.e., if ${\mathfrak{a}}(t,\cdot,\cdot)={\mathfrak{a}}_0(\cdot,\cdot)$ for all $t\in[0,T],$ one knows that $-A_0,$ the operator associated with ${\mathfrak{a}}_0$ in $H,$ generates a $C_0$-semigroup $(T(t))_{t\geq 0}$ in $H.$ In this case ${{U}}(t,s):=T(t-s)$ yields a strongly continuous evolution family on $H.$ Moreover, we have $$\label{equalities: adjoint EVF and EVF autonomous case}
{{U}}(t,s)^\prime=T(t-s)^{\prime}=T^*(t-s)={{U}}^*(t,s)={{U}}^*_r(t,s).$$ Here, $T(\cdot)^\prime$ denote the adjoint of $T(\cdot)$ which coincides with the $C_0$-semigroup $(T^*(t))_{t\geq 0}$ associated with the adjoint form ${\mathfrak{a}}^*.$ In the non-autonomous setting however, (\[equalities: adjoint EVF and EVF autonomous case\]) fails in general even in the finite dimensional case, see [@Daners01 Remark 2.7]. Nevertheless, Proposition \[equalities: adjoint EVF and EVF\] below shows that the evolution families ${{U}}$ and ${{U}}_r^*$ can be related in a similar way. This formula appeared in [@Daners01 Theorem 2.6].
\[equalities: adjoint EVF and EVF\] Let ${{U}}$ and ${{U}}^*_r$ be given by (\[evolution family\]). Then we have $$\label{Key equalities: adjoint EVF and EVF}
\big [ {{U}}^*_r(t,s)\big ]^\prime x={{U}}(T-s,T-t)x$$ for all $x\in H$ and $(t,s)\in\overline{\Delta}.$
The equality (\[Key equalities: adjoint EVF and EVF\]) will play a crucial role in the proof of our main result. We include here a new proof for the sake of completeness.
(of Proposition \[equalities: adjoint EVF and EVF\]) Let $\Lambda=(0=\lambda_0<\lambda_1<...<\lambda_{n+1}=T)$ be a subdivision of $[0,T].$ Let ${\mathfrak{a}}_k:V \times V \to \mathbb C\ \ \hbox{ for } k=0,1,...,n$ be given by $$\begin{aligned}
\ {\mathfrak{a}}_k(u,v):={\mathfrak{a}}_{k,\Lambda}(u,v):=\frac{1}{\lambda_{k+1}-\lambda_k}
\int_{\lambda_k}^{\lambda_{k+1}}&{\mathfrak{a}}(r;u,v){\rm d}r\ \hbox{ for } u,v\in V. \
\end{aligned}$$ All these forms satisfy (\[eq:continuity-nonaut\]) with the same constants $\alpha, M.$ The associated operators in $V'$ are denoted by ${\mathcal{A}}_k\in {\mathcal{L}}(V,V')$ and are given for all $u\in V$ and $k=0,1,...,n$ by $$\label{eq:op-moyen integrale}
{\mathcal{A}}_ku :={\mathcal{A}}_{k,\Lambda}u:=\frac{1}{\lambda_{k+1}-\lambda_k}
\int_{\lambda_k}^{\lambda_{k+1}}{\mathcal{A}}(r)u{\rm d}r.\ \ $$ Consider the non-autonomous form ${\mathfrak{a}}_\Lambda:[0,T]\times V \times V \to {\mathbb{C}}$ defined by $$\label{form: approximation formula1}
{\mathfrak{a}}_{\Lambda}(t;\cdot,\cdot):=\begin{cases}
{\mathfrak{a}}_k(\cdot,\cdot)&\hbox{if }t\in [\lambda_k,\lambda_{k+1})\\
{\mathfrak{a}}_n(\cdot,\cdot)&\hbox{if }t=T\ .
\end{cases}$$ Its associated time dependent operator ${\mathcal{A}}_\Lambda(\cdot): [0,T]\subset {\mathcal{L}}(V,V')$ is given by $$\label{form: approximation formula1}
{\mathcal{A}}_{\Lambda}(t):=\begin{cases}
{\mathcal{A}}_k&\hbox{if }t\in [\lambda_k,\lambda_{k+1})\\
{\mathcal{A}}_n &\hbox{if }t=T\ .
\end{cases}$$ Next denote by $T_k$ the $C_0-$semigroup associated with ${\mathfrak{a}}_k$ in $H$ for all $k=0,1...n.$ Then applying Theorem \[wellposedness in V’2\]) to the form ${\mathfrak{a}}_\Lambda$ we obtain that in this case the associated evolution family ${{U}}_\Lambda(t,s)$ is given explicitly for $\lambda_{m-1}\leq s<\lambda_m<...<\lambda_{l-1}\leq t<\lambda_{l}$ by $$\label{promenade1}{{U}}_\Lambda (t,s):= T_{l-1}(t-\lambda_{l-1})
T_{l-2}(\lambda_{l-1}-\lambda_{l-2})...T_{m}(\lambda_{m+1}-\lambda_{m})T_{m-1}(\lambda_{m}-s),$$ and for $\lambda_{l-1}\leq a\leq b<\lambda_{l}$ by $$\label{promenade2}U_\Lambda (t,s):= T_{l-1}(t-s).$$ By [@LASA14 Theorem 3.2] we know that $({{U}}_\Lambda)_{\Lambda}$ converges weakly in $MR(V,V')$ as $|\Lambda|\to 0$ and $$\lim\limits_{|\Lambda|\to 0}\|{{U}}_\Lambda-{{U}}\|_{MR(V,V')}=0$$ The continuous embedding of $MR(V,V')$ into $C([0,T];H)$ implies that $\lim\limits_{|\Lambda|\to 0}{{U}}_\Lambda={{U}}$ in the weak operator topology of ${\mathcal{L}}(H).$
Now, let $(t,s)\in \overline{\Delta}$ with $\lambda_{m-1}\leq s<\lambda_m<...<\lambda_{l-1}\leq t<\lambda_{l}$ be fixed. Applying the above approximation argument to ${\mathfrak{a}}_r^*$ one obtains that $$\begin{aligned}
\label{eq1 proof returned adjoint}
{{U}}^*_{\Lambda,r}(t,s)&=T_{l-1,r}^*(t-\lambda_{l-1})
T_{l-2,r}^*(\lambda_{l-1}-\lambda_{l-2})...T_{m,r}^*(\lambda_{m+1}-\lambda_{m})T_{m-1,r}^*(\lambda_{m}-s)
\\\label{eq2 proof returned adjoint}&=T_{l-1,r}^\prime(t-\lambda_{l-1})
T_{l-2,r}^\prime(\lambda_{l-1}-\lambda_{l-2})...T_{m,r}^\prime(\lambda_{m+1}-\lambda_{m})T_{m-1,r}^\prime(\lambda_{m}-s)
$$ where $T_{k,r}$ and $T_{k,r}^*$ are the $C_0$-semigroups associated with $$\label{eq1 proof Thm equalities: adjoint EVF and EVF} {\mathfrak{a}}_{k,r}(u,v):=\frac{1}{\lambda_{k+1}-\lambda_k}\int_{\lambda_k}^{\lambda_{k+1}}{\mathfrak{a}}(T-r;u,v){\rm d}r=\frac{1}{\lambda_{k+1}-\lambda_k}\int_{T-\lambda_{k+1}}^{T-\lambda_k}{\mathfrak{a}}(r;u,v){\rm d}r$$ and its adjoint ${\mathfrak{a}}_{k,r}^*$, respectively. Recall that $T_{k,r}^*=T_{k,r}^\prime.$
On the other hand, the last equality in (\[eq1 proof Thm equalities: adjoint EVF and EVF\]) implies that $T_{k,r}$ coincides with the semigroup associated with ${\mathfrak{a}}_{k,\Lambda_T}$ where $\Lambda_T$ is the subdivision $\Lambda_T:=(0=T-\lambda_{n+1}<T-\lambda_n<...<T-\lambda_1<T-\lambda_0=T).$ It follows from - and - that $$\begin{aligned}
\Big[{{U}}^*_{\Lambda,r}(t,s)\Big]^\prime&={\mathcal{T}}_{m-1,r}(\lambda_{m}-s){\mathcal{T}}_{m,r}(\lambda_{m+1}-\lambda_{m})...{\mathcal{T}}_{l-2,r}(\lambda_{l-1}-\lambda_{l-2}){\mathcal{T}}_{l-1,r}(t-\lambda_{l-1})
\\&={\mathcal{T}}_{m-1}\Big((T-s)-(T-\lambda_{m})\Big){\mathcal{T}}_{m}\Big((T-\lambda_m)-(T-\lambda_{m+1})\Big)...{\mathcal{T}}_{l-1}\Big((T-\lambda_{l-1})-(T-t)\Big)\\&={{U}}_{\Lambda_T}(T-s,T-t)\end{aligned}$$ Finally, the desired equality follows by passing to the limit as $|\Lambda|=|\Lambda_T|\to 0.$
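In finite dimensions the construction used in this proof can be carried out explicitly. The following sketch is a toy matrix-valued example (not part of the infinite-dimensional argument): it builds the frozen-coefficient products (\[promenade1\]) on a uniform subdivision, using midpoint values of $t\mapsto{\mathcal{A}}(t)$ in place of the interval averages (\[eq:op-moyen integrale\]), and checks the identity of Proposition \[equalities: adjoint EVF and EVF\] numerically.

```python
import numpy as np
from scipy.linalg import expm

T, n = 1.0, 400
grid = np.linspace(0.0, T, n + 1)

def A(t):
    # toy non-autonomous 2x2 "operator" (bounded, so no form machinery is needed)
    return np.array([[2.0 + np.cos(2 * np.pi * t), 0.5 * t],
                     [0.3j, 1.5 + t ** 2]], dtype=complex)

def U_Lambda(A_of_t, s, t):
    """Frozen-coefficient propagator: ordered product of matrix exponentials over the
    subdivision, with later factors multiplying on the left, cf. (promenade1)."""
    U = np.eye(2, dtype=complex)
    for k in range(n):
        a, b = max(grid[k], s), min(grid[k + 1], t)
        if b > a:
            U = expm(-(b - a) * A_of_t(0.5 * (grid[k] + grid[k + 1]))) @ U
    return U

A_r_star = lambda t: A(T - t).conj().T     # returned adjoint A_r^*(t) = A^*(T - t)

s, t = 0.25, 0.625
lhs = U_Lambda(A_r_star, s, t).conj().T    # [U_r^*(t, s)]'
rhs = U_Lambda(A, T - t, T - s)            # U(T - s, T - t)
print(np.max(np.abs(lhs - rhs)))           # ~1e-15: the identity of the proposition holds
```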
\[remark-rescaling\] The coerciveness assumption in (\[eq:continuity-nonaut\]) may be replaced with $$\label{eq:Ellipticity-nonaut2}
{\operatorname{Re}}{\mathfrak{a}}(t,u,u) +\omega\Vert u\Vert_H^2\ge \alpha \|u\|^2_V \quad ( t\in [0,T], u\in V)$$ for some $\omega\in {\mathbb{R}}.$ In fact, ${\mathfrak{a}}$ satisfies (\[eq:Ellipticity-nonaut2\]) if and only if the form ${\mathfrak{a}}_\omega$ given by ${\mathfrak{a}}_\omega(t;\cdot,\cdot):={\mathfrak{a}}(t;\cdot,\cdot)+\omega (\cdot{\, \vert \,}\cdot)$ satisfies the second inequality in (\[eq:continuity-nonaut\]). Moreover, if $u\in MR(V,V')$ and $v:=e^{-\omega \cdot}u,$ then $v\in MR(V,V')$ and $u$ satisfies (\[evolution equation u(s)=x\]) if and only if $v$ satisfies $$\dot{v}(t)+(\omega+\mathcal A(t))v(t)=0 \ \ \ t{\rm
-a.e.} \hbox{ on} \ [s,T],\
\ \ \ \ v(s)=x. $$
Norm continuous evolution family {#Sec2 Norm continuity}
================================
In this section we assume that the non-autonomous form ${\mathfrak{a}}$ satisfies (\[eq:continuity-nonaut\])-(\[square property\]). As mentioned in the introduction, under these assumptions the Cauchy problem (\[evolution equation u(s)=x\]) has $L^2$-maximal regularity in $H$, so that for each $x\in V,$ $$U(\cdot,s)x\in {\textit{MR}\,}(V,H):={\textit{MR}\,}(s,T;V,H):=L^2(s,T;V)\cap H^1(s,T;H).$$ Moreover, $ U(\cdot,s)x\in C([s,T];V)$ by [@Ar-Mo15 Theorem 4.2]. From [@LH17 Theorem 2.7] we know that the restriction of ${{U}}$ to $V$ defines an evolution family which is norm continuous. The same is also true for the Cauchy problem (\[evolution equation u(s)=x returned\]) and the associated evolution family $U^*_r$ since the returned adjoint form ${\mathfrak{a}}_r^*$ inherits all properties of ${\mathfrak{a}}.$
In the following we establish that ${{U}}$ can be extended to a strongly continuous evolution family on $V'.$
\[Lemma EVF on V’\] Let ${\mathfrak{a}}$ be a non-autonomous sesquilinear form satisfying (\[eq:continuity-nonaut\])-(\[square property\]). Then ${{U}}$ can be extended to a strongly continuous evolution family on $V^\prime,$ which we still denote ${{U}}.$
Let $x\in H$ and $(t,s)\in\overline {\Delta}.$ Then using Proposition \[equalities: adjoint EVF and EVF\] and the fact that ${{U}}$ and ${{U}}_r^*$ define both strongly continuous evolution families on $V$ and $H$ we obtain that $$\begin{aligned}
\|{{U}}(t,s)x\|_{V'}&=\sup_{\underset{v\in V}{\|v\|_V=1}}{\, \vert \,}<{{U}}(t,s)x, v>{\, \vert \,}\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}{\, \vert \,}({{U}}(t,s)x|v)_H{\, \vert \,}=\sup_{\underset{v\in V}{\|v\|_V=1}}{\, \vert \,}(x| {{U}}(t,s)^\prime v)_H{\, \vert \,}\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}{\, \vert \,}(x|{{U}}_r^*(T-s,T-t)v)_H{\, \vert \,}\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}{\, \vert \,}<x,{{U}}_r^*(T-s,T-t)v>{\, \vert \,}\\&\leq \|x\|_{V^\prime}\|{{U}}_r^*(T-s,T-t)\|_{{\mathcal{L}}(V)}
\\&\leq c\|x\|_{V^\prime}\end{aligned}$$ where $c>0$ is such that $\underset{t,s\in\Delta}{\sup}\|{{U}}_r^*(t,s)\|_{{\mathcal{L}}(V)}\leq c.$ Thus, the claim follows since $H$ is dense in $V'.$
Let $\Delta:=\{(t,s)\in\overline{\Delta}{\, \vert \,}t> s\}.$ The following theorem is the main result of this paper.
\[main result\] Let ${\mathfrak{a}}$ be a non-autonomous sesquilinear form satisfying (\[eq:continuity-nonaut\])-(\[square property\]) and let $\{U(t,s):\ (t,s)\in\overline{\Delta}\}$ be given by (\[evolution family\]). Then the function $(t,s)\mapsto {{U}}(t,s)$ is norm continuous on $\Delta$ with values in ${\mathcal{L}}(X)$ for $X=V, H$ and $V'.$
The norm continuity for ${{U}}$ in the case where $X=V$ follows from [@LH17 Theorem 2.7]. On the other hand, applying [@LH17 Theorem 2.7] to ${\mathfrak{a}}_r^*$ we obtain that ${{U}}_r^*$ is also norm continuous on $\Delta$ with values in ${\mathcal{L}}(V).$ Using Proposition \[equalities: adjoint EVF and EVF\], we obtain by similar arguments as in the proof of Lemma \[Lemma EVF on V’\] $$\|{{U}}(t,s)-{{U}}(t',s')\|_{{\mathcal{L}}(V')}\leq \|{{U}}_r^*(T-s,T-t)-{{U}}_r^*(T-s',T-t')\|_{{\mathcal{L}}(V)}$$ for all $(t,s), (t',s')\in \Delta.$ This implies that ${{U}}$ is norm continuous on $\Delta$ with values in ${\mathcal{L}}(V').$ Finally, the norm continuity in $H$ follows then by interpolation.
Examples {#S application}
========
This section is devoted to some relevant examples illustrating the theory developed in the previous sections. We refer to [@Ar-Mo15] and [@Ou15] and the references therein for further examples.
$(i)$ **Laplacian with time dependent Robin boundary conditions on an exterior domain.** Let $\Omega$ be a bounded domain of ${\mathbb{R}}^d$ with Lipschitz boundary $\Gamma.$ Denote by $\sigma$ the $(d-1)$-dimensional Hausdorff measure on $\Gamma.$ Let $\Omega_{ext}$ denote the exterior domain of $\Omega,$ i.e., $\Omega_{ext}:={\mathbb{R}}^d\setminus\overline{\Omega}.$ Let $T>0$ and $\alpha>1/4.$ Let $\beta:[0,T]\times \Gamma {\longrightarrow}{\mathbb{R}}$ be a bounded measurable function such that $$|\beta(t,\xi)-\beta(s,\xi)|\leq c|t-s|^\alpha$$ for some constant $c>0$ and every $t,s\in [0,T], \xi\in \Gamma.$ We consider the form ${\mathfrak{a}}:[0,T]\times H^1(\Omega_{ext})\times H^1(\Omega_{ext}){\longrightarrow}{\mathbb{C}}$ defined by $${\mathfrak{a}}(t;u,v):=\int_{\Omega_{ext}}\nabla u\cdot\nabla \bar v\, {\rm d}\xi+\int_{\Gamma}\beta(t,\cdot){{u}{_{|\Gamma}}} {{\bar v}{_{|\Gamma}}} {\rm d}\sigma$$ where $u\to {{u}{_{|\Gamma}}}: H^1(\Omega_{ext}) {\longrightarrow}L^2(\Gamma,\sigma)$ is the trace operator which is bounded [@AdFou Theorem 5.36]. The operator $A(t)$ associated with ${\mathfrak{a}}(t;\cdot,\cdot)$ on $H:=L^2(\Omega_{ext})$ is minus the Laplacian with time dependent Robin boundary conditions $$\partial_\nu u(t)+\beta(t,\cdot)u=0\ \text{ on } \Gamma.$$ Here $\partial_\nu$ is the weak normal derivative. Thus the domain of $A(t)$ is the set $$D(A(t))=\Big\{ u\in H^1(\Omega_{ext}) {\, \vert \,}{\mathop{}\!\mathbin\bigtriangleup}u\in L^2(\Omega_{ext}), \partial_\nu u(t)+\beta(t,\cdot){{u}{_{|\Gamma}}}=0 \Big\}$$ and for $u\in D(A(t)), A(t)u:=-{\mathop{}\!\mathbin\bigtriangleup}u.$ Thus similarly as in [@Ar-Mo15 Section 5] one obtains that ${\mathfrak{a}}$ satisfies (\[eq:continuity-nonaut\])-(\[square property\]) with $\gamma:=r_0+1/2$ and $\omega(t)=t^\alpha$ where $r_0\in(0,1/2)$ is such that $r_0+1/2<2\alpha.$ We note that in [@Ar-Mo15 Section 5] the authors considered the Robin Laplacian on the bounded Lipschitz domain $\Omega.$ The main ingredient used there is that the trace operators are bounded from $H^{s}(\Omega)$ with values in $H^{s-1/2}(\Gamma,\sigma)$ for all $1/2<s<3/4.$ This boundary trace embedding theorem holds also for unbounded Lipschitz domains, and thus for $\Omega_{ext}$, see [@Mclean Theorem 3.38] or [@Cos Lemma 3.6].
Thus applying [@Ar-Mo15 Theorem 4.1] and Theorem \[main result\] we obtain that the non-autonomous Cauchy problem
$$\label{RobinLpalacian}
\left\{
\begin{aligned}
\dot {u}(t) - {\mathop{}\!\mathbin\bigtriangleup}u(t)& = 0, \ u(0)=x\in H^1(\Omega_{ext })
\\ \partial_\nu u(t)+\beta(t,\cdot){u}&=0 \ \text{ on } \Gamma
\end{aligned} \right.$$
has $L^2$-maximal regularity in $L^2(\Omega_{ext})$ and its solution is governed by an evolution family ${{U}}(\cdot,\cdot)$ that is norm continuous on each of the spaces $V$, $L^2(\Omega_{ext})$ and $V'.$
Non-autonomous Schrödinger operators
------------------------------------
Let $m_0, m_1\in L_{Loc}^1({\mathbb{R}}^d)$ and let $m:[0,T]\times{\mathbb{R}}^d{\longrightarrow}{\mathbb{R}}$ be a measurable function for which there exist positive constants $\alpha_1,\alpha_2$ and $\kappa$ such that $$\alpha_1 m_0(\xi)\leq m(t,\xi)\leq \alpha_2 m_0(\xi),
\quad \text{ }\
\quad {\, \vert \,}m(t,\xi)-m(s,\xi){\, \vert \,}\leq \kappa|t-s|m_1(\xi)$$ for almost every $\xi\in {\mathbb{R}}^d$ and every $t,s\in [0,T].$ Assume moreover that there exist a constant $c>0$ and $s\in[0,1]$ such that for $u\in C_c^\infty({\mathbb{R}}^d)$ $$\label{additional AS Svhrödinger example}\int_{{\mathbb{R}}^d}m_1(\xi)|u(\xi)|^2 {\mathrm{d}}\xi\leq c\|u\|_{H^s({\mathbb{R}}^d)}.$$Consider the non-autonomous Cauchy problem $$\label{Schroedinger operator}
\left\{
\begin{aligned}
\dot {u}(t) - &{\mathop{}\!\mathbin\bigtriangleup}u(t)+m(t,\cdot)u(t) = 0,
\\ u(0)&=x\in V.
\end{aligned} \right.$$ Here $A(t)=-{\mathop{}\!\mathbin\bigtriangleup}+m(t,\cdot)$ is associated with the non-autonomous form ${\mathfrak{a}}:[0,T]\times V\times V{\longrightarrow}{\mathbb{C}}$ given by $$V:=\left\{u\in H^1({\mathbb{R}}^d): \int_{{\mathbb{R}}^d}m_0(\xi)|u(\xi)|^2 {\mathrm{d}}\xi <\infty \right\}$$and $${\mathfrak{a}}(t;u,v)=\int_{{\mathbb{R}}^d}\nabla u\cdot\nabla v {\rm d} \xi+\int_{{\mathbb{R}}^d}m(t,\xi)|u(\xi)|^2 {\mathrm{d}}\xi.$$ The form ${\mathfrak{a}}$ satisfies also (\[eq:continuity-nonaut\])-(\[square property\]) with $\gamma:=s$ and $\omega(t)=t^\alpha$ for $\alpha>\frac{s}{2}$ and $s\in[0,1].$
This example is taken from [@Ou15 Example 3.1]. Using our Theorem \[main result\] we obtain that the solution of the Cauchy problem (\[Schroedinger operator\]) is governed by a norm continuous evolution family on $L^2({\mathbb{R}}^d), V$ and $V'.$
[999]{}
P. Acquistapace. Evolution operators and strong solutions of abstract linear parabolic equations. *Differential Integral Equations* 1 (1988), no. 4, 433-457. R. A. Adams, J. J. F. Fournier. *Sobolev spaces.* Second edition. Pure and Applied Mathematics (Amsterdam), 140. Elsevier/Academic Press, Amsterdam, 2003. W. Arendt. *Heat kernels.* $9^{th}$ Internet Seminar (ISEM) 2005/2006. Available at. https://www.uni-ulm.de/mawi/iaa/members/professors/arendt.html
W. Arendt, S. Monniaux. Maximal regularity for non-autonomous Robin boundary conditions. Math. Nachr. 1-16(2016) /DOI: 10.1002/mana.201400319
H. Brézis. *Functional Analysis, Sobolev Spaces and Partial Differential Equations*. Springer, Berlin 2011. M. Costabel. Boundary integral operators on Lipschitz domains: elementary results. *SIAM J. Math. Anal.* 19 (1988), no. 3, 613-626. D. Daners. Heat kernel estimates for operators with boundary conditions. Math. Nachr. 217 (2000), 13-41. R. Dautray and J.L. Lions. *Analyse Mathématique et Calcul Numérique pour les Sciences et les Techniques.* Vol. 8, Masson, Paris, 1988.
O. El-Mennaoui, H. Laasri. [*Stability for non-autonomous linear evolution equations with $L^p-$ maximal regularity.*]{} *Czechoslovak Mathematical Journal.* 63 (138) 2013.
O. El-Mennaoui, H. Laasri. On evolution equations governed by non-autonomous forms. Archiv der Mathematik (2016), 1-15, DOI 10.1007/s00013-016-0903-5 K. J. Engel, R. Nagel. One-Parameter Semigroups for Linear Evolution Equations, Springer-Verlag, 2000. J.-L. Lions’ problem concerning maximal regularity of equations governed by non-autonomous forms. Ann. Inst. H. Poincaré Anal. Non Linéaire 34 (2017) T. Kato. *Perturbation theory for linear operators.* Springer-Verlag, Berlin 1992.
H. Komatsu, Abstract analyticity in time and unique continuation property of solutions of a parabolic equation, J. Fac. Sci. Univ. Tokyo, Sect. 1 9 (1961), 1-11. H. Laasri. Regularity properties for evolution family governed by non-autonomous forms. Archiv der Mathematik (2018), https://doi.org/10.1007/s00013-018-1175-z. J.L. Lions. *Equations Différentielles Opérationnelles et Problèmes aux Limites.* Springer-Verlag, Berlin, Göttingen, Heidelberg, 1961. A. Lunardi. Differentiability with respect to $(t,s)$ of the parabolic evolution operator. Israel J. Math. 68 (1989), no. 2, 161-184. W. Mclean. *Strongly elliptic systems and boundary integral equations.* Cambridge University Press, Cambridge, 2000. E. M. Ouhabaz. *Maximal regularity for non-autonomous evolution equations governed by forms having less regularity.* Arch. Math. 105 (2015), 79-91.
Princeton Univ. Press 2005. A. Sani, H. Laasri, [ Evolution Equations governed by Lipschitz Continuous Non-autonomous Forms.]{} *Czechoslovak Mathematical Journal.* 65 (140) (2015), 475-491. R. E. Showalter. *Monotone Operators in Banach Space and Nonlinear Partial Differential Equations.* Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1997.
H. Tanabe. *Equations of Evolution.* Pitman 1979.
[^1]: $^*$This work is supported by Deutsche Forschungsgemeinschaft DFG (Grant LA 4197/8-1)
---
abstract: 'Federated Learning (FL) enables learning a shared model across many clients without violating the privacy requirements. One of the key attributes in FL is the heterogeneity that exists in both resource and data due to the differences in computation and communication capacity, as well as the quantity and content of data among different clients. We conduct a case study to show that heterogeneity in resource and data has a significant impact on training time and model accuracy in conventional FL systems. To this end, we propose [<span style="font-variant:small-caps;">TiFL</span>]{}, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity. To further tame the heterogeneity caused by non-IID (Independent and Identically Distributed) data and resources, [<span style="font-variant:small-caps;">TiFL</span>]{} employs an *adaptive* tier selection approach to update the tiering on-the-fly based on the observed training performance and accuracy over time. We prototype [<span style="font-variant:small-caps;">TiFL</span>]{} in an FL testbed following Google’s FL architecture and evaluate it using popular benchmarks and the state-of-the-art FL benchmark LEAF. Experimental evaluation shows that [<span style="font-variant:small-caps;">TiFL</span>]{} outperforms the conventional FL in various heterogeneous conditions. With the proposed adaptive tier selection policy, we demonstrate that [<span style="font-variant:small-caps;">TiFL</span>]{} achieves much faster training performance while keeping the same (and in some cases better) test accuracy across the board.'
author:
- 'Zheng Chai\*'
- 'Ahsan Ali\*'
- 'Syed Zawad\*'
- Stacey Truex
- Ali Anwar
- Nathalie Baracaldo
- Yi Zhou
- Heiko Ludwig
- Feng Yan
- Yue Cheng
bibliography:
- 'main.bib'
title: '[[<span style="font-variant:small-caps;">TiFL</span>]{}]{}: A Tier-based Federated Learning System'
---
Conclusion {#sec:conclusion}
==========
In this paper, we investigate and quantify the impact of heterogeneity on FL systems (“decentralized virtual supercomputers”). Based on the observations of our case study, we propose and prototype a Tier-based Federated Learning System called [<span style="font-variant:small-caps;">TiFL</span>]{}. Tackling the resource and data heterogeneity, [[<span style="font-variant:small-caps;">TiFL</span>]{}]{} employs a tier-based approach that groups clients into tiers by their training response latencies and selects clients from the same tier in each training round. To address the challenge that data heterogeneity information cannot be directly measured due to the privacy constraints, we further design an *adaptive* tier selection approach that enables [<span style="font-variant:small-caps;">TiFL</span>]{} to be data heterogeneity aware and to outperform conventional FL in various heterogeneous scenarios: *resource heterogeneity*, *data quantity heterogeneity*, *non-IID data heterogeneity*, and their combinations. Specifically, [<span style="font-variant:small-caps;">TiFL</span>]{} achieves an improvement over conventional FL of up to a 3$\times$ speedup in overall training time and 6% in accuracy.
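As a rough sketch of the tiering idea summarized above (not the actual [<span style="font-variant:small-caps;">TiFL</span>]{} implementation), clients can be grouped by their profiled round latency and each round's participants drawn from a single tier; the credit-based tier choice below is only a simple stand-in for the adaptive, accuracy-aware selection policy described in the paper.

```python
import random

def assign_tiers(latencies, num_tiers):
    """Sort clients by profiled training latency and split them into equal-sized tiers."""
    ranked = sorted(latencies, key=latencies.get)
    size = max(1, len(ranked) // num_tiers)
    return [ranked[i:i + size] for i in range(0, len(ranked), size)][:num_tiers]

def select_round(tiers, credits, clients_per_round):
    """Pick one tier (weighted by remaining selection credits, a placeholder policy)
    and sample that round's participating clients from it."""
    weights = [credits[i] + 1 for i in range(len(tiers))]
    tier_id = random.choices(range(len(tiers)), weights=weights)[0]
    credits[tier_id] = max(0, credits[tier_id] - 1)
    return random.sample(tiers[tier_id], min(clients_per_round, len(tiers[tier_id])))

latencies = {f"client{i}": random.uniform(1, 30) for i in range(50)}  # profiled sec/round
tiers = assign_tiers(latencies, num_tiers=5)
credits = {i: 20 for i in range(len(tiers))}
print(select_round(tiers, credits, clients_per_round=5))
```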
---
abstract: 'Phase-resolved spectroscopy of the newly discovered X-ray transient MAXIJ0556$-$332 has revealed the presence of narrow emission lines in the Bowen region that most likely arise on the surface of the mass donor star in this low mass X-ray binary. A period search of the radial velocities of these lines provides two candidate orbital periods (16.43$\pm$0.12 and 9.754$\pm$0.048 hrs), both of which differ from the potential X-ray periods reported so far. Assuming that MAXIJ0556$-$332 is a relatively high inclination system that harbors a precessing accretion disk in order to explain its X-ray properties, it is only possible to obtain a consistent set of system parameters for the longer period. These assumptions imply a mass ratio of $q$$\simeq$0.45, a radial velocity semi-amplitude of the secondary of $K_2$$\simeq$190 km s$^{-1}$ and a compact object mass of the order of the canonical neutron star mass, making a black hole nature for MAXIJ0556$-$332 unlikely. We also report the presence of strong NIII emission lines in the spectrum, from which we infer a high N/O abundance. Finally, we note that the strength of all emission lines shows a continuing decay over the $\simeq$1 month of our observations.'
author:
- |
R. Cornelisse$^{1,2}$[^1], P. D’Avanzo$^3$, S. Campana$^3$, J. Casares$^{1,2}$, P.A. Charles$^{4,5}$, G. Israel$^6$, T. Muñoz-Darias$^3$, K. O’Brien$^7$, D. Steeghs$^8$, L. Stella$^6$, M.A.P. Torres$^9$\
$^{1}$ Instituto de Astrofisica de Canarias, Via Lactea, La Laguna E-38200, Santa Cruz de Tenerife, Spain\
$^2$ Departamento de Astrofisica, Universidad de La Laguna, E-38205 La Laguna, Tenerife, Spain\
$^3$INAF- Osservatorio Astronomico di Brera, via E. Bianchi 46, 23807 Merate, Italy\
$^4$ South Africa Astronomical Observatory, P.O. Box 9, Observatory 7935, South Africa\
$^5$ School of Physics and Astronomy, University of Southampton, Highfield, Southampton SO17 1BJ, UK\
$^6$ INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monteporzio Catone (Rome), Italy\
$^7$ Department of Physics, University of California, Santa Barbara, CA, USA\
$^8$ Department of Physics, University of Warwick, Coventry, CV4 7AL, UK\
$^9$ SRON, Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA, Utrecht, The Netherlands\
date: 'Accepted Received ; in original form '
title: 'The nature of the X-ray transient MAXIJ0556$-$332'
---
\[firstpage\]
accretion, accretion disks – stars:individual (MAXIJ0556$-$332) – X-rays:binaries.
Introduction
============
Amongst the brightest objects in the X-ray sky are the low mass X-ray binaries (LMXBs), exotic systems where the primary is a compact object (either a neutron star or black hole) and the secondary a low-mass ($<$1$M_\odot$) star. Since the secondary is overflowing its Roche-Lobe, matter is accreted via an accretion disk onto the compact object, giving rise to the observed X-rays. The majority of these LMXBs, the so-called X-ray transients, only show sporadic X-ray activity, but most of the time remain in a state of low-level activity, referred to as quiescence (see e.g. Psaltis 2006 for an overview and more detailed references).
Due to reprocessing of the X-rays, mainly in the outer accretion disk, a LMXB also becomes much brighter in the optical during a transient outburst (e.g. Charles & Coe 2006). Unfortunately, this reprocessed emission completely dominates the optical light, making radial velocity studies using spectral features from the donor star impossible in most cases, while during their quiescent state most transients become too faint for such studies. A powerful alternative method of investigation was opened when Steeghs & Casares (2002) detected narrow emission line components that originated from the irradiated face of the donor star in ScoX-1. These narrow features were most obvious in the Bowen blend (NIII $\lambda$4634/4640 and CIII $\lambda$4647/4650), which is the result of UV fluorescence from the hot inner disk, and gave rise to the first radial velocity curve of the mass donor in Sco X-1. Thus far, high resolution phase-resolved spectroscopic studies have revealed these narrow lines in more than a dozen optically bright LMXBs, including several transients during their outburst, leading (for most of them) to the first constraint on the mass of the compact object (see Cornelisse et al. 2008 for an overview).
On January 11 2011 the X-ray transient MAXIJ0556$-$332 (hereafter MAXIJ0556) was discovered by MAXI/GSC (Matsumura et al. 2011), and localized shortly thereafter by the Swift X-ray telescope (Kennea et al. 2011). Eclipse-like features in the X-ray light-curves led to several possible orbital periods (Strohmayer 2011, Maitra et al. 2011a). Although follow-up X-ray observations excluded all except one (9.33 hr) of these periods (Belloni et al. 2011), further observations were able to discard this period as well (Belloni private communication). The optical counterpart was quickly confirmed by Halpern (2011) as an $R$$\simeq$17.8 object, making it an excellent target for high resolution spectroscopy. Initial studies of the X-ray spectral and timing variations were unable to determine the nature of the compact object in MAXIJ0556 (Belloni et al. 2011), but more detailed studies by Homan et al. (2011) suggest that its behavior is similar to that of some neutron stars at high accretion rates (the so-called Z-sources). Together with its optical and radio properties this strongly suggests that MAXIJ0556 harbors a neutron star (Russell et al. 2011; Coriat et al. 2011).
In order to identify the nature of MAXIJ0556 using the narrow components in the Bowen region we triggered our Target of Opportunity (ToO) observations using FORS 2 on the Very Large Telescope (VLT) and complemented it with Directors Discretionary Time (DDT) using X-Shooter, which is also on the VLT. In this paper we present the results of these observations. In Sect.2 we discuss the observations in detail, and in Sect.3 we show the results. We finish with a discussion in Sect.4 and present further evidence that MAXIJ0556 is most likely a neutron star LMXB.
Observations and Data Reduction
===============================
| Date (yy-mm-dd) | Time (UT)   | Setting   | Int. (s) | No. | Seeing (arcsec) |
|-----------------|-------------|-----------|----------|-----|-----------------|
| 2011-01-19      | 00:44-04:50 | FORS      | 671      | 20  | 0.45-0.97       |
| 2011-01-21      | 00:54-01:51 | FORS      | 671      | 4   | 0.95-1.14       |
| 2011-01-22      | 00:57-01:52 | FORS      | 671      | 4   | 0.39-0.45       |
| 2011-01-23      | 00:41-01:37 | FORS      | 671      | 4   | 0.74-0.98       |
| 2011-01-24      | 00:50-01:45 | FORS      | 671      | 4   | 0.46-0.62       |
| 2011-01-25      | 02:38-03:32 | FORS      | 671      | 4   | 0.39-0.77       |
| 2011-02-07      | 00:48-02:19 | FORS      | 671      | 7   | 0.47-0.62       |
| 2011-02-14      | 00:28-01:22 | X-Shooter | 595      | 4   | 1.34-1.69       |
| 2011-02-14      | 03:45-04:39 | X-Shooter | 595      | 4   | 0.79-1.02       |

: Observation log of MAXIJ0556. Indicated are the observing dates and times, instrument used (Setting), integration time for each spectrum (Int.), number of spectra obtained during each night (No.) and the variation of the seeing during these observations (as measured by the DIMM on Paranal). \[obs\]
On 20 January 2011 we triggered our ToO to observe MAXIJ0556 over a period of $\simeq$3 weeks using the FORS 2 instrument on the ESO/VLT. For each exposure we used the 1400V volume-phase holographic grism with a slit width of 0.7$''$, leading to a wavelength coverage of $\lambda$$\lambda$4512-5815 with a resolution of 87 km s$^{-1}$ (FWHM). Furthermore, these observations were complemented with 2$\times$1 hr of X-Shooter DDT observations (X-Shooter is also mounted on the VLT). Although both blocks were observed on the same night (14 February 2011), they were separated by several hours. For the X-Shooter observations, which cover the full UV-optical-IR wavelength band, we used a 1.0$''$ slit for the UVB arm (and 0.9$''$ for the other two arms), resulting in a resolution of 66 km s$^{-1}$ (FWHM) around the Bowen region. In Table\[obs\] we give an overview of the observations.
For the FORS observations we de-biased and flat-fielded all the images and used optimal extraction techniques (from the PAMELA software package) to maximize the signal-to-noise ratio of the extracted spectra (Horne 1986). Wavelength calibration was performed using the daytime He, Ne, Hg and Cd arc lamp exposures. We determined the pixel-to-wavelength scale using a 4th order polynomial fit to 10 reference lines giving a dispersion of 0.64 Åpixel$^{-1}$ and rms scatter $<$0.01 Å. The X-Shooter observations were reduced using the pipeline v.1.2.2 provided by ESO (see Goldoni et al. 2006 for more information), resulting in three 1-dimensional spectra (for each arm of the instrument) per exposure that were flux calibrated using a flux standard obtained during the same night. For the corresponding analysis of the full dataset we used the MOLLY package.
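As a rough illustration of the wavelength-calibration step described above, a fourth-order polynomial can be fitted to the measured pixel centroids of the arc-lamp reference lines. The pixel positions and the dispersion relation used to generate the wavelengths below are invented for the example (they are not the actual He/Ne/Hg/Cd measurements); only the fitting procedure itself is the point.

```python
import numpy as np

# Invented arc-line pixel centroids; the wavelengths are generated from an
# assumed smooth dispersion relation purely to illustrate the fit.
pixels = np.array([102.3, 310.8, 455.1, 620.4, 798.9, 950.2,
                   1123.7, 1300.5, 1480.2, 1655.9])
lams = 4405.0 + 0.63 * pixels + 4.0e-6 * pixels**2     # assumed pixel -> Angstrom

coeffs = np.polyfit(pixels, lams, deg=4)               # 4th-order calibration
residuals = np.polyval(coeffs, pixels) - lams
print("rms scatter :", np.sqrt(np.mean(residuals**2)), "Angstrom")
print("dispersion  :", np.polyval(np.polyder(coeffs), pixels.mean()), "Angstrom/pixel")
```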
Finally, we reduced our $B$-band acquisition images from the FORS observations using standard reduction techniques (i.e. performed bias subtraction and flat-fielding). For each image the exposure time was 5 seconds. We only had one image at the start of each observing block, resulting in a total of 7 images. From the reduced images we performed a photometric calibration against the USNO $B$-band measurements from nearby stars, and list the results in Table\[ew\]. We note that for our X-Shooter observations we only have a $g$’ broad band image, and have not included its magnitude in the Table.
Data Analysis
=============
Spectrum
--------
First we present the average normalized UV-optical spectrum of MAXIJ0556 obtained by X-Shooter (top and middle panels) and FORS 2 (bottom) in Fig.\[spec\], in which we have indicated the most prominent lines. The IR arm of X-Shooter was completely dominated by noise, and we therefore did not include this part of the spectrum in Fig.\[spec\]. This figure emphasizes the remarkable contrast in emission line strength between the FORS and X-Shooter observations. In the latter only the most prominent emission lines that are typically observed in LMXBs (i.e. the Balmer and He lines and also the Bowen blend) are present, but they are much weaker than during the FORS observations.
| Date (yy-mm-dd) | EW HeII (Å)   | EW H$\beta$ (Å) | Magnitude $B$ |
|-----------------|---------------|-----------------|---------------|
| 2011-01-19      | 4.32$\pm$0.02 | 6.15$\pm$0.02   | 17.1$\pm$0.1  |
| 2011-01-21      | 3.42$\pm$0.03 | 5.61$\pm$0.03   | 17.0$\pm$0.1  |
| 2011-01-22      | 2.82$\pm$0.02 | 3.19$\pm$0.02   | 17.1$\pm$0.1  |
| 2011-01-23      | 2.00$\pm$0.03 | 2.81$\pm$0.03   | 16.9$\pm$0.1  |
| 2011-01-24      | 1.86$\pm$0.03 | 2.64$\pm$0.03   | 17.0$\pm$0.1  |
| 2011-01-25      | 1.83$\pm$0.02 | 3.31$\pm$0.03   | 17.0$\pm$0.1  |
| 2011-02-07      | 1.19$\pm$0.02 | 0.79$\pm$0.02   | 16.4$\pm$0.1  |
| 2011-02-14      | 1.04$\pm$0.06 | 0.25$\pm$0.06   | –             |

: Overview of the average Equivalent Width (EW) of HeII$\lambda4686$ and H$\beta$, the two most prominent emission lines in the FORS spectra, during each observing night. Also listed is the corresponding $B$ magnitude from the acquisition images obtained at the beginning of each block of FORS observations. Also included, as the final entry, are the EW measurements from the average X-Shooter spectrum. \[ew\]
Turning first to the wealth of emission lines in the average FORS spectrum shown in Fig.\[spec\], we note that the emission lines typically seen in LMXBs (i.e. H$\beta$, HeII $\lambda$4686, Bowen and all HeI lines) are all present. Furthermore, there are several other emission lines which are not commonly observed in LMXBs. Although these uncommon lines become fainter as a function of time in a similar way to H$\beta$ and HeII $\lambda$4686 (see above), they appear to be present in all spectra. We therefore conclude that they are real and not an artifact of our reduction. In order to identify these lines, we first shifted the average spectrum so that the central wavelengths of several HeI lines corresponded to their rest wavelengths. Using the atomic line list of van Hoof & Verner (1997) we then noted that most of the uncommon emission lines correspond to the strongest NIII or NII transitions or their multiplets. We therefore tentatively conclude that all these lines are due to NIII or NII and have indicated them as such in Fig.\[spec\].
Also, given the suggestion by Maitra et al. (2011b) that MAXIJ0556 could have an extremely high N/O abundance, we also checked for the presence of strong O and C lines by cross-correlating the line list obtained from ultra-compact C/O binaries by Nelemans et al. (2004). This shows that only CIII$\lambda$4652, which is typically the strongest C or O line in most X-ray binaries (see e.g. Steeghs & Casares 2002), is clearly present and its relative strength is comparable to that of other X-ray binaries.
In order to understand our second observation, the much weaker emission lines in the X-Shooter spectra, we examined the average FORS spectra for each individual night in more detail. We note that there is a large variability in the emission lines from night to night. In order to quantify this variability we averaged together all spectra taken during each observing night and measured the Equivalent Width (EW) of the two strongest emission lines (i.e. HeII$\lambda$4686 and H$\beta$) and list the result in Table\[ew\]. Interestingly, the EW for both lines shows a decreasing trend as a function of time (with the exception of H$\beta$ during January 25), while the $B$-band magnitude stays more or less constant (except during the last FORS observation). Although this decrease in EW resembles a power-law decay, almost all individual points strongly deviate from their best power-law fit, and we decided against providing any fit results. We conclude that the strength of the emission lines in MAXIJ0556 is independent of the continuum flux, and shows a continuing decay over the $\simeq$1 month of our observations.
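For clarity, the equivalent widths in Table \[ew\] are of the usual form ${\rm EW}=\int (F_\lambda/F_{\rm c}-1)\,{\rm d}\lambda$, quoted here as positive for emission. A minimal numerical version is given below; the sign convention and the trapezoidal integration are our choices for the illustration, not necessarily those used on the FORS data.

```python
import numpy as np

def equivalent_width(wave, flux, continuum):
    """EW = integral of (F/Fc - 1) dlambda across the line; positive for an
    emission line with this sign convention."""
    return np.trapz(flux / continuum - 1.0, wave)
```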
Radial Velocities
-----------------
A close inspection of the Bowen region shows that it consists of several narrow components, and in Fig.\[average\] we show a close-up of this region. In most individual spectra we could identify two (sometimes three) individual components. Following previous observations of the Bowen region (see e.g. Cornelisse et al. 2008 for an overview) we identified the strongest component with NIII $\lambda$4640.64 and the second strongest as CIII $\lambda$4647.42. In the cases that a third component was present this was identified as NIII $\lambda$4634.12, but we could not find evidence for CIII $\lambda$4650.2, a line that was clearly detected in e.g. ScoX-1 (Steeghs & Casares 2002).
The narrow components in the Bowen region showed clear velocity shifts from night to night. In order to obtain radial velocities from them we averaged two consecutive FORS spectra together to increase the signal-to-noise, while for the X-Shooter spectra we averaged all spectra from the same observing block. This provided a total of 26 spectra from which radial velocities could be measured (note that only the very last spectrum of FORS was not grouped with any other spectrum). Following Steeghs & Casares (2002) we fitted the narrow components with 3 Gaussians (NIII$\lambda$4634.12/4640.64 and CIII$\lambda$4647.42) under the assumption that they all have a common radial velocity but an independent strength. Using a least squares technique we determined the best common radial velocity and corresponding error.
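A minimal sketch of this fitting step is given below: three Gaussians tied to a single velocity offset, with independent amplitudes and, for simplicity, a shared width. The rest wavelengths are those quoted above; the width parameter, the initial guesses and the use of `scipy.optimize.curve_fit` are illustrative choices, not necessarily those applied to the FORS and X-Shooter spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458                                   # speed of light, km/s
REST = np.array([4634.12, 4640.64, 4647.42])         # NIII, NIII, CIII (Angstrom)

def bowen_model(lam, v, a1, a2, a3, sigma):
    """Three Gaussians sharing one radial velocity v (km/s) and width sigma
    (Angstrom), with independent amplitudes; assumes the local continuum
    has already been subtracted."""
    centres = REST * (1.0 + v / C_KMS)
    model = np.zeros_like(lam, dtype=float)
    for c, a in zip(centres, (a1, a2, a3)):
        model += a * np.exp(-0.5 * ((lam - c) / sigma) ** 2)
    return model

def fit_radial_velocity(lam, flux, flux_err):
    """Least-squares fit returning the common velocity and its formal error."""
    p0 = [100.0, 1.0, 0.5, 0.5, 1.0]                 # v, a1, a2, a3, sigma
    popt, pcov = curve_fit(bowen_model, lam, flux, p0=p0,
                           sigma=flux_err, absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])
```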
In order to constrain the orbital period we performed a period analysis on the derived radial velocities using the Lomb-Scargle method (Scargle 1982), which is best suited for unevenly sampled time-series. In Fig.\[period\] we show the resulting periodogram where we have indicated the 4 strongest peaks that represent good candidates for the orbital period (note that most are related to each other due to the daily alias).
To estimate the significance of the peaks in the periodogram we performed a Monte-Carlo simulation, using identical temporal sampling as for the original radial velocity measurements. Furthermore, our simulations used a distribution and mean of radial velocities that was similar to the original dataset. We produced 500,000 random radial velocity datasets and measured the peak power from the corresponding periodograms. From the distribution of the peak powers we estimated both the 3$\sigma$ and 5$\sigma$ confidence levels and have indicated these in Fig.\[period\]. The only periods that have a significance $\ge$5$\sigma$ are 2.17 and 0.68 days, and we note that the (almost significant) 0.41 day period is related via the daily alias to the other two significant periods.
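The period search and the Monte Carlo significance levels can be reproduced schematically as follows. The sketch uses SciPy's Lomb-Scargle implementation and estimates the thresholds by scrambling the measured velocities over the observed time stamps; the frequency grid, number of trials and scrambling scheme are stand-ins for, not copies of, the procedure used for Fig.\[period\].

```python
import numpy as np
from scipy.signal import lombscargle

def periodogram(times, rvs, periods):
    """Lomb-Scargle power of mean-subtracted radial velocities on a grid of
    trial periods (same time units as `times`)."""
    ang_freq = 2.0 * np.pi / periods
    return lombscargle(times, rvs - rvs.mean(), ang_freq)

def significance_levels(times, rvs, periods, n_trials=500_000, seed=0):
    """Peak-power thresholds from randomised datasets.  Here the observed
    velocities are simply permuted over the real time stamps; the text drew
    random velocities with a similar distribution, which is not identical."""
    rng = np.random.default_rng(seed)
    peaks = np.empty(n_trials)
    for i in range(n_trials):
        peaks[i] = periodogram(times, rng.permutation(rvs), periods).max()
    # ~3 sigma and ~5 sigma Gaussian-equivalent percentiles; the 5 sigma level
    # is only crudely sampled even with 5e5 trials.
    return np.percentile(peaks, [99.73, 99.99994])
```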
First of all we note that none of the suggested X-ray periods by Strohmayer (2011) fit our data. Furthermore, our first block of 20 FORS spectra consisted of 3.7 hrs of continuous observations, with radial velocity measurements that only show an increasing trend and a total amplitude of $\simeq$60 km s$^{-1}$. Since this amplitude is much smaller than what is observed from night to night ($\simeq$200 km s$^{-1}$) we conclude that the orbital period must be longer than 0.2 days. Also, the spectra in Fig.\[spec\] show strong Hydrogen lines, thereby excluding an ultra-compact nature of MAXIJ0556 and hence any period $\le$1 hr (i.e. close to the integration time of our spectra) as was suggested by Maitra et al. (2011b). We therefore conclude that one of the periods indicated in Fig.\[period\] is likely to be the true orbital period.
We also looked in more detail at the two blocks of X-Shooter observations, and Fig.\[phase\] shows the average spectrum around the Bowen region of both blocks. We note that the narrow lines, in particular NIII $\lambda$4640, are almost absent during the first block but have become stronger during the second (while the strength of emission lines such as H$\beta$ and HeII $\lambda$4686 has not changed significantly). In order to explain this change in the narrow components in only a few hours, we estimated (for the 4 potential orbital periods listed in Fig.\[period\]) the orbital phases for both blocks of X-Shooter observations. First of all we note that for all orbital periods the first block was obtained around orbital phase 0 (i.e. when the donor star is closest to us). However, the second block was taken around orbital phase 0.25-0.35 for the shorter periods (0.68 and 0.41 days), but is still around phase 0 for the longer periods (2.17 and 1.65 days).
Under the assumption that the narrow components in the Bowen region arise on the irradiated surface of the donor star and the inclination is relatively high (see Sect. 4), the simplest way to explain the change in line strength between the two blocks of X-Shooter observations is that our view of the irradiated surface has changed. Since the first block was obtained around orbital phase 0, the irradiated side should be least visible, and this could explain the weakness of the narrow lines. To explain the increase in line strength, the second block must then have been observed at a significantly different orbital phase where our view of the irradiated surface must have improved. Since the orbital phase only changes significantly between the X-Shooter blocks for the shorter orbital periods (0.68 and 0.41 days), we conclude that the orbital period must be one of these two. Combined with the fact that the timing of the eclipse-like features (see Sect. 4 for a discussion on their nature) also suggests a period $\le$1.2 days, we propose that the true orbital period is either 0.684$\pm$0.005 or 0.406$\pm$0.002 days.
Unfortunately it is not possible to distinguish between these two final candidate periods and we assume that both are equally likely. In Fig.\[rv\] we show the phase-folded radial velocity curve for both candidate periods, and in Table\[parm\] we list the best fitting parameters of their radial velocity curves.
Finally, due to the highly variable nature of the line profiles and strengths of HeII $\lambda$4686 and H$\beta$ (see Table\[ew\] and also illustrated in Fig.\[change\]) we did not attempt to obtain an estimate of the radial velocity semi-amplitude of the compact object. However, we do point out that both lines appear to be red-shifted compared to the narrow lines in the Bowen region. Whereas NIII $\lambda$4641 shows a systemic velocity of 130 km s$^{-1}$ (slightly dependent on the assumed period, see Table\[parm\]), HeII shows a mean velocity of 340 km s$^{-1}$ and for H$\beta$ it is already 400 km s$^{-1}$. Whereas for H$\beta$ such behavior has been observed before (see e.g. Cornelisse et al. 2007), and could be due to the presence of other non-resolved emission lines, it is uncommon for HeII. Since the lines do not appear to be asymmetric and there is also no indication for the presence of a PCygni-like profile, we currently do not have a good explanation for this shift.
Discussion
==========
We obtained optical spectroscopic observations of MAXIJ0556 during its outburst in 2011 and have shown that the average spectrum is not only dominated by strong H, He emission lines but also N. Although the presence of Hydrogen does rule out an ultra-compact nature as was suggested by Maitra et al. (2011b), it does support their suggestion that MAXIJ0556 has an unusually high N/O abundance.
Another interesting result is the power-law decay of all the emission lines over the time-span of our observations. This decay appears to be in contradiction with the results by Fender et al. (2009), which showed an anti-correlation between the EW of H$\alpha$ and the continuum emission. However, they were concerned with the fading phase of the transient outburst, while our observations are mainly during the peak of the outburst. It is interesting to note that in Fig.2 of Fender et al. (2009) the H$\alpha$ EW during the peak of the outburst also appears to be dropping for most transients. Fender et al. (2009) provide several explanations, and most of them can also be adopted here. For example, it could be possible that the outer accretion disk becomes hotter over time, thereby changing the optical depth of the emission lines, although saturation effects that hamper the production of the emission lines or even spectral and geometrical changes cannot be ruled out.
Furthermore we have detected narrow Bowen lines in the optical spectrum that moved from night to night. Following other LMXBs for which similar narrow features have been seen (e.g. Cornelisse et al. 2008), we think it likely that these lines arise on the irradiated surface of the donor star of MAXIJ0556. Our period analysis shows that there are two possible orbital periods, 0.68d and 0.41d. From these periods we derived a radial velocity semi-amplitude of 101-104 km s$^{-1}$.
|                             | 16 hrs         | 9.75 hrs       |
|-----------------------------|----------------|----------------|
| $P_{\rm orb}$ (day)         | 0.684(5)       | 0.406(2)       |
| $T_0$ (HJD-2,450,000)       | 5,581.520(3)   | 5,581.541(2)   |
| $K_{\rm em}$ (km s$^{-1}$)  | 104.2$\pm$3.8  | 100.8$\pm$3.3  |
| $\gamma$ (km s$^{-1}$)      | 133.7$\pm$1.9  | 131.5$\pm$1.8  |
| $f$($M_X$) ($M_\odot$)      | $\ge$0.07      | $\ge$0.04      |

: Overview of the system parameters of MAXIJ0556 for both candidate orbital periods. $T_0$ indicates the time of inferior conjunction of the donor star. \[parm\]
If the narrow components truly trace the irradiated companion we thereby obtain a lower limit to the mass function of $f(M_X)=M_X\sin^3 i/(1+q)^2\ge$0.07$M_\odot$ (for $P_{\rm orb}$=0.68 day) or $\ge$0.04$M_\odot$ (for $P_{\rm orb}$=0.41 day), where $i$ is the inclination and $q$(=$M_{\rm donor}$/$M_X$) is the mass ratio. These mass functions are true lower limits, since the lines should trace the radial velocity of the surface and not the center of mass of the donor star, and its true orbital velocity ($K_2$) should therefore always be higher (Muñoz-Darias et al. 2005).
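For reference, these limits follow from the standard spectroscopic mass function $f(M)=PK^3/2\pi G$. The short computation below reproduces the quoted numbers; note that it uses $K_{\rm em}$ reduced by its 1$\sigma$ error, which is our assumption about how the conservative limits were obtained (the nominal $K_{\rm em}$ values give slightly larger mass functions).

```python
import math

G, MSUN, DAY = 6.674e-11, 1.989e30, 86400.0      # SI units and the solar mass

def mass_function(P_days, K_kms):
    """f(M) = P K^3 / (2 pi G) in solar masses, assuming a circular orbit."""
    return P_days * DAY * (K_kms * 1e3) ** 3 / (2.0 * math.pi * G) / MSUN

# K_em minus its 1-sigma error (assumed interpretation of the quoted limits)
print(mass_function(0.684, 104.2 - 3.8))   # ~0.07 Msun, 16-hr candidate period
print(mass_function(0.406, 100.8 - 3.3))   # ~0.04 Msun, 9.75-hr candidate period
```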
One way we could further constrain the system parameters is by taking into account the X-ray properties of MAXIJ0556, and in particular the reported eclipse-like features (Strohmayer 2011; Maitra et al. 2011a). Strohmayer (2011) reported that the eclipse was not total, nor “sharp”, as might be expected if the X-ray source is obscured by the donor star. Belloni (private communication) confirmed this and suggested that these features more closely resemble dips, as are also observed in the high inclination system 4U1254$-$69 (e.g. Diaz-Trigo 2009). In 4U1254$-$69 these dips are thought to be caused by obscuration of the central X-ray source by the outer accretion disk edge, and their depth and strength vary strongly (sometimes the dips are even completely absent). Diaz-Trigo (2009) suggested the presence of a tilted/precessing accretion disk in 4U1254$-$69 and if something similar is present in MAXIJ0556 it implies a relatively high inclination.
The presence of such a tilted/precessing accretion disk would make MAXIJ0556 similar to the dwarf novae that show superhumps, and could also explain why neither of our two potential orbital periods is close to any period suggested by the X-ray dips (Strohmayer 2011). Superhumps are commonly observed in dwarf novae during a superoutburst (see e.g. Patterson et al. 2005 for an overview), and are thought to be caused by an instability at the 3:1 resonance that forces the eccentric accretion disk to precess (e.g. Whitehurst 1988). Due to these deformations of the disk shape the photometric period in dwarf novae is typically a few percent longer than the orbital period, although superhump periods shorter than the orbital period have also been observed if the precessing disk is counter-rotating (Patterson et al. 2002).
Assuming that MAXIJ0556 is a high inclination system (but not high enough for the donor to obscure the X-ray source) with a precessing accretion disk that sometimes partly obscures the central X-ray source, we can try to further constrain its system parameters. First of all, one of the periods originally reported by Strohmayer (2011) must then be the superhump period. If the orbital period is 0.68 days (16 hrs) the superhump period would most likely be 0.58 days. Following Patterson et al. (2005), this would lead to an observed fractional period excess of $\epsilon$=0.14 and suggests a mass ratio of $q$$\simeq$0.45. We note that this mass ratio is high for systems that typically show precession. However, Osaki (2005) pointed out that sufficiently hot accretion disks may expand beyond the 3:1 resonance radius and start precessing. Since MAXIJ0556 is thought to be a Z-source and should have accretion rates close to Eddington (Homan et al. 2011), it might be reasonable to expect such a hot accretion disk and therefore precession.
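The quoted mass ratio can be recovered from the period excess with the commonly used Patterson et al. (2005) calibration $\epsilon \simeq 0.18\,q + 0.29\,q^2$; inverting this quadratic for $\epsilon=0.14$ gives $q\simeq0.45$. The two-line check below is ours; only the calibration itself is taken from the literature.

```python
import math

def q_from_epsilon(eps, a=0.18, b=0.29):
    """Invert eps = a*q + b*q**2 (Patterson et al. 2005 superhump calibration)."""
    return (-a + math.sqrt(a * a + 4.0 * b * eps)) / (2.0 * b)

print(q_from_epsilon(0.14))   # ~0.45
```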
Using $q$=0.45 we can correct the velocity of the irradiated surface (our observed $K_{\rm em}$ of 104 km s$^{-1}$) to the center of mass velocity of the donor star using the $K$-correction developed by Muñoz-Darias et al. (2005). This would lead to a maximum $K_2$ velocity of 190 km s$^{-1}$ (for a disk with an opening angle of 0$^\circ$). Furthermore, $q$=0.45 suggests that eclipses by the secondary will occur for an inclination $\ge$72$^{\circ}$ (Paczynski 1974), and we use this as a first estimate of the inclination. Combining all these system parameters we obtain a mass for the compact object of $\simeq$1.2$M_\odot$, which is close to the canonical 1.4$M_\odot$ neutron star mass. We therefore believe that for a disk opening angle $\simeq$6$^{\circ}$ and a relatively high inclination it is also possible to obtain a 1.2-1.4$M_\odot$ compact object.
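Numerically, the compact object mass quoted here is just the mass function evaluated with the $K$-corrected velocity, scaled by $(1+q)^2/\sin^3 i$. The snippet below re-evaluates the numbers in the text (it is a consistency check, not an independent estimate); here $q$ is the donor-to-compact-object mass ratio.

```python
import math

G, MSUN, DAY = 6.674e-11, 1.989e30, 86400.0

P = 0.684 * DAY                # orbital period in seconds
K2 = 190e3                     # K-corrected donor velocity in m/s
q = 0.45                       # M_donor / M_X
i = math.radians(72.0)         # inclination at which eclipses would set in

f_M = P * K2 ** 3 / (2.0 * math.pi * G) / MSUN     # mass function in Msun
M_X = f_M * (1.0 + q) ** 2 / math.sin(i) ** 3
print(f"f(M) = {f_M:.2f} Msun, M_X = {M_X:.2f} Msun")   # M_X ~ 1.2 Msun
```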
Given the system parameters above, we would need an inclination of $\simeq$40$^{\circ}$ to obtain a black hole with a mass of 3.2$M_\odot$. This is too low to realistically expect any dipping behavior.
For an orbital period of 0.4 days (with a superhump period of 9.33 hrs), and making the same assumptions as above, we would obtain a mass for the compact object of $\simeq$0.2$M_\odot$. Again, this would lead to unrealistically low inclinations to obtain any sensible compact object mass. We therefore think that our data are most consistent with a neutron star LMXB in a 0.68 day orbit, observed at a moderately high inclination.
Conclusions
===========
We have presented the results of a spectroscopic campaign on the X-ray transient MAXIJ0556 close to the peak of the outburst and have shown that strong NIII emission lines are present in the spectrum, while C and O show no enhancement. This is in agreement with Maitra et al. (2011b) who have suggested an unusually high N/O abundance. Furthermore, all emission lines show a continuing decay over the $\simeq$1 month of our observations, for which we have no good explanation.
Our radial velocity study has shown that only two orbital periods (0.68 and 0.41 days) are possible. From our dataset alone we cannot distinguish the true orbital period, so both periods are equally likely. We suggest that MAXIJ0556 harbors a precessing accretion disk to explain not only the disappearance of the dip-like features observed in X-rays, but also the discrepancy between the periods suggested by Strohmayer (2011) and ourselves. If the presence of a precessing accretion disk can be proved, then the X-ray dips suggest not only a reasonably high inclination but also that the longer period is most likely the true orbital period. This would imply that the compact object is a neutron star, strengthening the suggestion of Homan et al. (2011).
Since a superhump is usually not observed when a system is in quiescence, the scenario outlined above can only be confirmed with photometric observations obtained during a future outburst. The true orbital period, on the other hand, can be determined by observing MAXIJ0556 when it is back in quiescence. With a quiescent optical magnitude of $R$$\simeq$20, it is easily accessible for both photometric and spectroscopic studies with medium-sized telescopes. Such studies would not only constrain the true orbital period and the radial velocity semi-amplitude of the center of mass of the donor star (instead of its irradiated surface), but would hopefully also constrain the inclination and binary mass ratio. This would offer solid dynamical constraints on the masses of the binary components, in order to verify that MAXIJ0556 indeed harbors a neutron star and more accurately determine its mass.
Acknowledgments {#acknowledgments .unnumbered}
===============
This work is based on data collected at the European Southern Observatory Paranal, Chile \[Obs.Ids. 286.D-5037(A) and 086.D-0318(A)\]. We cordially thank the ESO director for granting Director’s Discretionary Time. RC wants to thank Tomaso Belloni for providing important information on the X-ray properties of MAXIJ0556. We acknowledge the use of PAMELA and MOLLY which were developed by T.R. Marsh, and the use of the on-line atomic line list at http://www.pa.uky.edu/$\sim$peter/atomic. RC acknowledges a Ramon y Cajal fellowship (RYC-2007-01046) and a Marie Curie European Reintegration Grant (PERG04-GA-2008-239142). RC and JC acknowledge support by the Spanish Ministry of Science and Innovation (MICINN) under the grant AYA 2010-18080. This program is also partially funded by the Spanish MICINN under the consolider-ingenio 2010 program grant CSD 2006-00070. TMD acknowledges funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement number ITN 215212. DS acknowledges an STFC Advanced Fellowship.
[99]{} Belloni T., Motta S., Muñoz-Darias T., Stiele H., 2011, ATel, 3112 Charles P.A., & Coe M.J., 2006, in “Compact stellar X-ray sources”, Cambridge Astrophysics series, No. 39, Cambridge University Press, p. 215 Coriat M., Tzioumis A.K., Corbel S., Fender R., Brocksopp C., Broderick J., Casella P., Maccarone T., 2011, ATel, 3119 Cornelisse R., Casares J., Steeghs D., Barnes A.D., Charles P.A., Hynes R. I., O’Brien K., 2007, MNRAS, 375, 1463 Cornelisse R., Casares J., Muñoz-Darias T., Steeghs D., Charles P.A., Hynes R.I., O’Brien K., Barnes A.D., 2008, “A Population Explosion: The Nature & Evolution of X-ray Binaries in Diverse Environments”, AIP Conf. Proc., Vol. 1010, p. 148 Diaz-Trigo M., Parmar A.N., Boirin L., Motch C., Talavera A., Balman S., 2009, A&A, 493, 145 Fender R.P., Russell D.M., Knigge C., Soria R., Hynes R.I., Goad M. 2009, MNRAS, 393, 1608 Goldoni P., Royer F., Francois P., Horrobin M., Blanc G., Vernet J., Modigliani A., Larsen J., 2006, SPIE, 6269, 80 Gray, D.F., 1992, “The Observation and Analysis of Stellar Photospheres”, CUP, 20 Halpern J.P., 2011, ATel, 3104 Homan J., Linares M., van den Berg M., Fridriksson J. 2011, ATel, 3650 Horne K., 1986, PASP, 98, 609 van Hoof P.A.M., Verner D. 1997, in “Proceedings of the first ISO workshop on Analytical Spectroscopy”, Eds. A.M. Heras, K. Leech, N.R. Trams and M. Perry, ESA Publications Division, 1997, p. 273 Kennea J.A., Evans P.A., Krimm H., Romano P., Mangano V., Curran P. Yamaoka K., 2011, ATel, 3103 Maitra D., Reynolds M.T., Miller J.M., Raymond J., 2011a, ATel, 3349 Maitra D., Miller J.M., Raymond J.C., Reynolds M.T. 2011b, accepted for ApJ Letters, astro-ph/1110.6918 Matsumura T., Negoro H., Suwa F., Nakahira S., Ueno S., Tomida H., Kohama M., Ishikawa M., et al. 2011, ATel, 3102 Muñoz-Darias T., Casares J., Martinez-Pais I.G., 2005, ApJ, 635, 502 Nelemans G., Jonker P.G., Marsh T.R., van der Klis M., 2004, MNRAS, 348, L7 Osaki, Y., in Proc. Japan Acad. Ser. B, Physical and Biological Sciences, Tokyo, p. 291 Paczynski B. 1974, A&A, 34, 161 Patterson J., Fenton W.H., Thorstensen J.R., Harvey D.A., Skillman D.R., Fried R.E., Monard B., et al., 2002, PASP, 114, 802 Patterson J., Kemp J., Harvey D.A., Fried R.E., Rea R., Monard B., Cook L.M., Skillman D.R., et al., 2005, PASP, 117, 1204 Psaltis D., 2006, in “Compact stellar X-ray sources”, Cambridge Astrophysics series, No. 39, Cambridge University Press, p. 1 Russell D.M., Lewis F., Doran R., Roberts S., 2011, ATel, 3116 Scargle, J.D., 1982, ApJ, 263, 835 Shahbaz T., Smale A.P., Naylor T., Charles P.A., van Paradijs J., Hassall B.J.M., Callanan P., 1996, MNRAS, 282, 1437 Steeghs D., & Casares J., 2002, ApJ, 568, 273 Strohmayer T.E., 2011, ATel, 3110 Wade R.A., Horne K., 1988, ApJ, 324, 411 Whitehurst R., 1988, MNRAS, 232, 35
\[lastpage\]
[^1]: E-mail: corneli@iac.es
---
abstract: 'Heteroclinic cycles involving two saddle-foci, where the saddle-foci share both invariant manifolds, occur persistently in some symmetric differential equations on the 3-dimensional sphere. We analyse the dynamics around this type of cycle in the case when trajectories near the two equilibria turn in the same direction around a 1-dimensional connection — the saddle-foci have the same chirality. When part of the symmetry is broken, the 2-dimensional invariant manifolds intersect transversely creating a heteroclinic network of Bykov cycles. We show that the proximity of symmetry creates heteroclinic tangencies that coexist with hyperbolic dynamics. There are $n$-pulse heteroclinic tangencies — trajectories that follow the original cycle $n$ times around before they arrive at the other node. Each $n$-pulse heteroclinic tangency is accumulated by a sequence of $(n+1)$-pulse ones. This coexists with the suspension of horseshoes defined on an infinite set of disjoint strips, where the first return map is hyperbolic. We also show how, as the system approaches full symmetry, the suspended horseshoes are destroyed, creating regions with infinitely many attracting periodic solutions.'
address: |
Centro de Matemática da Universidade do Porto\
and Faculdade de Ciências, Universidade do Porto\
Rua do Campo Alegre, 687, 4169-007 Porto, Portugal
author:
- 'Isabel S. Labouriau'
- 'Alexandre A. P. Rodrigues'
title: Global bifurcations close to symmetry
---
[^1]
Introduction
============
A Bykov cycle is a heteroclinic cycle between two hyperbolic saddle-foci of different Morse index, where one of the connections is transverse and the other is structurally unstable — see Figure \[orientations\]. There are two types of Bykov cycle, depending on the way the flow turns around the two saddle-foci, that determine the *chirality* of the cycle. Here we study the non-wandering dynamics in the neighbourhood of a Bykov cycle where the two nodes have the same chirality. This is also studied in [@LR], and the case of different chirality is discussed in [@LR3]. A simplified version of the arguments presented here appears in [@LR_proc].
![A Bykov cycle with nodes of the same chirality. There are two possibilities for the geometry of the flow around a Bykov cycle depending on the direction trajectories turn around the connection $[{{\rm\bf v}}\rightarrow {{\rm\bf w}}]$. We assume here that the nodes have the same chirality: trajectories turn in the same direction around the connection. When the endpoints of a nearby trajectory are joined, the closed curve is always linked to the cycle.[]{data-label="orientations"}](BykovSameChirality)
The object of study
-------------------
Our starting point is a fully $({{\rm\bf Z}}_2\times{{\rm\bf Z}}_2)$-symmetric differential equation $\dot{x}=f_0(x)$ in the three-dimensional sphere ${{\rm\bf S}}^3$ with two saddle-foci that share all the invariant manifolds, of dimensions one and two, both contained in flow-invariant submanifolds that come from the symmetry. This forms an attracting heteroclinic network $\Sigma^0$ with a non-empty basin of attraction $V^0$. We study the global transition of the dynamics from this fully symmetric system $\dot{x}=f_0(x)$ to a perturbed system $\dot{x}=f_\lambda(x)$, for a smooth one-parameter family that breaks part of the symmetry of the system. For small perturbations the set $V^0$ is still positively invariant.
When $\lambda\neq 0$, the one-dimensional connection persists, due to the remaining symmetry, and the two-dimensional invariant manifolds intersect transversely, because of the symmetry breaking. This gives rise to a network $\Sigma^\lambda$, consisting of a union of Bykov cycles, contained in $V^0$. For partial symmetry-breaking perturbations of $f_0$, we are interested in the dynamics in the maximal invariant set contained in $V^0$. It contains, but does not coincide with, the suspension of horseshoes accumulating on $\Sigma^\lambda$ described in [@ACL; @NONLINEARITY; @KLW; @LR; @Rodrigues2]. Here, we show that close to the fully symmetric case it contains infinitely many heteroclinic tangencies. Under an additional assumption, we show that $V^0$ contains attracting limit cycles with long periods, coexisting with sets with positive entropy.
History
-------
Homoclinic and heteroclinic bifurcations constitute the core of our understanding of complicated recurrent behaviour in dynamical systems. This starts with Poincaré in the late $19^{th}$ century, with major subsequent contributions by the schools of Andronov, Shilnikov, Smale and Palis. These results rely on a combination of analytical and geometrical tools used to understand the qualitative behaviour of the dynamics.
Heteroclinic cycles and networks are flow-invariant sets that can occur robustly in dynamical systems with symmetry, and are frequently associated with intermittent behaviour. The rigorous analysis of the dynamics associated to the structure of the nonwandering sets close to heteroclinic networks is still a challenge. We refer to [@HS] for an overview of heteroclinic bifurcations and for details on the dynamics near different kinds of heteroclinic cycles and networks.
Bykov cycles have been found analytically in the Lorenz model in [@ABS; @PY] and the nearby dynamics was studied by Bykov in [@Bykov93; @Bykov99; @Bykov]. The point in parameter space where this cycle occurs is called a *T-point* in [@GSpa]. Recently, there has been a renewal of interest in this type of heteroclinic bifurcation in the reversible [@DIK; @DIKS; @Lamb2005], equivariant [@ACL; @NONLINEARITY; @LR; @Rodrigues4] and conservative cases [@BessaRodrigues]. See also [@KLW]. The transverse intersection of the two-dimensional invariant manifolds of the two equilibria implies that the set of trajectories that remain for all time in a small neighbourhood of the Bykov cycle contains a locally-maximal hyperbolic set admitting a complete description in terms of symbolic dynamics, reminiscent of the results of Shilnikov [@Shilnikov67]. An obstacle to the global symbolic description of these trajectories is the existence of tangencies that lead to the birth of stable periodic solutions, as described for the homoclinic case in [@Afraimovich83; @GavS; @Newhouse74; @Newhouse79; @YA].
All dynamical models with quasi-stochastic attractors were found, either analytically or by computer simulations, to have tangencies of invariant manifolds [@Afraimovich83; @Gonchenko96; @Gonchenko2007]. As a rule, the sinks in a quasi-stochastic attractor have very long periods and narrow basins of attraction, and they are hard to observe in applied problems because of the presence of noise [@Gonchenko2012].
Motivated by the analysis of Lamb *et al* [@Lamb2005] in the context of the Michelson system, Bykov cycles have been considered by Knobloch [*et al*]{} [@KLW]. Using Lin’s method, the authors extend the analysis to cycles in spaces of arbitrary dimension, while restricting it to trajectories that remain for all time inside a small tubular neighbourhood of the cycle. We also refer the reader to [@KLW2015], where the authors consider non-elementary $T$-points in reversible differential equations. The leading eigenvalues at the two equilibria are real and the two-dimensional invariant manifolds meet tangentially. They found chaos in the unfolding of this $T$-point and bifurcations of periodic solutions in the process of annihilation of the shift dynamics.
Bykov cycles appear in many applications like the Kuramoto-Sivashinsky systems [@DIK; @Lamb2005], magnetoconvection [@Rodrigues2; @Ruck] and travelling waves in reaction-diffusion dynamics [@AGH; @GH].
Chirality
---------
For a Bykov cycle in a 3-dimensional manifold, we say that the nodes have the same chirality if trajectories turn in the same direction around the common 1-dimensional invariant manifold. If they turn in opposite directions we say the nodes have different chirality. A more formal definition, using links, will be given in Section \[object\] below, showing this to be a global topological invariant of the connection that is well defined only in a 3-dimensional ambient space.
These cycles have been studied by different authors who were not aware of the chirality, ignoring what looked like a very small and unimportant choice in local coordinates. For instance, Bykov in [@Bykov99; @Bykov] addresses the case of different chirality without mentioning it explicitly — see a discussion in Section 7 of [@LR3]. Cycles with the same chirality are treated in [@ACL; @NONLINEARITY; @LR] and they occur naturally in reversible differential equations [@KLW2015]. Dynamical features that are irrespective of chirality are described in [@KLW].
Arbitrarily close to Bykov cycles of any chirality there are suspended horseshoes and multi-pulse heteroclinic cycles — see Theorem \[teorema T-point switching\] below. Around Bykov cycles where the nodes have different chirality, heteroclinic tangencies occur generically in trajectories that remain close to the cycle for all time, as shown in [@LR3]. This is not the case when the nodes have the same chirality, but we show here that heteroclinic tangencies appear as the equation approaches a fully symmetric one.
Symmetry
--------
Heteroclinic cycles involving equilibria are not a generic feature in differential equations, but they can be structurally stable in systems which are equivariant under the action of a symmetry group, due to the existence of flow-invariant subspaces [@GH]. Thus, perturbations that preserve the symmetry will not destroy the cycle. Explicit examples of equivariant vector fields for which such cycles may be found are reported in [@ACL2; @ALR; @KR; @LR3; @MPR; @Rodrigues2; @LR2]. Symmetry, exact or approximate, plays an essential role in the analysis of nonlinear physical phenomena [@AGH; @GS; @Rodrigues2]. It is often incorporated in models either because it occurs approximately in the phenomena being modelled, or because its presence simplifies the analysis. Since perfect symmetry does not occur in reality, it is desirable to understand the dynamics created under small symmetry-breaking perturbations.
Symmetry plays two roles here. First, it creates flow-invariant subspaces where non-transverse heteroclinic connections are persistent, and hence Bykov cycles are robust in this context. Second, we use the proximity of the fully symmetric case to capture more global dynamics. Symmetry constrains the geometry of the invariant manifolds of the saddle-foci and allows us some control of their relative positions, and we find infinitely many heteroclinic tangencies corresponding to trajectories that make an excursion away from the original cycle. For Bykov cycles with the same chirality, tangencies only take place near symmetry.
In the analysis of the annihilation of hyperbolic horseshoes associated to tangencies, on the one hand, the symmetry adds complexity to the problem, because the analysis is not as standard as in [@PT; @YA]. On the other hand, symmetry simplifies the analytic expression of the return map. It is clear that, for Bykov cycles of the same chirality, the annihilation of hyperbolic horseshoes associated to tangencies only takes place near symmetry. In the fully asymmetric case, the general study seems to be analytically untreatable.
Many questions remain for future work. One obvious question is to consider nodes of different chirality. Due to the infinite number of reversals described in [@LR3], other types of bifurcations may occur. A second question is whether the non-wandering set may be reduced to a homoclinic class.
This article
------------
We study the dynamics arising near a symmetric differential equation with a specific type of heteroclinic network. We show that when part of the symmetry is broken, the dynamics undergoes a global transition from hyperbolicity coexisting with infinitely many sinks, to the emergence of regular dynamics. We discuss the global bifurcations that occur as a parameter $\lambda$ is used to break part of the symmetry. We complete our results by reducing our problem to a symmetric version of the structure of Palis and Takens’ result [@PT §3] on homoclinic bifurcations. Being close to symmetry adds complexity to the dynamics. Chirality is an essential piece of information in this problem.
This article is organised as follows. In Section \[secObject\], after some basic definitions, we describe precisely the object of study and we review some of our recent results related to it. In Section \[secStatement\] we state the main results of the present article. The coordinates and other notation used in the rest of the article are presented in Section \[localdyn\], where we also obtain a geometrical description of the way the flow transforms a curve of initial conditions lying across the stable manifold of an equilibrium. In Section \[sec tangency\], we prove that there is a sequence of parameter values $\lambda_i$ accumulating on $0$ such that the associated flow has heteroclinic tangencies. In Section \[bif\], we discuss the geometric constructions that describe the global dynamics near a Bykov cycle as the parameter varies. We also describe the limit set that contains nontrivial hyperbolic subsets and we explain how the horseshoes disappear as the system regains full symmetry. We show that under an additional condition this creates infinitely many attracting periodic solutions.
The object of study and preliminary results {#secObject}
===========================================
In the present section, after some preliminary definitions, we state the hypotheses for the system under study together with an overview of results obtained in [@LR], emphasizing those that will be used to explain the loss of hyperbolicity of the suspended horseshoes and the emergence of heteroclinic tangencies near the cycle.
Definitions {#preliminaries}
-----------
Let $f_\lambda$ be a $C^r$ vector field on the unit three-sphere ${{\rm\bf S}}^3$, $r\geq 3$, with flow given by the unique solution $x(t)=\varphi(t,x_{0})\in {{\rm\bf S}}^{3}$ of $$\label{general}
\dot{x}=f_\lambda(x) \qquad \text{and} \qquad x(0)=x_{0},$$ where $\lambda \in {{\rm\bf R}}$. Suppose that ${{\rm\bf v}}$ and ${{\rm\bf w}}$ are two hyperbolic saddle-foci of (\[general\]) with different Morse indices (dimension of the unstable manifold), say 1 and 2. There is a [*heteroclinic cycle*]{} associated to $\{{{\rm\bf v}}, {{\rm\bf w}}\}$ if $$W^{u}({{\rm\bf v}})\cap W^{s}({{\rm\bf w}})\neq \emptyset \qquad \text{and} \qquad W^{u}({{\rm\bf w}})\cap W^{s}({{\rm\bf v}})\neq \emptyset$$ where $W^s(\star)$ and $W^u(\star )$ refer to the stable and unstable manifolds of the hyperbolic saddle $\star$, respectively. The terminology $[{{\rm\bf v}}\to {{\rm\bf w}}]$ or $[{{\rm\bf w}}\to {{\rm\bf v}}]$ denotes a solution contained in $W^{u}({{\rm\bf v}})\cap W^{s}({{\rm\bf w}})$ or $W^{u}({{\rm\bf w}})\cap W^{s}({{\rm\bf v}})$, respectively. A *heteroclinic network* is a finite connected union of heteroclinic cycles.
For $\lambda=0$, there is a 1-dimensional trajectory in $W^{u}({{\rm\bf v}})\cap W^{s}({{\rm\bf w}})$ and a $2$-dimensional connected flow-invariant manifold contained in $W^{u}({{\rm\bf w}})\cap W^{s}({{\rm\bf v}})$, meaning that there is a continuum of solutions connecting ${{\rm\bf w}}$ and ${{\rm\bf v}}$. For $\lambda\neq 0$, the one-dimensional manifolds of the equilibria coincide and the two-dimensional invariant manifolds have a transverse intersection. This second situation is what we call a *Bykov cycle*.
These objects are known to exist in several settings and are structurally stable within certain classes of ${\mathbf G}$-equivariant systems, where ${\mathbf G}\subset \textbf{O}(n)$ is a compact Lie group. Here we consider differential equations (\[general\]) with the equivariance condition: $$f_\lambda(\gamma x)=\gamma f_\lambda(x), \qquad \text{for all } \gamma \in {\mathbf G}, \lambda \in {{\rm\bf R}}.$$ An *isotropy subgroup* of ${\mathbf G}$ is a set $\widetilde{{\mathbf G}}=\{\gamma\in{\mathbf G}:\ \gamma x=x\}$ for some $x$ in phase space; we write $\operatorname{Fix}(\widetilde{{\mathbf G}})$ for the vector subspace of points that are fixed by the elements of $\widetilde{{\mathbf G}}$. For ${\mathbf G}$-equivariant differential equations each subspace $\operatorname{Fix}(\widetilde{{\mathbf G}})$ is flow-invariant. The group theoretical methods developed in [@GS] are a powerful tool for the analysis of systems with symmetry.
Suppose there is a cross-section $S$ to the flow of (\[general\]) such that $S$ contains a compact invariant set $\Lambda$ on which the first return map is well defined and conjugate to a full shift on a countable alphabet. We then call the flow-invariant set $\widetilde\Lambda=\{\varphi(t,q)\,:\,t\in{{\rm\bf R}},q\in\Lambda\}$ a *suspended horseshoe*.
The organising centre {#object}
---------------------
The starting point of the analysis is a differential equation $\dot{x}=f_0(x)$ on the unit sphere ${{\rm\bf S}}^3 =\{X=(x_1,x_2,x_3,x_4) \in {{\rm\bf R}}^4: ||X||=1\}$ where $f_0: {{\rm\bf S}}^3 \rightarrow \mathbf{T}{{\rm\bf S}}^3$ is a $C^3$ vector field with the following properties:
1. \[P1\] The vector field $f_0$ is equivariant under the action of $ {\mathbf G} ={{\rm\bf Z}}_2 \oplus {{\rm\bf Z}}_2$ on ${{\rm\bf S}}^3$ induced by the linear maps on ${{\rm\bf R}}^4$: $$\gamma_1(x_1,x_2,x_3,x_4)=(-x_1,-x_2,x_3,x_4)
\qquad
\text{and}
\qquad
\gamma_2(x_1,x_2,x_3,x_4)=(x_1,x_2,-x_3,x_4).$$
2. \[P2\] The set $\operatorname{Fix}( {{\rm\bf Z}}_2 \oplus {{\rm\bf Z}}_2)=\{x \in {{\rm\bf S}}^3:\gamma_1 x=\gamma_2 x = x \}$ consists of two equilibria ${{\rm\bf v}}=(0,0,0,1)$ and ${{\rm\bf w}}=(0,0,0,-1)$ that are hyperbolic saddle-foci, where:
- the eigenvalues of $df_0({{\rm\bf v}})$ are $-C_{{{\rm\bf v}}} \pm \alpha_{{{\rm\bf v}}}i$ and $E_{{{\rm\bf v}}}$ with $\alpha_{{{\rm\bf v}}} \neq 0$, $C_{{{\rm\bf v}}}>E_{{{\rm\bf v}}}>0$;
- the eigenvalues of $df_0({{\rm\bf w}})$ are $E_{{{\rm\bf w}}} \pm \alpha_{{{\rm\bf w}}} i$ and $-C_{{{\rm\bf w}}}$ with $\alpha_{{{\rm\bf w}}} \neq 0$, $C_{{{\rm\bf w}}}>E_{{{\rm\bf w}}}>0$.
3. \[P3\] The flow-invariant circle $\operatorname{Fix}({\langle\gamma_{1}\rangle})=\{x \in {{\rm\bf S}}^3:\gamma_1 x = x \}$ consists of the two equilibria ${{\rm\bf v}}$ and ${{\rm\bf w}}$, a source and a sink, respectively, and two heteroclinic trajectories $[{{\rm\bf v}}\rightarrow {{\rm\bf w}}]$.
4. \[P4\] The $f_0$-invariant sphere $\operatorname{Fix}({\langle\gamma_{2}\rangle})=\{x \in {{\rm\bf S}}^3:\gamma_2 x = x \}$ consists of the two equilibria ${{\rm\bf v}}$ and ${{\rm\bf w}}$, and a two-dimensional heteroclinic connection from ${{\rm\bf w}}$ to ${{\rm\bf v}}$. Together with the connections in (P\[P3\]) this forms a heteroclinic network that we denote by $\Sigma^0$.
Given two small open neighbourhoods $V$ and $W$ of ${{\rm\bf v}}$ and ${{\rm\bf w}}$ respectively, consider a piece of trajectory $\varphi$ that starts at $\partial V$, goes into $V$ and then goes once from $V$ to $W$, enters $W$ and ends at $\partial W$. Joining the starting point of $\varphi$ to its end point by a line segment, one obtains a closed curve, the *loop* of $\varphi$. For almost all starting positions in $\partial V$, the loop of $\varphi$ does not meet the network $\Sigma^0$. If there are arbitrarily small neighbourhoods $V$ and $W$ for which the loop of every trajectory is linked to $\Sigma_0$, we say that *the nodes have the same chirality* as illustrated in Figure \[orientations\]. This means that near ${{\rm\bf v}}$ and ${{\rm\bf w}}$, all trajectories turn in the same direction around the one-dimensional connections $[{{\rm\bf v}}\rightarrow {{\rm\bf w}}]$. This is our last hypothesis on $f_0$:
1. \[P6\] The saddle-foci ${{\rm\bf v}}$ and ${{\rm\bf w}}$ have the same chirality.
Condition (P\[P6\]) means that the curve $\varphi$ and the cycle $\Sigma^0$ cannot be separated by an isotopy. This property is persistent under small smooth perturbations of the vector field that preserve the one-dimensional connection. An explicit example of a family of differential equations where this assumption is valid has been constructed in [@LR2]. The rigorous analysis of a case where property (P\[P6\]) does not hold has been done by the authors in [@LR3].
The heteroclinic network of the organising centre
-------------------------------------------------
The heteroclinic connections in the network $\Sigma^0$ are contained in fixed point subspaces satisfying the hypothesis (H1) of Krupa and Melbourne [@KM1]. Since, by (P\[P2\]), $C_{{\rm\bf v}}>E_{{\rm\bf v}}>0$ and $C_{{\rm\bf w}}>E_{{\rm\bf w}}>0$, the inequality $C_{{\rm\bf v}}C_{{\rm\bf w}}>E_{{\rm\bf v}}E_{{\rm\bf w}}$ holds, so the stability criterion of [@KM1] may be applied to $\Sigma^0$ and we have:
\[propNetworkIstStable\] Under conditions (P\[P1\])–(P\[P4\]) the heteroclinic network $\Sigma^0$ is asymptotically stable.
As a consequence of Lemma \[propNetworkIstStable\] there exists an open neighbourhood $V^0$ of the network $\Sigma^0$ such that every trajectory starting in $V^0$ remains in it for all positive time and is forward asymptotic to the network. The neighbourhood may be taken to have its boundary transverse to the vector field $f_0$. The flow associated to any $C^1$-perturbation of $f_0$ that breaks the one-dimensional connection should have some attracting feature.
Breaking the ${{\rm\bf Z}}_2({\langle\gamma_{1}\rangle})$-symmetry
------------------------------------------------------------------
When the symmetry ${{\rm\bf Z}}_2({\langle\gamma_{1}\rangle})$ is broken, the two one-dimensional heteroclinic connections are destroyed and the cycle $\Sigma^0$ disappears. Each cycle is replaced by a hyperbolic sink that lies close to the original cycle [@LR]. For sufficiently small $C^1$-perturbations, the existence of solutions that go several times around the cycles is ruled out.
The fixed point hyperplane defined by $\text{Fix}({\langle\gamma_{2}\rangle})=\{(x_1, x_2, x_3, x_4) \in {{\rm\bf S}}^3: x_3=0\}$ divides ${{\rm\bf S}}^3$ into two flow-invariant connected components, preventing trajectories from visiting both cycles in $\Sigma^0$. Trajectories whose initial condition lies outside the invariant subspaces will approach one of the cycles in positive time. Successive visits to both cycles require breaking this symmetry [@LR].
Breaking the ${{\rm\bf Z}}_2({\langle\gamma_{2}\rangle})$-symmetry
------------------------------------------------------------------
From now on, we consider $f_0$ embedded in a generic one-parameter family of vector fields, breaking the ${\langle\gamma_{2}\rangle}$-equivariance as follows:
1. \[P5 1/2\] The vector fields $f_\lambda: {{\rm\bf S}}^3 \rightarrow \mathbf{T}{{\rm\bf S}}^3$ are a $C^3$ family of ${\langle\gamma_{1}\rangle}$-equivariant $C^3$ vector fields.
Since the equilibria ${{\rm\bf v}}$ and ${{\rm\bf w}}$ lie on $\operatorname{Fix}({\langle\gamma_{1}\rangle})$ and are hyperbolic, they persist for small $\lambda>0$ and still satisfy Properties (P\[P2\]) and (P\[P3\]). Their invariant two-dimensional manifolds generically meet transversely along two trajectories. The generic bifurcations from a manifold are discussed by Chillingworth [@Chillingworth]. Under these conditions we assume:
1. \[P5.\] For $\lambda \neq 0$, the two-dimensional manifolds $W^u({{\rm\bf w}})$ and $W^s({{\rm\bf v}})$ intersect transversely at two trajectories that we will call *primary connections*. Together with the connections in (P\[P3\]) this forms a Bykov heteroclinic network that we denote by $\Sigma^\lambda$.
The network $\Sigma^\lambda$ consists of four copies of a *Bykov cycle*: the simplest heteroclinic cycle between two saddle-foci of different Morse indices, in which one heteroclinic connection is structurally stable and the other is not. Property (P\[P5.\]) is natural since the heteroclinic connections of (P\[P5.\]), as well as those in the assertions of Theorem \[teorema T-point switching\] below, occur at least in symmetric pairs. In what follows, we describe the dynamics near a subnetwork consisting of the two primary connections together with one of the structurally unstable connections $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$. The results obtained concern each one of the two subnetworks of this type.
For small $\lambda\ne 0$, the neighbourhood $V^0$ is still positively invariant and contains the network $\Sigma^\lambda$. Since the closure of $V^0$ is compact and positively invariant it contains the $\omega$-limit sets of all its trajectories. The union of these limit sets is a maximal invariant set in $V^0$. For $f_0$ this is the cycle $\Sigma^0$, by Lemma \[propNetworkIstStable\], whereas for symmetry-breaking perturbations of $f_0$ it contains $\Sigma^\lambda$ but does not coincide with it. Our aim is to describe this set and its sudden appearance.
We proceed to review the dynamics in a small tubular neighbourhood of the cycle. In order to do this we introduce some concepts. Let ${\Gamma\subset \Sigma^\lambda}$ be one Bykov cycle involving ${{\rm\bf v}}$ and ${{\rm\bf w}}$, and the connections $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$ and $[{{\rm\bf w}}\rightarrow{{\rm\bf v}}]$ [given by (P\[P3\]) and (P\[P5.\])]{}, respectively. Let $V,W \subset V^0$ be disjoint neighbourhoods of the equilibria as above. Consider two local cross-sections of $\Gamma$ at two points $p$ and $q$ in the connections $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$ and $[{{\rm\bf w}}\rightarrow{{\rm\bf v}}]$, respectively, with $p, q\not\in V\cup W$. Saturating the cross-sections by the flow, one obtains two flow-invariant tubes joining $V$ and $W$ that contain the connections in their interior. We call the union of these tubes with $V$ and $W$ a *tubular neighbourhood* $\mathcal{T}$ of the Bykov cycle. More details will be provided in Section \[localdyn\].
Let $V,W \subset V^0$ be two disjoint neighbourhoods of ${{\rm\bf v}}$ and ${{\rm\bf w}}$, respectively. A one-dimensional connection $[{{\rm\bf w}}\to{{\rm\bf v}}]$ that, after leaving $W$, enters and leaves both $V$ and $W$ precisely $n \in {{\rm\bf N}}$ times is called an *$n$-pulse heteroclinic connection* with respect to $V$ and $W$. When there is no ambiguity, we omit the expression “with respect to $V$ and $W$”. If $n>1$ we call it a *multi-pulse heteroclinic connection*. If $W^u({{\rm\bf w}})$ and $W^s({{\rm\bf v}})$ meet tangentially along $[{{\rm\bf w}}\rightarrow {{\rm\bf v}}]$, we say that the connection $[{{\rm\bf w}}\rightarrow {{\rm\bf v}}]$ is an *$n$-pulse heteroclinic tangency*, otherwise we call it a *transverse $n$-pulse heteroclinic connection*.
The primary connections $[{{\rm\bf w}}\rightarrow {{\rm\bf v}}]$ in $\Sigma^\lambda$ of (P\[P5.\]) are transverse $0$-pulse heteroclinic connections. With these conventions we have:
\[teorema T-point switching\] If a vector field $f_0$ satisfies (P\[P1\])–(P\[P6\]), then the following properties are satisfied generically by vector fields in an open neighbourhood of $f_0$ in the space of ${\langle\gamma_{1}\rangle}$-equivariant vector fields of class $C^r$ on ${{\rm\bf S}}^3$, $r\geq 3$:
1. \[item0\] there is a heteroclinic network $\Sigma^*$ consisting of four Bykov cycles involving two equilibria ${{\rm\bf v}}$ and ${{\rm\bf w}}$, two 0-pulse heteroclinic connections $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$ and two 0-pulse heteroclinic connections $[{{\rm\bf w}}\rightarrow{{\rm\bf v}}]$;
2. \[item1\] the only heteroclinic connections from ${{\rm\bf v}}$ to ${{\rm\bf w}}$ are those in the Bykov cycles and there are no homoclinic connections;
3. \[item4\] any tubular neighbourhood of a Bykov cycle $\Gamma$ in $\Sigma^*$ contains infinitely many $n$-pulse heteroclinic connections $[{{\rm\bf w}}\to{{\rm\bf v}}]$ for each $n\in{{\rm\bf N}}$, that accumulate on the cycle;
4. \[item5\] for any tubular neighbourhood $\mathcal{T}$, given a cross-section $S_q\subset \mathcal{T}$ at a point $q$ in $[{{\rm\bf w}}\rightarrow{{\rm\bf v}}]$, there exist sets of points such that the dynamics of the first return to $S_q$ is uniformly hyperbolic and conjugate to a full shift over a finite number of symbols. These sets accumulate on $\Sigma^*$ and the number of symbols coding the return map tends to infinity as we approach the network.
Notice that assertion (\[item4\]) of Theorem \[teorema T-point switching\] implies the existence of a bigger network: beyond the original transverse connections $[{{\rm\bf w}}\rightarrow{{\rm\bf v}}]$, there exist infinitely many subsidiary heteroclinic connections turning around the original Bykov cycle. It also follows from (\[item4\]) and (\[item5\]) that any tubular neighbourhood $\mathcal{T}$ of a Bykov cycle $\Gamma$ in $\Sigma^*$ contains points not lying on $\Gamma$ whose trajectories remain in $\mathcal{T}$ for all time. In contrast to the findings of [@Shilnikov65; @Shilnikov67; @Shilnikov67A], the suspended horseshoes of (\[item5\]) arise due to the presence of two saddle-foci together with transversality of invariant manifolds, and do not depend on any additional condition on the eigenvalues at the equilibria. A hyperbolic invariant set of a $C^2$-diffeomorphism has zero Lebesgue measure [@Bowen75]. Nevertheless, since the authors of [@LR] worked in the $C^1$ category, this set of horseshoes might have positive Lebesgue measure. Rodrigues [@Rodrigues3] proved that this is not the case:
\[zero measure\] Let $\mathcal{T}$ be a tubular neighbourhood of one of the Bykov cycles $\Gamma$ of Theorem \[teorema T-point switching\]. Then in any cross-section $S_q\subset \mathcal{T}$ the set of initial conditions in $S_q$ that do not leave $\mathcal{T}$ for all time, has zero Lebesgue measure.
The shift dynamics does not trap most solutions in the neighbourhood of the cycle. In particular, none of the cycles is Lyapunov stable.
Statement of results {#secStatement}
====================
Heteroclinic cycles connecting saddle-foci with a transverse intersection of two-dimensional invariant manifolds imply the existence of hyperbolic suspended horseshoes. In our setting, when $\lambda$ varies close to zero, we expect the creation and the annihilation of these horseshoes. When the symmetry ${{\rm\bf Z}}_2(\left\langle \gamma_2\right\rangle)$ is broken, heteroclinic tangencies appear, as reported in the next result.
\[teorema tangency\] In the set of families $f_\lambda$ of vector fields satisfying (P\[P1\])–(P\[P5.\]) there is a subset ${\mathcal C}$, open in the $C^3$ topology, for which there is a sequence $\lambda_i>0$ of real numbers, with $\lim_{i\to\infty}\lambda_i=0$ such that for $\lambda>\lambda_i$, there are two 1-pulse heteroclinic connections for the flow of $\dot{x}=f_\lambda(x)$, that collapse into a 1-pulse heteroclinic tangency at $\lambda=\lambda_i$ and then disappear for $\lambda<\lambda_i$. Moreover, the 1-pulse heteroclinic tangency approaches the original $[{{\rm\bf w}}\to{{\rm\bf v}}]$ connection when $\lambda_i$ tends to zero.
The explicit description of the open set ${\mathcal C}$ is given in Section \[sec tangency\], after establishing some notation for the proof. Although a tangency may be removed by a small smooth perturbation, the presence of tangencies is densely persistent, a phenomenon similar to the Cocoon bifurcations observed by Lau in the Michelson system [@DIK; @DIKS].
\[teorema multitangency\] For a family $f_\lambda$ in the open set ${\mathcal C}$ of Theorem \[teorema tangency\], and for each parameter value $\lambda_i$ corresponding to a 1-pulse heteroclinic tangency, there is a sequence of parameter values $\lambda_{ij}$ accumulating at $\lambda_i$ for which there is a 2-pulse heteroclinic tangency. This property is recursive in the sense that each $n$-pulse heteroclinic tangency is accumulated by $(n+1)$-pulse heteroclinic tangencies for nearby parameter values.
Associated to the heteroclinic tangencies of Theorem \[teorema multitangency\], suspended horseshoes disappear as $\lambda$ goes to zero.
\[newhouse\] For a family $f_\lambda$ in the open set ${\mathcal C}$ of Theorem \[teorema tangency\], there is a sequence of closed intervals $\Delta_n= [c_n,d_n] $, with $0<d_{n+1},c_n<d_n$ and ${\displaystyle}\lim_{n\to\infty}d_n=0$, such that as $ \lambda$ decreases in $\Delta_n$, a suspended horseshoe is destroyed.
A similar result has been formulated by Newhouse [@Newhouse74] and by Yorke and Alligood [@YA] for two-dimensional diffeomorphisms, in the context of homoclinic bifurcations in dissipative dynamics and without reference to equivariance. A more precise formulation of the result is given in Section \[bif\]. Applying the results of [@PT; @YA] to this family, we obtain:
\[Consequences\] For a family $f_\lambda$ in the open set ${\mathcal C}$ of Theorem \[teorema tangency\], and $\lambda$ in one of the intervals $\Delta_n$ of Theorem \[newhouse\] the flow of $\dot{x}=f_\lambda(x)$ undergoes infinitely many saddle-node and period doubling bifurcations.
With the additional hypothesis that the first return map to a suitable cross-section contracts area, we get:
\[Sinks\] For a family $f_\lambda$ in the open set ${\mathcal C}$ of Theorem \[teorema tangency\], if the first return to a transverse section is area-contracting, then for parameters $\lambda$ in an open subset of $\Delta_n$ with sufficiently large $n$, infinitely many attracting periodic solutions coexist.
In Section \[bif\] we also describe a setting where the additional hypothesis holds. When $\lambda$ decreases, the Cantor set of points of the horseshoes that remain near the cycle loses topological entropy as the set loses hyperbolicity, a phenomenon similar to that described in [@Gonchenko2007]. For $\lambda \approx 0$, return maps to appropriate domains close to the tangency are conjugate to Hénon-like maps [@Colli; @Kiriki; @MV]. As $\lambda \rightarrow 0$, infinitely many wild attractors may coexist in $V^0$ with suspended horseshoes that are being destroyed.
The complete description of the dynamics near these bifurcations is an unsolvable problem: arbitrarily small perturbations of any differential equation with a quadratic heteroclinic tangency may lead to the creation of new tangencies of higher order, and to the birth of quasi-stochastic attractors [@GST1993; @GTS2001; @GST2007; @Gonchenko2012].
Local geometry and transition maps {#localdyn}
==================================
We analyse the dynamics near the network by deriving local maps that approximate the dynamics near and between the two nodes in the network. In this section we establish the notation that will be used in the rest of the article and the expressions for the local maps. We start with appropriate coordinates near the two saddle-foci.
Local coordinates
-----------------
In order to describe the dynamics around the Bykov cycles of $\Sigma^\lambda$, we use the local coordinates near the equilibria ${{\rm\bf v}}$ and ${{\rm\bf w}}$ introduced by Ovsyannikov and Shilnikov [@OS]. Without loss of generality we assume that $\alpha_{{\rm\bf v}}=\alpha_{{\rm\bf w}}=1$.
![Cylindrical neighbourhoods of the saddle-foci ${{\rm\bf w}}$ (a) and ${{\rm\bf v}}$ (b). []{data-label="neigh_vw"}](neigh_vw_global1){height="6cm"}
In these coordinates, we consider cylindrical neighbourhoods $V$ and $W$ in ${{{\rm\bf S}}}^3$ of ${{\rm\bf v}}$ and ${{\rm\bf w}}$, respectively, of radius $\rho=\varepsilon>0$ and height $z=2\varepsilon$ — see Figure \[neigh\_vw\]. After a linear rescaling of the variables, we may also assume that $\varepsilon=1$. Their boundaries consist of three components: the cylinder wall parametrised by $x\in {{\rm\bf R}}\pmod{2\pi}$ and $|y|\leq 1$ with the usual cover $$(x,y)\mapsto (1 ,x,y)=(\rho ,\theta ,z)$$ and two discs, the top and bottom of the cylinder. We take polar coverings of these disks $$(r,\varphi )\mapsto (r,\varphi , \pm 1)=(\rho ,\theta ,z)$$ where $0\leq r\leq 1$ and $\varphi \in {{\rm\bf R}}\pmod{2\pi}$. The local stable manifold of ${{\rm\bf v}}$, $W^s({{\rm\bf v}})$, corresponds to the circle parametrised by $ y=0$. In $V$ we use the following terminology suggested in Figure \[neigh\_vw\]:
- ${{\text{In}}}({{\rm\bf v}})$, the cylinder wall of $V$, consisting of points that go inside $V$ in positive time;
- ${{\text{Out}}}({{\rm\bf v}})$, the top and bottom of $V$, consisting of points that go outside $V$ in positive time.
We denote by ${{\text{In}}}^+({{\rm\bf v}})$ the upper part of the cylinder, parametrised by $(x,y)$, $y\in[0,1]$ and by ${{\text{In}}}^-({{\rm\bf v}})$ its lower part.
The cross-sections obtained for the linearisation around ${{\rm\bf w}}$ are dual to these. The set $W^s({{\rm\bf w}})$ is the $z$-axis intersecting the top and bottom of the cylinder $W$ at the origin of its coordinates. The set $W^u({{\rm\bf w}})$ is parametrised by $z=0$, and we use:
- ${{\text{In}}}({{\rm\bf w}})$, the top and bottom of $W$, consisting of points that go inside $W$ in positive time;
- ${{\text{Out}}}({{\rm\bf w}})$, the cylinder wall of $W$, consisting of points that go inside $W$ in negative time, with ${{\text{Out}}}^+({{\rm\bf w}})$ denoting its upper part, parametrised by $(x,y)$, $y\in[0,1]$ and ${{\text{Out}}}^-({{\rm\bf w}})$ its lower part.
We will denote by $W^u_{{{\text{loc}}}}({{\rm\bf w}})$ the portion of $W^u({{\rm\bf w}})$ that goes from ${{\rm\bf w}}$ up to ${{\text{In}}}({{\rm\bf v}})$ not intersecting the interior of $V$ and by $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ the portion of $W^s({{\rm\bf v}})$ outside $W$ that goes directly from ${{\text{Out}}}({{\rm\bf w}})$ into ${{\rm\bf v}}$. The flow is transverse to these cross-sections and the boundaries of $V$ and of $W$ may be written as the closure of ${{\text{In}}}({{\rm\bf v}}) \cup {{\text{Out}}}({{\rm\bf v}})$ and ${{\text{In}}}({{\rm\bf w}}) \cup {{\text{Out}}}({{\rm\bf w}})$, respectively.
Local maps near the saddle-foci
-------------------------------
Following [@Deng; @OS], the trajectory of a point $(x,y)$ with $y>0$ in ${{\text{In}}}({{\rm\bf v}})$ leaves $V$ at ${{\text{Out}}}({{\rm\bf v}})$ at $$\Phi_{{{\rm\bf v}}}(x,y)=\left(y^{\delta_{{\rm\bf v}}} + S_1(x,y; \lambda),-\frac{\ln y}{E_{{\rm\bf v}}}+x+S_2(x,y; \lambda) \right)=(r,\varphi)
\qquad \mbox{where}\qquad
\delta_{{\rm\bf v}}=\frac{C_{{{\rm\bf v}}}}{E_{{{\rm\bf v}}}} > 1,
\label{local_v}$$ where $S_1$ and $S_2$ are smooth functions which depend on $\lambda$ and satisfy: $$\label{diff_res}
\left| \frac{\partial^{k+l+m}}{\partial x^k \partial y^l \partial \lambda ^m } S_i(x, y;\lambda)
\right| \leq C y^{\delta_{{\rm\bf v}}+ \sigma - l},$$ where $C$ and $\sigma$ are positive constants and $k, l, m$ are non-negative integers. Similarly, a point $(r,\varphi)$ in ${{\text{In}}}({{\rm\bf w}}) \backslash W^s_{{{\text{loc}}}}({{\rm\bf w}})$ leaves $W$ at ${{\text{Out}}}({{\rm\bf w}})$ at $$\Phi_{{{\rm\bf w}}}(r,\varphi )=\left(-\frac{\ln r}{E_{{\rm\bf w}}}+\varphi+ R_1(r,\varphi ; \lambda),r^{\delta_{{\rm\bf w}}}+R_2(r,\varphi; \lambda )\right)=(x,y)
\qquad \mbox{where}\qquad
\delta_{{\rm\bf w}}=\frac{C_{{{\rm\bf w}}}}{E_{{{\rm\bf w}}}} >1
\label{local_w}$$ where $R_1$ and $R_2$ satisfy a condition similar to (\[diff\_res\]). The terms $S_1$, $S_2$, $R_1$, $R_2$ correspond to asymptotically small terms that vanish when $y$ and $r$ go to zero. A better estimate, under a stronger eigenvalue condition, has been obtained in [@Homburg, Prop. 2.4].
Transition map along the connection $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$
---------------------------------------------------------------------------
Points in ${{\text{Out}}}({{\rm\bf v}})$ near $W^u_{{{\text{loc}}}}({{\rm\bf v}})$ are mapped into ${{\text{In}}}({{\rm\bf w}})$ in a flow-box along each one of the connections $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$. Without loss of generality, we will assume that the transition $\Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}: {{\text{Out}}}({{\rm\bf v}}) \rightarrow {{\text{In}}}({{\rm\bf w}})$ does not depend on $\lambda$ and is modelled by the identity, which is compatible with hypothesis (P\[P6\]). Using a more general form for $\Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}$ would complicate the calculations without any change in the final results. The coordinates on $V$ and $W$ are chosen to have $[{{\rm\bf v}}\rightarrow{{\rm\bf w}}]$ connecting points with $z>0$ in $V$ to points with $z>0$ in $W$. We will denote by $\eta$ the map $\eta=\Phi_{{{\rm\bf w}}} \circ \Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}} \circ \Phi_{{{\rm\bf v}}}$. Up to higher order terms, from (\[local\_v\]) and (\[local\_w\]), its expression in local coordinates, for $y>0$, is $$\label{eqeta}
\eta(x,y)=\left(x-K \ln y , y^{\delta} \right)
\qquad\mbox{with}\qquad
\delta=\delta_{{\rm\bf v}}\delta_{{\rm\bf w}}>1
\qquad\mbox{and}
\qquad K= \frac{C_{{\rm\bf v}}+E_{{\rm\bf w}}}{E_{{\rm\bf v}}E_{{\rm\bf w}}} > 0.$$
The choice of $\Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}$ as the identity reflects the fact that the nodes have the same chirality. In the case where the nodes have different chirality, the map $\Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}$ reverses orientation. This affects the form of the map $\eta$ and any subsequent results that use it.
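As a purely illustrative aside, the short Python sketch below implements the leading-order parts of the local maps $\Phi_{{\rm\bf v}}$, $\Phi_{{\rm\bf w}}$ and of $\eta$ in (\[local\_v\]), (\[local\_w\]) and (\[eqeta\]), and checks that the composition $\Phi_{{{\rm\bf w}}} \circ \Phi_{{{\rm\bf v}}}$ (with $\Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}$ the identity) reproduces the closed form of $\eta$. The eigenvalue data are hypothetical values chosen only so that $\delta_{{\rm\bf v}},\delta_{{\rm\bf w}}>1$; the higher-order terms $S_i$, $R_i$ are dropped.

```python
# Illustrative sketch only: leading-order local maps Phi_v, Phi_w and the
# transition map eta, with the higher-order terms S_i, R_i dropped.
# The eigenvalue data below are hypothetical choices with C_v > E_v and
# C_w > E_w, so that delta_v, delta_w > 1.
import math

C_v, E_v = 1.5, 1.0
C_w, E_w = 2.0, 1.2

delta_v = C_v / E_v                   # > 1
delta_w = C_w / E_w                   # > 1
delta = delta_v * delta_w             # > 1
K = (C_v + E_w) / (E_v * E_w)         # > 0

def Phi_v(x, y):
    """Leading-order local map In(v) -> Out(v), defined for y > 0."""
    return (y ** delta_v, x - math.log(y) / E_v)          # = (r, varphi)

def Phi_w(r, varphi):
    """Leading-order local map In(w) -> Out(w), defined for r > 0."""
    return (varphi - math.log(r) / E_w, r ** delta_w)      # = (x, y)

def eta(x, y):
    """Closed form eta(x, y) = (x - K*ln(y), y**delta)."""
    return (x - K * math.log(y), y ** delta)

# With Psi_{v->w} modelled by the identity, Phi_w o Phi_v agrees with eta
# up to round-off error:
x0, y0 = 0.3, 1e-3
composed = Phi_w(*Phi_v(x0, y0))
assert all(abs(a - b) < 1e-10 for a, b in zip(composed, eta(x0, y0)))
print("delta =", delta, " K =", K, " eta(x0, y0) =", eta(x0, y0))
```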
Geometry of the transition map
------------------------------
Consider a cylinder $C$ parametrised by a covering $(\theta,y )\in {{\rm\bf R}}\times[-1,1]$, where $\theta $ is periodic. A *helix* on the cylinder $C$ *accumulating on the circle* $y=0$ is a curve on $C$ without self-intersections, that is the image, by the parametrisation $(\theta,y)$, of a continuous map $H:(a,b)\rightarrow {{\rm\bf R}}\times[-1,1]$, $H(s)=\left(H_\theta(s),H_y(s)\right)$, satisfying: $$\lim_{s\to a^+}H_\theta(s)=\lim_{s\to b^-}H_\theta(s)=\pm\infty
\qquad
\lim_{s\to a^+}H_y(s)=\lim_{s\to b^-}H_y(s)=0$$ and such that there are $\tilde{a}\le \tilde{b}\in (a,b)$ for which both $H_\theta(s)$ and $H_y(s)$ are monotonic in each of the intervals $(a,\tilde{a})$ and $(\tilde{b},b)$. It follows from the assumptions on the function $H_\theta$ that it has either a global minimum or a global maximum, since $\lim_{s\to a^+}H_\theta(s)=\lim_{s\to b^-}H_\theta(s)$. At the corresponding point, the projection of the helix into the circle $y=0$ is singular, a *fold point* of the helix. Similarly, the function $H_y$ always has a global maximum, that will be called the *maximum height* of the helix. See Figure \[homoclinic1\].
![A helix is defined on a covering of the cylinder by a smooth curve $(H_\theta(s),H_y(s))$ that turns around the cylinder infinitely many times as its height $H_y$ tends to zero. It always contains a fold point, indicated here by a diamond, and a point of maximum height, shown as a black dot.[]{data-label="homoclinic1"}](helices){width="15cm"}
\[Structures\] Consider a curve on one of the cylinder walls ${{\text{In}}}({{\rm\bf v}})$ or ${{\text{Out}}}({{\rm\bf w}})$, parametrised by the graph of a smooth function $h:[a,b]\rightarrow{{\rm\bf R}}$, where $b-a<2\pi$ with $h(a)=h(b)=0$, $h^\prime(a)>0$, $h^\prime(b)<0$ and $h(x)>0$ for all $x\in(a,b)$. Let $M$ be the global maximum value of $h$, attained at a point $x_M\in(a,b)$. Then, for the transition maps defined above, we have:
1. \[helixOut\] if the curve lies in ${{\text{In}}}({{\rm\bf v}})$ then it is mapped by $\eta=\Phi _{{{\rm\bf w}}}\circ \Psi_{{{\rm\bf v}}\rightarrow {{\rm\bf w}}}\circ \Phi _{{{\rm\bf v}}}$ into a helix on ${{\text{Out}}}({{\rm\bf w}})$ accumulating on the circle ${{\text{Out}}}({{\rm\bf w}}) \cap W^{u}_{{{\text{loc}}}}({{\rm\bf w}})$, its maximum height is $M^\delta$ and it has a fold point at $\eta(x_*,h(x_*))$ for some $x_*\in (a,x_M)$;
2. \[helixIn\] if the curve lies in ${{\text{Out}}}({{\rm\bf w}})$ then it is mapped by $\eta^{-1}$ into a helix on ${{\text{In}}}({{\rm\bf v}})$ accumulating on the circle ${{\text{In}}}({{\rm\bf v}}) \cap W^{s}_{{{\text{loc}}}}({{\rm\bf v}})$, its maximum height is $M^{1/\delta}$ and it has a fold point at $\eta^{-1}(x_*,h(x_*))$ for some $x_*\in (x_M,b)$.
This result depends strongly on the form of $\eta$, and therefore, on the chirality of the nodes. If the nodes had different chirality, the graph of $h$ would no longer be mapped into a helix; the curve $\eta\left(x, h(x)\right)$ would instead have a vertical tangent at infinitely many points, see [@LR].
The graph of $h$ defines a curve on ${{\text{In}}}({{\rm\bf v}})$ without self-intersections. Since $\eta$ is the transition map of a differential equation, hence a diffeomorphism, this curve is mapped by $\eta$ into a curve $H (x)=\eta\left(x, h(x)\right)=\left(H _\theta(x),H _y(x)\right)$ in ${{\text{Out}}}({{\rm\bf w}})$ without self-intersections. Using the expression for $\eta$, we get $$\label{alphaPrime}
H (x)=\left(x-K\ln h (x),\left(h (x)\right)^\delta\right) \qquad \text{and}
\qquad
H ^\prime(x)=\left(1-\frac{K h^\prime(x)}{h(x)}, \delta\frac{ \left(h (x)\right)^\delta h^\prime(x)}{h (x)}\right).$$ From this expression it is immediate that $$\lim_{x\to a^+}H_\theta(x)=\lim_{x\to b^-}H_\theta(x)=+\infty \quad \text{and} \quad
\lim_{x\to a^+}H_y(x)=\lim_{x\to b^-}H_y(x)=0.$$ Also, $$H_y(x_M)=M^\delta \ge \left(h(x)\right)^\delta=H_y(x)$$ for all $x\in \left(a ,b\right)$, so the curve lies below the level $y=M^\delta$ in ${{\text{Out}}}({{\rm\bf w}})$. Since $h^\prime(b)<0$, there is an interval $(\tilde{b},b)$ where $h^\prime(x)<0$ and hence $H_\theta^\prime(x)>0$ and $H_y^\prime(x)<0$. Similarly, $h^\prime(x)>0$ on some interval $(a,\hat{a})$, where $H_y^\prime(x)>0$. For the sign of $H_\theta^\prime(x)$, note that $K h^\prime(a)>0=h(a)$ and $K h^\prime(x_M)=0<h(x_M)$. Thus, there is a point $x_*\in\left(a ,x_M\right)$ where $h(x_*)=Kh^\prime(x_*)$ and $H_\theta^\prime(x)$ changes sign; this is a fold point. If $x_*$ is the minimum value of $x$ for which this happens, then locally the helix lies to the right of this point. This proves assertion (\[helixOut\]). The proof of assertion (\[helixIn\]) is similar, using the expression $$\label{eqEtaInverse}
\eta^{-1}(x,y)=\left(x+\frac{K}{\delta} \ln y , y^{1/\delta} \right).$$ In this case we get $$\lim_{x\to a^+}H_\theta(x)=\lim_{x\to b^-}H_\theta(x)=-\infty$$ and if $x_*$ is the largest value of $x$ for which the helix has a fold point, then locally the helix lies to the left of $\eta^{-1}(x_*,h(x_*))$.
Note that if, instead of the graph of a smooth function, we consider a continuous, positive and piecewise smooth curve $\alpha(s)$ without self-intersections from $\alpha(0)=(a,0)$ to $\alpha(1)=(b,0)$, we can apply similar arguments to show that both $\eta(\alpha)$ and $\eta^{-1}(\alpha)$ are helices.
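The following self-contained sketch illustrates assertion (\[helixOut\]) of Lemma \[Structures\] numerically: a hypothetical bump function $h$ on ${{\text{In}}}({{\rm\bf v}})$ is pushed forward by $\eta$, the maximum height of the resulting helix equals $M^\delta$, and a fold point is located by solving $h(x_*)=K h^\prime(x_*)$ on $(a,x_M)$ by bisection. All constants are illustrative choices, not derived from a specific vector field.

```python
# Numerical illustration of assertion (helixOut) of Lemma [Structures]:
# the graph of a hypothetical bump h on In(v) is mapped by eta into a helix
# on Out(w) with maximum height M**delta and a fold point where h = K*h'.
# The constants delta, K, a, b, M are illustrative only.
import math

delta, K = 2.5, 2.25
a, b, M = 0.0, 2.0, 0.2               # bump supported on (a, b), b - a < 2*pi

def h(x):
    return M * math.sin(math.pi * (x - a) / (b - a))

def dh(x):
    return M * math.pi / (b - a) * math.cos(math.pi * (x - a) / (b - a))

def H(x):
    """Helix H(x) = eta(x, h(x)) in the covering coordinates of Out(w)."""
    return (x - K * math.log(h(x)), h(x) ** delta)

x_M = 0.5 * (a + b)                   # h attains its maximum M at x_M

# Fold point: bisection for h(x) - K*dh(x) = 0 on (a, x_M); the function is
# negative near a and positive at x_M.
lo, hi = a + 1e-9, x_M
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if h(mid) - K * dh(mid) < 0:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)

print("maximum height of the helix:", H(x_M)[1], "= M**delta =", M ** delta)
print("fold point x_* in (a, x_M):", x_star, a < x_star < x_M)
# The angular coordinate blows up at both ends of (a, b), so the image curve
# winds infinitely often around Out(w) while accumulating on y = 0:
print("H_theta near the endpoints:", H(a + 1e-12)[0], H(b - 1e-12)[0])
```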
Geometry of the invariant manifolds {#subsecInvariantManifolds}
-----------------------------------
There is also a well defined transition map $$\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda:{{\text{Out}}}({{\rm\bf w}})\longrightarrow {{\text{In}}}({{\rm\bf v}})$$ that depends on the ${{\rm\bf Z}}_2({\langle\gamma_{2}\rangle})$-symmetry-breaking parameter $\lambda$, where $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^0$ is the identity map. We will denote by $R_\lambda$ the map $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda \circ \eta$, where it is well defined. When there is no risk of ambiguity, we omit the superscript $\lambda$. In this section we investigate the effect of $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda$ on the two-dimensional invariant manifolds of ${{\rm\bf v}}$ and ${{\rm\bf w}}$ for $\lambda\ne 0$, under the assumption (P\[P5.\]).
For this, let $f_\lambda$ be an unfolding of $f_0$ satisfying (P\[P1\])–(P\[P5.\]). For $\lambda\ne 0$, we introduce the following notation (see Figure \[elipse\]):
- $(P_{{\rm\bf w}}^1,0)$ and $(P_{{\rm\bf w}}^2,0)$ with $0<P_{{\rm\bf w}}^1<P_{{\rm\bf w}}^2<2\pi$ are the coordinates of the two points where the connections $[{{\rm\bf w}}\rightarrow {{\rm\bf v}}]$ of Property (P\[P5.\]) meet ${{\text{Out}}}({{\rm\bf w}})$;
- $(P_{{\rm\bf v}}^1,0)$ and $(P_{{\rm\bf v}}^2,0)$ with $0<P_{{\rm\bf v}}^1<P_{{\rm\bf v}}^2<2\pi$ are the coordinates of the two points where $[{{\rm\bf w}}\rightarrow {{\rm\bf v}}]$ meets ${{\text{In}}}({{\rm\bf v}})$;
- $(P_{{\rm\bf w}}^j,0)$ and $(P_{{\rm\bf v}}^j,0)$ are on the same trajectory for each $j=1,2$.
![For $\lambda$ close to zero, $W^s({{\rm\bf v}})$ intersects the wall ${{\text{Out}}}({{\rm\bf w}})$ of the cylinder $W$ on a closed curve, given in local coordinates as the graph of a periodic function. Similarly, $W^u({{\rm\bf w}})$ meets ${{\text{In}}}({{\rm\bf v}})$ on a closed curve — this is the expected unfolding from the coincidence of the invariant manifolds at $\lambda=0$. []{data-label="elipse"}](elipse1){height="12cm"}
By (P\[P5.\]), for $\lambda \neq 0$, the manifolds $W^u({{\rm\bf w}})$ and $W^s({{\rm\bf v}})$ intersect transversely along the primary connections. For $\lambda$ close to zero, we are assuming that $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ intersects the wall ${{\text{Out}}}({{\rm\bf w}})$ of the cylinder $W$ on a closed curve as in Figure \[elipse\]. It corresponds to the expected unfolding from the coincidence of the manifolds $W^s({{\rm\bf v}})$ and $W^u({{\rm\bf w}})$ at $f_0$. Similarly, $W^u_{{{\text{loc}}}}({{\rm\bf w}})$ intersects the wall ${{\text{In}}}({{\rm\bf v}})$ of the cylinder $V$ on a closed curve. For small $\lambda>0$, these curves can be seen as graphs of smooth $2\pi$-periodic functions, for which we make the following conventions:
- $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{Out}}}({{\rm\bf w}})$ is the graph of $y={h_{{\rm\bf v}}}(x,\lambda)$, with ${h_{{\rm\bf v}}}(P_{{\rm\bf w}}^j,\lambda)=0$, $j=1,2$;
- $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ is the graph of $y={h_{{\rm\bf w}}}(x,\lambda)$, with ${h_{{\rm\bf w}}}(P_{{\rm\bf v}}^j,\lambda)=0$, $j=1,2$;
- ${h_{{\rm\bf v}}}(x,0)\equiv 0$ and ${h_{{\rm\bf w}}}(x,0)\equiv 0$
- for $\lambda>0$, we have ${h_{{\rm\bf v}}}^\prime(P_{{\rm\bf w}}^1,\lambda)>0$, hence $ {h_{{\rm\bf v}}}^\prime(P_{{\rm\bf w}}^2,\lambda)<0$ and ${h_{{\rm\bf w}}}^\prime(P_{{\rm\bf v}}^1,\lambda)<0,{h_{{\rm\bf w}}}^\prime(P_{{\rm\bf v}}^2,\lambda)>0$.
The two points $(P_{{\rm\bf w}}^1,0)$ and $(P_{{\rm\bf w}}^2,0)$ divide the closed curve $y={h_{{\rm\bf v}}}(x,\lambda)$ in two components, corresponding to different signs of the second coordinate. With the conventions above, we get ${h_{{\rm\bf v}}}(x,\lambda)>0$ for $x\in\left(P_{{\rm\bf w}}^1,P_{{\rm\bf w}}^2\right)$. Then the region $W^-$ in ${{\text{Out}}}({{\rm\bf w}})$ delimited by $W_{{{\text{loc}}}}^s({{\rm\bf v}})$ and $W_{{{\text{loc}}}}^u({{\rm\bf w}})$ between $P_{{\rm\bf w}}^1$ and $P_{{\rm\bf w}}^2$ gets mapped by $ \Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda$ into ${{\text{In}}}^-({{\rm\bf v}})$, while all other points in ${{\text{Out}}}^+({{\rm\bf w}})$ are mapped into ${{\text{In}}}^+({{\rm\bf v}})$. We denote by $W^+$ the latter set, of points in ${{\text{Out}}}({{\rm\bf w}})$ with $0<y<1$ and $y>{h_{{\rm\bf v}}}(x,\lambda)$ for $x\in\left(P_{{\rm\bf w}}^1,P_{{\rm\bf w}}^2\right)$. The maximum value of ${h_{{\rm\bf v}}}(x,\lambda)$ is attained at some point $$(x,y)= ({x_{{\rm\bf v}}}(\lambda),{M_{{\rm\bf v}}}(\lambda)) \qquad \text{with} \qquad P_{{\rm\bf w}}^1<{x_{{\rm\bf v}}}(\lambda)<P_{{\rm\bf w}}^2.$$
Finally, let ${M_{{\rm\bf w}}}(\lambda)$ be the maximum of ${h_{{\rm\bf w}}}(x,\lambda)$, attained at a point ${x_{{\rm\bf w}}}(\lambda)\in \left(P_{{\rm\bf v}}^2,P_{{\rm\bf v}}^1\right)$. In order to simplify the writing, the following analysis is focused on the two Bykov cycles whose connection $[{{\rm\bf v}}\to {{\rm\bf w}}]$ lies in the subspace defined by $ x_3\geq0$. The same results, with minimal adaptations, hold for cycles obtained from the other connection. With this notation, we have:
\[PropWuw\] Let $f_\lambda$ be a family of vector fields satisfying (P\[P1\])–(P\[P5.\]). For $\lambda\ne 0$ sufficiently small, the portion of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ that lies in ${{\text{In}}}^+({{\rm\bf v}})$ is mapped by $\eta$ into a helix in ${{\text{Out}}}({{\rm\bf w}})$ accumulating on $W^u_{{{\text{loc}}}}({{\rm\bf w}})$. If ${M_{{\rm\bf w}}}(\lambda)$ is the maximum height of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}^+({{\rm\bf v}})$, then the maximum height of the helix is ${M_{{\rm\bf w}}}(\lambda)^\delta$. For each $\lambda>0$ there is a fold point in the helix that, as $\lambda$ tends to zero, turns around the cylinder wall ${{\text{Out}}}({{\rm\bf w}})$ infinitely many times.
That $\eta$ maps $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}^+({{\rm\bf v}})$ into a helix, and the statement about its maximum height, follow directly by applying assertion (\[helixOut\]) of Lemma \[Structures\] to ${h_{{\rm\bf w}}}$. For the fold point in the helix, let $x_*(\lambda)$ be its first coordinate. From the expression of $\eta$ it follows that $$x_*(\lambda)=x_\lambda-K\ln {h_{{\rm\bf w}}}(x_\lambda,\lambda)$$ for some $x_\lambda\in(P_{{\rm\bf v}}^2, x_{{\rm\bf w}}(\lambda))$ with ${h_{{\rm\bf w}}}(x_\lambda, \lambda)\le {h_{{\rm\bf w}}}(x_{{\rm\bf w}}(\lambda),\lambda)=M_{{\rm\bf w}}(\lambda)$ and hence $$-K\ln {h_{{\rm\bf w}}}(x_\lambda,\lambda)\ge -K\ln M_{{\rm\bf w}}(\lambda).$$ Since $f_\lambda$ unfolds $f_0$, we have $\lim_{\lambda\to 0}M_{{\rm\bf w}}(\lambda)=0$, hence $\lim_{\lambda\to 0}-K\ln {h_{{\rm\bf w}}}(x_\lambda,\lambda)=\infty$ and therefore the fold point turns around the cylinder ${{\text{Out}}}({{\rm\bf w}})$ infinitely many times.
The Hypothesis (P\[P6\]) about chirality of the nodes is essential for Lemma \[Structures\] and Proposition \[PropWuw\]. If we had taken the rotation in $W$ with the opposite orientation to that in $V$, the rotations would cancel out and the curve $H(x)$ defined in (\[alphaPrime\]) would no longer be a helix, since the angular coordinate $H_\theta$ would not be monotonic.
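To visualise Proposition \[PropWuw\], one can assume, purely for illustration, an unfolding in which ${h_{{\rm\bf w}}}(x,\lambda)=\lambda\sin\bigl(\pi(x-a)/(b-a)\bigr)$, so that ${M_{{\rm\bf w}}}(\lambda)=\lambda$. For this toy model the fold condition $h=Kh^\prime$ does not depend on $\lambda$, and the angular coordinate of the fold point grows like $-K\ln{M_{{\rm\bf w}}}(\lambda)$, winding around ${{\text{Out}}}({{\rm\bf w}})$ as $\lambda\to 0$:

```python
# Toy computation for Proposition [PropWuw], under the hypothetical unfolding
# h_w(x, lambda) = lambda * sin(pi*(x - a)/(b - a)), so that M_w(lambda) = lambda.
# For this model the fold condition h = K*h' does not depend on lambda, and the
# angular coordinate of the fold point grows like -K*ln(M_w(lambda)).
import math

K = 2.25                              # same illustrative constant as before
a, b = 0.0, 2.0
u_star = math.atan(K * math.pi / (b - a))     # fold condition: tan(u) = K*pi/(b-a)
x_fold = a + (b - a) * u_star / math.pi       # lambda-independent in this model

for k in range(1, 7):
    lam = 10.0 ** (-k)
    theta = x_fold - K * math.log(lam * math.sin(u_star))
    turns = theta / (2 * math.pi)
    print(f"lambda = {lam:.0e}: fold angle = {theta:7.2f} (~{turns:.1f} turns around Out(w))")
```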
Heteroclinic Tangencies {#sec tangency}
=======================
Using the notation and results of Section \[localdyn\] we can now discuss the tangencies of the invariant manifolds and prove Theorems \[teorema tangency\] and \[teorema multitangency\]. As remarked in Section \[subsecInvariantManifolds\], since $f_\lambda$ unfolds $f_0$, then the maximum heights, ${M_{{\rm\bf v}}}(\lambda)$ of $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{Out}}}({{\rm\bf w}})$, and ${M_{{\rm\bf w}}}(\lambda)$ of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$, satisfy: $$\lim_{\lambda\to 0}{M_{{\rm\bf v}}}(\lambda)=\lim_{\lambda\to 0}{M_{{\rm\bf w}}}(\lambda)=0.$$ We make the additional assumption that $\left({M_{{\rm\bf w}}}(\lambda)\right)^\delta$ tends to zero faster than ${M_{{\rm\bf v}}}(\lambda)$. This condition defines the open set ${\mathcal C}$ of unfoldings $f_\lambda$ that we need for the statement of Theorem \[teorema tangency\].
The subset ${\mathcal C}$ of Theorem \[teorema tangency\] is the set of families $f_\lambda$ of vector fields satisfying (P\[P1\])–(P\[P5.\]) for which there is a value $\lambda_*>0$ such that for $0<\lambda<\lambda_*$ we have $\left({M_{{\rm\bf w}}}(\lambda)\right)^\delta<{M_{{\rm\bf v}}}(\lambda)$. Then ${\mathcal C}$ is an open subset, in the $C^3$ topology, of the set of families $f_\lambda$ of vector fields satisfying (P\[P1\])–(P\[P5.\]).
![Left: when $\lambda$ decreases, the fold point of the helix $\alpha^\lambda(x)\in {{\text{Out}}}({{\rm\bf w}})$ moves to the right and for $\lambda=\lambda_i$ it is tangent to $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ creating a 1-pulse tangency. Right: $\Psi_{{{\rm\bf w}}\rightarrow{{\rm\bf v}}}$ maps the helix $\alpha^\lambda(x)$ close to $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$, creating several curves that satisfy the hypotheses of Lemma \[Structures\]. These curves are again mapped by $\eta$ into helices in ${{\text{Out}}}({{\rm\bf w}})$ creating 2-pulse heteroclinic tangencies.[]{data-label="homoclinic2"}](homoclinic3){width="14cm"}
Proof of Theorem \[teorema tangency\]
-------------------------------------
Suppose $f_\lambda\in{\mathcal C}$. By Proposition \[PropWuw\], the curve $$\alpha^\lambda(x)=\eta\left(x,{h_{{\rm\bf w}}}(x,\lambda),\lambda\right)=\left(\alpha_1^\lambda(x),\alpha_2^\lambda(x)\right), \qquad x\in\left(P_{{\rm\bf v}}^2,P_{{\rm\bf v}}^1\right)\pmod{2\pi}$$ is a helix in ${{\text{Out}}}^+({{\rm\bf w}})$ and has at least one fold point at $x=x_*(\lambda)$. The second coordinate of the helix satisfies $0<\alpha_2^\lambda(x)<{M_{{\rm\bf w}}}(\lambda)^\delta$ for all $x\in\left(P_{{\rm\bf v}}^2,P_{{\rm\bf v}}^1\right)\pmod{2\pi}$ and all positive $\lambda<\lambda_*$. Since $f_\lambda\in{\mathcal C}$, then $\alpha_2^\lambda(x)<{M_{{\rm\bf v}}}(\lambda)$ for all $x$ and all positive $\lambda<\lambda_*$.
Moreover, since the fold point $\alpha^\lambda(x_*(\lambda))$ turns around ${{\text{Out}}}({{\rm\bf w}})$ infinitely many times as $\lambda$ goes to zero, given any $\lambda_0<\lambda_*$ there exists a positive value $\lambda_R<\lambda_0$ such that $\alpha^{\lambda_R}(x_*(\lambda_R))$ lies in $W^+$, the region in ${{\text{Out}}}({{\rm\bf w}})$ above the graph of ${h_{{\rm\bf v}}}$ that gets mapped into the upper part of ${{\text{In}}}({{\rm\bf v}})$. Since the second coordinate of the fold point is less than the maximum of ${h_{{\rm\bf v}}}$, there is a positive value $\lambda_L<\lambda_R$ such that $\alpha^{\lambda_L}(x_*(\lambda_L))$ lies in $W^-$, the region in ${{\text{Out}}}({{\rm\bf w}})$ that gets mapped into the lower part of ${{\text{In}}}({{\rm\bf v}})$, whose boundary contains the graph of ${h_{{\rm\bf v}}}$. Therefore, there is $\lambda_1\in\left( \lambda_L,\lambda_R\right)$ for which the helix $\alpha^{\lambda_1}$ is tangent to the graph of ${h_{{\rm\bf v}}}(x,\lambda_1)$ at some point near its fold point $\alpha^{\lambda_1}(x_*(\lambda_1))$.
We have thus shown that given $\lambda_0>0$, there is some positive $\lambda_1<\lambda_0$ for which the image of the curve $W_{{{\text{loc}}}}^u({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ by $\eta$ is tangent to $W_{{{\text{loc}}}}^s({{\rm\bf v}})\cap {{\text{Out}}}({{\rm\bf w}})$, creating a 1-pulse heteroclinic tangency. Two transverse 1-pulse heteroclinic connections exist for $\lambda>\lambda_1$ close to $\lambda_1$. These connections come together at the tangency when $\lambda=\lambda_1$ and disappear for $\lambda<\lambda_1$.
By Proposition \[PropWuw\], as $\lambda$ goes to zero, the fold point $\alpha^\lambda(x_*(\lambda))$ turns around the cylinder ${{\text{Out}}}({{\rm\bf w}})$ infinitely many times, thus going in and out of $W^-$. Each time it crosses the boundary, a new tangency occurs. Repeating the argument above yields the sequence $\lambda_i$ of parameter values for which there is a 1-pulse heteroclinic tangency, and this completes the proof of the main statement of Theorem \[teorema tangency\].
On the other hand, as $\lambda$ goes to zero, the maximum height of the helix, $\left({M_{{\rm\bf w}}}(\lambda)\right)^\delta$ also tends to zero. This implies that the second coordinate of the points $\alpha^\lambda(x_*(\lambda_i))\in {{\text{Out}}}({{\rm\bf w}})$ where there is a 1-pulse heteroclinic tangency tends to zero as $i$ goes to infinity. This shows that the tangency approaches the two-dimensional connection $[{{\rm\bf w}}\to{{\rm\bf v}}]$ that exists for $\lambda=0$.
A crucial fact in the proof of Theorem \[teorema tangency\] is that $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ is the graph of a function with a single maximum, and that this maximum goes to 0 as $\lambda$ goes to 0. This follows because the family $f_\lambda$ unfolds the more symmetric vector field $f_0$. We could make the assumption on the invariant manifold directly, but it is the context of symmetry-breaking that makes it natural.
The construction in the proof of Theorem \[teorema tangency\] may be extended to obtain multi-pulse tangencies, as follows:
Proof of Theorem \[teorema multitangency\]
------------------------------------------
Look at $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$, the graph of ${h_{{\rm\bf w}}}(x,\lambda)$. Since ${h_{{\rm\bf w}}}' (P_{{\rm\bf v}}^1,\lambda)<0$, there is an interval $[\tilde{x}, P_{{\rm\bf v}}^1]\subset [{x_{{\rm\bf w}}}(\lambda), P_{{\rm\bf v}}^1]$ where the map ${h_{{\rm\bf w}}}$ is monotonically decreasing. Note that the chirality hypothesis (P\[P6\]) is essential here. Therefore, we may define infinitely many intervals where $\alpha^\lambda(x)=\eta\left(x,{h_{{\rm\bf w}}}(x,\lambda)\right)$ lies in $W^+$. More precisely, we have two sequences, $(a_j)$ and $(b_j)$ in $[\tilde{x}, P_{{\rm\bf v}}^1]$, such that:
- $a_j<b_j<a_{j+1}$ with $\lim_{j\to\infty} a_j=P_{{\rm\bf v}}^1$;
- $\alpha^\lambda(a_j)$ and $\alpha^\lambda(b_j)\in W^s_{loc}({{\rm\bf v}})\cap {{\text{Out}}}^+({{\rm\bf w}})$;
- if $x\in(a_j,b_j)$ then $\alpha^\lambda(x)\in W^+$;
- the curves $\alpha^\lambda\left([a_j,b_j]\right)$ accumulate uniformly on $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{Out}}}({{\rm\bf w}})$ as $j\to\infty$.
Hence, each one of the curves $\alpha^\lambda\left([a_j,b_j]\right)$ is mapped by $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}$ into the graph of a function $h$ in ${{\text{In}}}({{\rm\bf v}})$ satisfying the conditions of Lemma \[Structures\], and hence each one of these curves is mapped by $\eta$ into a helix $\xi_j(x)$, $x\in(a_j,b_j)$.
![Curves in the proof of Theorem \[teorema multitangency\]: for $\lambda=\lambda_i$ the curve $\alpha^\lambda(x)=\eta(W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}^+({{\rm\bf v}}))$ (solid black curve) is tangent to $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ (gray curve) in ${{\text{Out}}}({{\rm\bf w}})$. The curves $\xi_j(x)$ (dotted) accumulate on $\alpha^\lambda(x)$. Very small changes in $\lambda$ make them tangent to $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ creating 2-pulse heteroclinic tangencies.[]{data-label="acumula"}](acumula)
Let $\lambda_i$ be a parameter value for which $\dot{x}=f_{\lambda_i}(x)$ has a 1-pulse heteroclinic tangency as stated in Theorem \[teorema tangency\]. As $j \rightarrow + \infty$, the helices $\xi_j(x)$ accumulate on the helix of Theorem \[teorema tangency\] as drawn in Figure \[acumula\], hence the fold point of $\xi_j(x)$ is arbitrarily close to the fold point of $\eta(x,{h_{{\rm\bf w}}}(x,\lambda))$. The arguments in the proof of Theorem \[teorema tangency\] show that a small change in the parameter $\lambda$ makes the new helix tangent to $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ as in Figure \[homoclinic2\]. For each $j$ this creates a 2-pulse heteroclinic tangency at $\lambda=\lambda_{ij}$. Since the helices $\xi_j(x)$ accumulate on $\alpha^{\lambda_i}(x)$, it follows that $\lim_{j \to \infty} \lambda_{ij}=\lambda_i$.
Finally, the argument may be applied recursively to show that each $n$-pulse heteroclinic tangency is accumulated by $(n+1)$-pulse heteroclinic tangencies for nearby parameter values.
If $\lambda^\star \in {{\rm\bf R}}$ is such that the flow of $\dot{x}=f_{\lambda^\star}(x)$ has a heteroclinic tangency, then as $\lambda$ varies near $\lambda^\star$ we find the creation and the destruction of horseshoes, accompanied by Newhouse phenomena [@Gonchenko2007; @YA]. In terms of numerics, we know very little about the geometry of these attractors; we also do not know the size and the shape of their basins of attraction. Basin boundary metamorphoses and explosions of chaotic saddles such as those described in [@RAOY] are expected.
Bifurcating dynamics {#bif}
====================
We discuss here the geometric constructions that determine the global dynamics near a Bykov cycle, in order to prove Theorem \[newhouse\]. For this we need some preliminary definitions and more information on the geometry of the transition maps. First, we adapt the definition of horizontal strip in [@GH] to serve our purposes: for $\tau>0$ sufficiently small, in the local coordinates of the walls of the cylinders $V$ and $W$, consider the rectangles: $${{\mathcal S}}_{{\rm\bf v}}= [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau] \times [0, 1]\subset {{\text{In}}}({{\rm\bf v}})
\qquad \text{and} \qquad {{\mathcal S}}_{{\rm\bf w}}= [P_{{\rm\bf w}}^2-\tau, P_{{\rm\bf w}}^1+\tau] \times [0, 1] \subset {{\text{Out}}}({{\rm\bf w}})$$ with the conventions $-\pi<P_{{\rm\bf v}}^2-\tau< P_{{\rm\bf v}}^1+\tau\le\pi$ and $-\pi<P_{{\rm\bf w}}^2-\tau<P_{{\rm\bf w}}^1+\tau\le \pi$.
A *horizontal strip* in ${{\mathcal S}}_{{\rm\bf v}}$ is a subset of ${{\text{In}}}^+({{\rm\bf v}})$ of the form $${{\mathcal H}}=\left\{(x,y)\in In^+({{\rm\bf v}}):\quad x \in [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau]\qquad \text{and} \quad y \in [u_1(x), u_2(x)]\right\},$$ where $u_1, u_2: [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau] \rightarrow (0,1]$ are smooth maps such that $u_1(x)<u_2(x)$ for all $x \in [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau]$. The graphs of the $u_j$ are called the *horizontal boundaries* of ${{\mathcal H}}$ and the segments $\left(P_{{\rm\bf v}}^2-\tau,y\right)$, $u_1(P_{{\rm\bf v}}^2-\tau)\le y \le u_2(P_{{\rm\bf v}}^2-\tau)$ and $\left(P_{{\rm\bf v}}^1+\tau,y\right)$, with $u_1(P_{{\rm\bf v}}^1+\tau)\le y \le u_2(P_{{\rm\bf v}}^1+\tau)$ are its *vertical boundaries*. Horizontal and vertical boundaries intersect at four *vertices*. The *maximum height* and the *minimum height* of ${{\mathcal H}}$ are, respectively $$\max_{x \in [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau]}u_2(x)
\qquad\qquad
\min_{x \in [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau]}u_1(x).$$ Analogously we may define a *horizontal strip* in ${{\mathcal S}}_{{\rm\bf w}}\subset {{\text{Out}}}^+({{\rm\bf w}})$.
A *horseshoe strip* in ${{\mathcal S}}_{{\rm\bf v}}$ is a subset of ${{\text{In}}}^+({{\rm\bf v}})$ of the form $$\left\{ (x,y)\in {{\text{In}}}^+({{\rm\bf v}}):\quad x\in[a_2,b_2], \quad y \in [u_1(x), u_2(x)]\right\},$$ where
- $[a_1, b_1] \subset [a_2, b_2] \subset [P_{{\rm\bf v}}^2-\tau, P_{{\rm\bf v}}^1+\tau]$;
- $u_1(a_1)=u_1(b_1)=0$;
- $u_2(a_2)=u_2(b_2)=0$.
The boundary of ${{\mathcal H}}$ consists of the graph of $u_2(x)$, $x\in[a_2,b_2]$, the graph of $u_1(x)$, $x\in[a_1,b_1]$, together with the two pieces of $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$ with $x\in[a_2,a_1]$ and $x\in[b_1,b_2]$.
Horseshoe strips in ${{\mathcal S}}_{{\rm\bf v}}$
-------------------------------------------------
We are interested in the dynamics of points whose trajectories start in ${{\mathcal S}}_{{\rm\bf v}}$ and return to ${{\text{In}}}^+({{\rm\bf v}})$ arriving at ${{\mathcal S}}_{{\rm\bf v}}$.
![For $\lambda>0$ sufficiently small, the set $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$ has infinitely many connected components ([gray region]{}), all of which, except maybe for the top ones, define horizontal strips in ${{\mathcal S}}_{{\rm\bf v}}$ accumulating on $W^s_{loc}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$ ([gray]{} curve).[]{data-label="strips"}](Strips)
\[lemaHorizontalStrips\] The set $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$ has infinitely many connected components all of which are horizontal strips in ${{\mathcal S}}_{{\rm\bf v}}$ accumulating on $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$, except maybe for a finite number. The horizontal boundaries of these strips are graphs of monotonically increasing functions of $x$.
The boundary of ${{\mathcal S}}_{{\rm\bf w}}$ consists of the following:
1. \[baixo\] a piece of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{Out}}}({{\rm\bf w}})$ parametrised by $y=0$, $x\in[P^2_{{\rm\bf w}}-\tau, P^1_{{\rm\bf w}}+\tau]$, where $\eta^{-1}$ is not defined;
2. \[cima\] the horizontal segment $(x,1)$ with $x\in[P_{{\rm\bf w}}^2-\tau, P_{{\rm\bf w}}^1+\tau]$;
3. \[lados\] two vertical segments $\left(P^2_{{\rm\bf w}}-\tau, y\right)$ and $\left(P^1_{{\rm\bf w}}+\tau,y\right)$ with $y\in\left(0,1\right)$.
Together, the components (\[cima\]) and (\[lados\]) form a continuous curve that, by the arguments of Lemma \[Structures\], is mapped by $\eta^{-1}$ into a helix on ${{\text{In}}}^+({{\rm\bf v}})$, accumulating on $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$. As the helix approaches $W^s_{{{\text{loc}}}}({{\rm\bf v}})$, it crosses the vertical boundaries of ${{\mathcal S}}_{{\rm\bf v}}$ infinitely many times. The interior of ${{\mathcal S}}_{{\rm\bf w}}$ is mapped by $\eta^{-1}$ into the space between consecutive crossings, intersecting ${{\mathcal S}}_{{\rm\bf v}}$ in horizontal strips, as shown in Figure \[strips\]. From the expression of $\eta^{-1}$ it also follows that the vertical boundaries of ${{\mathcal S}}_{{\rm\bf w}}$ are mapped into graphs of monotonically increasing functions of $x$.
Here, as in the next two Lemmas, we are using the form of the map $\eta$ through Lemma \[Structures\], hence the result depends strongly on the chirality hypothesis (P\[P6\]).
Denote by ${{\mathcal H}}_n$ the strip that attains its maximum height $h_n$ at the vertex $\left(P^1_{{\rm\bf v}}+\tau,h_n \right)$, where $$\label{maxHn}
h_n={{\rm e}}^{(P^1_{{\rm\bf v}}-P^2_{{\rm\bf w}}+2\tau-2n\pi)/K} .$$ Since $\lim_{n\to\infty}h_n=0$, the strips ${{\mathcal H}}_n$ accumulate on $W^{s}_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$. The minimum height of ${{\mathcal H}}_n$ is given by $$\label{minHn}
m_n={{\rm e}}^{(P^2_{{\rm\bf v}}-P^1_{{\rm\bf w}}-2\tau-2n\pi)/K}$$ and is attained at the vertex $\left(P^2_{{\rm\bf v}}-\tau,m_n\right)$. Moreover, $n<m$ implies that ${{\mathcal H}}_n$ lies above ${{\mathcal H}}_m$.
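For concreteness, the heights (\[maxHn\]) and (\[minHn\]) can be evaluated for hypothetical angular positions $P^j_{{\rm\bf v}}$, $P^j_{{\rm\bf w}}$ and width $\tau$; the sketch below checks, for these values, that $h_n$ and $m_n$ decay geometrically with ratio ${{\rm e}}^{-2\pi/K}$ and that $h_{n+1}<m_n<h_n$, consistent with the ordering of the strips stated above.

```python
# Evaluation of the strip heights (maxHn) and (minHn) for hypothetical angular
# positions P_v^j, P_w^j and width tau (in the chart used in this section).
# The heights decay geometrically with ratio exp(-2*pi/K), and the strips are
# ordered: h_(n+1) < m_n < h_n, so H_(n+1) lies below H_n.
import math

K, tau = 2.25, 0.1
P_v1, P_v2 = 1.0, -1.0
P_w1, P_w2 = 1.2, -1.2

def h_n(n):                           # maximum height of H_n, eq. (maxHn)
    return math.exp((P_v1 - P_w2 + 2 * tau - 2 * n * math.pi) / K)

def m_n(n):                           # minimum height of H_n, eq. (minHn)
    return math.exp((P_v2 - P_w1 - 2 * tau - 2 * n * math.pi) / K)

for n in range(1, 5):
    print(f"n = {n}: m_n = {m_n(n):.3e}, h_n = {h_n(n):.3e}, "
          f"h_(n+1)/h_n = {h_n(n + 1) / h_n(n):.3e}")
print("geometric ratio exp(-2*pi/K) =", math.exp(-2 * math.pi / K))
assert all(h_n(n + 1) < m_n(n) < h_n(n) for n in range(1, 20))
```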
![Horseshoe strips: for $\lambda>0$ sufficiently small, the set $R_\lambda(\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}) \subset {{\text{In}}}^+({{\rm\bf v}})$ defines a countably infinite number of horseshoe strips in ${{\mathcal S}}_{{\rm\bf v}}$, that accumulate on $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$.[]{data-label="transition1"}](new12){height="7.6cm"}
\[lemaEtaHorizontalStrips\] Let ${{\mathcal H}}_n$ be one of the horizontal strips in $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$. Then $\eta({{\mathcal H}}_n)$ is a horizontal strip in ${{\mathcal S}}_{{\rm\bf w}}$. The strips $\eta({{\mathcal H}}_n)$ accumulate on $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{Out}}}({{\rm\bf w}})$ as $n\to\infty$ and the maximum height of $\eta({{\mathcal H}}_n)$ is $h_n^\delta$, where $h_n$ is the maximum height of ${{\mathcal H}}_n$.
The boundary of ${{\mathcal S}}_{{\rm\bf v}}$ consists of a piece of $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ plus a curve formed by three segments, two vertical and one horizontal. From the arguments of Lemma \[Structures\], it follows that the part of the boundary of ${{\mathcal S}}_{{\rm\bf v}}$ not contained in $W^s_{{{\text{loc}}}}({{\rm\bf v}})$ is mapped by $\eta$ into a helix.

Consider now the effect of $\eta$ on the boundary of ${{\mathcal H}}_n$. Each horizontal boundary gets mapped into a piece of one of the vertical boundaries of ${{\mathcal S}}_{{\rm\bf w}}$. The vertical boundaries of ${{\mathcal H}}_n$ are contained in those of ${{\mathcal S}}_{{\rm\bf v}}$ and hence are mapped into two pieces of a helix, which form the horizontal boundaries of the strip $\eta({{\mathcal H}}_n)$ and may be written as graphs of decreasing functions of $x$.

As shown after Lemma \[lemaHorizontalStrips\], the maximum height of ${{\mathcal H}}_n$ tends to zero, hence the strips $\eta({{\mathcal H}}_n)$ have the same property. The maximum height of $\eta({{\mathcal H}}_n)$ is $h_n^\delta$, attained at the point $\left(P^2_{{\rm\bf w}}-\tau,h_n^\delta\right)$.
\[lemaHorseshoeStrips\] For each $\lambda>0$ sufficiently small, there exists $n_0(\lambda)$ such that for all $n\ge n_0$ the image $R_\lambda({{\mathcal H}}_n)$ of the horizontal strips in $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$ intersects ${{\mathcal S}}_{{\rm\bf v}}$ in a horseshoe strip. The strips $R_\lambda({{\mathcal H}}_n)$ accumulate on $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ and, when $n\to\infty$, their maximum height tends to ${M_{{\rm\bf w}}}(\lambda)$, the maximum height of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$.
The curve $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{Out}}}({{\rm\bf w}})$ is the graph of the function ${h_{{\rm\bf v}}}(x,\lambda)$ that is positive for $x$ outside the interval $[P^2_{{\rm\bf w}},P^1_{{\rm\bf w}}]\pmod{2\pi}$. In particular, ${h_{{\rm\bf v}}}(P^2_{{\rm\bf w}}-\tau,\lambda)>0$ and ${h_{{\rm\bf v}}}(P^1_{{\rm\bf w}}+\tau,\lambda)>0$ for small $\tau>0$. Therefore there is a piece of the vertical boundary of ${{\mathcal S}}_{{\rm\bf w}}$ that lies below $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{Out}}}({{\rm\bf w}})$, consisting of the two segments $(P^2_{{\rm\bf w}}-\tau,y)$ with $0<y<{h_{{\rm\bf v}}}(P^2_{{\rm\bf w}}-\tau,\lambda)$ and $(P^1_{{\rm\bf w}}+\tau,y)$ with $0<y<{h_{{\rm\bf v}}}(P^1_{{\rm\bf w}}+\tau,\lambda)$. For small $\lambda>0$, these segments are mapped by $\Psi_{{{\rm\bf w}}\to{{\rm\bf v}}}^\lambda$ inside ${{\text{In}}}^-({{\rm\bf v}})$.

Let $n_0$ be such that the maximum height of $\eta({{\mathcal H}}_{n_0})$ is less than the minimum of $${h_{{\rm\bf v}}}(P^2_{{\rm\bf w}}-\tau,\lambda)>0 \qquad \text{and} \qquad {h_{{\rm\bf v}}}(P^1_{{\rm\bf w}}+\tau,\lambda)>0.$$ Then for any $n\ge n_0$ the vertical sides of $\eta({{\mathcal H}}_{n})$ are mapped by $\Psi_{{{\rm\bf w}}\to{{\rm\bf v}}}^\lambda$ inside ${{\text{In}}}^-({{\rm\bf v}})$. The horizontal boundaries of $\eta({{\mathcal H}}_n)$ go across ${{\mathcal S}}_{{\rm\bf w}}$, so writing them as graphs of $u_1(x)<u_2(x)$, there is an interval where the second coordinate of $\Psi_{{{\rm\bf w}}\to{{\rm\bf v}}}^\lambda(x,u_j(x))$ is greater than ${M_{{\rm\bf w}}}(\lambda)>0$, the maximum height of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}^+({{\rm\bf v}})$. Since the second coordinate of $\Psi_{{{\rm\bf w}}\to{{\rm\bf v}}}^\lambda(x,u_j(x))$ changes sign twice, it equals zero at two points, hence $R_\lambda({{\mathcal H}}_n)\cap {{\mathcal S}}_{{\rm\bf v}}$ is a horseshoe strip.
We have shown that the maximum height of $\eta({{\mathcal H}}_n)$ tends to zero as $n\to\infty$, hence the maximum height of $R_\lambda({{\mathcal H}}_n)=\Psi_{{{\rm\bf w}}\to{{\rm\bf v}}}^\lambda(\eta({{\mathcal H}}_n))$ tends to ${M_{{\rm\bf w}}}(\lambda)$.
Regular Intersections of Strips
-------------------------------
We now discuss the global dynamics near the Bykov cycle. The structure of the non-wandering set near the network depends on the geometric properties of the intersection of ${{\mathcal H}}_n$ and $R_\lambda({{\mathcal H}}_n)$.
Let $A$ be a horseshoe strip and $B$ be a horizontal strip in ${{\mathcal S}}_{{\rm\bf v}}$. We say that $A$ and $B$ *intersect regularly* if $A\cap B \neq \emptyset$ and each one of the horizontal boundaries of $A$ goes across each one of the horizontal boundaries of $B$. Intersections that are neither empty nor regular will be called *irregular*.
If the horseshoe strip $A$ and the horizontal strip $B$ intersect regularly, then $A\cap B$ has at least two connected components, see Figure \[transition1\]. In this and the next subsection, we will find that the horizontal strips ${{\mathcal H}}_n$ across ${{\mathcal S}}_{{\rm\bf v}}$ may intersect $R_\lambda({{\mathcal H}}_m)$ in three ways (empty, regular or irregular), but there is an ordering for the type of intersection, as shown in Figure \[new2\].
\[Novo2\] For any given fixed $\lambda>0$ sufficiently small, there exists $N(\lambda) \in {{\rm\bf N}}$ such that for all $m,n>N(\lambda)$, the horseshoe strips $R_\lambda({{\mathcal H}}_m)$ in ${{\mathcal S}}_{{\rm\bf v}}$ intersect each one of the horizontal strips ${{\mathcal H}}_n$ regularly.
From Lemma \[lemaHorseshoeStrips\] we obtain $n_0(\lambda)$ such that all $R_\lambda({{\mathcal H}}_m)$ with $m\ge n_0(\lambda)$ are horseshoe strips and their lower horizontal boundary has maximum height bigger than the maximum height ${M_{{\rm\bf w}}}(\lambda)$ of $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$.
On the other hand, since the strips ${{\mathcal H}}_n$ accumulate uniformly on $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$, with their maximum height $h_n$ tending to zero, then there exists $n_1(\lambda)$ such that $h_n<{M_{{\rm\bf w}}}(\lambda)$ for all $n\ge n_1(\lambda)$. Note that the $h_n$ do not depend on $\lambda$. Therefore, for $m,n>N(\lambda)=\max\{n_0(\lambda),n_1(\lambda)\}$ both horizontal boundaries of $R_\lambda({{\mathcal H}}_m)$ go across the two horizontal boundaries of ${{\mathcal H}}_n$.
The constructions of this section also hold for the backwards return map $R^{-1}$ with analogues to Lemmas \[lemaHorizontalStrips\], \[lemaEtaHorizontalStrips\], \[lemaHorseshoeStrips\] and \[Novo2\].
Generically, for $n>N(\lambda)$ each horizontal strip ${{\mathcal H}}_n$ intersects $R_\lambda({{\mathcal H}}_n)$ in two connected components. Thus the dynamics of points whose trajectories always return to ${{\text{In}}}({{\rm\bf v}})$ in ${{\mathcal H}}_n$ may be coded by a full shift on two symbols that describe which component is visited by the trajectory on each return to ${{\mathcal H}}_n$. Similarly, trajectories that return to ${{\mathcal S}}_{{\rm\bf v}}$ inside ${{\mathcal H}}_n\cup\cdots\cup{{\mathcal H}}_{n+k}$ may be coded by a full shift on $2k$ symbols. As $k\to\infty$, the strips ${{\mathcal H}}_{n+k}$ approach $W^s_{{{\text{loc}}}}({{\rm\bf v}})\cap {{\text{In}}}({{\rm\bf v}})$ and the number of symbols tends to infinity. We have recovered the horseshoe dynamics described in assertion (\[item5\]) of Theorem \[teorema T-point switching\].
The regular intersection of Lemma \[Novo2\] implies the existence of an $R_\lambda$-invariant subset in $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$, the Cantor set of initial conditions: $$\Lambda=\bigcap_{j\in{{\rm\bf Z}}} \bigcup_{m,n\ge N(\lambda)} \left({R_\lambda}^j({{\mathcal H}}_m)\cap{{\mathcal H}}_n\right),$$ where the return map to $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap {{\mathcal S}}_{{\rm\bf v}}$ is well defined in forward and backward time, for arbitrarily large times. We have shown here that the map $R_\lambda$ restricted to this set is semi-conjugate to a full shift over a countable alphabet. Results of [@ACL; @NONLINEARITY; @LR] show that the first return map is hyperbolic in each horizontal strip, implying the full conjugacy to a shift. The time of return of points in ${{\mathcal H}}_j$ tends to $+\infty$ as $j \to +\infty$. The set $\Lambda$ depends strongly on the parameter $\lambda$; in the next subsection we discuss its bifurcations as $\lambda$ decreases to zero.
Irregular intersections of strips
---------------------------------
The horizontal strips ${{\mathcal H}}_n$ that comprise $\eta^{-1}({{\mathcal S}}_{{\rm\bf w}})\cap{{\mathcal S}}_{{\rm\bf v}}$ do not depend on the bifurcation parameter $\lambda$, as shown in Lemmas \[lemaHorizontalStrips\] and \[lemaEtaHorizontalStrips\]. This is in contrast with the strong dependence on $\lambda$ shown by the first return of these points to ${{\mathcal S}}_{{\rm\bf v}}$ at the horseshoe strips $R_\lambda({{\mathcal H}}_n)$. In particular, the values of $n_0(\lambda)$ (Lemma \[lemaHorseshoeStrips\]) and $N(\lambda)$ (Lemma \[Novo2\]) vary with the choice of $\lambda$. For a small fixed $\lambda>0$ and for $m,n\ge N(\lambda)$ we have shown that ${{\mathcal H}}_n$ and $R_\lambda({{\mathcal H}}_m)$ intersect regularly.
The next result describes the bifurcations of these sets when $\lambda$ decreases. These global bifurcations have been described by Palis and Takens in [@PT] in a different context, where the horseshoe strips are translated down as a parameter varies. In our case, when $\lambda$ goes to zero the horseshoe strips are flattened into the common invariant two-dimensional manifolds of ${{\rm\bf v}}$ and ${{\rm\bf w}}$.
\[regular\_intersections\] Given $\lambda_3>0$ sufficiently small, there exist $\lambda_1< \lambda_2< \lambda_3 \in {{\rm\bf R}}^+$, a horizontal strip ${{\mathcal H}}_a $ across ${{\mathcal S}}_{{\rm\bf v}}\subset {{\text{In}}}^+({{\rm\bf v}})$ and $b_0>a$ such that for any $b>b_0$ the horizontal strips ${{\mathcal H}}_a $ and ${{\mathcal H}}_b$ satisfy:
1. \[reg1\] for $\lambda=\lambda_3$ the sets ${{\mathcal H}}_i$ and $R_{\lambda_3}({{\mathcal H}}_j)$ intersect regularly for $i,j\in \{a,b\}$;
2. \[reg2\] for $\lambda=\lambda_2$ the intersection ${{\mathcal H}}_a \cap R_{\lambda_2}({{\mathcal H}}_a )$ is irregular;
3. \[reg4\] for $\lambda=\lambda_1$ the sets ${{\mathcal H}}_a$ and $R_{\lambda_1}({{\mathcal H}}_a )$ do not intersect at all;
4. \[reg5\] for $\lambda=\lambda_1$ and $\lambda=\lambda_2$ the set $R_{\lambda}({{\mathcal H}}_b)$ intersects both ${{\mathcal H}}_b$ and ${{\mathcal H}}_a$ regularly.
![Strips in Proposition \[regular\_intersections\]: (a) $\lambda=\lambda_3$; (b) $\lambda=\lambda_2$; (c) $\lambda=\lambda_1$. The position of the horizontal strips ${{\mathcal H}}_a$ and ${{\mathcal H}}_b$ does not depend on $\lambda$. The maximum height of the horseshoe strips $R_\lambda({{\mathcal H}}_a)$ and $R_\lambda({{\mathcal H}}_b)$ decreases when $\lambda$ decreases, and the suspended horseshoe in $R_\lambda({{\mathcal H}}_a)\cap {{\mathcal H}}_a$ is destroyed. []{data-label="new2"}](bifurcations2){width="14cm"}
As we have remarked before, the horizontal strips ${{\mathcal H}}_n$ do not depend on the parameter $\lambda$. In particular, they are well defined for $\lambda=0$. Their image by the return map $R_0$ is no longer a horseshoe strip; it is a horizontal strip across ${{\mathcal S}}_{{\rm\bf v}}$. Since for $\lambda=0$ the map $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^0$ is the identity (see \[subsecInvariantManifolds\]) and the network is asymptotically stable, it follows that the maximum height of $R_0({{\mathcal H}}_n)$ is no more than that of $\eta({{\mathcal H}}_n)$.
From Lemma \[lemaEtaHorizontalStrips\] and the expressions for the maximum height, $h_n$, and the minimum height, $m_n$, of ${{\mathcal H}}_n$, we get that the maximum height $h_n^\delta$ of $\eta({{\mathcal H}}_n)$ is less than $m_n$ if $$L_n=\delta(P^1_{{\rm\bf v}}-P^2_{{\rm\bf w}}) + 2\tau(\delta-1)-2n\pi (\delta-1) < P^2_{{\rm\bf v}}-P^1_{{\rm\bf w}}.$$ Since $\lim_{n\to\infty} L_n=-\infty$, there is $n_2$ such that $R_0({{\mathcal H}}_n)\cap {{\mathcal H}}_n=\emptyset$, for every $n>n_2$. Therefore, since $R_\lambda$ is continuous in $\lambda$, for each $n>n_2$ there is $\lambda_\star(n)>0$ such that the maximum height of $R_{\lambda_\star(n)}({{\mathcal H}}_n)$ is less than that of ${{\mathcal H}}_n$.
Let $a=\max\{n_2,N(\lambda_3)\}$ for $N(\lambda_3)$ from Lemma \[Novo2\]. Then assertion \[reg1\] is true for this ${{\mathcal H}}_a$ and for any ${{\mathcal H}}_n$ with $n>a$, by Lemma \[Novo2\]. We obtain assertion \[reg4\] by taking $\lambda_1=\lambda_\star(a)$. Assertion \[reg2\] follows from the continuous dependence of $R_\lambda$ on $\lambda$. Assertion \[reg5\] holds for any ${{\mathcal H}}_b$ with $b>N(\lambda_1)=b_0$ by Lemma \[Novo2\].
Proof of Theorem \[newhouse\]
-----------------------------
In particular, Proposition \[regular\_intersections\] implies that there exists $ d \in[\lambda_2, \lambda_3)$ such that at $\lambda= d $, the lower horizontal boundary of $R_\lambda({{\mathcal H}}_a)$ is tangent to the upper horizontal boundary of ${{\mathcal H}}_a$. Analogously, there exists $ c \in(\lambda_1, \lambda_2]$ such that at $\lambda= c $, the upper horizontal boundary of $R_\lambda({{\mathcal H}}_a)$ is tangent to the lower horizontal boundary of ${{\mathcal H}}_a$. There are infinitely many values of $\lambda$ in $[ c , d ]$ for which the map $R_\lambda$ has a homoclinic tangency associated to a periodic point; see [@PT; @YA].
The rigorous formulation of Theorem \[newhouse\] consists of Proposition \[regular\_intersections\], with $\Delta_1= [ c , d ] $ as in the remarks above. Since Proposition \[regular\_intersections\] holds for any $\lambda_3>0$, it may be applied again with $\lambda_3$ replaced by $\lambda_1$, and the argument may be repeated recursively to obtain a sequence of disjoint intervals $\Delta_n$.
Proof of Corollary \[Consequences\]
-----------------------------------
The $\lambda$ dependence of the position of the horseshoe strips is not a translation in our case, but after Proposition \[regular\_intersections\] the constructions of Yorke and Alligood [@YA] and of Palis and Takens [@PT] can be carried over. Hence, when the parameter $\lambda$ varies between two consecutive regular intersections of strips, the bifurcations of the first return map can be described by the one-dimensional parabola-type map in [@PT]. Following [@PT; @YA], the bifurcations for $\lambda\in[\lambda_1,\lambda_3]$ are:
- for $\lambda> d $: the restriction of the map $R_{\lambda_3}$ to the non-wandering set on ${{\mathcal H}}_a$ is conjugate to the Bernoulli shift of two symbols and it no longer bifurcates as $\lambda$ increases.
- \[fixedPt\] at $\lambda \in (c , d ) $: a fixed point with multiplier equal to $\pm 1$ appears at a tangency of the horizontal boundaries. It undergoes a period-doubling bifurcation. A cascade of period-doubling bifurcations leads to chaotic dynamics which alternates with stability windows, and the bifurcations stop at $\lambda= d $;
- at $\lambda< c $: trajectories with initial conditions in ${{\mathcal H}}_a$ approach the network and might be attracted to another basic set.
Proof of Corollary \[Sinks\]
----------------------------
If the first return map $R_\lambda$ is area-contracting, then the fixed point that appears for $\lambda \in (c , d )$ is attracting for parameter values in an open interval.
This attracting fixed point bifurcates to a sink of period 2 at a bifurcation parameter $\lambda> c $ close to $ c $. This stable orbit undergoes a second flip bifurcation, yielding an orbit of period 4. This process continues to an accumulation point in parameter space at which attracting orbits of period $2^k$ exist, for all $k\in {{\rm\bf N}}$. This completes the proof of Corollary \[Sinks\].
Finally, we describe a setting in which the fixed point that appears for $\lambda \in (c , d )$ can be shown to be attracting. Recall that the set $W^u_{{{\text{loc}}}}({{\rm\bf w}})\cap {{\text{In}}}({{\rm\bf v}})$ is the graph of $y={h_{{\rm\bf w}}}(x,\lambda)$. Suppose the transition map $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda:{{\text{Out}}}({{\rm\bf w}})\longrightarrow {{\text{In}}}({{\rm\bf v}})$ is given by $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda(x,y)=\left(x,y+{h_{{\rm\bf w}}}(x,\lambda)\right)$, which is consistent with Section \[subsecInvariantManifolds\]. Since $$D \Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda(x,y)=\left(\begin{array}{cc} 1&0\\ \frac{\partial{h_{{\rm\bf w}}}}{\partial x}(x)&1\end{array}\right)
\quad\mbox{and}\quad
D\eta(x,y)=\left(\begin{array}{cc} 1&\frac{-K}{y}\\ 0&\delta y^{\delta-1}\end{array}\right)$$ then $$\det D R_\lambda(x,y)= \det D \Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda\left(\eta(x,y)\right) \cdot \det D\eta(x,y)=\delta y^{\delta-1}.$$ For sufficiently small $y$ (in ${{\mathcal H}}_n$ with sufficiently large $n$) this is less than 1, and hence $R_\lambda$ is contracting. However, if the first coordinate of $\Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda(x,y)$ depends on $y$, then $ \det D \Psi_{{{\rm\bf w}}\rightarrow {{\rm\bf v}}}^\lambda\left(\eta(x,y)\right)$ will contain terms that depend on $x+K\ln y$ and a more careful analysis will be required.
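As a quick numerical illustration of this contraction estimate, the following sketch (in Python, with a hypothetical value of $\delta>1$ and a hypothetical range of heights $y$, neither of which is taken from the text) evaluates $\det D R_\lambda=\delta y^{\delta-1}$ and reports where the first return map is area-contracting.

```python
# Minimal numerical sketch of the area-contraction estimate
# det DR_lambda(x, y) = delta * y**(delta - 1).
# The value of delta and the range of heights y are hypothetical; only delta > 1
# is assumed, as in the text, so the determinant tends to zero with y.
import numpy as np

delta = 1.7                       # hypothetical saddle value, delta > 1
y = np.logspace(-4, 0, 9)         # heights of points in the strips (hypothetical range)

det_DR = delta * y ** (delta - 1)
for yi, di in zip(y, det_DR):
    print(f"y = {yi:.1e}   det DR = {di:.3e}   ({'contracting' if di < 1 else 'expanding'})")

# For delta > 1, det DR < 1 whenever y < (1/delta)**(1/(delta - 1)),
# i.e. on all strips H_n with n sufficiently large.
```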
Acknowledgements {#acknowledgements .unnumbered}
================
We thank the two anonymous referees for the careful reading of the manuscript, for the useful corrections and suggestions that helped to improve the work, and for providing additional references.
[99]{}
V.S. Afraimovich, V.V. Bykov, L.P. Shilnikov, *The origin and structure of the Lorenz attractor*, Sov. Phys. Dokl., 22, 253–255, 1977
V.S. Afraimovich, L.P. Shilnikov, *Strange attractors and quasiattractors*, in: G.I. Barenblatt, G. Iooss, D.D. Joseph (Eds.), Nonlinear Dynamics and Turbulence, Pitman, Boston, 1–51, 1983
M.A.D. Aguiar, S.B.S.D. Castro, I.S. Labouriau, *Dynamics near a heteroclinic network,* Nonlinearity, No. 18, 391–414, 2005
M.A.D. Aguiar, S.B.S.D. Castro, I.S. Labouriau, *Simple Vector Fields with Complex Behaviour*, Int. J. Bif. Chaos, Vol. [16]{}, No. 2, 369–381, 2006
M.A.D. Aguiar, I.S. Labouriau, A.A.P. Rodrigues, *Switching near a heteroclinic network of rotating nodes*, Dynamical Systems, Vol. 25, 1, 75–95, 2010
D. Armbruster, J. Guckenheimer, P. Holmes, *Heteroclinic cycles and modulated travelling waves in systems with O(2) symmetry*, Physica D, No. 29, 257–282, 1988
M. Bessa, A.A.P. Rodrigues, *Dynamics of conservative Bykov cycles: tangencies, generalized Cocoon bifurcations and elliptic solutions*, preprint, 2015
R. Bowen, *Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms*, Lect. Notes in Math, Springer, 1975
V. V. Bykov, *The bifurcations of separatrix contours and chaos*, [Physica D]{}, 62, No.1-4, 290–299, 1993
V. V. Bykov, *On systems with separatrix contour containing two saddle-foci*, [J. Math. Sci.]{}, [95]{}, 2513–2522, 1999
V. V. Bykov, *Orbit Structure in a Neighbourhood of a Separatrix Cycle Containing Two Saddle-Foci*, [Amer. Math. Soc. Transl]{}, [200]{}, 87–97, 2000
D.R.J. Chillingworth, *Generic multiparameter bifurcation from a manifold*, Dyn. Stab. Syst., Vol. 15, 2, 101–137, 2000
E. Colli, *Infinitely many coexisting strange attractors*, Ann. Inst. H. Poincaré, Anal. Non Linéaire, 15, 539–579, 1998
B. Deng, *Exponential expansion with Shilnikov saddle-focus*, J. Diff. Eqns, 82, 156–173, 1989
F. Dumortier, S. Ibáñez, H. Kokubu, *Cocoon bifurcation in three-dimensional reversible vector fields*, Nonlinearity 19, 305–328, 2006
F. Dumortier, S. Ibáñez, H. Kokubu, C. Simó, *About the unfolding of a Hopf-zero singularity*, [Disc. Cont. Dyn. Syst.]{}, [ 33]{}, 10, 4435–4471, 2013
N.K. Gavrilov, L.P. Shilnikov, *On three-dimensional dynamical systems close to systems with a structurally unstable homoclinic curve*, Part I. Math. USSR Sbornik, 17, 467–485; 1972; Part II. [*ibid*]{}, [19]{}, 139–156, 1973
G. Glendinning, C. Sparrow, *T-points: a codimension two heteroclinic bifurcation*, [J. Statist. Phys.]{}, [43]{}, 479–488, 1986
M. Golubitsky, I. Stewart, *The Symmetry Perspective*, Birkhauser, 2000
S. V. Gonchenko, L.P. Shilnikov, D.V. Turaev, *On models with non-rough Poincaré homoclinic curves*, Physica D, 62, 1–14, 1993
S.V. Gonchenko, L.P. Shilnikov, D.V. Turaev, *Dynamical phenomena in systems with structurally unstable Poincaré homoclinic orbits*, Chaos 6, No. 1, 15–31, 1996
S.V. Gonchenko, L.P. Shilnikov, D.V. Turaev, *Quasiattractors and Homoclinic Tangencies*, Computers Math. Applic. Vol. 34, No. 2-4, 195–227, 1997
S. V. Gonchenko, L. P. Shilnikov, D.Turaev, *Homoclinic tangencies of arbitrarily high orders in conservative and dissipative two-dimensional maps*, [Nonlinearity]{}, [ 20]{}, 241–275, 2007
S.V. Gonchenko, I.I. Ovsyannikov, D.V. Turaev, *On the effect of invisibility of stable periodic orbits at homoclinic bifurcations*, Physica D, 241, 1115–1122, 2012
S.V. Gonchenko, D.V. Turaev, L.P. Shilnikov, *Homoclinic tangencies of an arbitrary order in Newhouse domains*, in Itogi Nauki Tekh., Ser. Sovrem. Mat. Prilozh. 67, 69–128, 1999 \[English translation in J. Math. Sci. 105, 1738–1778 (2001)\].
J. Guckenheimer, P. Holmes, *Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields*, Applied Mathematical Sciences, No. 42, Springer-Verlag, 1983
A. J. Homburg, *Periodic attractors, strange attractors and hyperbolic dynamics near homoclinic orbits to saddle-focus equilibria*. Nonlinearity 15, 411–428, 2002
A.J. Homburg, B. Sandstede, *Homoclinic and Heteroclinic Bifurcations in Vector Fields*, Handbook of Dynamical Systems, Vol. 3, North Holland, Amsterdam, 379–524, 2010
S. Kiriki, T. Soma, *Existence of generic cubic homoclinic tangencies for Hénon maps*, Ergodic Theory and Dynamical Systems, 33, 1029–1051, 2013
V. Kirk, A.M. Rucklidge, *The effect of symmetry breaking on the dynamics near a structurally stable heteroclinic cycle between equilibria and a periodic orbit*, Dyn. Syst. Int. J. 23, 43–74, 2008
J. Knobloch, J.S.W. Lamb, K.N. Webster, *Using Lin’s method to solve Bykov’s problems*, J. Diff. Eqs., 257(8), 2984–3047, 2014
J. Knobloch, J.S.W. Lamb, K.N. Webster, *Shift dynamics near non-elementary T-points with real eigenvalues*, preprint, 2015
M. Krupa, I. Melbourne, *Asymptotic Stability of Heteroclinic Cycles in Systems with Symmetry II,* Ergodic Theory and Dynam. Sys., Vol. [15]{}, 121–147, 1995
I.S. Labouriau, A.A.P. Rodrigues, *Global generic dynamics close to symmetry*, J. Diff. Eqs., Vol. 253 (8), 2527–2557, 2012
I.S. Labouriau, A.A.P. Rodrigues, *Partial symmetry breaking and heteroclinic tangencies*, in S. Ibáñez, J.S. Pérez del Río, A. Pumariño and J.A. Rodríguez (eds), Progress and challenges in dynamical systems, 281–299, 2013
I.S. Labouriau, A.A.P. Rodrigues, *Dense heteroclinic tangencies near a Bykov cycle*, J. Diff. Eqs., 259, 5875–5902, 2015
J.S.W. Lamb, M.A. Teixeira, K.N. Webster, *Heteroclinic bifurcations near Hopf-zero bifurcation in reversible vector fields in $\textbf{R}^3$*, J. Diff. Eqs., 219, 78–115, 2005
I. Melbourne, M.R.E. Proctor and A.M. Rucklidge, *A heteroclinic model of geodynamo reversals and excursions*, Dynamo and Dynamics, a Mathematical Challenge (eds. P. Chossat, D. Armbruster and I. Oprea), Kluwer: Dordrecht, 363–370, 2001
L. Mora, M. Viana, *Abundance of strange attractors*, Acta Math. 171, 1–71, 1993
S.E. Newhouse, *Diffeomorphisms with infinitely many sinks*, Topology 13 9–18, 1974
S.E. Newhouse, *The abundance of wild hyperbolic sets and non-smooth stable sets for diffeomorphisms*, Publ. Math. Inst. Hautes Etudes Sci. 50, 101–151, 1979
I. M. Ovsyannikov, L.P. Shilnikov, *On systems with a saddle-focus homoclinic curve*, Math. USSR Sb., 58, 557–574, 1987
J. Palis, F. Takens, *Hyperbolicity and sensitive chaotic dynamics at homoclinic bifurcations*, Cambridge University Press, Cambridge Studies in Advanced Mathematics 35, 1993
N. Petrovskaya, V. Yudovich, *Homoclinic loops of Zal’tsman-Lorenz system*, Methods of Qualitative Theory of Diff. Equations, Gorkii, 73–83, 1980
C. Robert, K. Alligood, E. Ott, J. Yorke, *Explosions of chaotic sets*, Physica D, 44–61, 2000
A.A.P. Rodrigues, *Persistent Switching near a Heteroclinic Model for the Geodynamo Problem*, Chaos, Solitons & Fractals, 47, 73–86, 2013
A.A.P. Rodrigues, *Repelling dynamics near a Bykov cycle*, J. Dyn. Diff. Eqs., Vol.25, Issue 3, 605–625, 2013
A.A.P. Rodrigues, *Moduli for heteroclinic connections involving saddle-foci and periodic solutions*, Discrete Contin. Dyn. Syst. A, Vol. 35(7), 3155–3182, 2015
A.A.P. Rodrigues, I.S. Labouriau, *Spiralling dynamics near heteroclinic networks*, Physica D, 268, 34-49, 2014
A. M. Rucklidge, *Chaos in a low-order model of magnetoconvection*, Physica D 62, 323– 337, 1993
L.P. Shilnikov, *A case of the existence of a denumerable set of periodic motions*, Sov. Math. Dokl, No. 6, 163–166, 1965
L.P. Shilnikov, *On a Poincaré-Birkhoff problem*, Math. USSR Sb. 3, 353–371, 1967
L.P. Shilnikov, *The existence of a denumerable set of periodic motions in four dimensional space in an extended neighbourhood of a saddle-focus*, Sov. Math. Dokl., 8(1), 54–58, 1967
J. A. Yorke, K. T. Alligood, *Cascades of period-doubling bifurcations: A prerequisite for horseshoes*, Bull. Am. Math. Soc. (N.S.) 9(3), 319–322, 1983
[^1]: CMUP is supported by Fundação para a Ciência e a Tecnologia — FCT (Portugal) with national (MEC) and European (FEDER) funds, under the partnership agreement PT2020. A.A.P. Rodrigues was supported by the grant SFRH/BPD/84709/2012 of FCT. Part of this work was written during A.A.P. Rodrigues’ stay at Nizhny Novgorod University, supported by the grant RNF 14-41-00044.
---
abstract: 'The performance of millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems is limited by the sparse nature of propagation channels and the restricted number of radio frequency chains at transceivers. The introduction of reconfigurable antennas offers an additional degree of freedom on designing mmWave MIMO systems. This paper provides a theoretical framework for studying the mmWave MIMO with reconfigurable antennas. Based on the virtual channel model, we present an architecture of reconfigurable mmWave MIMO with beamspace hybrid analog-digital beamformers and reconfigurable antennas at both the transmitter and the receiver. We show that employing reconfigurable antennas can provide throughput gain for the mmWave MIMO. We derive the expression for the average throughput gain of using reconfigurable antennas in the system, and further derive the expression for the outage throughput gain for the scenarios where the channels are (quasi) static. Moreover, we propose a low-complexity algorithm for reconfiguration state selection and beam selection. Our numerical results verify the derived expressions for the throughput gains and demonstrate the near-optimal throughput performance of the proposed low-complexity algorithm.'
author:
- 'Biao He, and Hamid Jafarkhani, [^1] [^2] [^3]'
title: 'Low-Complexity Reconfigurable MIMO for Millimeter Wave Communications'
---
Reconfigurable antennas, millimeter wave, sparse channels, virtual channel model, throughput gain.
Introduction
============
The fast development of wireless technology has enabled the ubiquitous use of wireless devices in modern life. Consequently, a capacity crisis in wireless communications is being created. A promising solution to the capacity crisis is to increase the available spectrum for commercial wireless networks by exploring the millimeter wave (mmWave) band from 30 GHz to 300 GHz.
MmWave communications has drawn significant attention in recent years [@rappaport2014millimeter], as the large available bandwidth may offer multiple-Gbps data rates [@Pi11Aninmmvmbs]. In particular, the 3rd Generation Partnership Project (3GPP), which is a work in progress to serve as the international industry standard for 5G, has continuously published the technical report (TR) documents on the mmWave channels, e.g., [@3GPPTR389011411; @3GPPTR389001420; @3GPPR1-164975]. Different from low-frequency communications, mmWave carrier frequencies are relatively very high. The high frequency results in some propagation challenges, such as large pathloss and severe shadowing [@rappaport2014millimeter]. Meanwhile, the small wavelength enables a large number of antennas to be closely packed to form mmWave large multi-antenna systems, which can be utilized to overcome the propagation challenges and provide reasonable signal to noise ratios (SNRs) [@rappaport2013millimeter]. The multi-input multi-output (MIMO) technology has already been standardized and widely adopted in commercial WLAN and cellular systems at sub-6 GHz frequencies (IEEE 802.11n/ac, IEEE 802.16e/m, 3GPP cellular LTE, and LTE Advanced) [@Kim7060495; @Li5458368]. However, the performance of mmWave MIMO systems is still considerably limited due to the sparsity of the channels and the stringent constraint of using radio frequency (RF) chains in mmWave transceivers. The directional propagations and clustered scattering make the mmWave paths to be highly sparse [@Pi11Aninmmvmbs]. More importantly, the high cost and power consumption of RF components and data converters preclude the adoption of fully digital processing for mmWave MIMO to achieve large beamforming gains [@Pi11Aninmmvmbs; @Doan04dcf60gcmosra], and low-complexity transceivers relying heavily on analog or hybrid (analog-digital) processing are often adopted [@Venkateswaran10ancpsnoce; @Ayach_14_SpatiallySparsePrecodingmmMIMO; @Liu06STTrecbocpf].
The limited beamforming capability and performance of mmWave MIMO motivate us to investigate the potential benefits of employing reconfigurable antennas for mmWave MIMO in this work. Different from conventional antennas with fixed radiation characteristics, reconfigurable antennas can dynamically change their radiation patterns, polarizations, and/or frequencies [@Cetiner_04_MEMS_Magazine; @Grau_08_AreMIMOcym], and offer an additional degree of freedom for designing mmWave MIMO systems. The radiation characteristics of an antenna is directly determined by the distribution of its current [@balanis2005antenna], and the mechanism of reconfigurable antennas is to change the current flow in the antenna, so that the radiation pattern, polarization, and/or frequency can be modified. The detailed relationship between the geometry of an antenna’s current and how it radiates or collects the energy can be found in Equation (1) in [@Grau_08_AreMIMOcym]. The study of reconfigurable antennas for traditional low-frequency MIMO has received considerable attention, e.g., presented in [@Cetiner_04_MEMS_Magazine; @Grau_08_AreMIMOcym; @Fazel09stsbcmmioyra; @Christodoulou12Rafwsa; @Haupt13ra; @Pendharker14ocfrmalp]. From the practical antenna design perspective, different approaches to make antennas reconfigurable have been proposed and realized, such as Microelectrophoretical Systems (MEMS) switches, diodes, field-effect transistors (FETs), varactors, and optical switches [@Christodoulou12Rafwsa; @Haupt13ra; @Pendharker14ocfrmalp]. From the perspective of theoretical performance analysis, the array gain, diversity gain, and coding gain [@Jafarkhani_05_stctpbook] of employing reconfigurable antennas have been derived with space-time code designs [@Grau_08_AreMIMOcym; @Fazel09stsbcmmioyra]. More recently, reconfigurable antennas for communications at mmWave frequencies have been designed and realized, e.g., [@Jilani_16_FMMFRdsds] at 20.7–36 GHz, [@Ghassemiparvin_16A_Rmmsdedfs] at 92.6–-99.3 GHz, and [@Costa_17_OpticallCRmmAas] at 28–38 GHz. The design of space-time codes for a $2\times2$ mmWave MIMO with reconfigurable transmit antennas was investigated in [@Vakilian_15_SThmmra] and [@Vakilian_15_ThmmraAr], and the achieved diversity gain and coding gain were demonstrated. Due to the simple structure of a $2\times2$ MIMO, neither the important sparse nature of mmWave channels nor the transceivers with low-complexity beamforming were considered in [@Vakilian_15_SThmmra] and [@Vakilian_15_ThmmraAr].
In this work, we comprehensively study mmWave MIMO systems with reconfigurable antennas. We consider that the radiation patterns and/or the polarizations of the antennas are reconfigurable, but do not consider frequency reconfiguration. We consider general mmWave systems over sparse channels and take transceivers with low-complexity beamforming into account. Our analytical results are applicable to the whole mmWave frequency range in which the channel matrices are sparse, provided the models are reflective of real systems. We present a practical architecture of mmWave MIMO with beamspace hybrid beamformers and reconfigurable antennas. The presented architecture has a low-complexity structure for practical implementation, since it only requires a few RF chains. More importantly, as will be shown later in the paper, the presented architecture offers tractable analytical results on the throughput gains of employing reconfigurable antennas. The throughput gains of employing reconfigurable antennas for the mmWave system are investigated, and a fast selection algorithm for the antennas’ reconfiguration state and the hybrid beamformers’ beams is further proposed.
The primary contributions of the paper are summarized as follows:
1. We are the first to provide a theoretical framework for studying the reconfigurable antennas in mmWave MIMO systems. We take the sparse nature of mmWave channels into account, and present a practical architecture of the mmWave MIMO with low-complexity beamformers and reconfigurable antennas, in which beamspace hybrid beamformers and reconfigurable antennas are employed at both the transmitter and the receiver.
2. We investigate the throughput gains of employing reconfigurable antennas in mmWave MIMO systems. We derive the expression for the average throughput gain, which involves an infinite integral of the error function. We further consider the cases of small and large numbers of reconfiguration states, and derive the corresponding simplified expressions for the average throughput gains. We also derive the expression for the outage throughput gain of employing reconfigurable antennas for the (quasi) static channels. Moreover, we analyze the limiting growth rates of the average throughput gain and the outage throughput gain as the number of reconfiguration states becomes large. To the best of our knowledge, the throughput gains of employing reconfigurable antennas have never been derived in the literature, even in the case of low-frequency systems.
3. With the highly sparse nature of the channel, the number of non-vanishing rows and columns of the channel matrix in beamspace domain is relatively small, and the dominant beams usually significantly outperform the others. Taking those advantages of the sparse nature of mmWave channels, we propose a fast algorithm for selecting the reconfiguration state of the antennas and the beams for the beamspace hybrid beamformers. The proposed algorithm significantly reduces the complexity of the reconfiguration state selection and beam selection without a large throughput loss compared with the optimal selection of reconfiguration state and beams by exhaustive search.
4. We demonstrate the throughput gains of employing reconfigurable antennas in mmWave MIMO systems by numerical evaluations. A practical clustered multipath model is adopted for generating MIMO channels [@akdeniz2014millimeter]. Our results show that the employment of reconfigurable antennas provides both the average throughput gain and the outage throughput gain for mmWave MIMO systems, and confirm the accuracy of our derived expressions. For the outage throughput, an interesting finding is that the performance gain increases as the required outage level becomes more stringent.
The notations are summarized in Table \[table:Notations\].
------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------
symbol meaning
${\mathbf{X}}^T$ transpose of ${\mathbf{X}}$
${\mathbf{X}}^H$ conjugate transpose of ${\mathbf{X}}$
${\mathbf{X}}\left(m,n\right)$ entry of ${\mathbf{X}}$ in $m$-th row and $n$-th column
${\mathrm{Tr}}({\mathbf{X}})$ trace of ${\mathbf{X}}$
$\left|{\mathbf{X}}\right|$ determinant of ${\mathbf{X}}$
$\left\|{\mathbf{X}}\right\|_F$ Frobenius norm of ${\mathbf{X}}$
$\mathrm{Re}[x]$ real part of $x$
$\mathrm{Im}[x]$ imaginary part of $x$
$\odot$ Hadamard (element-wise) product
$\left|\mathcal{X}\right|$ cardinality of set $\mathcal{X}$
${\mathrm{sgn}}(\cdot)$ sign function
$\mathrm{erf}(\cdot)$ error function
$\mathrm{erf}^{-1}(\cdot)$ inverse error function
${\mathbb{E}}\{\cdot\}$ expectation operation
${\mathbb{P}}(\cdot)$ probability measure
$\mathrm{corr}(\cdot,\cdot) $ correlation coefficient
${\mathbf{I}}_n$ identity matrix of size $n$
$\mathcal{N}(\mu,\sigma^2)$ Gaussian distribution with mean $\mu$ and variance $\sigma^2$
$\mathcal{CN}(\mu,\sigma^2)$ complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$
$\mathcal{CN}({\mathbf{a}},{\mathbf{A}})$ distribution of a circularly symmetric complex Gaussian random vector with mean ${\mathbf{a}}$ and covariance matrix ${\mathbf{A}}$
------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------
: Summary of notations.[]{data-label="table:Notations"}
Preliminaries
=============
Different from conventional antennas with fixed radiation characteristics, reconfigurable antennas can dynamically change their radiation characteristics. The mechanism of reconfigurable antennas is to change the current flow in the antenna, so that the radiation pattern, polarization, and/or frequency can be modified. Specifically, the manner in which an antenna radiates or captures energy can be mathematically expressed as [@balanis2005antenna; @Grau_08_AreMIMOcym] $$\label{eq:radiantennafo}
{\mathbf{F}}(\theta,\phi)=\hat{{\mathbf{r}}}\times\left[\hat{{\mathbf{r}}}\times\left[\int_{V'}{\mathbf{J}}_{V'({\mathbf{r}}')}e^{-j\beta\hat{{\mathbf{r}}}\cdot{\mathbf{r}}'}\mathrm{d}v'\right]\right],$$ where ${\mathbf{F}}(\theta,\phi) = \left(F_{\theta}(\theta,\phi), F_{\phi}(\theta,\phi)\right)^\dagger$ denotes the normalized complex amplitude radiation pattern of the antenna, ${\mathbf{J}}_{V'({\mathbf{r}}')}$ denotes the current distribution in the antenna, ${\mathbf{r}}'$ denotes the vector from the coordinate system’s origin to any point on the antenna, $\hat{{\mathbf{r}}}$ denotes the unit vector in the direction of propagation (from the origin to the observation point), $\beta$ denotes the propagation constant, $V'$ denotes the volume of the antenna containing the volumetric current densities, and $\theta$ and $\phi$ denote the elevation and azimuth angles in the spherical coordinate system, respectively. It is evident from (\[eq:radiantennafo\]) that one can change the current distribution ${\mathbf{J}}_{V'({\mathbf{r}}')}$ by altering the antenna’s physical configuration ${\mathbf{r}}'$, and finally modify the antenna’s radiation characteristics ${\mathbf{F}}(\theta,\phi)$. There are different approaches to change the antenna’s physical configuration in practice, e.g., microelectrophoretical systems (MEMS), diodes, field-effect transistors (FETs), varactors, and optical switches [@Christodoulou12Rafwsa; @Haupt13ra; @Pendharker14ocfrmalp]. One can find examples of how the change in current distribution affects the antenna’s radiation pattern, polarization, and frequency in [@Cetiner_04_MEMS_Magazine]. The design of reconfigurable antennas for mmWave frequencies has been specifically investigated in, e.g., [@Jilani_16_FMMFRdsds; @Ghassemiparvin_16A_Rmmsdedfs; @Costa_17_OpticallCRmmAas]. It is worth mentioning that the concept of reconfigurable antennas in this paper is different from the concept of reconfigurable antenna arrays in, e.g., [@Sayeed_07_maxMcsparseRAA]. As previously explained, the reconfigurable antennas are realized by changing the current flow in the antenna, while the reconfigurable antenna arrays in, e.g., [@Sayeed_07_maxMcsparseRAA], are achieved by changing array configurations (antenna spacings).
Although existing studies have shown that mmWave reconfigurable antennas can be realized [@Jilani_16_FMMFRdsds; @Ghassemiparvin_16A_Rmmsdedfs; @Costa_17_OpticallCRmmAas], there still exist a number of challenges in practical implementation. In the following, we specifically discuss four issues of reconfigurable antennas that one may encounter in practical implementation, which are (1) switching delay, (2) power consumption, (3) form factor, and (4) cost. A major issue of implementing reconfigurable antennas in practice for not only sub-6 GHz systems but also mmWave communications is the switching delay incurred by changing the reconfiguration states. In practical implementation, switching the reconfigurable antennas from one state to another takes non-negligible switching time [@Fazel09stsbcmmioyra]. For example, a switch device with a switching time of 100 microsecond can cause considerable delays for mmWave systems. Another issue for the implementation of reconfigurable antennas is the power consumption. Compared with the traditional antennas, the reconfigurable antennas require an extra power consumption to enable the switches for reconfiguration. This issue becomes more critical considering the high power consumption of mmWave circuits and systems. The third issue for the implementation of reconfigurable antennas is their relatively large size. As the wavelength becomes small, mmWave transceivers can pack a large number of antennas in a compact size to overcome the propagation challenges. However, the employment of switches for reconfiguration makes the size of reconfigurable antennas relatively larger than traditional antennas. Thus, packing a large number of reconfigurable antennas together may result in a large form factor. Last but not least, the additional cost of circuits and switches to operate and control the antenna reconfiguration is also an issue for the implementation. In particular, the requirement of low cost would make it more challenging to address the aforementioned three issues in practical implementation.
To address the implementation issues of reconfigurable antennas, considerable research efforts on the circuit and switch designs are still needed, since the switching time, power consumption, form factor, and cost all highly depend on specific circuit topologies and switch technologies. For example, the pin diodes and MEMS switches have a relatively low power consumption and fast switching speed. However, these devices experience larger losses at high frequencies. A possible solution is to explore smart material based switches, via metal-insulator transition compounds such as vanadium oxide ($VO_2$), which are designed to operate at mmWave frequencies [@Khalat10259653]. One advantage is that smart materials do not require energy to maintain either the ON (crystalline state) or OFF (amorphous state) state, which thus reduces power consumption in switching the reconfiguration states. Also, relatively fast switching speed may be achievable by $VO_2$ based switches, where the demonstrated fastest switching speed is about 5 nanosecond [@Zhou6403505].
System Model {#sec:sysmod}
============
We consider a mmWave system where a transmitter with $N_t$ antennas sends messages to a receiver with $N_r$ antennas. The antennas at both the transmitter and the receiver are reconfigurable. We assume that the transmit antennas can be reconfigured into $Q$ distinct radiation states and the receive antennas can be reconfigured into $W$ distinct radiation states. Here two radiation states are distinct when the antenna’s radiation characteristics associated with the two states are orthogonal to each other. From an electromagnetic point of view, orthogonal radiation characteristics can be generated by using polarization, pattern, space, or frequency diversity techniques, or any combination of them [@Waldschmidt1321326; @Grau_08_AreMIMOcym; @Huff1589416; @Aissat1643626]. For example, polarization diversity imposes orthogonality by creating orthogonal polarizations and pattern diversity imposes orthogonality by producing spatially disjoint radiation patterns. From (\[eq:radiantennafo\]), we note that one can modify the antenna’s radiation characteristics ${\mathbf{F}}(\theta,\phi)$ by altering the antenna’s physical configuration ${\mathbf{r}}'$. Thus, it is possible to have orthogonal radiation characteristics, and the numbers of distinct radiation states at the transmitter and the receiver, $Q$ and $W$, correspond to the orthogonal radiation characteristics that the reconfigurable antennas can generate, which are determined by the practical circuit and antenna designs. We further assume that all the antenna ports can be reconfigured simultaneously. Then, the total number of possible combinations in which the transmit and receive ports can be reconfigured is given by $\Psi=QW$. We refer to each one of these combinations as a reconfiguration state. For brevity, we further refer to the $\psi$-th reconfiguration state as reconfiguration state $\psi$. Note that the transmitter and the receiver may have the same or different reconfiguration capabilities in practice, and our analysis is applicable to both cases by adjusting the number of radiation states at the transmitter $Q$ and the number of radiation states at the receiver $W$. When $Q=W$, the transmitter and the receiver have the same reconfiguration capability. Otherwise, the transmitter and the receiver have different reconfiguration capabilities. We consider narrowband block-fading channels. Denote the transmitted signal vector from the transmitter as ${\mathbf{x}}\in {\mathbb{C}}^{N_t\times 1}$ with a transmit power constraint ${\mathrm{Tr}}\left(\mathbb{E}\{{\mathbf{x}}{\mathbf{x}}^H\}\right)=P$. The received signal at the receiver with reconfiguration state $\psi$ is given by $$\label{eq:yhxbasic}
{\mathbf{y}}={\mathbf{H}}_{\psi}{\mathbf{x}}+{\mathbf{n}},$$ where ${\mathbf{H}}_{\psi}\in{\mathbb{C}}^{N_r\times N_t}$ denotes the channel matrix corresponding to the reconfiguration state $\psi$ and ${\mathbf{n}}\sim\mathcal{CN}(\mathbf{0};\sigma^2_n{\mathbf{I}}_{N_r})$ denotes the additive white Gaussian noise (AWGN) vector at the receive antennas. Without loss of generality, the noise variance is taken to be unity, i.e., $\sigma^2_n=1$. Note that ${\mathbf{H}}_{\psi}(i,j)$ represents the channel coefficient that contains the gain and phase information of the path between the $i$-th receive antenna and the $j$-th transmit antenna in the reconfiguration state $\psi$. We assume that the channel matrices for different reconfiguration states are independent [@Grau_08_AreMIMOcym; @Fazel09stsbcmmioyra; @Vakilian_15_SThmmra; @Vakilian_15_ThmmraAr], and have the same average channel power such that $\mathbb{E}\{\left\|{\mathbf{H}}_{1}\right\|^2_F\}=\cdots=\mathbb{E}\{\left\|{\mathbf{H}}_{\Psi}\right\|^2_F\}=N_rN_t$. It is worth mentioning that the assumption of orthogonal radiation characteristics is crucial for our analysis, since our results are based on the assumption of independent channel matrices associated with different reconfiguration states. If the radiation patterns are non-orthogonal, the channel matrices associated with different reconfiguration states would be correlated, and we would expect a relatively smaller throughput gain compared with the results in this paper.
We further assume that full channel state information (CSI) of all reconfiguration states is known at the receiver. The full CSI is not necessarily known at the transmitter. We assume that a limited feedback from the receiver to the transmitter is available for the reconfiguration state selection and transmit beam selection, which will be detailed later in Section \[Sec:Architecture\]. We would like to point out that the assumption of full CSI at the receiver or even at both the transmitter and the receiver has often been adopted in the existing papers on mmWave systems and reconfigurable antennas, see, e.g., [@Sohrabi7913599; @Rusu7579557; @Sohrabi7389996; @Chen7055330; @Amadori_15_LowRDBStion; @Ayach_14_SpatiallySparsePrecodingmmMIMO; @Grau_08_AreMIMOcym; @Fazel09stsbcmmioyra; @Vakilian_15_SThmmra; @Vakilian_15_ThmmraAr]. On the other hand, it is worth mentioning that the full CSI assumption is not easy to achieve in practice. The full CSI requires user equipments to conduct channel estimation. However, channel estimation is relatively challenging for mmWave MIMO systems. Different from sub-6 GHz systems, the precoder for mmWave MIMO with a limited number of RF chains is usually not fully digital. Due to the constrained precoding structure, the channel has to be estimated via the use of a certain number of RF beams, and each beam only presents a projection of the channel matrix rather than the full channel matrix itself. To obtain the full CSI, the channel estimation needs to be conducted over enough number of RF beams so that the full channel matrix can be estimated. The same process has to be repeatedly conducted for all the reconfiguration states as well. Thus, the channel estimation process to obtain full CSI would incur a considerable latency, and the assumption of full CSI is not easy to achieve in practice.
Channel Model
-------------
For low-frequency $N_r\times N_t$ MIMO systems in an ideally rich scattering environment, the channel for each reconfiguration state is usually modelled by a full rank $N_r\times N_t$ matrix with i.i.d. entries [@Grau_08_AreMIMOcym], e.g., i.i.d. complex Gaussian entries for Rayleigh fading channels. For mmWave communications in the clustered scattering environment, it is no longer appropriate to model the channel for each reconfiguration state as a full-rank matrix with i.i.d. entries due to the sparse nature of mmWave channels.[^4] In the following, we present the channel model of mmWave MIMO systems with reconfigurable antennas.
### Physical Channel Representation
The physical channel representation is also often known as the Saleh-Valenzuela (S-V) geometric model. The mmWave MIMO channel can be characterized by physical multipath models. In particular, the clustered channel representation is usually adopted as a practical model for mmWave channels [@akdeniz2014millimeter; @Gustafson_14_ommcacm; @Health_16_OverviewSPTmmMIMO]. The channel matrix for reconfiguration state $\psi$ is contributed by $N_{\psi,{\mathrm{cl}}}$ scattering clusters, and each cluster contains $N_{\psi,{\mathrm{ry}}}$ propagation paths. The 2D physical multipath model for the channel matrix ${\mathbf{H}}_{\psi}$ is given by $$\label{eq:H_PhyscialModeling2D}
\mathbf{H_\psi}=\sum^{N_{\psi,{\mathrm{cl}}}}_{i=1}\sum^{N_{\psi,{\mathrm{ry}}}}_{l=1}
\alpha_{\psi,i,l} \mathbf{a}_{R}\left(\theta^r_{\psi,i,l}\right)
\mathbf{a}_{T}^H\left(\theta^t_{\psi,i,l}\right),$$ where $\alpha_{\psi,i,l}$ denotes the path gain, $\theta^r_{\psi,i,l}$ and $\theta^t_{\psi,i,l}$ denote the angle of arrival (AOA) and the angle of departure (AOD), respectively, and $\mathbf{a}_{R}\left(\theta^r_{\psi,i,l}\right)$ and $\mathbf{a}_{T}\left(\theta^t_{\psi,i,l}\right)$ denote the steering vectors of the receive antenna array and the transmit antenna array, respectively. In this work, we consider the 1D uniform linear array (ULA) at both the transmitter and the receiver. Then, the steering vectors are given by $$\label{}
\mathbf{a}_{R}\left(\theta^r_{\psi,i,l}\right)=\left[1,e^{-j2\pi\vartheta^r_{\psi,i,l}},\cdots,e^{-j2\pi\vartheta^r_{\psi,i,l}(N_r-1)}\right]^T,$$ and $$\label{}
\mathbf{a}_{T}\left(\theta^t_{\psi,i,l}\right)=\left[1,e^{-j2\pi\vartheta^t_{\psi,i,l}},\cdots,e^{-j2\pi\vartheta^t_{\psi,i,l}(N_t-1)}\right]^T,$$ where $\vartheta$ denotes the normalized spatial angle. The normalized spatial angle is related to the physical AOA or AOD $\theta\in\left[-\pi/2,\pi/2\right]$ by $\vartheta={d\sin(\theta)}/{\lambda}$, where $d$ denotes the antenna spacing and $\lambda$ denotes the wavelength.
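As a concrete illustration of this clustered model for ULAs at both ends, the following Python sketch generates one realization of $\mathbf{H}_\psi$; the numbers of antennas, clusters, and rays, the Gaussian path gains, and the angular spread are all hypothetical placeholders rather than the statistical models used later in the paper.

```python
# Minimal sketch of the 2D clustered multipath model above for ULAs at both ends.
# N_cl, N_ry, the path-gain statistics, and the angle distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N_t, N_r = 16, 16               # numbers of transmit / receive antennas (hypothetical)
N_cl, N_ry = 4, 8               # numbers of clusters and rays per cluster (hypothetical)
d_over_lambda = 0.5             # antenna spacing of half a wavelength

def steering(theta, N):
    """ULA steering vector for a physical angle theta (radians)."""
    vartheta = d_over_lambda * np.sin(theta)        # normalized spatial angle
    return np.exp(-2j * np.pi * vartheta * np.arange(N))

H = np.zeros((N_r, N_t), dtype=complex)
for _ in range(N_cl):
    c_r, c_t = rng.uniform(-np.pi / 2, np.pi / 2, size=2)   # cluster-center angles
    for _ in range(N_ry):
        th_r = c_r + 0.05 * rng.standard_normal()            # small angular spread
        th_t = c_t + 0.05 * rng.standard_normal()
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        H += alpha * np.outer(steering(th_r, N_r), steering(th_t, N_t).conj())

# scale so that E{||H||_F^2} = N_r * N_t, matching the normalization assumed earlier
H /= np.sqrt(N_cl * N_ry)
print("||H||_F^2 =", np.linalg.norm(H, 'fro') ** 2)
```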
We assume that $N_{1,{\mathrm{cl}}}=\cdots=N_{\Psi,{\mathrm{cl}}}$ and $N_{1,{\mathrm{ry}}}=\cdots=N_{\Psi,{\mathrm{ry}}}$, which implies that the sparsity of the mmWave MIMO channel remains the same for all reconfiguration states. By transmitting and receiving with orthogonal radiation patterns, the propagated signals undergo different reflections and diffractions such that different reconfiguration states lead to different multipath parameters. That is, the values of $\alpha_{\psi,i,l}$, $\theta^r_{\psi,i,l}$, and $\theta^t_{\psi,i,l}$ change as the reconfiguration state changes.
### Virtual Channel Representation (VCR)
The virtual (beamspace) representation is a natural choice for modelling mmWave MIMO channels due to the highly directional nature of propagation [@Health_16_OverviewSPTmmMIMO]. The virtual model characterizes the physical channel by coupling between the spatial beams in fixed virtual transmit and receive directions, and represents the channel in beamspace domain.
The VCR of $\mathbf{H_\psi}$ in (\[eq:H_PhyscialModeling2D\]) is given by [@Sayeed_02_Deconstuctingmfc; @Tse_05_Fundamentals] $$\begin{aligned}
\label{eq:H_VirtualModeling}
\mathbf{H_{\psi}}&=\sum^{N_r}_{i=1}\sum^{N_t}_{j=1} {\mathbf{H}}_{\psi,V}(i,j)\mathbf{a}_R\left(\ddot{\theta}_{R,i}\right)
\mathbf{a}_T^H\left(\ddot{\theta}_{T,j}\right) \notag\\
&={\mathbf{A}}_R\mathbf{H}_{\psi,V}{\mathbf{A}}_T^H,\end{aligned}$$ where $\ddot{\theta}_{R,i}=\arcsin\left(\lambda\ddot{\vartheta}_{R,i}/d\right)$ and $\ddot{\theta}_{T,j}=\arcsin\left(\lambda\ddot{\vartheta}_{T,j}/d\right)$ are fixed virtual receive and transmit angles corresponding to uniformly spaced spatial angles[^5] $$\label{}
\ddot{\vartheta}_{R,i}=\frac{i-1-(N_r-1)/2}{N_r}$$ and $$\label{}
\ddot{\vartheta}_{T,j}=\frac{j-1-(N_t-1)/2}{N_t},$$ respectively, $$\label{}
{\mathbf{A}}_R=\frac{1}{\sqrt{N_r}}\left[ \mathbf{a}_R\left(\ddot{\theta}_{R,1}\right),\cdots,\mathbf{a}_R\left(\ddot{\theta}_{R,N_r}\right)\right]^T$$ and $$\label{}
{\mathbf{A}}_T=\frac{1}{\sqrt{N_t}}\left[ \mathbf{a}_T\left(\ddot{\theta}_{T,1}\right),\cdots,\mathbf{a}_T\left(\ddot{\theta}_{T,N_t}\right)\right]^T$$ are unitary DFT matrices, and ${\mathbf{H}}_{\psi,V}\in{\mathbb{C}}^{N_r\times N_t}$ is the virtual channel matrix. Since $\mathbf{A}_R\mathbf{A}_R^H=\mathbf{A}_R^H\mathbf{A}_R={\mathbf{I}}_{N_r}$ and $\mathbf{A}_T \mathbf{A}_T^H= \mathbf{A}_T^H \mathbf{A}_T={\mathbf{I}}_{N_t}$, the virtual channel matrix and the physical channel matrix are unitarily equivalent, such that $$\label{}
{\mathbf{H}_{\psi,V}}=\mathbf{A}_R^H\mathbf{H}_{\psi}\mathbf{A}_T.$$
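The unitary equivalence above is easy to check numerically. The sketch below builds the DFT matrices from the uniformly spaced spatial angles and computes the virtual channel; the random stand-in channel matrix and the convention that the steering vectors form the columns of $\mathbf{A}_R$ and $\mathbf{A}_T$ are assumptions of this illustration.

```python
# Minimal sketch of the beamspace transform H_V = A_R^H H A_T for critically
# spaced ULAs. The random H below is a stand-in for a clustered mmWave channel.
import numpy as np

def dft_steering_matrix(N):
    """Unitary matrix whose columns are array responses at the N virtual angles."""
    virt = (np.arange(N) - (N - 1) / 2) / N          # uniformly spaced spatial angles
    return np.exp(-2j * np.pi * np.outer(np.arange(N), virt)) / np.sqrt(N)

rng = np.random.default_rng(1)
N_r, N_t = 16, 16
H = (rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

A_R, A_T = dft_steering_matrix(N_r), dft_steering_matrix(N_t)
H_V = A_R.conj().T @ H @ A_T                          # virtual (beamspace) channel

# the transform is unitary, so the channel power is preserved
print(np.allclose(np.linalg.norm(H, 'fro'), np.linalg.norm(H_V, 'fro')))
```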
### Low-Dimensional VCR
For MIMO systems, the link capacity of the reconfiguration state $\psi$ is directly related to the amount of scattering and reflection in the multipath environment. As discussed in [@Tse_05_Fundamentals Chapter 7.3], the number of non-vanishing rows and columns of ${\mathbf{H}_{\psi,V}}$ depends on the amount of scattering and reflection. In the clustered scattering environment of mmWave MIMO, the dominant channel power is expected to be captured by a few rows and columns of the virtual channel matrix, i.e., a low-dimensional submatrix of ${\mathbf{H}_{\psi,V}}$.
The discussion above motivates the development of low-dimensional virtual representation of mmWave MIMO channels and the corresponding low-complexity beamforming designs for mmWave MIMO transceivers [@Brady_13_BeamspaceSAMAM; @Amadori_15_LowRDBStion; @Sayeed_07_maxMcsparseRAA; @Raghavan_11_SublinearSparse]. Specifically, a low-dimensional virtual channel matrix, denoted by ${\widetilde{\mathbf{H}}_{\psi,V}}\in{\mathbb{C}}^{L_r\times L_t}$, is obtained by beam selection from ${\mathbf{H}_{\psi,V}}$, such that ${\widetilde{\mathbf{H}}_{\psi,V}}$ captures $L_t$ dominant transmit beams and $L_r$ dominant receive beams of the full virtual channel matrix. The low-dimensional virtual channel matrix is defined by $$\label{eq:sHv1}
{\widetilde{\mathbf{H}}_{\psi,V}}=\left[{\mathbf{H}_{\psi,V}}(i,j)\right]_{i\in{\mathcal{M}_{\psi,r}},j\in\mathcal{M}_{\psi,t}},$$ where $\mathcal{M}_{\psi,r}=\left\{i:(i,j)\in\mathcal{M}_{\psi}\right\}$, $\mathcal{M}_{\psi,t}=\left\{j:(i,j)\in\mathcal{M}_{\psi}\right\}$, and $\mathcal{M}_{\psi}$ is the beam selection mask. The beam selection mask $\mathcal{M}$ is related to the criterion of beam selection. For example, a common beam selection is based on the criterion of maximum magnitude, and the corresponding beam selection mask is defined as [@Brady_13_BeamspaceSAMAM] $$\label{eq:Mmagnit}
\mathcal{M}_{\psi}=\left\{(i,j):\left|{\mathbf{H}_{\psi,V}}(i,j)\right|^2\ge\gamma_{\psi}\max_{(i,j)}\left|{\mathbf{H}_{\psi,V}}(i,j)\right|^2\right\},$$ where $0<\gamma_{\psi}<1$ is a threshold parameter used to ensure that ${\widetilde{\mathbf{H}}_{\psi,V}}$ has the dimension of $L_r\times L_t$. Using (\[eq:Mmagnit\]), the resulting ${\widetilde{\mathbf{H}}_{\psi,V}}$ captures a fraction $\gamma_{\psi}$ of the power of ${\mathbf{H}_{\psi,V}}$.
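A minimal sketch of beam selection follows. For simplicity it ranks receive and transmit beams by their row and column powers and keeps the strongest $L_r$ and $L_t$ of each, which is a simple stand-in for the magnitude-threshold mask above rather than the exact rule; the random beamspace channel is likewise a placeholder.

```python
# Minimal sketch of beam selection from the beamspace channel H_V: keep the L_r
# receive beams (rows) and L_t transmit beams (columns) with the largest power.
# This row/column-power ranking is a simple stand-in for the magnitude-threshold
# mask in the text; the random H_V is a placeholder for a sparse mmWave channel.
import numpy as np

def select_beams(H_V, L_r, L_t):
    row_power = np.sum(np.abs(H_V) ** 2, axis=1)     # power per receive beam
    col_power = np.sum(np.abs(H_V) ** 2, axis=0)     # power per transmit beam
    rows = np.sort(np.argsort(row_power)[-L_r:])     # L_r strongest receive beams
    cols = np.sort(np.argsort(col_power)[-L_t:])     # L_t strongest transmit beams
    return rows, cols, H_V[np.ix_(rows, cols)]

rng = np.random.default_rng(2)
H_V = (rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))) / np.sqrt(2)
rows, cols, H_V_low = select_beams(H_V, L_r=4, L_t=2)
captured = np.linalg.norm(H_V_low, 'fro') ** 2 / np.linalg.norm(H_V, 'fro') ** 2
print("receive beams:", rows, " transmit beams:", cols,
      " captured power fraction: %.2f" % captured)
```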
Although the VCR and the beamspace hybrid beamforming for mmWave systems have been widely adopted and studied in the literature [@Health_16_OverviewSPTmmMIMO; @Gao8284058; @Brady_13_BeamspaceSAMAM; @Amadori_15_LowRDBStion; @Wang7974749; @Mo7094595], it is worth mentioning that there are some potential limitations on the utility of VCR in practical mmWave systems. For sub-6 GHz systems, motivated by the limitations of the S-V model and the statistical model, the intermediate VCR was introduced to keep the essence of S-V model without its complexity and to provide a tractable channel characterization [@Sayeed_02_Deconstuctingmfc; @Raghavan4487419; @Huang5074791]. The VCR for sub-6 GHz systems offers a simple and transparent interpretation of the effects of scattering and array characteristics [@Sayeed_02_Deconstuctingmfc]. However, the interpretation of VCR is based on the assumption of (approximately) unitary bases at both the transmit and the receive sides, which may not be possible in many practical mmWave systems. Note that, different from sub-6 GHz systems, mmWave systems usually do not have a large number of dominant clusters. Also, the numbers of antennas for practical mmWave transceivers are still far from the asymptotics. Thus, there may not be a solid case for the VCR in practical mmWave systems, and the statistics of the VCR for sub-6 GHz systems do not hold for mmWave systems. Consequently, the optimality of the beamspace hybrid beamforming, which is based on the VCR, in practical mmWave systems may not be easy to verify. The losses related to the application of the simple DFT and IDFT are not clear. In addition, the power leakage issue due to the beam selection may be serious in practical mmWave systems. While the DFT and IDFT are fixed, the actual angles of departure and arrival of different paths are continuously distributed. Thus, the power of a path can leak into multiple different beams [@Brady_13_BeamspaceSAMAM], and selecting only a few number of beams may result in a serious issue of power loss.
Transceiver Architecture {#Sec:Architecture}
------------------------
For wireless communications at low frequencies, signal processing happens in the baseband, and MIMO relies on digital beamforming. At mmWave frequencies, it is difficult to employ a separate RF chain and data converter for each antenna, especially in large MIMO systems, due to the complicated hardware implementation, the high power consumption, and/or the prohibitive cost [@Pi11Aninmmvmbs; @Doan04dcf60gcmosra; @Ayach_14_SpatiallySparsePrecodingmmMIMO]. To reduce the number of RF chains and the number of data converters, mmWave MIMO transceivers usually have low-complexity architectures and signal processors, e.g., analog beamformer, hybrid analog-digital beamformer, and low resolution transceivers. In this work, we adopt the reconfigurable beamspace hybrid beamformer as the architecture of low-complexity transceivers for mmWave MIMO systems with reconfigurable antennas.
At the transmitter, the symbol vector ${\mathbf{s}}\in{\mathbb{C}}^{N_s\times 1}$ is first processed by a low-dimensional digital precoder ${\mathbf{F}}\in{\mathbb{C}}^{ L_t\times N_s}$, where $L_t$ denotes the number of RF chains at the transmitter. The obtained $L_t\times 1$ signal vector is denoted by ${\widetilde{\mathbf{x}}_{V}}={\mathbf{F}}{\mathbf{s}}$, which is then converted to analog signals by $L_t$ digital-to-analog converters (DACs). Next, the $L_t$ signals go through the beam selector to obtain the $N_t\times 1$ (virtual) signal vector ${\mathbf{x}}_V$. For a given beam selection mask ${\mathcal{M}}$, ${\mathbf{x}}_V$ is constructed by $\left[{\mathbf{x}}_V(j)\right]_{j\in{\mathcal{M}_{t}}}={\widetilde{\mathbf{x}}_{V}}$ and $\left[{\mathbf{x}}_V(j)\right]_{j\notin{\mathcal{M}_{t}}}=\mathbf{0}$, where ${\mathcal{M}_{t}}=\left\{j:(i,j)\in{\mathcal{M}}\right\}$. The beam selector can be easily realized by switches in practice. ${\mathbf{x}}_V$ is further processed by the DFT analog precoder ${\mathbf{A}}_T\in{\mathbb{C}}^{N_t\times N_t}$, and the obtained signal vector is given by ${\mathbf{x}}={\mathbf{A}}_T{\mathbf{x}}_V$. Note that ${\mathrm{Tr}}\left(\mathbb{E}\{{\widetilde{\mathbf{x}}_{V}}{\widetilde{\mathbf{x}}_{V}}^H\}\right)={\mathrm{Tr}}\left(\mathbb{E}\{{\mathbf{x}}_V{\mathbf{x}}_V^H\}\right)={\mathrm{Tr}}\left(\mathbb{E}\{{\mathbf{x}}{\mathbf{x}}^H\}\right)=P$. Finally, the transmitter sends ${\mathbf{x}}$ with the reconfigurable antennas. The received signal vector at the receive antennas with a given reconfiguration state $\psi$ is given by $$\label{}
{\mathbf{y}}={\mathbf{H}}_{\psi}{\mathbf{x}}+{\mathbf{n}}={\mathbf{A}}_R{\mathbf{H}_{\psi,V}}{\mathbf{A}}_T^H{\mathbf{x}}+{\mathbf{n}}={\mathbf{A}}_R{\mathbf{H}_{\psi,V}}{\mathbf{x}}_V+{\mathbf{n}}.$$ At the receiver side, ${\mathbf{y}}$ is first processed by the IDFT analog decoder ${\mathbf{A}}_R^H\in{\mathbb{C}}^{N_r\times N_r}$, and the obtained (virtual) signal vector is given by $$\label{eq:virtualsystemrepre}
{\mathbf{y}}_V={\mathbf{A}}_R^H{\mathbf{y}}={\mathbf{H}_{\psi,V}}{\mathbf{x}}_V+{\mathbf{n}}_V,$$ where the distribution of ${\mathbf{n}}_V={\mathbf{A}}_R^H{\mathbf{n}}$ is $\mathcal{CN}(\mathbf{0};\sigma^2_n{\mathbf{I}}_{N_r})$. Note that the system representation in (\[eq:virtualsystemrepre\]) is unitarily equivalent to the antenna domain representation in (\[eq:yhxbasic\]). According to the given beam selection mask ${\mathcal{M}}$, the receiver then uses the beam selector to obtain the low-dimensional $L_r\times 1$ signal vector ${\widetilde{\mathbf{y}}_{V}}=\left[{\mathbf{y}}_V(i)\right]_{i\in{\mathcal{M}_{r}}}$, where $L_r$ denotes the number of RF chains at the receiver and ${\mathcal{M}_{r}}=\left\{i:(i,j)\in{\mathcal{M}}\right\}$. The low-dimensional virtual system representation for a given reconfiguration state $\psi$ is formulated as $$\label{eq:lowvirsyseq}
{\widetilde{\mathbf{y}}_{V}}={\widetilde{\mathbf{H}}_{\psi,V}}{\widetilde{\mathbf{x}}_{V}}+{\widetilde{\mathbf{n}}_{V}},$$ where ${\widetilde{\mathbf{H}}_{\psi,V}}=\left[{\mathbf{H}_{\psi,V}}(i,j)\right]_{i\in{{\mathcal{M}_{r}}},j\in{\mathcal{M}_{t}}}$, ${\widetilde{\mathbf{n}}_{V}}=\left[{\mathbf{n}}_V(i)\right]_{i\in{\mathcal{M}_{r}}}$, and ${\widetilde{\mathbf{n}}_{V}}\sim\mathcal{CN}(\mathbf{0};\sigma^2_n{\mathbf{I}}_{L_r})$. The analog signals are finally converted to digital signals by $L_r$ analog-to-digital converters (ADCs) for the low-dimensional digital signal processing.
As mentioned earlier, we assume that the full CSI is perfectly known at the receiver, and a limited feedback is available from the receiver to the transmitter to enable the beam selection and the reconfiguration state selection. Per the number of all possible combinations of selected beams and reconfiguration states, the number of the feedback bits is equal to $\log_2\left(\Psi\right)+\log_2\left(\binom{N_t}{L_t}\binom{N_r}{L_r}\right)$. We assume that $N_s=L_t\le L_r$ to maximize the multiplexing gain of the system. The digital precoder at the transmitter is then given by ${\mathbf{F}}={\mathbf{I}}_{N_s}$ with equal power allocation between the $N_s$ data streams, since the transmitter does not have the full CSI. At the receiver, the digital decoder is the joint ML decoder for maximizing the throughput.[^6] With the aforementioned transceiver architecture and CSI assumptions, the system throughput with a selected ${\widetilde{\mathbf{H}}_{\psi,V}}$ is given by [@Tse_05_Fundamentals] $$\label{eq:Coptimal}
R_{{\widetilde{\mathbf{H}}_{\psi,V}}}=\log_2\left|{\mathbf{I}}_{L_r}+\frac{\rho}{L_t}{\widetilde{\mathbf{H}}_{\psi,V}}{\widetilde{\mathbf{H}}_{\psi,V}}^H\right|,$$ where $\rho=P/\sigma^2_n$ denotes the transmit power to noise ratio.
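For reference, the throughput expression above can be evaluated directly; a minimal Python sketch is given below, where the low-dimensional channel realization, the dimensions, and the value of $\rho$ are hypothetical placeholders.

```python
# Minimal sketch of the throughput expression for a selected low-dimensional
# virtual channel of size L_r x L_t with equal power allocation over L_t streams.
# The channel realization and rho (transmit-power-to-noise ratio) are placeholders.
import numpy as np

def throughput(H_low, rho):
    L_r, L_t = H_low.shape
    M = np.eye(L_r) + (rho / L_t) * (H_low @ H_low.conj().T)
    # log-determinant of the Hermitian positive-definite matrix via its eigenvalues
    return np.sum(np.log2(np.linalg.eigvalsh(M)))

rng = np.random.default_rng(3)
H_low = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
print("throughput at rho = 10 (i.e. 10 dB): %.2f bits/s/Hz" % throughput(H_low, 10.0))
```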
Throughput Gain of Employing Reconfigurable Antennas {#sec:thrgainana}
====================================================
In this section, we analyze the performance gain of employing the reconfigurable antennas in terms of the throughput. With the optimal reconfiguration state selection, the instantaneous system throughput is given by $$\label{eq:defRsc}
R_{{\widehat{\psi}}}=\max_{\psi\in\left\{1,\cdots,\Psi\right\}}R_{\psi},$$ where $$\begin{aligned}
\label{eq:defRs}
R_{\psi}&=\log_2\left|{\mathbf{I}}_{L_r}+\frac{\rho}{L_t}{\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}{\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}^H\right|\notag\\
&=\max_{{\widetilde{\mathbf{H}}_{\psi,V}}\in\left\{\tilde{\mathcal{H}}_\psi\right\}}\log_2\left|{\mathbf{I}}_{L_r}+\frac{\rho}{L_t}{\widetilde{\mathbf{H}}_{\psi,V}}{\widetilde{\mathbf{H}}_{\psi,V}}^H\right|\end{aligned}$$ represents the maximum achievable throughput under the reconfiguration state $\psi$, ${\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}$ denotes the optimal low-dimensional virtual channel of ${\mathbf{H}_{\psi,V}}$, and $\tilde{\mathcal{H}}_\psi$ denotes the set of all possible $L_r\times L_t$ submatrices of ${\mathbf{H}_{\psi,V}}$. Here, the optimal reconfiguration state is the reconfiguration state that maximizes the throughput.
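A minimal sketch of this optimal selection by exhaustive search follows: for each reconfiguration state, it enumerates all $L_r\times L_t$ submatrices of the virtual channel and keeps the largest throughput, and then takes the best state. The i.i.d. Gaussian virtual channels and the dimensions are placeholders; the clustered statistics and the low-complexity selection algorithm of the paper are not reproduced here.

```python
# Minimal sketch of the optimal selection by exhaustive search: for every
# reconfiguration state, enumerate all L_r x L_t submatrices of the virtual
# channel and keep the largest throughput; then take the best state.
# The i.i.d. Gaussian virtual channels and all dimensions are placeholders.
import itertools
import numpy as np

def throughput(H_low, rho):
    L_r, L_t = H_low.shape
    M = np.eye(L_r) + (rho / L_t) * (H_low @ H_low.conj().T)
    return np.sum(np.log2(np.linalg.eigvalsh(M)))

def best_submatrix(H_V, L_r, L_t, rho):
    N_r, N_t = H_V.shape
    best = -np.inf
    for rows in itertools.combinations(range(N_r), L_r):
        for cols in itertools.combinations(range(N_t), L_t):
            best = max(best, throughput(H_V[np.ix_(rows, cols)], rho))
    return best

rng = np.random.default_rng(4)
Psi, N_r, N_t, L_r, L_t, rho = 4, 8, 8, 3, 2, 10.0
rates = [best_submatrix((rng.standard_normal((N_r, N_t))
                         + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2),
                        L_r, L_t, rho) for _ in range(Psi)]
print("R_psi per state:", np.round(rates, 2), "  R_psi_hat =", round(max(rates), 2))
```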
Average Throughput Gain {#sec:appAvethrougai}
-----------------------
The average throughput gain of employing the reconfigurable antennas is given by $$\label{eq:th_gain}
G_{\bar{R}}={\bar{R}_{{\widehat{\psi}}}}/{\bar{R}_{\psi}}, $$ where $\bar{R}_{{\widehat{\psi}}}=\mathbb{E}\{R_{{\widehat{\psi}}}\}$, $\bar{R}_{\psi}=\mathbb{E}\{R_{\psi}\}$, and the expectation is over different channel realizations. As mentioned before, we assume that the channel matrices for different reconfiguration states have the same average channel power, and hence, $\bar{R}_{1}=\cdots=\bar{R}_{\Psi}$.
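Before turning to the analytical approximation, the definition above can also be estimated directly by Monte-Carlo simulation; the sketch below does so with i.i.d. Gaussian placeholder channels of the selected dimension (skipping the beam-selection step), so the numbers it produces are only illustrative.

```python
# Minimal Monte-Carlo sketch of the average throughput gain
# G = E{R_psi_hat} / E{R_psi}.  The per-state throughputs are generated from
# i.i.d. Gaussian placeholder channels of the selected dimension, so the
# beam-selection step and the clustered mmWave statistics are not reproduced.
import numpy as np

rng = np.random.default_rng(5)

def rate(L_r, L_t, rho):
    H = (rng.standard_normal((L_r, L_t)) + 1j * rng.standard_normal((L_r, L_t))) / np.sqrt(2)
    M = np.eye(L_r) + (rho / L_t) * (H @ H.conj().T)
    return np.sum(np.log2(np.linalg.eigvalsh(M)))

Psi, L_r, L_t, rho, trials = 4, 3, 2, 10.0, 2000
R_state = np.array([[rate(L_r, L_t, rho) for _ in range(Psi)] for _ in range(trials)])

G_avg = R_state.max(axis=1).mean() / R_state[:, 0].mean()
print("Monte-Carlo estimate of the average throughput gain: %.3f" % G_avg)
```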
With the key property of VCR, each entry of ${\widetilde{\mathbf{H}}_{\psi,V}}$, i.e., ${\widetilde{\mathbf{H}}_{\psi,V}}(i,j)$, is associated with a set of physical paths [@Sayeed_02_Deconstuctingmfc], and it is approximately equal to the sum of the complex gains of the corresponding paths [@Sayeed_07_maxMcsparseRAA]. When the number of distinct paths associated with ${\widetilde{\mathbf{H}}_{\psi,V}}(i,j)$ is sufficiently large, we note from the central limit theorem that ${\widetilde{\mathbf{H}}_{\psi,V}}(i,j)$ tends toward a complex Gaussian random variable. As observed in [@Gustafson_14_ommcacm] for the practical mmWave propagation environment at 60 GHz, the average number of distinct clusters is 10, and the average number of rays in each cluster is 9. The 802.11ad model has a fixed value of 18 clusters for the 60 GHz WLAN systems [@Maltsev_10_cmf60gwsmodl]. With the aforementioned numbers of clusters and rays, the entries of ${\widetilde{\mathbf{H}}_{\psi,V}}$ can be approximated by zero-mean complex Gaussian variables. Different from the rich scattering environment for low-frequency communication, the groups of paths associated with different entries of ${\widetilde{\mathbf{H}}_{\psi,V}}$ may be correlated in the mmWave environment. As a result, the entries of ${\widetilde{\mathbf{H}}_{\psi,V}}$ can be correlated, and they are then approximated by correlated zero-mean complex Gaussian variables. In the literature, it has been shown that the instantaneous capacity of a MIMO system whose channel matrix has correlated zero-mean complex Gaussian entries can be approximated by a Gaussian variable [@Moustakas_03_Mctccitpocian; @Martin_03_aedacfcucf]. Based on the discussion above, the distribution of $R_{\psi}$ is approximated by a Gaussian distribution, and the accuracy of the approximation will also be numerically shown later in Section \[sec:numersim\]. It is worth mentioning that practical mmWave channels may have relatively small numbers of clusters and paths. Different from the results in [@Gustafson_14_ommcacm; @Maltsev_10_cmf60gwsmodl] for 60 GHz, one may observe only 3-6 clusters at 28 GHz [@Raghavan8255763; @Raghavan8053813]. Although the distribution of ${\widetilde{\mathbf{H}}_{\psi,V}}(i,j)$ may deviate from Gaussian when the number of distinct paths associated with it becomes small, we find that $R_{\psi}$ is still approximately Gaussian distributed. Note that a Gaussian approximation of the rate distribution of MIMO systems has been demonstrated many times in the literature under various assumptions [@Telatar_99_CmGaucs; @Moustakas_03_Mctccitpocian; @Smith_04_Aappcdfms; @Martin_03_aedacfcucf], and the Gaussianity of $R_{\psi}$ with randomness in the angular profiles of the clusters is reasonable.
Denoting the approximated Gaussian distribution of $R_{\psi}$ as $\mathcal{N}({\bar{R}_{\psi}},{\sigma^2_{R_\psi}})$, where ${\bar{R}_{\psi}}$ and ${\sigma^2_{R_\psi}}$ denote the mean and the variance of $R_\psi$, respectively, we have the following proposition giving the approximated average throughput gain of employing the reconfigurable antennas.
\[Prop:1\] The average throughput gain of employing the reconfigurable antennas with $\Psi$ distinct reconfiguration states is approximated by $$\label{eq:th_gain_close}
G_{\bar{R}} \approx\int_0^\infty \left[\frac{1}{{\bar{R}_{\psi}}}-\frac{1}{2^\Psi{\bar{R}_{\psi}}}\left(1+\mathrm{erf}\left(\frac{x-{\bar{R}_{\psi}}}{\sqrt{2{\sigma^2_{R_\psi}}}}\right)\right)^\Psi\right]\mathrm{d}x. $$
See Appendix \[App:proofaveGa\]
To the best of our knowledge, the expression for $G_{\bar{R}}$ in cannot be simplified for general values of $\Psi$. Thus, the calculation of the average throughput gain for a general number of reconfiguration states involves an infinite integral of a complicated function. We then further consider two special cases, in which the relatively simple expressions for the average throughput gain are tractable.
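Since the integral above has no closed form for general $\Psi$, it can be evaluated numerically. A minimal Python sketch (our own, not from the paper), with placeholder values for $\bar R_\psi$ and $\sigma^2_{R_\psi}$, writes the gain as the integral of the complementary CDF of the maximum of $\Psi$ i.i.d. Gaussian rates:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def avg_gain(Psi: int, R_mean: float, R_var: float) -> float:
    """Approximate average throughput gain: (1/R_mean) * E[max of Psi i.i.d.
    N(R_mean, R_var) rates], with the expectation written as the integral of
    the complementary CDF of the maximum over [0, inf)."""
    def integrand(x):
        cdf = 0.5 * (1.0 + erf((x - R_mean) / np.sqrt(2.0 * R_var)))
        return 1.0 - cdf ** Psi
    val, _ = quad(integrand, 0.0, np.inf)
    return val / R_mean

# placeholder moments; in the paper they come from the distribution of R_psi
print(avg_gain(Psi=4, R_mean=20.0, R_var=9.0))
```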
### $\Psi\le5$
In practice, the number of distinct reconfiguration states is usually small, due to the complicated hardware design and the limited size of antennas. The following Corollary gives the approximated average throughput gain for the case of $\Psi\le5$.
\[Cor:aveGst5\] The approximated expressions for the average throughput gain, $G_{\bar{R}}$, for the case of $\Psi\le5$ are given by $$\begin{aligned}
\label{eq:th_gain_close_125}
&G_{\bar{R}}(\Psi=1)\approx1,~
G_{\bar{R}}(\Psi=2)\approx1+\frac{1}{{\bar{R}_{\psi}}}\sqrt{\frac{{\sigma^2_{R_\psi}}}{\pi}},\notag\\
&G_{\bar{R}}(\Psi=3)\approx1+\frac{3}{2{\bar{R}_{\psi}}}\sqrt{\frac{{\sigma^2_{R_\psi}}}{\pi}},\notag\\
&G_{\bar{R}}(\Psi=4)\approx1+\frac{3}{{\bar{R}_{\psi}}}\sqrt{\frac{{\sigma^2_{R_\psi}}}{\pi^3}}\arccos\left(-\frac{1}{3}\right),\notag\\
&G_{\bar{R}}(\Psi=5)\approx1+\frac{5}{2{\bar{R}_{\psi}}}\sqrt{\frac{{\sigma^2_{R_\psi}}}{\pi^3}}\arccos\left(-\frac{23}{27}\right).\end{aligned}$$
See Appendix \[App:proofaveGast5\]
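The closed forms above rest on the known means $E_i$ of the maximum of $i$ independent standard normal variables. A quick Monte Carlo cross-check of those means (our own sketch, not part of the paper):

```python
import numpy as np

# empirical E_i = E[max of i standard normals], i = 1..5
rng = np.random.default_rng(1)
samples = rng.standard_normal((200_000, 5))
empirical = [samples[:, :i].max(axis=1).mean() for i in range(1, 6)]

closed_form = [0.0,
               np.pi ** -0.5,
               1.5 * np.pi ** -0.5,
               3.0 * np.pi ** -1.5 * np.arccos(-1.0 / 3.0),
               2.5 * np.pi ** -1.5 * np.arccos(-23.0 / 27.0)]

for i, (e, c) in enumerate(zip(empirical, closed_form), start=1):
    print(f"E_{i}: Monte Carlo {e:.4f}, closed form {c:.4f}")
```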
### $\Psi$ Is Large
Although $\Psi$ is relatively small in practice, it is of theoretical importance to study the case of large $\Psi$ to capture the limiting performance gain of reconfigurable antennas. We present the approximated average throughput gain for large $\Psi$ in the corollary below.
\[Cor:aveGlsn\] The approximated average throughput gain, $G_{\bar{R}}$, for the case of large $\Psi$ is given by $$\begin{aligned}
\label{eq:GavelargePsi}
&G_{\bar{R}}\approx 1+\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}}\notag\\ &\left((1-\beta)\mathrm{erf}^{-1}\left(1-\frac{2}{\Psi}\right)
+\beta \mathrm{erf}^{-1}\left(1-\frac{2}{e\Psi}\right)\right),\end{aligned}$$ where $\beta\simeq0.5772$ denotes Euler’s constant.
See Appendix \[App:proofaveGlsn\]
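The large-$\Psi$ expression is straightforward to evaluate with the inverse error function. A brief sketch (our own, valid for $\Psi\ge 2$ and using placeholder moments):

```python
import numpy as np
from scipy.special import erfinv

EULER_GAMMA = 0.5772  # the constant beta in the corollary

def avg_gain_large_psi(Psi: int, R_mean: float, R_var: float) -> float:
    """Gumbel-based approximation of the average throughput gain for large Psi."""
    e_max = np.sqrt(2.0) * ((1.0 - EULER_GAMMA) * erfinv(1.0 - 2.0 / Psi)
                            + EULER_GAMMA * erfinv(1.0 - 2.0 / (np.e * Psi)))
    return 1.0 + np.sqrt(R_var) / R_mean * e_max

print(avg_gain_large_psi(Psi=16, R_mean=20.0, R_var=9.0))
```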
Based on , we further obtain the limiting behavior of the average throughput gain when $\Psi$ becomes large in the following corollary.
\[Cor:aveGlsngrO\] As $\Psi\rightarrow\infty$, $G_{\bar{R}}(\Psi)$ is asymptotically equivalent to $$\label{eq:galarpsiasy}
G_{\bar{R}}(\Psi)\sim\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}}\sqrt{\ln(\Psi)}.
$$
See Appendix \[App:proofCor:aveGlsngrO\]
From we find that $G_{\bar{R}}(\Psi)=O\left(\sqrt{\ln(\Psi)}\right)$ as $\Psi\rightarrow\infty$. Thus, the growth of the average throughput from adding reconfiguration states becomes small when the number of reconfiguration states is already large, although having more distinct reconfiguration states always benefits the average throughput.
Outage Throughput Gain
----------------------
In the above analysis, we focused on the performance gain in terms of the average throughput. However, it is insufficient to use the average throughput as the sole measure of the rate performance of the systems with multiple antennas. For scenarios where the channel remains (quasi) static during the transmission, it is appropriate to evaluate the system performance by the outage throughput, since every possible target transmission rate is associated with an unavoidable probability of outage. In the following, we analyze the performance gain of employing the reconfigurable antennas in terms of the outage throughput.
For a given target rate $R$, an outage event happens when the maximum achievable throughput is less than the target rate, and the outage probabilities for the systems with and without the reconfigurable antennas are given by ${\mathbb{P}}(R_{{\widehat{\psi}}}<R)$ and ${\mathbb{P}}(R_{\psi}<R)$, respectively. At a required outage level $0<\epsilon<1$, the outage throughputs for the systems with and without the reconfigurable antennas are given by [@Tse_05_Fundamentals] $$\begin{aligned}
\label{eq:outthsin}
R_{{\widehat{\psi}}}^{{\mathrm{out}}}=\max~R,~~\mathrm{s.t.}~~{\mathbb{P}}(R_{{\widehat{\psi}}}<R)\le\epsilon\end{aligned}$$ and $$\begin{aligned}
\label{eq:outthrec}
R_{\psi}^{{\mathrm{out}}}=\max~R,~~\mathrm{s.t.}~~{\mathbb{P}}(R_{\psi}<R)\le\epsilon,
\end{aligned}$$ respectively. The outage throughput gain of employing the reconfigurable antennas at an outage level $\epsilon$ is given by $$\label{eq:Defth_gain_out}
G_{R^{\mathrm{out}}}={R_{{\widehat{\psi}}}^{{\mathrm{out}}}}/{R_{\psi}^{{\mathrm{out}}}}, $$ where $R_{{\widehat{\psi}}}^{{\mathrm{out}}}$ and $R_{\psi}^{{\mathrm{out}}}$ are given in and , respectively. The outage throughput gain of employing the reconfigurable antennas is given in the following proposition.
\[Prop:2\] The outage throughput gain of employing the reconfigurable antennas with $\Psi$ distinct reconfiguration states is approximated by $$\label{eq:th_gain_out}
G_{R^{\mathrm{out}}}\approx\frac{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon^{\frac{1}{\Psi}}\right)}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}.$$
See Appendix \[App:proofoutGa\]
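Under the same Gaussian approximation, this gain reduces to a ratio of Gaussian quantiles and can be evaluated directly. A minimal sketch with placeholder moments (our own, not the authors' code):

```python
import numpy as np
from scipy.special import erfinv

def outage_gain(Psi: int, eps: float, R_mean: float, R_var: float) -> float:
    """Approximate outage throughput gain at outage level eps (0 < eps < 1)."""
    s = np.sqrt(2.0 * R_var)
    r_out_reconf = R_mean - s * erfinv(1.0 - 2.0 * eps ** (1.0 / Psi))  # with reconfigurable antennas
    r_out_fixed = R_mean - s * erfinv(1.0 - 2.0 * eps)                  # without
    return r_out_reconf / r_out_fixed

print(outage_gain(Psi=4, eps=0.05, R_mean=20.0, R_var=9.0))
```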
### $\Psi$ Is Large
We now investigate the limiting performance gain of reconfigurable antennas in terms of the outage throughput as $\Psi\rightarrow\infty$. Based on , we present the limiting behavior of the outage throughput gain when $\Psi$ becomes large in the following corollary.
\[Cor:outGlsngrO\] As $\Psi\rightarrow\infty$, $G_{R^{\mathrm{out}}}(\Psi)$ is asymptotically equivalent to $$\label{eq:goutlarpsiasy}
G_{R^{\mathrm{out}}}(\Psi)\sim\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}\sqrt{\ln(\Psi)}.
$$
See Appendix \[App:proofCor:outGlsngrO\]
Similar to the finding for the average throughput gain, we find from that $G_{R^{\mathrm{out}}}(\Psi)=O\left(\sqrt{\ln(\Psi)}\right)$ as $\Psi\rightarrow\infty$. Thus, the growth of the outage throughput from adding reconfiguration states becomes small when the number of reconfiguration states is already large. Comparing the limiting behaviors of the average throughput gain and the outage throughput gain, we further find that the growth rate of the average throughput and the growth rate of the outage throughput have the same order when the number of reconfiguration states is large.
Fast Selection Algorithm {#sec:fastalgrb}
========================
In the previous section, we analyzed the throughput gain of employing the reconfigurable antennas when the optimal reconfiguration state and beams are selected, while the problem of how to select the optimal reconfiguration state and beams has not been considered. In fact, as will be discussed later, selecting the optimal reconfiguration state and beams among all possible selections is extremely complicated and challenging for practical applications. To overcome the challenge, in this section, we propose a fast selection algorithm with low complexity and near-optimal throughput performance in the sparse mmWave MIMO environment. The objective of selecting the optimal reconfiguration state and beams is to obtain the corresponding optimal ${\widetilde{\mathbf{H}}_{\psi,V}}$ that maximizes the system throughput given in . The design problem of selecting ${\widetilde{\mathbf{H}}_{\psi,V}}$ is formulated as[^7] $$\label{eq:problemselect}
\max_{\psi\in\left\{1,\cdots,\Psi\right\}}\max_{{\widetilde{\mathbf{H}}_{\psi,V}}\in\left\{\tilde{\mathcal{H}}_\psi\right\}}\left|{\mathbf{I}}_{L_r}+\frac{\rho}{L_t}{\widetilde{\mathbf{H}}_{\psi,V}}{\widetilde{\mathbf{H}}_{\psi,V}}^H\right|.$$
A straightforward method to obtain the optimal ${\widetilde{\mathbf{H}}_{\psi,V}}$ is the exhaustive search among all possible selections of ${\widetilde{\mathbf{H}}_{\psi,V}}$. That is, we first search for the optimal beam selection for each reconfiguration state to obtain ${\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}$, i.e., the optimal low-dimensional virtual channel of ${\mathbf{H}_{\psi,V}}$. Then, we compare the obtained ${\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}$ among all reconfiguration states to complete the selection of optimal ${\widetilde{\mathbf{H}}_{\psi,V}}$, denoted by ${\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}$. Since there are $\Psi$ reconfiguration states and $\binom{N_t}{L_t}\binom{N_r}{L_r}$ possible submatrices for each state, the total number of possible selections to search is given by $$\label{}
N_{\mathrm{total}}=\Psi\binom{N_t}{L_t}\binom{N_r}{L_r}=\frac{\Psi N_r!N_t!}{L_r!L_t!\left(N_r-L_r\right)!\left(N_t-L_t\right)!}.$$ When $N_t\gg L_t$, $N_r\gg L_r$, and/or $\Psi\gg 1$, the total number to search, $N_{\mathrm{total}}$, would be too large for practical applications due to the high complexity. Thus, in what follows, we propose a low-complexity design to obtain ${\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}$ which achieves the near optimal throughput performance.
As discussed earlier, the mmWave MIMO channel has a sparse nature, and the number of non-vanishing rows and columns of the virtual channel matrix is relatively small in the clustered scattering environment. Now let us consider an extreme scenario such that all of the non-vanishing entries of ${\mathbf{H}_{\psi,V}}$ are contained in the low-dimensional submatrix, and ${\mathbf{H}_{\psi,V}}$ is approximated by $$\label{eq:ssHv}
{\mathbf{M}}\odot{\mathbf{H}_{\psi,V}},$$ where $$\label{eq:MaskM}
{\mathbf{M}}(i,j)=\left\{
\begin{array}{ll} 1\;, &\mbox{if}~(i,j)\in \widehat{{\mathcal{M}}}_{\psi},\\
0\;, &\mbox{otherwise,}
\end{array}
\right.$$ and $\widehat{{\mathcal{M}}}_{\psi}$ is the beam selection mask corresponding to ${\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}$. Note that a similar approximation was adopted in, e.g., [@Sayeed_07_maxMcsparseRAA] to approximate the sparse virtual MIMO channel. With , we have $$\begin{aligned}
\label{eq:appHtoHvl}
&\left|{\mathbf{I}}_{L_r}+\frac{\rho}{L_t} {\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}{\widehat{\widetilde{\mathbf{H}}}_{\psi,V}}^H\right|
\approx \left|{\mathbf{I}}_{N_r}+\frac{\rho}{L_t} {\mathbf{H}_{\psi,V}}{\mathbf{H}_{\psi,V}}^H\right|\notag\\
&= \left|{\mathbf{I}}_{N_r}+\frac{\rho}{L_t} {\mathbf{H}}_{\psi}{\mathbf{H}}_{\psi}^H\right|.\end{aligned}$$ Based on , we find that a fast selection of reconfiguration state can be achieved by directly comparing their (full) physical channel matrices. Instead of finding the optimal beam selection of each reconfiguration state first, we can directly determine the optimal reconfiguration state by $$\label{eq:problemselect_2}
\widehat{\psi}=\arg\max_{\psi\in\left\{1,\cdots,\Psi\right\}} \left|{\mathbf{I}}_{N_r}+\frac{\rho}{L_t} {\mathbf{H}}_{\psi}{\mathbf{H}}_{\psi}^H\right|.$$ Based on , the near-optimal reconfiguration state can be selected by calculating and comparing the throughput among only $\Psi$ possible channel matrices. Note that ${\mathbf{H}}_{\psi}$ in is the full channel matrix rather than a low-dimensional virtual channel matrix associated with a particular selection of beams. In addition, with the fast reconfiguration state selection, we can select beams from only the beams that are associated with the selected reconfiguration state. In contrast, the exhaustive search needs to examine the performance of all beams associated with all reconfigurable states. Although the performance of this fast-selection method depends on the accuracy of the approximation in , we will show later by numerical results that usually near optimal performance can be achieved.
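Before the full algorithm listing, a minimal Python sketch of this fast state-selection step (our own rendering of the rule above; `H_list` is assumed to hold the full channel matrices of the $\Psi$ states):

```python
import numpy as np

def select_state(H_list, rho: float, L_t: int) -> int:
    """Fast reconfiguration-state selection: pick the state whose full
    N_r x N_t channel maximizes log2|I + (rho/L_t) H H^H|, with no beam search."""
    best_state, best_val = 0, -np.inf
    for psi, H in enumerate(H_list):
        N_r = H.shape[0]
        _, logdet = np.linalg.slogdet(np.eye(N_r) + (rho / L_t) * (H @ H.conj().T))
        if logdet > best_val:
            best_state, best_val = psi, logdet
    return best_state
```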
\[Alg:1\]
$\widehat{\psi}:=\arg\max_{\psi\in\left\{1,\cdots,\Psi\right\}} \left|{\mathbf{I}}_{N_r}+\frac{\rho}{L_t} {\mathbf{H}}_{\psi}{\mathbf{H}}_{\psi}^H\right|$;
$\mathcal{I}_r:=\left\{1, \cdots, N_r\right\}$; $\mathcal{I}_t:=\left\{1, \cdots, N_t\right\}$; ${\mathbf{h}}_j:=j$-th row of ${\mathbf{H}_{\widehat{\psi},V}}$, $\forall j\in\mathcal{I}_r$;
$J:=\arg\max_{j\in\mathcal{I}_r}{\mathbf{h}}_j{\mathbf{h}}_j^H$;
${\mathcal{M}_{r}}:=\left\{J\right\}$; ${\widetilde{\mathbf{H}}_{\widehat{\psi},V}}:=\left[{\mathbf{H}_{\widehat{\psi},V}}\left(i,j\right)\right]_{i\in{{\mathcal{M}_{r}},j\in\mathcal{I}_t}}$; $\mathcal{I}_r:=\mathcal{I}_r-\left\{J\right\};$
$J:=\arg\max_{j\in \mathcal{I}_r}{\mathbf{h}}_j\left({\mathbf{I}}_{N_t}+\frac{\rho}{N_t}{\widetilde{\mathbf{H}}_{\widehat{\psi},V}}^H{\widetilde{\mathbf{H}}_{\widehat{\psi},V}}\right)^{-1}{\mathbf{h}}_j^H$;
${\mathcal{M}_{r}}:={\mathcal{M}_{r}}+\left\{J\right\}$; ${\widetilde{\mathbf{H}}_{\widehat{\psi},V}}:=\left[{\mathbf{H}_{\widehat{\psi},V}}\left(i,j\right)\right]_{i\in{{\mathcal{M}_{r}},j\in\mathcal{I}_t}}$; $\mathcal{I}_r:=\mathcal{I}_r-\left\{J\right\};$
${\mathbf{h}}_j:=j$-th column of ${\widetilde{\mathbf{H}}_{\widehat{\psi},V}}$, $\forall j\in\mathcal{I}_t$;
$J:=\arg\max_{j\in\mathcal{I}_t}{\mathbf{h}}_j^H{\mathbf{h}}_j$;
${\mathcal{M}_{t}}:=\left\{J\right\}$; ${\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}:=\left[{\widetilde{\mathbf{H}}_{\widehat{\psi},V}}\left(i,j\right)\right]_{i\in{{\mathcal{M}_{r}},j\in{\mathcal{M}_{t}}}}$; $\mathcal{I}_t:=\mathcal{I}_t-\left\{J\right\};$
$$\begin{aligned}
\label{}
& J:=\arg\max_{j\in \mathcal{I}_t}\notag\\
&{\mathbf{h}}_j^H\left({\mathbf{I}}_{L_r}-\frac{\rho}{L_t}{\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}\left({\mathbf{I}}_{l-1}+\frac{\rho}{L_t}{\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}^H{\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}\right)^{-1}{\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}^H\right){\mathbf{h}}_j;\end{aligned}$$
${\mathcal{M}_{t}}:={\mathcal{M}_{t}}+\left\{J\right\}$; ${\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}:=\left[{\widetilde{\mathbf{H}}_{\widehat{\psi},V}}\left(i,j\right)\right]_{i\in{{\mathcal{M}_{r}},j\in{\mathcal{M}_{t}}}}$; $\mathcal{I}_t:=\mathcal{I}_t-\left\{J\right\};$
To reduce the complexity of beam selection, we utilize the technique of separate transmit and receive antenna selection [@Sanayei_04_CMAHTRAS] and the incremental successive selection algorithm (ISSA) [@Alkhansari_04_fastassims]. Again due to the sparsity of mmWave channels, the dominant beams usually significantly outperform the other beams, and they can be easily selected by the fast beam selection scheme. Note that the transmitter does not have the full CSI in the considered system, and hence, the existing beamspace selection schemes in, e.g., [@Amadori_15_LowRDBStion], with the requirement of full CSI on the beamspace channel at the transmitter are not applicable in our work. Our fast beam selection method is explained next. The beam selection problem in fact includes both transmit and receive beam selections. We adopt a separate transmit and receive beam selection technique [@Sanayei_04_CMAHTRAS] for first selecting the best $L_r$ receive beams and then selecting the best $L_t$ transmit beams. For both the receive and transmit beam selections, a technique based on the incremental successive selection algorithm (ISSA) [@Alkhansari_04_fastassims] is utilized. We start from the empty set of selected beams, and then successively add an individual beam to this set at each step of the beam selection algorithm until the numbers of selected transmit and receive beams reach $L_t$ and $L_r$, respectively. Note that successively selecting one beam at each step for the algorithm does not mean that we successively turn on individual beams. Instead, we first determine the set of selected beams by successively selecting beams according to the algorithm. After determining the set of selected beams, the transmitter and the receiver selectively turn on all selected beams together by beam selectors [@Brady_13_BeamspaceSAMAM; @Health_16_OverviewSPTmmMIMO]. In each step of the algorithm, the objective is to select one of the unselected beams that leads to the highest increase of the throughput. The mechanism of ISSA-based receive beam selection is provided as follows. Denote the submatrix corresponding to the $n$ selected receive beams after $n$ steps of ISSA as ${\widetilde{\mathbf{H}}_{\psi,V}}^{n}\in{\mathbb{C}}^{n\times N_t}$ and the $j$-th row of ${\mathbf{H}_{\psi,V}}$ by ${\mathbf{h}}_{\psi,V}^{j}$. Since the contribution of the $j$-th receive beam to the throughput under the $\log$ function is given by $$\label{eq:alphapsinjR}
g_{\psi,n,j}={\mathbf{h}}_{\psi,V}^{j}\left({\mathbf{I}}_{N_t}+\frac{\rho}{N_t}\left({\widetilde{\mathbf{H}}_{\psi,V}}^{n}\right)^H{\widetilde{\mathbf{H}}_{\psi,V}}^{n}\right)^{-1}\left({\mathbf{h}}_{\psi,V}^{j}\right)^H,$$ we select the receive beam at the $(n+1)$-th step by $$\label{}
{\mathbf{h}}_{\psi,V}^{J}=\arg\max_{{\mathbf{h}}_{\psi,V}^{j}} g_{\psi,n,j}.$$ The mechanism of ISSA-based transmit beam selection is omitted here, since it is similar to that of the ISSA-based receive beam selection.
The proposed fast selection algorithm is given as Algorithm 1. The outputs of the algorithm are the optimal reconfiguration state, the indices of the selected receive beams, the indices of the selected transmit beams, and the selected low-dimensional virtual channel, denoted by $\widehat{\psi}$, $\mathcal{M}_r$, $\mathcal{M}_t$, and ${\widehat{\widetilde{\mathbf{H}}}_{\widehat{\psi},V}}$, respectively.
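The receive-beam selection step of the algorithm can be sketched as follows (again our own rendering of the greedy ISSA rule, not the authors' code; the transmit side proceeds analogously on the columns of the selected submatrix):

```python
import numpy as np

def issa_receive_beams(Hv: np.ndarray, L_r: int, rho: float) -> list:
    """Greedy (ISSA-style) selection of L_r receive beams, i.e., rows of the
    virtual channel Hv (N_r x N_t): at each step add the unselected row with
    the largest incremental metric g_{psi,n,j}."""
    N_r, N_t = Hv.shape
    selected, remaining = [], list(range(N_r))
    for _ in range(L_r):
        if selected:
            H_sel = Hv[selected, :]
        else:
            H_sel = np.zeros((0, N_t), dtype=Hv.dtype)
        M_inv = np.linalg.inv(np.eye(N_t) + (rho / N_t) * (H_sel.conj().T @ H_sel))
        gains = [np.real(Hv[j, :] @ M_inv @ Hv[j, :].conj()) for j in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected
```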
We would like to highlight that the proposed algorithm significantly reduces the complexity of reconfiguration state selection and beam selection, and achieves the near-optimal throughput performance. It is worth pointing out that an important requirement of the proposed algorithm is the knowledge of full CSI of all reconfigurable states, and the associated channel estimation complexity has not been taken into account. Although the full CSI assumption has been widely-adopted in the literature, as mentioned earlier, the channel estimation is relatively challenging for mmWave systems with reconfigurable antennas.
Numerical Results {#sec:numersim}
=================
For all numerical results in this work, we adopt the clustered multipath channel model in to generate the channel matrix. We assume that $\alpha_{\psi,i,l}$ are i.i.d. $\mathcal{CN}\left(0,\sigma^2_{\alpha,\psi,i}\right)$, where $\sigma^2_{\alpha,\psi,i}$ denotes the average power of the $i$-th cluster, and $\sum_{i=1}^{N_{\psi,{\mathrm{cl}}}}\sigma^2_{\alpha,\psi,i}=\gamma_{\psi}$, where $\gamma_{\psi}$ is a normalization parameter to ensure that $\mathbb{E}\{\left\|{\mathbf{H}}_{\psi}\right\|^2_F\}=N_rN_t$. We also assume that $\theta^r_{\psi,i,l}$ are uniformly distributed with mean $\theta_{\psi,i}^r$ and a constant angular spread (standard deviation) $\sigma_{\theta^r}$. Similarly, $\theta^t_{\psi,i,l}$ are uniformly distributed with mean $\theta_{\psi,i}^t$ and a constant angular spread (standard deviation) $\sigma_{\theta^t}$. We further assume that $\theta_{\psi,i}^r$ and $\theta_{\psi,i}^t$ are both uniformly distributed within the range of $[-\pi/2, \pi/2]$. Unless otherwise stated, the system parameters are $N_r=N_t=17, L_r=L_t=5, N_{\psi,{\mathrm{cl}}}=10, N_{\psi,{\mathrm{ry}}}=8,$ $\sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$. All average results are over 5,000 randomly generated channel realizations. Note that $N_{\psi,{\mathrm{cl}}}=10$ and $N_{\psi,{\mathrm{ry}}}=8$ are based on the existing observations at 60 GHz in the literature [@Gustafson_14_ommcacm; @Maltsev_10_cmf60gwsmodl], and practical mmWave channels at 28 GHz may have relatively small numbers of clusters and paths [@Raghavan8255763; @Raghavan8053813].
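For readers wishing to reproduce the setup, the following Python sketch generates one channel realization consistent with the assumptions above. It is our own sketch, not the authors' simulator: it assumes ULA steering vectors at half-wavelength spacing and, for simplicity, equal per-cluster powers; parameter names are ours.

```python
import numpy as np

def ula_response(N: int, theta: float, d_over_lambda: float = 0.5) -> np.ndarray:
    """Unit-norm uniform-linear-array steering vector."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta)) / np.sqrt(N)

def clustered_channel(N_r=17, N_t=17, N_cl=10, N_ray=8,
                      spread=np.deg2rad(3.0), rng=None) -> np.ndarray:
    """One clustered mmWave channel realization with equal cluster powers,
    uniform mean angles in [-pi/2, pi/2] and a uniform per-ray angular offset
    whose standard deviation equals `spread`."""
    rng = rng if rng is not None else np.random.default_rng()
    H = np.zeros((N_r, N_t), dtype=complex)
    a = np.sqrt(3.0) * spread  # uniform on [-a, a] has standard deviation `spread`
    for _ in range(N_cl):
        aoa0 = rng.uniform(-np.pi / 2, np.pi / 2)
        aod0 = rng.uniform(-np.pi / 2, np.pi / 2)
        for _ in range(N_ray):
            alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
            ar = ula_response(N_r, aoa0 + rng.uniform(-a, a))
            at = ula_response(N_t, aod0 + rng.uniform(-a, a))
            H += alpha * np.outer(ar, at.conj())
    # normalize so that E{||H||_F^2} = N_r * N_t
    return H * np.sqrt(N_r * N_t / (N_cl * N_ray))
```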
![PDF of $R_\psi$. The parameters are $N_r=N_t=17, L_r=L_t=5, \sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$.[]{data-label="fig:DisR"}](DisR){width=".9\columnwidth"}
\
![PDF of $R_\psi$. The parameters are $N_r=N_t=17, L_r=L_t=5, \sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$.[]{data-label="fig:DisR"}](DisR_c4p2){width=".9\columnwidth"}
We first demonstrate the accuracy of the Gaussian approximated probability density function (PDF) of $R_\psi$. Figure \[fig:DisR\] plots the simulated PDF and the Gaussian approximated PDF of $R_\psi$. Figure \[fig:DisR\](a) and Figure \[fig:DisR\](b) are for the case of $N_{\psi,{\mathrm{cl}}}=10$ and $N_{\psi,{\mathrm{ry}}}=8$ and the case of $N_{\psi,{\mathrm{cl}}}=4$ and $N_{\psi,{\mathrm{ry}}}=2$, respectively. Different transmit power to noise ratios are considered, i.e., $\rho=0$ dB and $\rho=10$ dB. The transmit powers of $30$ dBm and $40$ dBm are considered based on the existing studies on mmWave systems [@Khan5876482; @Pi11Aninmmvmbs; @akdeniz2014millimeter]. We note that the Gaussian approximations match the simulated PDFs. In particular, we observe from Figure \[fig:DisR\](b) that the distribution of $R_\psi$ can be well approximated by the Gaussian distribution even for relatively small numbers of clusters and paths.
![Average throughput gain versus number of reconfiguration states. The parameters are $\rho=0$ dB, $N_r=N_t=17, L_r=L_t=5, N_{\psi,{\mathrm{cl}}}=10, N_{\psi,{\mathrm{ry}}}=8,$ $\sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$.[]{data-label="fig:Gain"}](Gainresults){width=".9\columnwidth"}
We then show the average throughput gain of employing the reconfigurable antennas. Figure \[fig:Gain\] plots the average throughput gain, $G_{\bar{R}}$, versus the number of reconfiguration states, $\Psi$. The illustrated results are for the actual gain in by simulating the channels, ${\mathbf{H}}_{\psi}$, the theoretical approximation in , the simplified theoretical approximation for $\Psi\le5$ in , and the simplified theoretical approximation for large $\Psi$ in . As depicted in the figure, the derived theoretical approximations match precisely the simulated results. In particular, we note that the simplified approximation for large $\Psi$ in has good accuracy even when $\Psi$ is small. From all four curves, we find that the growth of $G_{\bar{R}}$ with $\Psi$ is fast when $\Psi$ is small, while it becomes slow when $\Psi$ is relatively large. This finding is consistent with the analysis in Section \[sec:appAvethrougai\], and it indicates that the dominant average throughput gain of employing the reconfigurable antennas can be achieved with only a small number of reconfiguration states.
![Outage throughput gain versus number of reconfiguration states. The parameters are $\rho=0$ dB, $N_r=N_t=17, L_r=L_t=5, N_{\psi,{\mathrm{cl}}}=10, N_{\psi,{\mathrm{ry}}}=8,$ $\sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$.[]{data-label="fig:GainOut"}](GainOut){width=".9\columnwidth"}
We now present the outage throughput gain of employing the reconfigurable antennas. Figure \[fig:GainOut\] plots the outage throughput gain, $G_{R^{\mathrm{out}}}$, versus the number of reconfiguration states, $\Psi$. Different outage levels are considered, i.e., $\epsilon=0.01, \epsilon=0.05$, and $\epsilon=0.1$. As the figure shows, $G_{R^{\mathrm{out}}}$ increases as $\Psi$ increases. Similar to the results in Figure \[fig:Gain\], we find that the dominant outage throughput gain of employing the reconfigurable antennas can be achieved with only a small number of reconfiguration states. To obtain the outage throughput gain of $G_{R^{\mathrm{out}}}=1.5$, we only need $\Psi=2$, $\Psi=3$, and $\Psi=4$ reconfiguration states for the systems requiring $\epsilon=0.01, \epsilon=0.05$, and $\epsilon=0.1$, respectively. In addition, we note that $G_{R^{\mathrm{out}}}$ increases as $\epsilon$ decreases, which indicates that the outage throughput gain of employing the reconfigurable antennas is more significant when the required outage level becomes more stringent.
Finally, we examine the performance of the proposed algorithm for fast selection by evaluating the average throughput loss ratio, which is defined by $$\label{}
\Delta_R=\left(\bar{R}_{\mathrm{max}}-\bar{R}_{\mathrm{fast}}\right)/{\bar{R}_{\mathrm{max}}},$$ where $\bar{R}_{\mathrm{max}}$ denotes the average throughput achieved by the exhaustive search and $\bar{R}_{\mathrm{fast}}$ denotes the average throughput achieved by the proposed fast selection algorithm. Figure \[fig:Algerror\] plots the throughput loss ratio, $\Delta_R$, versus the transmit power to noise ratio, $\rho$. Systems with different numbers of reconfiguration states are considered, i.e., $\Psi=2$, $\Psi=4$, and $\Psi=8$. As shown in the figure, the proposed fast selection algorithm achieves reasonably good performance compared with the maximum achievable throughput by the exhaustive search. Although $\Delta_R$ increases as $\Psi$ increases, the throughput loss ratio is less than $3.5\%$ even when $\Psi=8$.
![Average throughput loss ratio versus transmit power to noise ratio. The parameters are $N_r=N_t=17, L_r=L_t=5, N_{\psi,{\mathrm{cl}}}=10, N_{\psi,{\mathrm{ry}}}=8,$ $\sigma_{\theta^r}=\sigma_{\theta^t}=3^\circ$, and $d/\lambda=1/2$. ](Algerror){width=".9\columnwidth"}
\[fig:Algerror\]
Conclusions and Future Work {#sec:concls}
===========================
In this paper, we have presented a framework for the theoretical study of the mmWave MIMO with reconfigurable antennas, where the low-complexity transceivers and the sparse channels are considered. We have shown that employing reconfigurable antennas can provide both the average throughput gain and the outage throughput gain for mmWave MIMO systems. Also, we have derived the approximated expressions for the gains. Based on the highly sparse nature of mmWave channels, we have further developed a fast algorithm for reconfiguration state selection and beam selection. The accuracy of our derived expressions and the performance of the developed algorithm have been verified by numerical results. We have noted from the results that the dominant throughput gains by employing the reconfigurable antennas can be achieved with a small number of reconfiguration states. Numerical results have shown that a system with 3 reconfiguration states can achieve an average throughput gain of 1.2 and an outage throughput gain of 1.5 for an outage requirement of $\epsilon=0.05$.
As a first comprehensive study on the mmWave MIMO with reconfigurable antennas, our paper can lead to a number of future research directions in this area. In this paper, we have only considered the distinct reconfiguration states with independent channel matrices. In practice, the reconfiguration states may be dependent due to the non-orthogonal radiation characteristics, and the associated channel matrices may be correlated. Thus, taking the dependent reconfiguration states into consideration is an interesting future work, where we can have correlated and non-identically distributed $R_{\psi}$. While this paper has adopted the VCR as the analytical channel model, as mentioned earlier, there are some potential limitations on the utility of VCR in practical mmWave systems. Thus, investigating the reconfigurable antennas for mmWave systems with other channel models, e.g., the S-V model, is an important and interesting future work. Furthermore, future work on effective channel estimation techniques with low latency for mmWave systems with reconfigurable antennas is needed. Also, the associated complexity of channel estimation needs to be taken into account for future system designs. For example, if the channel matrices associated with different reconfiguration states are (strongly) correlated, a potential scheme for reconfiguration state selection and beam selection requiring a low-complexity channel estimation may start by setting the reconfiguration state to a random initial state. After finding the optimal beams in the initial state, one can reselect another state based on the optimal beams for the initial state.
Proof of Proposition \[Prop:1\] {#App:proofaveGa}
===============================
With the Gaussian approximation of the distribution of $R_{\psi}$, we can then approximate $R_{{\widehat{\psi}}}$ as the maximum of $\Psi$ i.i.d. Gaussian random variables, and $$\begin{aligned}
\label{eq:Rcpsibar}
\bar{R}_{{\widehat{\psi}}}&\approx\int_0^\infty \left[1-\left(F_{R_\psi}(x)\right)^\Psi\right] \mathrm{d}x\notag\\
&=
\int_0^\infty \left[1-\frac{1}{2^\Psi}\left(1+\mathrm{erf}\left(\frac{x-{\bar{R}_{\psi}}}{\sqrt{2{\sigma^2_{R_\psi}}}}\right)\right)^\Psi\right]\mathrm{d}x,\end{aligned}$$ where $$\label{eq:CDFRpsi}
F_{R_\psi}(x)=\frac{1}{2}\left(1+\mathrm{erf}\left(\frac{x-{\bar{R}_{\psi}}}{\sqrt{2{\sigma^2_{R_\psi}}}}\right)\right)$$ denotes the approximated cdf of $R_\psi$.
Substituting into completes the proof.
Proof of Corollary \[Cor:aveGst5\] {#App:proofaveGast5}
==================================
We rewrite $G_{\bar{R}}$ as $$\label{eq:th_gain_close_alt}
G_{\bar{R}}(\Psi=i) \approx 1+\frac{\sqrt{{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}}E_{i}, $$ where $E_{i}$ denotes the mean of the maximum of $i$ independent standard normal random variables. Denoting the cdf of standard normal distribution by $\Phi(\cdot)$, we have $$\begin{aligned}
\label{eq:proofeiniaj}
E_i&=\int_{-\infty}^\infty x\frac{\mathrm{d}\left(\Phi(x)\right)^i}{\mathrm{d}x}\mathrm{d}x \notag\\
&=i\left(i-1\right)\int_{-\infty}^\infty \frac{\exp\left(-x^2\right)}{2\pi}\left(\Phi(x)\right)^{i-2}\mathrm{d}x
\notag\\
&=\frac{i\left(i-1\right)}{2\pi}\sum_{j=0}^{\lfloor \frac{i}{2}-1\rfloor}\left(\frac{1}{2}\right)^{i-2-2j}\binom{i-2}{2j}A_j,\end{aligned}$$ where $A_j=\int_{-\infty}^\infty \exp\left(-x^2\right)\left(\Phi(x)-\frac{1}{2}\right)^{2j}\mathrm{d}x $. We note that $A_0$ and $A_1$ can be derived[^8], which are given by $
A_0=\sqrt{\pi}
$ and $
A_1=\frac{1}{2\sqrt{\pi}}\tan^{-1}\left(\frac{\sqrt{2}}{4}\right),
$ respectively. Substituting $A_0$ and $A_1$ into , we can obtain $E_1=0$, $E_2=\pi^{-\frac{1}{2}}$, $E_3=\frac{3}{2}\pi^{-\frac{1}{2}}$, $E_4=3\pi^{-\frac{3}{2}}\arccos\left(-\frac{1}{3}\right)$, and $E_5=\frac{5}{2}\pi^{-\frac{3}{2}}\arccos\left(-\frac{23}{27}\right)$. Finally, substituting the expressions for $E_1, \cdots, E_5$ into completes the proof.
Proof of Corollary \[Cor:aveGlsn\] {#App:proofaveGlsn}
==================================
When $\Psi$ is large, we can adopt the Fisher–Tippett theorem to approximate the distribution of the maximum of $\Psi$ independent standard normal random variables as a Gumbel distribution, whose cumulative distribution function (cdf) is given by $$\label{}
F_{E_\Psi}(x)=\exp\left(-\exp\left(-\frac{x-\Phi^{-1}\left(1-\frac{1}{\Psi}\right)}{\Phi^{-1}\left(1-\frac{1}{e\Psi}\right)-\Phi^{-1}\left(1-\frac{1}{\Psi}\right)}\right)\right),$$ where $\Phi^{-1}(\cdot)$ denotes the inverse cdf of the standard normal distribution. We then have $$\begin{aligned}
\label{eq:Epsilarge}
E_\Psi &\approx \sqrt{2}\left(\left(1-\beta\right){\mathrm{erf}}^{-1}\left(1-\frac{2}{\Psi}\right)+\beta\mathrm{erf}^{-1}\left(1-\frac{2}{e\Psi}\right)\right),\end{aligned}$$ where $\beta$ denotes Euler’s constant. Substituting into completes the proof.
Proof of Corollary \[Cor:aveGlsngrO\] {#App:proofCor:aveGlsngrO}
=====================================
From the tail region approximation for the inverse error function, we note that $$\label{eq:appinverflargex}
\mathrm{erf}^{-1}(x)\approx \sqrt{-\ln\left(1-x^2\right)} \quad \text{as} \quad x\rightarrow 1.$$ Based on and , as $\Psi\rightarrow\infty$, $G_{\bar{R}}(\Psi)$ is asymptotically equivalent to $$\begin{aligned}
G_{\bar{R}}(\Psi)&\sim 1+\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}}\left( \left(1-\beta\right)\sqrt{-\ln(4)+\ln\left(\frac{\Psi^2}{\Psi-1}\right)}\right.\notag\\
&\left.
+\beta\sqrt{1-\ln(4)+\ln\left(\frac{\Psi^2}{\Psi-1/e}\right)}\right)\notag\\
&\sim \frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}}\sqrt{\ln(\Psi)}.\end{aligned}$$ This completes the proof.
Proof of Proposition \[Prop:2\] {#App:proofoutGa}
===============================
With the Gaussian approximated PDF of $R_{\psi}$, we have the approximated outage probabilities for the systems without and with reconfigurable antennas as $$\label{eq:outageGsin}
{\mathbb{P}}(R_{\psi}<R)=F_{R_\psi}(R)$$ and $$\label{eq:outageGrec}
{\mathbb{P}}(R_{{\widehat{\psi}}}<R)=\left(F_{R_\psi}(R)\right)^{\Psi},$$ respectively, where $F_{R_\psi}(x)$ is given in . Substituting and into and , respectively, we can obtain the approximated $R_{{\widehat{\psi}}}^{{\mathrm{out}}}$ and $R_{\psi}^{{\mathrm{out}}}$ as $$\label{eq:derRcout}
R_{{\widehat{\psi}}}^{{\mathrm{out}}}\approx F_{R_\psi}^{-1}(\epsilon^{\frac{1}{\Psi}})={\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon^{\frac{1}{\Psi}}\right)$$ and $$\label{eq:derRout}
R_{\psi}^{{\mathrm{out}}}\approx F_{R_\psi}^{-1}(\epsilon)={\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right),$$ respectively, where $F_{R_\psi}^{-1}(x)$ denotes the approximated inverse cdf of $R_{\psi}$.
Substituting and into completes the proof.
Proof of Corollary \[Cor:outGlsngrO\] {#App:proofCor:outGlsngrO}
=====================================
As $\Psi\rightarrow\infty$, we have $1-2\epsilon^{\frac{1}{\Psi}}\rightarrow-1$. From the tail region approximation for the inverse error function, we note that $$\label{eq:appinverflargexneg}
\mathrm{erf}^{-1}(x)\approx -\sqrt{-\ln\left(1-x^2\right)} \quad \text{as} \quad x\rightarrow -1.$$ Based on and , as $\Psi\rightarrow\infty$, $G_{R^{\mathrm{out}}}(\Psi)$ is asymptotically equivalent to $$\begin{aligned}
&G_{R^{\mathrm{out}}}(\Psi)\sim
\frac{{\bar{R}_{\psi}}}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}+\notag\\
&
\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}\sqrt{-\ln\left(1-\left(1-2{\epsilon}^{\frac{1}{\Psi}}\right)^2\right)}
\notag\\
&\sim \frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}\sqrt{-\ln\left(1-{\epsilon}^{\frac{1}{\Psi}}\right)}
\notag\\
& \stackrel{(a)}{\sim}\frac{\sqrt{2{\sigma^2_{R_\psi}}}}{{\bar{R}_{\psi}}-\sqrt{2{\sigma^2_{R_\psi}}}\mathrm{erf}^{-1}\left(1-2\epsilon\right)}\sqrt{\ln(\Psi)},\end{aligned}$$ where $(a)$ is derived by analyzing the Taylor series of $\sqrt{-\ln\left(1-{\epsilon}^{\frac{1}{\Psi}}\right)}$ at $\Psi\rightarrow\infty$. This completes the proof.
B. He and H. Jafarkhani, “Millimeter wave communications with reconfigurable antennas,” in *Proc. IEEE ICC*, May 2018. \[Online\]. Available: <http://arxiv.org/abs/1806.00051>
T. S. Rappaport, R. W. Heath, R. C. Daniels, and J. N. Murdock, *Millimeter Wave Wireless Communications*. Prentice Hall, 2014.
Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” *IEEE Commun. Mag.*, vol. 49, no. 6, pp. 101–107, June 2011.
3GPP, “Study on channel model for frequencies from 0.5 to 100 [GH]{}z,” 3rd Generation Partnership Project (3GPP), Tech. Rep. TR 38.901 V14.1.1, Aug. 2017.
——, “Technical specification group radio access network; channel model for frequency spectrum above 6 [GH]{}z,” 3rd Generation Partnership Project (3GPP), Tech. Rep. TR 38.900 V14.2.0, Dec. 2016.
——, “New measurements at 24 [GH]{}z in a rural macro environment,” Telstra, Ericsson, Tech. Rep. TDOC R1-164975, May 2016.
T. S. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi, and F. Gutierrez, “Millimeter wave mobile communications for 5[G]{} cellular: It will work!” *IEEE Access*, vol. 1, pp. 335–349, May 2013.
J. Kim and I. Lee, “802.11 [WLAN]{}: history and new enabling [MIMO]{} techniques for next generation standards,” *IEEE Commun. Mag.*, vol. 53, no. 3, pp. 134–140, Mar. 2015.
Q. Li, G. Li, W. Lee, M. i. Lee, D. Mazzarese, B. Clerckx, and Z. Li, “[MIMO]{} techniques in [WiMAX]{} and [LTE]{}: a feature overview,” *IEEE Commun. Mag.*, vol. 48, no. 5, pp. 86–92, May 2010.
C. H. Doan, S. Emami, D. A. Sobel, A. M. Niknejad, and R. W. Brodersen, “Design considerations for 60 [GH]{}z [CMOS]{} radios,” *IEEE Commun. Mag.*, vol. 42, no. 12, pp. 132–140, Dec. 2004.
V. Venkateswaran and A. J. van der Veen, “Analog beamforming in [MIMO]{} communications with phase shift networks and online channel estimation,” *IEEE Trans. Signal Process.*, vol. 58, no. 8, pp. 4131–4143, Aug. 2010.
O. E. Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath, “Spatially sparse precoding in millimeter wave [MIMO]{} systems,” *IEEE Transactions on Wireless Communications*, vol. 13, no. 3, pp. 1499–1513, Mar. 2014.
L. Liu and H. Jafarkhani, “Space-time trellis codes based on channel-phase feedback,” *IEEE Trans. Commun.*, vol. 54, no. 12, pp. 2186–2198, Dec. 2006.
B. A. Cetiner, H. Jafarkhani, J.-Y. Qian, H. J. Yoo, A. Grau, and F. D. Flaviis, “Multifunctional reconfigurable [MEMS]{} integrated antennas for adaptive [MIMO]{} systems,” *IEEE Commun. Mag.*, vol. 42, no. 12, pp. 62–70, Dec. 2004.
A. Grau, H. Jafarkhani, and F. D. Flaviis, “A reconfigurable multiple-input multiple-output communication system,” *IEEE Trans. Wireless Commun.*, vol. 7, no. 5, pp. 1719–1733, May 2008.
C. A. Balanis, *Antenna Theory: Analysis and Design*, 3rd ed. Wiley, 2005.
F. Fazel, A. Grau, H. Jafarkhani, and F. D. Flaviis, “Space-time-state block coded [MIMO]{} communication systems using reconfigurable antennas,” *IEEE Trans. Wireless Commun.*, vol. 8, no. 12, pp. 6019–6029, Dec. 2009.
C. G. Christodoulou, Y. Tawk, S. A. Lane, and S. R. Erwin, “Reconfigurable antennas for wireless and space applications,” *Proc. IEEE*, vol. 100, no. 7, pp. 2250–2261, July 2012.
R. L. Haupt and M. Lanagan, “Reconfigurable antennas,” *IEEE Antennas Propag. Mag.*, vol. 55, no. 1, pp. 49–61, Feb. 2013.
S. Pendharker, R. K. Shevgaonkar, and A. N. Chandorkar, “Optically controlled frequency-reconfigurable microstrip antenna with low photoconductivity,” *IEEE Antennas Wireless Propag. Lett.*, vol. 13, pp. 99–102, Jan. 2014.
H. Jafarkhani, *Space-Time Coding: Theory and Practice*. Cambridge University Press, 2005.
S. F. Jilani, B. Greinke, Y. Hao, and A. Alomainy, “Flexible millimetre-wave frequency reconfigurable antenna for wearable applications in 5[G]{} networks,” in *Proc. URSI EMTS*, Aug. 2016, pp. 846–848.
B. Ghassemiparvin and N. Ghalichechian, “Reconfigurable millimeter-wave antennas using paraffin phase change materials,” in *Proc. EuCAP*, Apr. 2016, pp. 1–4.
I. F. da Costa, A. C. S., D. H. Spadoti, L. G. da Silva, J. A. J. Ribeiro, and S. E. Barbin, “Optically controlled reconfigurable antenna array for mm-wave applications,” *IEEE Antennas Wireless Propag. Lett.*, vol. 16, pp. 2142–2145, May 2017.
V. Vakilian, H. Mehrpouyan, and Y. Hua, “High rate space-time codes for millimeter-wave systems with reconfigurable antennas,” in *Proc. IEEE WCNC*, Mar. 2015, pp. 591–596.
V. Vakilian, H. Mehrpouyan, Y. Hua, and H. Jafarkhani, “High-rate space coding for reconfigurable 2x2 millimeter-wave [MIMO]{} systems,” 2015. \[Online\]. Available: <https://arxiv.org/abs/1505.06466>
M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” *IEEE J. Sel. Areas Commun.*, vol. 32, no. 6, pp. 1164–1179, June 2014.
A. M. Sayeed and V. Raghavan, “Maximizing [MIMO]{} capacity in sparse multipath with reconfigurable antenna arrays,” *IEEE J. Sel. Signal Process.*, vol. 1, no. 1, pp. 156–166, June 2007.
A. M. Khalat, *Multifunctional reconfigurable antennas and arrays operating at 60 [GH]{}z band*. Ph.D. thesis, Utah State University, 2017.
Y. Zhou, X. Chen, C. Ko, Z. Yang, C. Mouli, and S. Ramanathan, “Voltage-triggered ultrafast phase transition in vanadium dioxide switches,” *IEEE Electron Device Lett.*, vol. 34, no. 2, pp. 220–222, Feb. 2013.
C. Waldschmidt and W. Wiesbeck, “Compact wide-band multimode antennas for [MIMO]{} and diversity,” *IEEE Trans. Antennas Propag.*, vol. 52, no. 8, pp. 1963–1969, Aug. 2004.
G. H. Huff and J. T. Bernhard, “Integration of packaged [RF]{} [MEMS]{} switches with radiation pattern reconfigurable square spiral microstrip antennas,” *IEEE Trans. Antennas Propag.*, vol. 54, no. 2, pp. 464–469, Feb. 2006.
H. Aissat, L. Cirio, M. Grzeskowiak, J. M. Laheurte, and O. Picon, “Reconfigurable circularly polarized antenna for short-range communication systems,” *IEEE Trans. Microw. Theory Techn.*, vol. 54, no. 6, pp. 2856–2863, June 2006.
F. Sohrabi and W. Yu, “Hybrid analog and digital beamforming for mmwave [OFDM]{} large-scale antenna arrays,” *IEEE J. Sel. Areas Commun.*, vol. 35, no. 7, pp. 1432–1443, July 2017.
C. Rusu, R. Mèndez-Rial, N. González-Prelcic, and R. W. Heath, “Low complexity hybrid precoding strategies for millimeter wave communication systems,” *IEEE Trans. Wireless Commun.*, vol. 15, no. 12, pp. 8380–8393, Dec. 2016.
F. Sohrabi and W. Yu, “Hybrid digital and analog beamforming design for large-scale antenna arrays,” *IEEE J. Sel. Signal Process.*, vol. 10, no. 3, pp. 501–513, Apr. 2016.
C. E. Chen, “An iterative hybrid transceiver design algorithm for millimeter wave [MIMO]{} systems,” *IEEE Wireless Commun. Lett.*, vol. 4, no. 3, pp. 285–288, June 2015.
P. V. Amadori and C. Masouros, “Low [RF]{}-complexity millimeter-wave beamspace-[MIMO]{} systems by beam selection,” *IEEE Trans. Commun.*, vol. 63, no. 6, pp. 2212–2223, June 2015.
C. Gustafson, K. Haneda, S. Wyne, and F. Tufvesson, “On mm-wave multipath clustering and channel modeling,” *IEEE Trans. Antennas Propag.*, vol. 62, no. 3, pp. 1445–1455, Mar. 2014.
R. W. Heath, N. González-Prelcic, S. Rangan, W. Roh, and A. M. Sayeed, “An overview of signal processing techniques for millimeter wave [MIMO]{} systems,” *IEEE J. Sel. Signal Process.*, vol. 10, no. 3, pp. 436–453, Apr. 2016.
A. M. Sayeed, “Deconstructing multiantenna fading channels,” *IEEE Trans. Signal Process.*, vol. 50, no. 10, pp. 2563–2579, Oct. 2002.
D. Tse and P. Viswanath, *Fundamentals of Wireless Communication*. Cambridge University Press, 2005.
J. Brady, N. Behdad, and A. M. Sayeed, “Beamspace [MIMO]{} for millimeter-wave communications: System architecture, modeling, analysis, and measurements,” *IEEE Trans. Antennas Propag.*, vol. 61, no. 7, pp. 3814–3827, July 2013.
V. Raghavan and A. M. Sayeed, “Sublinear capacity scaling laws for sparse [MIMO]{} channels,” *IEEE Trans. Inf. Theory*, vol. 57, no. 1, pp. 345–364, Jan. 2011.
X. Gao, L. Dai, and A. M. Sayeed, “Low [RF]{}-complexity technologies to enable millimeter-wave [MIMO]{} with large antenna array for 5[G]{} wireless communications,” *IEEE Commun. Mag.*, vol. 56, no. 4, pp. 211–217, Apr. 2018.
B. Wang, L. Dai, Z. Wang, N. Ge, and S. Zhou, “Spectrum and energy-efficient beamspace [MIMO-NOMA]{} for millimeter-wave communications using lens antenna array,” *IEEE J. Sel. Areas Commun.*, vol. 35, no. 10, pp. 2370–2382, Oct. 2017.
J. Mo, P. Schniter, N. G. Prelcic, and R. W. Heath, “Channel estimation in millimeter wave [MIMO]{} systems with one-bit quantization,” in *Proc. Asilomar Conf. Signals Syst. Comput.*, Nov. 2014, pp. 957–961.
V. Raghavan, A. S. Y. Poon, and V. V. Veeravalli, “[MIMO]{} systems with arbitrary antenna array architectures: Channel modeling, capacity and low-complexity signaling,” in *Proc. Asilomar Conf. Signals Syst. Comput.*, Nov. 2007, pp. 1219–1223.
D. Huang, V. Raghavan, A. S. Y. Poon, and V. V. Veeravalli, “Angular domain processing for [MIMO]{} wireless systems with non-uniform antenna arrays,” in *Proc. Asilomar Conf. Signals Syst. Comput.*, Oct. 2008, pp. 2043–2047.
A. Maltsev, V. Erceg, E. Perahia, C. Hansen, R. Maslennikov, A. Lomayev, A. Sevastyanov, A. Khoryaev, G. Morozov, M. Jacob, S. Priebe, T. Kürner, S. Kato, H. Sawada, K. Sato, and H. Harada, “Channel models for 60 [GH]{}z [WLAN]{} systems,” *IEEE Document 802.11-09/0334r6*, Jan. 2010.
A. L. Moustakas, S. H. Simon, and A. M. Sengupta, “[MIMO]{} capacity through correlated channels in the presence of correlated interferers and noise: a (not so) large [N]{} analysis,” *IEEE Trans. Inf. Theory*, vol. 49, no. 10, pp. 2545–2561, Oct. 2003.
C. Martin and B. Ottersten, “Asymptotic eigenvalue distributions and capacity for [MIMO]{} channels under correlated fading,” *IEEE Trans. Wireless Commun.*, vol. 3, no. 4, pp. 1350–1359, July 2004.
V. Raghavan, A. Partyka, A. Sampath, S. Subramanian, O. H. Koymen, K. Ravid, J. Cezanne, K. Mukkavilli, and J. Li, “Millimeter-wave [MIMO]{} prototype: Measurements and experimental results,” *IEEE Commun. Mag.*, vol. 56, no. 1, pp. 202–209, Jan. 2018.
V. Raghavan, A. Partyka, L. Akhoondzadeh-Asl, M. A. Tassoudji, O. H. Koymen, and J. Sanelli, “Millimeter wave channel measurements and implications for phy layer design,” *IEEE Trans. Antennas Propag.*, vol. 65, no. 12, pp. 6521–6533, Dec. 2017.
I. E. Telatar, “Capacity of multiantenna [G]{}aussian channels,” *Eur. Trans. Telecommun.*, vol. 10, no. 6, pp. 586–595, Nov./Dec. 1999.
P. Smith and M. Shafi, “An approximate capacity distribution for [MIMO]{} systems,” *IEEE Trans. Commun.*, vol. 52, no. 6, pp. 887–890, June 2004.
S. Sanayei and A. Nosratinia, “Capacity maximizing algorithms for joint transmit-receive antenna selection,” in *Proc. ACSSC*, vol. 2, Nov. 2004, pp. 1773–1776.
M. Gharavi-Alkhansari and A. B. Gershman, “Fast antenna subset selection in [MIMO]{} systems,” *IEEE Trans. Signal Process.*, vol. 52, no. 2, pp. 339–347, Feb. 2004.
F. Khan and Z. Pi, “Millimeter-wave mobile broadband ([MMB]{}): Unleashing 3–300 [GH]{}z spectrum,” in *Proc. IEEE Sarnoff Symp.*, May 2011, pp. 1–6.
[^1]: This work was presented in part at the 2018 IEEE International Conference on Communications (ICC) [@He_18_mwcwras].
[^2]: This work was supported in part by the NSF Award ECCS-1642536.
[^3]: The authors are with the Center for Pervasive Communications and Computing, University of California at Irvine, Irvine, CA 92697, USA (email: {biao.he, hamidj}@uci.edu).
[^4]: The existing studies on the 2$\times$2 mmWave MIMO with reconfigurable antennas used the idealized assumption of full-rank channel matrices with i.i.d. complex Gaussian entries for all reconfiguration states, which is not practical for the general mmWave scenarios where the number of antennas is relatively large [@Vakilian_15_SThmmra; @Vakilian_15_ThmmraAr].
[^5]: Without loss of generality, we here assume that $N_r$ and $N_t$ are odd.
[^6]: The relatively low-complexity MMSE-SIC decoder can also be adopted here to maximize the average throughput.
[^7]: Note that the selection criterion in is maximizing the throughput rather than maximizing the channel magnitude.
[^8]: The detailed steps to derive $A_0$ and $A_1$ are omitted here. The interested reader can find a relevant discussion at https://math.stackexchange.com/questions/473229.
---
abstract: 'A large solar pore with a granular light bridge was observed on October 15, 2008 with the IBIS spectrometer at the Dunn Solar Telescope and a 69-min long time series of spectral scans in the lines Ca II 854.2 nm and Fe I 617.3 nm was obtained. The intensity and Doppler signals in the Ca II line were separated. This line samples the middle chromosphere in the core and the middle photosphere in the wings. Although no indication of a penumbra is seen in the photosphere, an extended filamentary structure, both in intensity and Doppler signals, is observed in the Ca II line core. An analysis of morphological and dynamical properties of the structure shows a close similarity to a superpenumbra of a sunspot with developed penumbra. A special attention is paid to the light bridge, which is the brightest feature in the pore seen in the Ca II line centre and shows an enhanced power of chromospheric oscillations at 3–5 mHz. Although the acoustic power flux in the light bridge is five times higher than in the ”quiet” chromosphere, it cannot explain the observed brightness.'
address:
- '$^1$ Astronomical Institute, Academy of Sciences of the Czech Republic (v.v.i.), Fričova 298, CZ-25165 Ondřejov, Czech Republic'
- '$^2$ Charles University in Prague, Faculty of Mathematics and Physics, Astronomical Institute, V Holešovičkách 2, CZ-18000 Prague 8, Czech Republic'
- '$^3$ Department of Physics, University of Roma Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Roma, Italy'
author:
- 'M Sobotka$^1$, M Švanda$^{1,2}$, J Jurčák$^1$, P Heinzel$^1$ and D Del Moro$^3$'
title: Atmosphere above a large solar pore
---
Introduction
============
Pores are small sunspots without penumbra. The absence of a filamentary penumbra in the photosphere has been interpreted as the indication of a simple magnetic structure with mostly vertical field, e.g. \[1, 2\]. Magnetic field lines are observed to be nearly vertical in centres of pores and inclined by about 40$^\circ$ to 60$^\circ$ at their edges \[3, 4\]. Pores contain a large variety of fine bright features, such as umbral dots and light bridges that may be signs of a convective energy transportation mechanism.
Light bridges (hereafter LBs) are bright structures in sunspots and pores that separate umbral cores or are embedded in the umbra. Their structure depends on the inclination of local magnetic field and can be granular, filamentary, or a combination of both \[5\]. Many observations confirm that magnetic field in LBs is generally weaker and more inclined with respect to the local vertical. It was shown in \[6\] that the field strength increases and the inclination decreases with increasing height. This indicates the presence of a magnetic canopy above a deeply located field-free region that intrudes into the umbra and forms the LB. Above a LB, \[7\] found a persistent brightening in the TRACE 160 nm bandpass formed in the chromosphere. It was interpreted as a steady-state heating possibly due to constant small-scale reconnections in the inclined magnetic field.
In the chromosphere, large isolated sunspots are often surrounded by a pattern of dark, nearly radial fibrils. This pattern, called superpenumbra, is visually similar to the white-light penumbra but extends to a much larger distance from the sunspot. Dark superpenumbral fibrils are the locations of the strongest inverse Evershed flow – an inflow and downflow in the chromosphere toward the sunspot. Time-averaged Doppler measurements indicate the maximum speed of this flow equal to 2–3 km s$^{-1}$ near the outer penumbral border \[8\].
Observations and data processing
================================
A large solar pore NOAA 11005 was observed with the Interferometric Bidimensional Spectrometer IBIS \[9\] attached to the Dunn Solar Telescope (DST) on 15 October 2008 from 16:34 to 17:43 UT, using the DST adaptive optics system. The slowly decaying pore was located at 25.2 N and 10.0 W (heliocentric angle $\theta = 23^\circ$) during our observation. According to \[10\], the maximum photospheric field strength was 2000 G, the inclination of magnetic field at the edge of the pore was 40$^\circ$ and the whole field was inclined by 10$^\circ$ to the west.
The IBIS dataset consists of 80 sequences, each containing a full Stokes ($I, Q, U, V$) 21-point spectral scan of the Fe I 617.33 nm line (see \[10\]) and a 21-point $I$-scan of the Ca II 854.2 nm line. The wavelength distance between the spectral points of the Ca II line is 6.0 pm and the time needed to scan the 0.126 nm wide central part of the line profile is 6.4 s. The exposure time for each image was set to 80 ms and each sequence took 52 seconds to complete, thus setting the time resolution. The pixel scale of these images was 0$''$.167. Due to the spectropolarimetric setup of IBIS, the working field of view (FOV) was $228 \times 428$ pixels, i.e., $38''\times 71''.5$. The detailed description of the observations and calibration procedures can be found in \[10,11\].
Complementary observations were obtained with the HINODE/SOT Spectropolarimeter \[12,13\]. The satellite observed the pore on 15 October 2008 at 13:20 UT, i.e., about 3 hours before the start of our observations. From one spatial scan in the full-Stokes profiles of the lines Fe I 630.15 and 630.25 nm we used a part covering the pore umbra with a granular LB.
According to \[14\], the inner wings ($\pm 60$ pm) of the infrared Ca II 854.2 nm line sample the middle photosphere at the typical height $h \simeq 250$ km above the $\tau_{500} = 1$ level, while the centre of this line is formed in the middle chromosphere at $h \simeq 1200$–1400 km. This provides a good tool to study the pore and its surroundings at different heights in the atmosphere and to look for relations between the photospheric and chromospheric structures.
The observations in the Ca II line are strongly influenced by oscillations and waves present in the chromosphere and upper photosphere. The observed intensity fluctuations in time are caused by real changes of intensity as well as by Doppler shifts of the line profile. To separate the two effects, Doppler shifts of the line profile were measured using the double-slit method \[15\], consisting in the minimisation of the difference between the intensities of light passing through two fixed slits in the opposite wings of the line. An algorithm based on this principle was applied to the time sequence of Ca II profiles. The distance of the two wavelength points (“slits”) was 36 pm, so that the “slits” were located in the inner wings near the line core, where the intensity gradient of the profile is at maximum and the effective formation height in the atmosphere is approximately 1000 km. The wavelength sampling was increased by a factor of 40 using linear interpolation, thus yielding a Doppler-velocity sensitivity of 53 m s$^{-1}$. The reference zero of Doppler velocity was defined as a time- and space-average of all measurements. This way we obtained a series of 80 Doppler velocity maps. This method does not take into account the asymmetry of the line profile, e.g., the changes of Doppler velocity with height in the atmosphere. Using the information about the Doppler shifts, all Ca II profiles were shifted to a uniform position with subpixel accuracy. This way we obtained an intensity data cube ($x, y, \lambda$, scan), where, if we neglect line asymmetries and inaccuracies of the method, the observed intensity fluctuations correspond to the real intensity changes. The oscillations and waves were separated from the slowly evolving intensity and Doppler structures by means of the 3D $k-\omega$ subsonic filter with the phase-velocity cutoff at 6 km s$^{-1}$.
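As an illustration of the double-slit principle (a minimal Python sketch, not the actual reduction pipeline; array names and the core search range are ours), the line-core position of a single profile can be found by sliding the slit pair along the oversampled profile until the two wing intensities balance; the Doppler velocity then follows from the shift of this position with respect to a reference via $v = c\,\Delta\lambda/\lambda_0$:

```python
import numpy as np

C_LIGHT = 299_792.458  # km/s

def line_centre(wl_nm: np.ndarray, profile: np.ndarray,
                slit_sep_pm: float = 36.0, oversample: int = 40) -> float:
    """Line-core position (nm) of a single Ca II 854.2 nm profile estimated
    with the double-slit principle: find the slit-pair position that
    minimizes the intensity difference between the two wing 'slits'."""
    fine_wl = np.linspace(wl_nm[0], wl_nm[-1], oversample * (len(wl_nm) - 1) + 1)
    fine_I = np.interp(fine_wl, wl_nm, profile)      # linear oversampling
    half = 0.5 * slit_sep_pm * 1e-3                  # half slit separation in nm
    core = (fine_wl > wl_nm[3]) & (fine_wl < wl_nm[-4])  # restrict to the core region
    centres = fine_wl[core]
    diffs = [abs(np.interp(c - half, fine_wl, fine_I)
                 - np.interp(c + half, fine_wl, fine_I)) for c in centres]
    return centres[int(np.argmin(diffs))]

# Doppler velocity in km/s relative to a reference core position lambda_ref (nm):
#   v = C_LIGHT * (line_centre(wl_nm, profile) - lambda_ref) / 854.2
```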
Results
=======
Examples of intensity maps in the continuum 621 nm, blue wing and centre of the line Ca II 854.2 nm and a Doppler map are shown in Fig. 1. Oscillations and waves are filtered out from these images. A filamentary structure around the pore, composed of radially oriented bright and dark fibrils, is clearly seen in the line-centre intensity and Doppler maps. The area and shape of the filamentary structure are identical in all pairs of the line-centre intensity and Doppler images in the time series. The fibrils begin immediately at the umbral border and in many cases continue till the border of the filamentary structure. Their lengths are identical in the Doppler and intensity maps. However, the fibrils seen in the line centre are spatially uncorrelated with those in the Doppler maps.
![\[fig1\]Intensity maps in the continuum 621 nm, blue wing and centre of the Ca II 854.2 nm line, and a Doppler map. Oscillations and waves are filtered out.](Fig1.eps){width="11cm"}
![\[fig2\]Time-averaged Doppler map with contours of zero, $-1$ and 1 km s$^{-1}$.](Fig2.eps){width="3.3cm"}
Concentric running waves originating in the centre of the pore are observed in the unfiltered time series of the line-centre intensity and Doppler images. They propagate through the filamentary structure to a distance of 3000–5000 km from the visible border of the pore with a typical speed of 10 km s$^{-1}$. These waves are very similar to running penumbral waves observed in the penumbral chromosphere and in superpenumbrae of developed sunspots \[16\].
The subsonic-filtered time series of Doppler maps was averaged in time to obtain a spatial distribution of mean line-of-sight (LOS) velocities around the pore. The result is shown in Fig. 2 together with contours of zero, $-1$ and 1 km s$^{-1}$. We can see from the figure that the inner part of the filamentary structure contains a positive LOS velocity (away from us, a downflow), while the outer part, located mostly on the limb side, shows a negative LOS velocity (toward us, an upflow). Taking into account the heliocentric angle $\theta = 23^\circ$ and the fact that the plasma moves along magnetic field lines forming a funnel, the negative LOS velocity is only partly caused by real upflows but mostly by horizontal inflows into the pore. The picture of plasma moving toward the pore and flowing down in its vicinity is consistent with the inverse Evershed effect observed in the sunspots’ superpenumbra.
All these facts lead to the conclusion that the filamentary structure observed in the chromosphere above the pore is equivalent to a superpenumbra of a developed sunspot. The missing correlation between the fibrils seen in the line centre and those in the Doppler maps further supports this conclusion, because, according to \[17\], flow channels of the inverse Evershed effect are not identical with superpenumbral filaments.\
A strong granular LB separates the two umbral cores of the pore. It is the brightest feature inside the pore in the photosphere as well as in the chromosphere, where it is brighter by a factor of 1.3 than the average brightness in the FOV. The granular structure of the LB is preserved at all heights, from the photospheric continuum level to the formation height of the Ca II line centre. It is interesting that while a typical pattern of reverse granulation, observed in the Ca II wings, appears outside the pore in the middle photosphere ($h \simeq 250$ km), the LB is always composed of small bright granules separated by dark intergranular lanes (Fig. 1). A feature-tracking technique was applied to correlate the LB granules in position and time at different heights in the atmosphere. A correlation was found between the photospheric LB granules in the continuum and Ca II wings (correlation coefficient 0.46). On the other hand, there is no correlation between the chromospheric LB “granules” observed in the Ca II line centre and the photospheric ones in the wings and continuum.
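In its simplest form, the height-to-height comparison reduces to a correlation coefficient between co-aligned, time-matched cutouts of the LB; the sketch below is only a simplified stand-in for the feature-tracking analysis, and `map_a`, `map_b` are assumed to be such co-aligned 2D intensity cutouts:

```python
import numpy as np

def granule_correlation(map_a, map_b):
    """Pearson correlation between two co-aligned, time-matched LB cutouts
    (e.g. continuum vs. Ca II wing intensity)."""
    a = np.asarray(map_a, float)
    b = np.asarray(map_b, float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```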
The feature-tracking technique has shown that the mean size of the LB granules increases with height from 0$''$.45 in the continuum to 0$''$.50 in the Ca II wings and 0$''$.54 in the Ca II line centre. Similarly, the average width of the LB increases with height from 2$''$ (continuum) to 2$''$.5 (Ca II wings) and 3$''$ (Ca II line centre). On the other hand, it can be expected that the width of the LB magnetic structure decreases with height due to the presence of a magnetic canopy. The complementary HINODE observations in two spectral lines Fe I 630.15 and 630.25 nm made it possible to obtain the vertical stratification of temperature and magnetic field strength in the LB photosphere, using the inversion code SIR \[18\]. The results, summarised in Table 1, show that the width of the LB in temperature maps indeed increases with height, while the width of the LB magnetic structure decreases and the magnetic field strength increases, confirming the magnetic canopy configuration.
  Height (km)   Width in $T$   Width in $B$   $B_{\rm min}$ (G)
  ------------- -------------- -------------- -------------------
  0             2$''$.7        1$''$.7        0
  90            2$''$.7        1$''$.5        100
  180           2$''$.7        1$''$.3        300
  270           2$''$.9        1$''$.1        500
  350           3$''$.3        0$''$.8        700
Power spectra of chromospheric oscillations were calculated using the unfiltered time series of the Ca II line-centre intensity and Doppler images. Power maps derived from the Doppler velocities at frequencies 3–6 mHz are shown in Fig. 3. At the LB position, we can see a strongly enhanced power around 3–5 mHz, comparable with that in a plage near the eastern border of the pore. A similar power enhancement was reported in \[19\]. Usually, low-frequency oscillations do not propagate through the temperature minimum from the photosphere to the chromosphere due to the acoustic cut-off at 5.3 mHz \[20\]. To explain our observations, we assume that the low-frequency oscillations leak into the chromosphere along an inclined magnetic field \[19,21\], which is present in the LB and in the plage. The inclined magnetic field in the LB is verified by inversions of the full-Stokes Fe I 617.33 nm profiles \[10\]. The inclination angle, extrapolated to the height of the temperature minimum, is 40$^\circ$–50$^\circ$ to the west owing to inclined magnetic field lines at the periphery of the larger (eastern) umbral core. These field lines pass above the magnetic canopy of the LB.
![\[fig3\]Power maps of Doppler velocities at frequencies 3–6 mHz. The contours outline the boundaries of the pore and light bridge, observed in the continuum.](Fig3.eps){width="16cm"}
Following \[22\], we estimate the acoustic energy flux in the LB chromosphere and compare it with the flux in the “quiet” region. The observed Doppler velocities are used for this purpose. With the estimated magnetic field inclination of 50$^\circ$, the effective acoustic cut-off frequency decreases to about 3 mHz in the LB. The total calculated acoustic power flux is 550 W m$^{-2}$ in the LB, while only 110 W m$^{-2}$ in the “quiet” chromosphere. These values are lower than the 1840 W m$^{-2}$ presented in \[22\], but one has to bear in mind that that value was obtained for the photosphere, where the power of the acoustic oscillations must be higher than in the chromosphere.
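For reference, assuming the standard reduction of the acoustic cut-off frequency along a magnetic field inclined by $\theta$ to the local vertical \[19,21\], the effective cut-off used above follows from

$$\nu_\mathrm{cut,eff} = \nu_\mathrm{cut}\,\cos\theta \simeq 5.3\ \mathrm{mHz} \times \cos 50^\circ \approx 3.4\ \mathrm{mHz},$$

consistent, within the uncertainty of the extrapolated inclination, with the value of about 3 mHz adopted in the flux estimate.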
Discussion and conclusions
==========================
We studied the photosphere and chromosphere above a large solar pore with a granular LB using spectroscopic observations with a spatial resolution of 0$''$.3–0$''$.4 in the line Ca II 854.2 nm and photospheric Fe I lines. We have shown that in the chromospheric filamentary structure around the pore, observed in the Ca II line core and Doppler maps, the inverse Evershed effect and running waves are present. Chromospheric fibrils seen in the intensity and Doppler maps are spatially uncorrelated. From these characteristics and from the morphological similarity of the filamentary structure to superpenumbrae of developed sunspots we conclude that the observed pore has a kind of superpenumbra, in spite of the missing penumbra in the photosphere.
Special attention was paid to the granular LB that separates the pore into two umbral cores. The magnetic canopy structure \[6\] is confirmed in this LB. In the middle photosphere ($h \simeq 250$ km, Ca II wings), the reverse granulation is seen around the pore but not in the LB. The reverse granulation is explained by adiabatic cooling of expanding gas in granules, which is only partially cancelled by radiative heating \[23\]. In the LB, hot (magneto)convective plumes at the bottom of the photosphere cannot expand adiabatically in higher photospheric layers due to the presence of the magnetic field, and radiative heating dominates, forming small bright granules separated by dark lanes. The positive correlation between the LB structures in the continuum and Ca II line wings indicates that the middle-photosphere structures are heated by radiation from the low photosphere. Since the mean free photon path in the photosphere is larger than 1$''$ for $h > 120$ km, the LB observed in the line wings is broader and its granules are larger than in the continuum due to the diffusion of radiation.
In the middle chromosphere ($h \simeq 1300$ km, Ca II centre), the LB is the brightest feature in the pore and it is brighter by a factor of 1.3 than the average intensity in the FOV. Since the height in the atmosphere is well above the temperature minimum, radiative heating cannot be expected. Heating by acoustic waves seems to be a candidate, because the acoustic power flux in the LB is five times higher than in the “quiet” chromosphere. To check this possibility, we have to compare the acoustic power flux with the total radiative cooling. An average profile of the Ca II line in the LB was used to derive a simple semi-empirical model based on the VAL3C chromosphere \[24\], with the temperature increased by 3000 K in the upper chromospheric layers ($h > 900$ km). The net radiative cooling rates were calculated using this model. The resulting height-integrated radiative cooling is approximately 6700 W m$^{-2}$ in the LB chromosphere and 3000 W m$^{-2}$ in the “quiet” chromosphere. The acoustic power fluxes (550 W m$^{-2}$ and 110 W m$^{-2}$, respectively) are an order of magnitude lower than the estimated total radiative cooling, so that the acoustic power flux does not seem to provide enough energy to reach the observed LB brightness.
References {#references .unnumbered}
==========
Simon G W and Weiss N O 1970 [*Solar Phys.*]{} [**13**]{} 85
Rucklidge A M, Schmidt H U and Weiss N O 1995 [*MNRAS*]{} [**273**]{}, 491
Keppens R and Martínez Pillet V 1996 [*Astron. Astrophys.*]{} [**316**]{} 229
Sütterlin P 1998 [*Astron. Astrophys.*]{} [**333**]{} 305
Sobotka M 1997 [*1st Advances in Solar Physics Euroconference, Advances in the Physics of Sunspots*]{}, ed B Schmieder, J C del Toro Iniesta and M Vázquez, ASP Conference series Vol 118 p 155
Jurčák J, Martínez Pillet V and Sobotka M 2006 [*Astron. Astrophys.*]{} [**453**]{} 1079
Berger T E and Berdyugina S V 2003 [*Astrophys. J.*]{} [**589**]{} L117
Alissandrakis C E, Dialetis D, Mein P et al. 1988 [*Astron. Astrophys.*]{} [**201**]{} 339
Cavallini F 2006 [*Solar Phys.*]{} [**236**]{} 415
Sobotka M, Del Moro D, Jurčák J and Berrilli F 2012 [*Astron. Astrophys.*]{} [**537**]{} A85
Viticchié B, Del Moro D, Criscuoli S and Berrilli F 2010 [*Astrophys. J.*]{} [**723**]{} 787
Kosugi T, Matsuzaki K, Sakao T et al. 2007 [*Solar Phys.*]{} [**243**]{} 3
Tsuneta S, Ichimoto K, Katsukawa Y et al. 2008 [*Solar Phys.*]{} [**249**]{} 167
Cauzzi G, Reardon K P, Uitenbroek H et al. 2008 [*Astron. Astrophys.*]{} [**480**]{} 515
Garcia A, Klvaňa M and Sobotka M 2010 [*Cent. Eur. Astrophys. Bull.*]{} [**34**]{} 47
Christopoulou E B, Georgakilas A A and Koutchmy S 2000 [*Astron. Astrophys.*]{} [**354**]{} 305
Tsiropoula G, Alissandrakis C E, Dialetis D and Mein P 1996 [*Solar Phys.*]{} [**167**]{} 79
Ruiz Cobo B and del Toro Iniesta J C 1992 [*Astrophys. J.*]{} [**398**]{} 375
Stangalini M, Del Moro D, Berrilli F and Jefferies S M 2011 [*Astron. Astrophys.*]{} [**534**]{} A65
Fossat E, Regulo C, Roca Cortes T et al. 1992 [*Astron. Astrophys.*]{} [**266**]{} 532
Jefferies S M, McIntosh S W, Armstrong J D et al. 2006 [*Astrophys. J.*]{} [**648**]{}, L151
Bello González N, Flores Soriano M, Kneer F and Okunev O 2009 [*Astron. Astrophys.*]{} [**508**]{} 941
Cheung M C M, Schüssler M and Moreno Insertis F 2007 [*Astron. Astrophys.*]{} [**461**]{} 1163
Vernazza J E, Avrett E H and Loeser R 1981 [*Astrophys. J. Suppl. Ser.*]{} [**45**]{} 635
---
abstract: 'We introduce *dark mediator Dark matter* (dmDM) where the dark and visible sectors are connected by at least one light mediator $\phi$ carrying the same dark charge that stabilizes DM. $\phi$ is coupled to the Standard Model via an operator $\bar q q \phi \phi^*/\Lambda$, and to dark matter via a Yukawa coupling $y_\chi \overline{\chi^c}\chi \phi$. Direct detection is realized as the $2\rightarrow3$ process $\chi N \rightarrow \bar \chi N \phi$ at tree-level for $m_\phi \lesssim 10 \kev$ and small Yukawa coupling, or alternatively as a loop-induced $2\rightarrow2$ process $\chi N \rightarrow \chi N$. We explore the direct-detection consequences of this scenario and find that a heavy $\mathcal{O}(100 \gev)$ dmDM candidate fakes different $\mathcal{O}(10 \gev)$ standard WIMPs in different experiments. Large portions of the dmDM parameter space are detectable above the irreducible neutrino background and not yet excluded by any bounds. Interestingly, for the $m_\phi$ range leading to novel direct detection phenomenology, dmDM is also a form of Self-Interacting Dark Matter (SIDM), which resolves inconsistencies between dwarf galaxy observations and numerical simulations.'
address:
- 'C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794, U.S.A.'
- 'Physics Department, University of California Davis, Davis, California 95616, U.S.A.'
author:
- David Curtin
- 'Ze’ev Surujon'
- Yuhsin Tsai
title: Direct Detection with Dark Mediators
---
Introduction
============
In this letter, we present *Dark Mediator Dark Matter* (dmDM) to address two important gaps in the DM literature: exploring mediators with dark charge, and non-standard interaction topologies for scattering off nuclei. Additional details and constraints will be explored in a companion paper [@dmDM].
The existence of dark matter is firmly established by many astrophysical and cosmological observations [@Ade:2013zuv], but its mass and coupling to the Standard Model (SM) particles are still unknown. Weakly Interacting Massive Particles (WIMPs) are the most popular DM candidates since they arise in many theories beyond the SM, including supersymmetry, and may naturally give the correct relic abundance [@Jungman:1995df]. However, improved experimental constraints – from collider searches, indirect detection and direct detection [@Goodman:1984dc; @Akerib:2013tjd; @Aprile:2012nq; @Bernabei:2013cfa; @Aalseth:2012if; @Angloher:2011uu; @Agnese:2013rvf; @Fan:2013faa] – begin to set tight limits (with some conflicting signal hints) on the standard WIMP scenario with a contact interaction to quarks. This makes it necessary to look for a more complete set of DM models which are theoretically motivated while giving unique experimental signatures.
![ The quark-level Feynman diagrams responsible for DM-nucleus scattering in *Dark Mediator Dark Matter* (dmDM). Left: the $2\rightarrow3$ process at tree-level. Right: the loop-induced $2\rightarrow2$ process. The arrows indicate flow of dark charge. []{data-label="f.feynmandiagram"}](dd_process_prl__1){width="8cm"}
Dark Mediator Dark Matter
=========================
Given its apparently long lifetime, most models of DM include some symmetry under which the DM candidate is charged to make it stable. An interesting possibility is that not only the DM candidate, but also the mediator connecting it to the visible sector is charged under this dark symmetry. Such a ‘dark mediator’ $\phi$ could only couple to the SM fields in pairs.
As a simple example, consider real or complex SM singlet scalars $\phi_i$ coupled to quarks, along with Yukawa couplings to a Dirac fermion DM candidate $\chi$. The terms in the effective Lagrangian relevant for direct detection are $$\mathcal{L}_\mathrm{DM} \supset
\displaystyle{\sum_{i,j}^{n_{\phi}}}\,\frac{1}{\Lambda_{ij}} \bar q\,q \,\phi_i \phi_j^* + \displaystyle{\sum_{i}^{n_{\phi}}}\left ( y^{\phi_i}_\chi \overline{\chi^c}\chi \phi_i + h.c. \right)+...
\label{eq:dmDM}$$ where $\ldots$ stands for $\phi, \chi$ mass terms, as well as the rest of the dark sector, which may be more complicated than this minimal setup. This interaction structure can be enforced by a $\mathbb{Z}_4$ symmetry. To emphasize the new features of this model for direct detection, we focus on the minimal case with a single mediator $n_\phi = 1$ (omitting the $i$-index). However, the actual number of dark mediators is important for interpreting indirect constraints [@dmDM].
The leading order process for DM-nucleus scattering is $\chi N \to \bar \chi N \phi$ if $m_\phi \lesssim \mathcal{O}(10 \kev)$. However, an elastic scattering $\chi N \to \chi N$ is always present at loop-level since it satisfies all possible symmetries, see [Fig. \[f.feynmandiagram\]]{}. Which of the two possibilities dominates direct detection depends on the size of the Yukawa couplings $y_\chi^{\phi_i}$ as well as the dark mediator masses.
Previous modifications to WIMP-nucleon scattering kinematics include the introduction of a mass splitting [@TuckerSmith:2001hy; @Graham:2010ca; @Essig:2010ye]; considering matrix elements $|\mathcal{M}|^2$ with additional velocity- or momentum transfer suppressions (for a complete list see e.g. [@MarchRussell:2012hi]), especially at low DM masses close to a GeV [@Chang:2009yt]; light scalar or ‘dark photon’ mediators (see e.g. [@Essig:2013lka] which give large enhancements at low nuclear recoil); various forms of composite dark matter [@Alves:2009nf; @Kribs:2009fy; @Lisanti:2009am; @Cline:2012bz; @Feldstein:2009tr] which may introduce additional form factors; DM-nucleus scattering with intermediate bound states [@Bai:2009cd] which enhances scattering in a narrow range of DM velocities; and induced nucleon decay in Asymmetric Dark Matter models [@Davoudiasl:2011fj]. Notably missing from this list are alternative process topologies for DM-nucleus scattering. This omission is remedied by the dmDM scenario. dmDM is uniquely favored to produce a detectable $2\to3$ scattering signal at direct detection experiments. This is because it contains two important ingredients: (1) a light mediator with non-derivative couplings to enhance the cross section, compensating for the large suppression of emitting a relativistic particle in a non-relativistic scattering process, and (2) a scalar as opposed to a vector mediator, allowing it to carry dark charge (without a derivative coupling). This imposes selection rules which make the $2\to2$ process subleading in $y_\chi$. These ingredients are difficult to consistently implement in other model constructions without violating constraints on light force carriers.
The effect of strong differences between proton and neutron coupling to DM has been explored by [@Feng:2011vu]. To concentrate on the kinematics we shall therefore assume the operator $\bar q q \phi \phi^*/\Lambda$ is flavor-blind in the quark mass basis. Above the electroweak symmetry breaking scale this operator is realized as $\bar Q_L H q_R \phi\phi^*/M^2$. It can be generated by integrating out heavy vector-like quarks which couple to the SM and $\phi$ [@dmDM], giving $1/\Lambda = y_Q^2 y_h v/M_Q^2$. This UV completion allows for large direct detection cross sections without being in conflict with collider bounds, but may still be probed at the 14 TeV LHC.
Parameters used in the figure below: $\scriptstyle m_\chi = 10 \gev$, $\scriptstyle m_\phi = 0.2 \kev$, $\scriptstyle v = 400 \kmpers$; $\scriptstyle m_N \ = \ \textcolor{blue}{28}, \ \textcolor{red}{73}, \ \textcolor{purple}{131} \gev$.
![ Nuclear recoil spectra of dmDM (without nuclear/nucleus form factors and coherent scattering enhancement) for $y_\chi = 1, \Lambda = 1 \tev$ in a Silicon, Germanium and Xenon target. The dashed lines are spectra of standard WIMP scattering (via operator $\bar q q \bar \chi \chi/\tilde \Lambda^2$, with $\tilde \Lambda = 7 \tev$) shown for comparison. dmDM spectra computed with `MadGraph5` [@Alwall:2011uj] and `FeynRules1.4` [@Ask:2012sm]. []{data-label="f.partonleveldistribution"}](raw2to3spectrum_mN_28_73_131_mX_10_mphi_02_v_400__2 "fig:"){width="7cm"}
Nuclear Recoil Spectrum
=======================
We start by examining the novel $2\rightarrow3$ regime of dmDM. The DM-nucleus collision is inelastic, not by introducing a new mass scale like a splitting, but by virtue of the process topology. The nuclear recoil spectrum is different compared to previously explored scenarios. This is illustrated in [Fig. \[f.partonleveldistribution\]]{}, where we compare nuclear recoil spectra of standard WIMPs to dmDM for fixed velocity and different nucleus mass, *before* convolving with various form factors and the ambient DM speed distribution. The observable dmDM differential cross section is independent of $m_\phi$ for $m_\phi \lesssim \kev$ and can be well described by the function $$\begin{aligned}
\label{eq:recoil}
\frac{d\,\sigma_{2\to3}}{d\,E_r}&\simeq \, \frac{\mathcal{C}}{E_r} \,\left(1-\sqrt{\frac{E_r}{E_r^\mathrm{max}}}\right)^2,\end{aligned}$$ where $\mathcal{C}=1.3\times 10^{-42}\,(\tev/\Lambda)^2$ cm$^2$ and $E_r^\mathrm{max}=\frac{2 \mu_{\chi N}^2}{m_N} v^2$ (with $\mu_{\chi N}$ the DM-nucleus reduced mass), the same as in the WIMP case for a given DM velocity. (We emphasize that this is a phenomenological description; the actual spectra were produced in MadGraph, see [Section \[s.directdetection\]]{}.) The first factor comes from the light mediator propagator $(2m_N\,E_r)^{-2}$ as well as the integrated phase space of the escaping $\phi$. The cross section suppression (second factor) is more pronounced as the DM becomes lighter or slower, and as the nucleus becomes heavier, both of which reduce $E_r^\mathrm{max}$. This is because massless $\phi$ emission carries away a more significant fraction of the total collision energy if the heavy particle momenta are smaller. The maximum kinematically allowed nuclear recoil is then less likely.
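As an illustration, Eq. \[eq:recoil\] can be evaluated directly; the sketch below (plain NumPy, GeV-based units, with illustrative masses matching the figure above and only a schematic treatment of the overall normalisation) reproduces the $1/E_r$ rise and the endpoint suppression:

```python
import numpy as np

KEV = 1e-6  # 1 keV expressed in GeV

def dsigma_dEr_2to3(E_r, m_chi, m_N, v, Lambda_TeV=1.0):
    """Phenomenological d(sigma_{2->3})/dE_r of Eq. (recoil) [cm^2/GeV].
    E_r, m_chi, m_N in GeV; v is the DM speed in units of c."""
    mu = m_chi * m_N / (m_chi + m_N)          # DM-nucleus reduced mass
    E_max = 2.0 * mu**2 * v**2 / m_N          # kinematic endpoint
    C = 1.3e-42 / Lambda_TeV**2               # cm^2 for Lambda in TeV
    x = np.clip(E_r / E_max, 0.0, 1.0)
    return np.where(E_r < E_max, C / E_r * (1.0 - np.sqrt(x))**2, 0.0)

# 10 GeV DM on a Ge-like nucleus (73 GeV), v = 400 km/s as in the figure above
E_r = np.linspace(0.05, 10.0, 400) * KEV
spectrum = dsigma_dEr_2to3(E_r, 10.0, 73.0, 400e3 / 3.0e8)
```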
When $n_{\phi}=1$, the $2\rightarrow2$ process will dominate direct detection for Yukawa coupling $y_\chi$ above some threshold, or if $m_\phi \gtrsim 10 \kev$. For the purpose of calculating the matrix element, the loop diagram in [Fig. \[f.feynmandiagram\]]{} (right) is equivalent to the operator $$\label{e.2to2operator}
\frac{\,y_{\chi}^2}{2\,\pi^2}\,\frac{1}{\Lambda\,q} \ (\bar{\chi}\,\chi\,\bar{N}\,N),$$ where $q=\sqrt{2\,m_N\,E_r}$ is the momentum transfer in the scattering. Effectively, this is identical to a standard WIMP with a $\bar \chi \chi \bar N N$ contact operator, but with an additional $1/E_r$ suppression in the cross section. This gives a phenomenology similar to that of a light mediator exchanged at tree level with a derivative coupling.
Note that the relative importance of these two scattering processes is highly model dependent. For example, if $n_\phi = 2$ the dominant scalar-DM coupling could be $\bar q q \phi_1 \phi_2^*/\Lambda_{12}$. In that case, the $2\to2$ operator above is $\propto y_\chi^{\phi_1} y_\chi^{\phi_2}$ and can be suppressed without reducing the $2\to3$ rate by taking $y_\chi^{\phi_2} \gg y_\chi^{\phi_1}$. The scattering behavior of both the $2\rightarrow3$ and $2\rightarrow2$ regimes necessitates a re-interpretation of all DM direct detection bounds. We will do this below.
Indirect Constraints {#s.indirectconstraints}
====================
Direct detection experiments probe the ratio $y_\chi/\Lambda$ and $y_\chi^2/\Lambda$ for $2\to3$ and $2\to2$ scattering respectively. However, indirect constraints on dmDM from cosmology, stellar astrophysics and collider experiments are sensitive to the Yukawa coupling and $\Lambda$ separately. In [@dmDM] we conduct an extensive study of these bounds, including the first systematic exploration of constraints on the $\bar q q \phi \phi^*/\Lambda$ operator with light scalars $\phi$. Since these constraints (in particular, Eqns. \[e.NScoolingbound\] and \[e.thermalrelic\] below) provide important context for our results on direct detection, we summarize the two most important results here. For details we refer the reader to [@dmDM].
The scalar mediator(s) of dmDM are most stringently constrained from stellar astrophysics and cosmology:
- Avoiding overclosure requires $m_\phi \lesssim \ev$ [@Kolb:1990vq], so we take the heaviest stable $\phi$ to be essentially massless, making it a very subdominant dark matter component. This also satisfies structure formation constraints, computed for light sterile neutrinos in [@Wyman:2013lza]. Measurements by the Planck satellite [@Ade:2013zuv] restrict the number of light degrees of freedom during Big Bang Nucleosynthesis, enforcing the bound $n_\phi \leq 2$ for real scalars.
- The coupling of $\phi$ to the SM is most constrained from stellar astrophysics. For $n_\phi = 1$, observational data on neutron star cooling essentially rules out any directly detectable dmDM model [@dmDM]. However, this bound is easily relaxed for $n_\phi = 2$ if $m_{\phi_1} \lesssim \ev$, $m_{\phi_2} \sim \mev$, with a cosmologically unstable $\phi_2$. The dominant interaction to the SM is assumed to be $\bar q q \phi_1 \phi_2^*/\Lambda$. In that case, $\phi_2$ emission in the neutron star is Boltzmann suppressed due to its core temperature of $T \lesssim 100 \kev$, and $\phi_1$ emission proceeds via a loop process. The bound on $\Lambda$ is then weakened to $$\label{e.NScoolingbound}
\Lambda \gtrsim 10 \tev.$$
- In Supernovae, emission of light invisible particles can truncate the neutrino burst [@Raffelt:2006cw]. However, if these particles interact with the stellar medium more strongly than neutrinos they are trapped and do not leak away energy from the explosion. The temperature of supernovae $T \sim 10 \mev$ is large enough to produce $\phi_1, \phi_2$ at tree-level in the above $n_\phi = 2$ scenario, and the scattering cross section with nuclei is much larger than for neutrinos if $\Lambda \lesssim 10^6 \tev$. Therefore this setup is compatible with supernovae constraints.
- The LHC can set constraints on heavy dark vector quarks in a possible UV completion of dmDM. The CMS $20 \ \mathrm{fb}^{-1}$ di-jet + MET search [@CMS:2013gea] sets a lower bound of $1.5$ TeV on the heavy quark mass.
The physics of direct detection for this $n_\phi = 2$ setup is identical to the minimal $n_\phi = 1$ model. This is because the typical momentum transfer is $\mathcal{O}(10 \mev)$, making the intermediate $\phi_2$ mediator massless for the purposes of direct detection. We are therefore justified in examining the direct detection phenomenology of the $n_\phi = 1$ model in detail, applying the $\Lambda$ bound Eqn. \[e.NScoolingbound\] and with the understanding that the full realization of dmDM requires a slightly non-minimal spectrum.
The dark matter Yukawa coupling is constrained by observations of large-scale structure and (under certain assumptions) by cosmology:
- Dark matter self-interaction bounds from bullet cluster observations constrain the DM Yukawa coupling to be $y_\chi \lesssim 0.13 (m_\chi/\gev)^{3/4}$ [@Feng:2009mn].
- A thermal relic $\chi$ with $\Omega_\chi = \Omega_\mathrm{CDM}$ requires $$\label{e.thermalrelic}
y_\chi = y_\chi^\mathrm{relic}(m_\chi) \approx 0.0027 \left( \frac{m_\chi}{\gev}\right)^{1/2}$$ if there is no significant $\phi^3$ term. This also satisfies the above self-interaction bounds.
Interestingly, the range of $m_\phi \sim \ev$ to MeV that is relevant for its novel direct detection signal also makes dmDM a realization of Self-Interacting DM (SIDM) [@Carlson:1992fn; @Spergel:1999mh; @Wandelt:2000ad; @Loeb:2010gj; @Rocha:2012jg; @Zavala:2012us; @Medvedev:2013vsa; @Tulin:2013teo]. A Yukawa interaction consistent with $\chi$ being a thermal relic can then help resolve the “core/cusp” and “too-big-to-fail” inconsistencies between dwarf galaxy observations and many-body simulations [@dmDM; @Tulin:2013teo].
Direct Detection {#s.directdetection}
================
We compute dmDM nuclear recoil spectra at direct detection experiments by simulating the parton-level process in `MadGraph5` [@Alwall:2011uj], and derive the event rates according to $$\frac{d R}{d E_r} = N_T \frac{\rho_\chi}{m_\chi} \int dv \ v f(v) \frac{d \sigma_N}{d E_r},$$ where $f(v)$ is the local DM speed distribution (approximate Maxwell-Boltzmann with $v_0 \approx 220$ km/s and a $v_{esc} \approx 544$ km/s cutoff, boosted into the Earth frame with $v_e \approx 233$ km/s [@Smith:2006ym]), while $\rho_\chi \approx 0.3 \ \gev \ \cmmthree$ is the local DM density [@Bovy:2012tw], and $N_T$ is the number of target nuclei per kg. $d \sigma_N/d E_r$ includes the usual Helm nuclear form factor [@Engel:1991wq; @Lewin:1995rx], the $A^2$ coherent scattering enhancement as well as the quark-nucleon form factor for scalar interactions (see [@Belanger:2008sj] for a review). We validated our Monte Carlo pipeline by reproducing analytically known $2\rightarrow2$ results.
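A minimal numerical sketch of this velocity integral (truncated Maxwell-Boltzmann distribution boosted to the Earth frame with the parameter values quoted above; form factors, unit conversions and the overall normalisation are omitted, and `dsigma_dEr` stands for any differential cross section, such as the one sketched earlier) could read:

```python
import numpy as np

V0, VESC, VE = 220.0, 544.0, 233.0   # km/s, as quoted above

def _trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def f_speed_earth(v, n_angles=400):
    """Un-normalised Earth-frame speed distribution f(v): angular average of a
    Maxwell-Boltzmann distribution truncated at VESC and boosted by VE."""
    cos_t = np.linspace(-1.0, 1.0, n_angles)
    v_gal = np.sqrt(v**2 + VE**2 + 2.0 * v * VE * cos_t)   # speed in the galactic frame
    f_gal = np.exp(-(v_gal / V0) ** 2) * (v_gal < VESC)
    return v**2 * _trapz(f_gal, cos_t)

def dR_dEr(E_r, dsigma_dEr, m_chi, rho_chi=0.3, N_T=1.0):
    """dR/dE_r = N_T * rho_chi/m_chi * int dv v f(v) dsigma/dE_r(E_r, v)."""
    v = np.linspace(1.0, VESC + VE, 300)                    # km/s grid
    f_v = np.array([f_speed_earth(vi) for vi in v])
    f_v = f_v / _trapz(f_v, v)                              # normalise the speed pdf
    integrand = v * f_v * np.array([dsigma_dEr(E_r, vi) for vi in v])
    return N_T * rho_chi / m_chi * _trapz(integrand, v)
```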
[Fig. \[f.DDspectraCDMS\]]{} shows some nuclear recoil spectra for a Silicon and Xenon target. (We henceforth assume an effectively massless $\phi$.) dmDM is compared to standard WIMPs (velocity- and recoil-independent contact-interaction) for different DM masses. An important feature of our model is apparent: a $\sim 50 \gev$ dmDM candidate looks like a $\sim 10 \gev$ ($20 \gev$) WIMP when scattering off Silicon (Xenon). Moreover, the shape of $d \sigma_N/d E_r$ is insensitive to $m_\chi$ unless $m_\chi$ is much smaller than $m_N$ (see Eq. \[eq:recoil\]). This makes it much more difficult to measure the DM mass using the shape of the spectrum. Signals at two detectors with different target materials are required.
![image](DDspectraCDMS){width="15cm"}\
![image](DDspectraLUX){width="15cm"}
![ **Left:** For each dmDM mass $m_\chi = m_{2\rightarrow3}$ this plot shows the WIMP mass $m_\chi = m_{2\rightarrow2}$ which gives the same spectral shape at XENON100 ($S1 > 3$ with 6% light gathering efficiency, dashed red line), LUX ($S1 > 2$ with 14% light gathering efficiency, dash-dotted black line), CDMSII Silicon ($E_r > 7 \kev$, solid blue line), and CDMSlite (Germanium, $E_r > 0.2 \kev$, dotted purple line) before selection cuts. **Right:** The ‘observed’ WIMP-nucleon cross section for each dmDM mass $m_{2\to3}$, assuming the best-fit $m_{2\to2}$ from the left. The dmDM parameters are $y_\chi = 1, \Lambda = 45 \tev$. []{data-label="f.compare2to2to2to3"}](m2to3vsm2to2__4 "fig:"){width="6.3cm"} ![](xsec2to3vsbestmatch2to2 "fig:"){width="6.3cm"}
We can make this observation more concrete by mapping dmDM parameters to WIMP parameters. This is possible because both sets of nuclear recoil spectra look roughly like falling exponentials. For each dmDM spectrum with a given mass there is a closely matching WIMP spectrum with some different (lower) mass. To find the $m_{2\rightarrow2}$ corresponding to each $m_{2\rightarrow3}$ we compare binned WIMP and dmDM distributions and minimize the total relative difference in each bin. The resulting mapping is shown in [Fig. \[f.compare2to2to2to3\]]{} (left). Even very heavy dmDM candidates mimic light WIMPs of different masses at different experiments. A corresponding cross section remapping (right) shows that experiments with heavier nuclei are more sensitive to dmDM due to the inelastic nature of the collision.
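Schematically, the matching is a one-dimensional scan over trial WIMP masses; the sketch below assumes a hypothetical helper `binned_spectrum(model, mass)` that returns the binned recoil spectrum of a given model at a given experiment:

```python
import numpy as np

def best_matching_wimp_mass(dmdm_mass, wimp_masses, binned_spectrum):
    """Return the WIMP mass whose shape-normalised binned spectrum is closest
    to the dmDM one, minimising the summed relative per-bin difference."""
    target = binned_spectrum("dmDM", dmdm_mass)
    target = target / target.sum()
    costs = []
    for m in wimp_masses:
        trial = binned_spectrum("WIMP", m)
        trial = trial / trial.sum()
        costs.append(np.sum(np.abs(trial - target) / np.maximum(target, 1e-12)))
    return wimp_masses[int(np.argmin(costs))]
```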
![**Left:** Direct detection bounds on the $2\rightarrow3$ regime of dmDM. The vertical axis is proportional to $\sigma_{\chi N \to \bar \chi N \phi}$. *Solid lines*: 90% CL bounds by XENON100 (red), LUX (black) and CDMSlite (purple), as well as the best-fit regions by CDMS II Si (blue, green). The large-dashed black line indicates the irreducible neutrino background [@Billard:2013qya]. *Small-dashed magenta line*: Upper bound for $y_\chi = y_\chi^\mathrm{relic}(m_\chi)$ and neutron star cooling bound $\Lambda < 10 \tev$. *Lower dotted orange line*: upper bound for $2\to3$ dominated direct detection and neutron star bound with all equal Yukawa couplings. This line can be arbitrarily moved, as discussed below Eq. \[e.2to2operator\]. The *upper dotted orange line* is for $y^{\phi_1}_\chi = y^{\phi_2}_\chi/20$, in which case the vertical axis is understood to be $(y_\chi^{\phi_2}/\Lambda)^2$. **Right:** Direct detection bounds on the $2\rightarrow2$ regime of $n_\phi = 1$ dmDM, same labeling as the left plot. The vertical axis is proportional to $\sigma_{\chi N \to \chi N}$, and is understood to be $(y_\chi^1 y_\chi^2/\Lambda)^2$ for the $n_\phi = 2$ model outlined in [Section \[s.indirectconstraints\]]{}. []{data-label="f.mappingbounds"}](2to3_alltransformedboundsLambdam2__5 "fig:"){width="8cm"} ![](2to2dmDM_alltransformedboundsLambdam2_no2to3dominancebound "fig:"){width="8cm"}
[Fig. \[f.compare2to2to2to3\]]{} defines an experiment-dependent parameter map that we can use to map each collaboration’s WIMP bounds onto the dmDM model if $2\to3$ scattering dominates[^1]. The resulting direct detection bounds are shown in [Fig. \[f.mappingbounds\]]{} (left). We include the irreducible neutrino background [@Billard:2013qya] at the LUX experiment, which serves as an approximate lower border of the observable dmDM parameter space. An identical procedure can be used in the $2\to2$ dominant regime of dmDM. The translation map has similar qualitative features to the previous case since $d \sigma/d E_r \sim E_r^{-1}$, except the faked WIMP signal corresponds to somewhat higher mass. The resulting direct detection bounds are shown in [Fig. \[f.mappingbounds\]]{} (right).
The probability for any one $2\rightarrow2$ nuclear recoil event to lie above experimental detection threshold is much larger than for a $2\rightarrow3$ event, due to the less severe recoil suppression. For $n_\phi = 1$, this means the former will dominate direct detection unless $m_\phi \lesssim \kev$ and the Yukawa coupling is very small, $y_\chi \lesssim 10^{-3} < y_\chi^\mathrm{relic}$. However, as discussed in [Section \[s.indirectconstraints\]]{}, the neutron star cooling constraint requires at least $n_\phi = 2$. The $2\to2$ process could then be arbitrarily suppressed, allowing $2\to3$ direct detection with a thermal relic $\chi$.
For the $2\to3$ and $2\to2$ scattering regimes, direct detection probes $y_\chi/\Lambda$ and $y_\chi^2/\Lambda$ respectively. The neutron star cooling bound $\Lambda \gtrsim 10 \tev$ and the bounds on the dark matter Yukawa coupling $y_\chi$ can be combined and shown in the direct detection planes of [Fig. \[f.mappingbounds\]]{}. The assumption of a thermal relic then excludes the regions in Fig. \[f.mappingbounds\] above the magenta dashed line, meaning these bounds supersede the liquid Xenon experiments for $m_\chi \lesssim 10 \gev$ in the $2\to3$ dominant regime.
There are large discoverable regions of dmDM parameter space that are not excluded. Due to the nontrivial dependence of the dmDM recoil spectrum on the target- and dark-matter masses and velocity, signals at several experiments will be needed to differentiate standard WIMPs from our model, but dmDM offers the realistic prospect of TeV-scale heavy quark discoveries pointing the way towards a sensitivity target for direct detection.
Conclusion
==========
*Dark Mediator Dark Matter* introduces the possibility that dark matter interacts with the standard model via a mediator which also carries dark charge. This “Double-Dark Portal” adds the phenomenon of additional particle emission to the menu of possible interactions with nuclei, serving as an existence proof that this scattering topology can be realized. Direct detection experiments are starting to probe interesting regions of parameter space compatible with a thermal relic and neutron star bounds. For observationally relevant parameters, dmDM also acts as an implementation of SIDM [@Carlson:1992fn; @Spergel:1999mh; @Wandelt:2000ad; @Loeb:2010gj; @Rocha:2012jg; @Zavala:2012us; @Medvedev:2013vsa; @Tulin:2013teo], which can resolve various inconsistencies between many-body simulations and observations for dwarf galaxies. Even more than many other DM models, dmDM discovery is aided by lowering nuclear recoil thresholds. Further investigation is warranted and includes potential LHC signals, as well as possible leptophilic realizations of the model.
**Acknowledgements —** The authors would like to gratefully acknowledge the valuable contributions of Yue Zhao during an earlier stage of this collaboration. We thank Rouven Essig, Patrick Fox, Roni Harnik and Patrick Meade for valuable comments on draft versions of this letter. We are very grateful to Joseph Bramante, Rouven Essig, Greg Gabadadze, Jasper Hasenkamp, Matthew McCullough, Olivier Mattelaer, Matthew Reece, Philip Schuster, Natalia Toro, Sean Tulin, Neal Weiner, Itay Yavin and Hai-Bo Yu for valuable discussions. D.C. and Z.S. are supported in part by the National Science Foundation under Grant PHY-0969739. Y.T. is supported in part by the Department of Energy under Grant DE-FG02-91ER40674. The work of Y.T. was also supported in part by the National Science Foundation under Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics.
D. Curtin and Y. Tsai, arXiv:1405.1034 \[hep-ph\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5076 \[astro-ph.CO\]. See, for example, G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. [**267**]{}, 195 (1996) \[hep-ph/9506380\]. G. Bertone and D. Merritt, Mod. Phys. Lett. A [**20**]{}, 1021 (2005) \[astro-ph/0504422\].
M. W. Goodman and E. Witten, Phys. Rev. D [**31**]{}, 3059 (1985). D. S. Akerib [*et al.*]{} \[LUX Collaboration\], arXiv:1310.8214 \[astro-ph.CO\].
E. Aprile [*et al.*]{} \[XENON100 Collaboration\], Phys. Rev. Lett. [**109**]{}, 181301 (2012) \[arXiv:1207.5988 \[astro-ph.CO\]\].
R. Bernabei, P. Belli, S. d’Angelo, A. Di Marco, F. Montecchia, F. Cappella, A. d’Angelo and A. Incicchitti [*et al.*]{}, Int. J. Mod. Phys. A [**28**]{}, 1330022 (2013) \[arXiv:1306.1411 \[astro-ph.GA\]\].
C. E. Aalseth [*et al.*]{} \[CoGeNT Collaboration\], Phys. Rev. D [**88**]{}, 012002 (2013) \[arXiv:1208.5737 \[astro-ph.CO\]\].
G. Angloher, M. Bauer, I. Bavykina, A. Bento, C. Bucci, C. Ciemniak, G. Deuter and F. von Feilitzsch [*et al.*]{}, Eur. Phys. J. C [**72**]{}, 1971 (2012) \[arXiv:1109.0702 \[astro-ph.CO\]\]. R. Agnese [*et al.*]{} \[CDMS Collaboration\], \[arXiv:1304.4279 \[hep-ex\]\].
T. Cohen, M. Lisanti, A. Pierce and T. R. Slatyer, JCAP [**1310**]{}, 061 (2013) \[arXiv:1307.4082 \[hep-ph\]\]. J. Fan and M. Reece, JHEP [**1310**]{}, 124 (2013) \[arXiv:1307.4400 \[hep-ph\]\].
D. Tucker-Smith and N. Weiner, Phys. Rev. D [**64**]{}, 043502 (2001) \[hep-ph/0101138\].
P. W. Graham, R. Harnik, S. Rajendran and P. Saraswat, Phys. Rev. D [**82**]{}, 063512 (2010) \[arXiv:1004.0937 \[hep-ph\]\]. R. Essig, J. Kaplan, P. Schuster and N. Toro, \[arXiv:1004.0691 \[hep-ph\]\]. J. March-Russell, J. Unwin and S. M. West, JHEP [**1208**]{}, 029 (2012) \[arXiv:1203.4854 \[hep-ph\]\].
S. Chang, A. Pierce and N. Weiner, JCAP [**1001**]{}, 006 (2010) \[arXiv:0908.3192 \[hep-ph\]\].
R. Essig, J. A. Jaros, W. Wester, P. H. Adrian, S. Andreas, T. Averett, O. Baker and B. Batell [*et al.*]{}, arXiv:1311.0029 \[hep-ph\].
D. S. M. Alves, S. R. Behbahani, P. Schuster and J. G. Wacker, Phys. Lett. B [**692**]{}, 323 (2010) \[arXiv:0903.3945 \[hep-ph\]\].
G. D. Kribs, T. S. Roy, J. Terning and K. M. Zurek, Phys. Rev. D [**81**]{}, 095001 (2010) \[arXiv:0909.2034 \[hep-ph\]\].
M. Lisanti and J. G. Wacker, Phys. Rev. D [**82**]{}, 055023 (2010) \[arXiv:0911.4483 \[hep-ph\]\].
J. M. Cline, A. R. Frey and G. D. Moore, Phys. Rev. D [**86**]{}, 115013 (2012) \[arXiv:1208.2685 \[hep-ph\]\].
B. Feldstein, A. L. Fitzpatrick and E. Katz, JCAP [**1001**]{}, 020 (2010) \[arXiv:0908.2991 \[hep-ph\]\]. Y. Bai and P. J. Fox, JHEP [**0911**]{}, 052 (2009) \[arXiv:0909.2900 \[hep-ph\]\]. H. Davoudiasl, D. E. Morrissey, K. Sigurdson and S. Tulin, Phys. Rev. D [**84**]{}, 096008 (2011) \[arXiv:1106.4320 \[hep-ph\]\].
J. L. Feng, J. Kumar, D. Marfatia and D. Sanford, Phys. Lett. B [**703**]{}, 124 (2011) \[arXiv:1102.4331 \[hep-ph\]\].
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP [**1106**]{}, 128 (2011) \[arXiv:1106.0522 \[hep-ph\]\]. S. Ask, N. D. Christensen, C. Duhr, C. Grojean, S. Hoeche, K. Matchev, O. Mattelaer and S. Mrenna [*et al.*]{}, arXiv:1209.0297 \[hep-ph\].
E. W. Kolb and M. S. Turner, Front. Phys. [**69**]{}, 1 (1990).
M. Wyman, D. H. Rudd, R. A. Vanderveld and W. Hu, arXiv:1307.7715 \[astro-ph.CO\].
G. G. Raffelt, Lect. Notes Phys. [**741**]{}, 51 (2008) \[hep-ph/0611350\].
CMS Collaboration \[CMS Collaboration\], CMS-PAS-SUS-13-012. J. L. Feng, M. Kaplinghat, H. Tu and H. -B. Yu, JCAP [**0907**]{}, 004 (2009) \[arXiv:0905.3039 \[hep-ph\]\].
E. D. Carlson, M. E. Machacek and L. J. Hall, Astrophys. J. [**398**]{}, 43 (1992)
D. N. Spergel and P. J. Steinhardt, Phys. Rev. Lett. [**84**]{}, 3760 (2000) \[astro-ph/9909386\].
B. D. Wandelt, R. Dave, G. R. Farrar, P. C. McGuire, D. N. Spergel and P. J. Steinhardt, astro-ph/0006344.
A. Loeb and N. Weiner, Phys. Rev. Lett. [**106**]{}, 171302 (2011) \[arXiv:1011.6374 \[astro-ph.CO\]\].
M. Rocha, A. H. G. Peter, J. S. Bullock, M. Kaplinghat, S. Garrison-Kimmel, J. Onorbe and L. A. Moustakas, Mon. Not. Roy. Astron. Soc. [**430**]{}, 81 (2013) \[arXiv:1208.3025 \[astro-ph.CO\]\]. J. Zavala, M. Vogelsberger and M. G. Walker, Monthly Notices of the Royal Astronomical Society: Letters [**431**]{}, L20 (2013) \[arXiv:1211.6426 \[astro-ph.CO\]\].
M. V. Medvedev, arXiv:1305.1307 \[astro-ph.CO\].
S. Tulin, H. -B. Yu and K. M. Zurek, Phys. Rev. D [**87**]{}, no. 11, 115007 (2013) \[arXiv:1302.3898 \[hep-ph\]\].
M. C. Smith, G. R. Ruchti, A. Helmi, R. F. G. Wyse, J. P. Fulbright, K. C. Freeman, J. F. Navarro and G. M. Seabroke [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**379**]{}, 755 (2007) \[astro-ph/0611671\].
J. Bovy and S. Tremaine, Astrophys. J. [**756**]{}, 89 (2012) \[arXiv:1205.4033 \[astro-ph.GA\]\]. J. Engel, Phys. Lett. B [**264**]{}, 114 (1991).
J. D. Lewin and P. F. Smith, Astropart. Phys. [**6**]{}, 87 (1996).
G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. [**180**]{}, 747 (2009) \[arXiv:0803.2360 \[hep-ph\]\].
J. Billard, L. Strigari and E. Figueroa-Feliciano, arXiv:1307.5458 \[hep-ph\]. R. J. Barlow, Nucl. Instrum. Meth. A [**297**]{}, 496 (1990).
[^1]: We have confirmed the validity of this approach with full maximum likelihood fits [@Barlow:1990vc].
---
abstract: 'Using physical layer network coding, compute-and-forward is a promising relaying scheme that effectively exploits the interference between users and thus achieves high rates. In this paper, we consider the problem of finding the optimal integer-valued coefficient vector for a relay in the compute-and-forward scheme to maximize the computation rate at that relay. Although this problem turns out to be a shortest vector problem, which is suspected to be NP-hard, we show that it can be relaxed to a series of equality-constrained quadratic programmings. The solutions of the relaxed problems serve as real-valued approximations of the optimal coefficient vector, and are quantized to a set of integer-valued vectors, from which a coefficient vector is selected. The key to the efficiency of our method is that the closed-form expressions of the real-valued approximations can be derived with the Lagrange multiplier method. Numerical results demonstrate that compared with the existing methods, our method offers comparable rates at an impressively low complexity.'
author:
- 'Baojian Zhou, Jinming Wen, and Wai Ho Mow, [^1][^2]'
bibliography:
- 'C:/Users/bzhouab/Dropbox/Research/ref/cf\_qpr.bib'
title: |
A Quadratic Programming Relaxation Approach to Compute-and-Forward\
Network Coding Design
---
physical layer network coding, AWGN networks, compute-and-forward, quadratic programming, Lagrange multiplier.
Introduction
============
As a promising relaying strategy in wireless networks, compute-and-forward (CF) has attracted a lot of research interest since it was proposed in 2008 by Nazer and Gastpar [@Nazer2008]. The advantage of CF is that it achieves higher rates in the medium signal-to-noise ratio (SNR) regime when compared with other relaying strategies, e.g., amplify-and-forward, decode-and-forward. Relays in CF attempt to decode integer linear combinations of the transmitted codewords, rather than the codewords themselves. The integer coefficient vectors corresponding to the linear combinations and the decoded messages are then forwarded to the destination. Under certain conditions (see [@Nazer2011] for details), the destination can recover the original source messages with enough forwarded messages and coefficient vectors from the relays.
The design of the CF scheme lies in selecting the coefficient vectors at the relays. There are many choices of coefficient vectors for one relay, and each may render a different *computation rate* [@Nazer2011] at that relay. Computation rate is defined as the maximum transmission rate from the associated sources to a relay such that the linear combinations at the relay can be successfully decoded. If the coefficient vectors are linearly independent, the *achievable rate* of the network equals the minimum computation rate; otherwise, the achievable rate of the network is zero and none of the messages can be recovered. The objective of designing the CF scheme is to maximize the overall achievable rate of the network. One approximation method is presented in [@Wei2012]. However, for those networks where each relay is allowed to send only one coefficient vector to the destination, and only local channel state information (CSI) is available, i.e., each relay knows merely its own channel vector, one reasonable solution is to select the coefficient vector that maximizes the computation rate at each relay.
In this paper, we consider additive white Gaussian noise (AWGN) networks where only local CSI is available, and focus on the CF network coding design problem with the objective of maximizing the computation rate at a relay by choosing the optimal coefficient vector. It has been shown that this problem reduces to a shortest vector problem (SVP). Different methods have been developed to tackle this problem. The branch-and-bound method proposed in [@Richter2012] finds the optimal solution, but according to simulation results its efficiency degrades as the dimension of the channel vectors grows. Although the general SVP is suspected to be NP-hard, Sahraei and Gastpar showed in [@Sahraei2014] that the SVP in the CF design is special, and developed an algorithm (called the “SG” method in this paper) that solves the SVP in polynomial time. For independent and identically distributed (i.i.d.) Gaussian channel entries, the complexity of the SG method is of order 2.5 with respect to the dimension, and is linear with respect to the square root of the SNR. Another class of methods is based on lattice reduction (LR) algorithms (e.g., Minkowski, HKZ, LLL, and CLLL LR algorithms; c.f. [@Zhang2012; @Lenstra1982; @Vetter2009; @Ling2013; @Chang2013; @Gan2009]). The method in [@Sakzad2012] based on the LLL LR algorithm [@Lenstra1982] provides close-to-optimal rates and is well-known to be of polynomial time complexity with respect to the vector dimension. In [@Wen2015], we proposed an efficient method based on sphere decoding to find the optimal coefficient vector; however, there is no theoretical guarantee on its complexity.
Our goal in this work is to develop a new method that finds a suboptimal coefficient vector for a relay with low complexity compared with the existing methods, while providing a close-to-optimal computation rate at the same time. Taking advantage of some useful properties of the problem, we first show that the original SVP can be approximated by a series of quadratic programmings (QPs). The closed-form solutions of the QPs are derived by use of the Lagrange multiplier method and can be computed with linear complexity with respect to the dimension, which is the key to the efficiency of our method. The solutions of the QPs serve as real-valued approximations of the integer coefficient vector, and are quantized into a set of candidate integer vectors by a successive quantization algorithm. Finally, the integer vector in the candidate set that maximizes the computation rate is selected to be the coefficient vector. The complexity of our method is of order 1.5 with respect to the dimension for i.i.d. Gaussian channel entries, and is lower than that of the above-mentioned methods. Numerical results demonstrate that among existing methods that provide close-to-optimal rates, our method is much more efficient as expected.
As a summary, our contributions in this work include the following:
- For the real-valued channels, we develop a quadratic programming relaxation approach to find a suboptimal coefficient vector for a relay so that the computation rate at that relay is close-to-optimal. The complexity is $O(L\sqrt{P\norm{\h}^2})$ for a given channel vector $\h\in\Rbb^L$ and signal power constraint $P$, and is of average value $O(P^{0.5}L^{1.5})$ for i.i.d. standard Gaussian channel entries.
- For the complex-valued channels, we demonstrate how to apply our method in an efficient way to find the complex-valued coefficient vector.
- Extensive simulation results are presented to compare the effectiveness and efficiency of our method with the existing methods.
Part of this work has been presented in [@Zhou2014]. One main improvement here is that the complexity order for i.i.d. Gaussian channel entries is further reduced from 3 to 1.5.
In the following, we will first introduce the system model of AWGN networks as well as the CF network coding design problem in Section \[section:ProblemStatement\]. Then in Section \[section:ProposedMethod\], we will present our proposed method in detail. Numerical results will be shown in Section \[section:NumericalResults\]. Finally, we will conclude our work in Section \[section:Conclusions\].
[*Notation.*]{} Let $\Rbb$ be the real field, $\Cbb$ be the complex field, and $\Zbb$ be the ring of integers. Boldface lowercase letters denote column vectors, and boldface uppercase letters denote matrices, e.g., $\w\in\Rbb^L$ and $\W\in\Rbb^{M\times L}$. $\norm{\w}$ denotes the $\ell^2$-norm of $\w$, and $\w^T$ denotes the transpose of $\w$. For a vector $\w$, let $\w(\ell)$ be the element with index $\ell$, and $\w(i\!:\!j)$ be the vector composed of elements with indices from $i$ to $j$. For a matrix $\W$, let $\W(i\!:\!j,k\!:\!\ell)$ be the submatrix containing elements with row indices from $i$ to $j$ and column indices from $k$ to $\ell$, $\W(i\!:\!j,k)$ be the submatrix containing elements with row indices from $i$ to $j$ and column index $k$, $\W(i,k\!:\!\ell)$ be the submatrix containing elements with row index $i$ and column indices from $k$ to $\ell$, and $\W(i,j)$ be the element with row index $i$ and column index $j$. Let $\floor{x}$ and $\ceil{x}$, i.e., the corresponding floor and ceiling functions of $x$, be the maximum integer no greater than $x$ and the minimum integer no less than $x$, respectively. Let $\floor{\w}_\ell$ and $\ceil{\w}_\ell$ be the vectors generated from $\w$ by applying the corresponding operation on the $\ell$-th element only. $\0$ denotes an all-zero vector, and $\I$ denotes an identity matrix. ${\rm sign}(\w)$ returns the vector that contains the signs of the elements in $\w$. ${\rm abs}(\w)$ returns the vector whose elements are the absolute values of the elements in $\w$.
Problem Statement {#section:ProblemStatement}
=================
We consider additive white Gaussian noise (AWGN) networks [@Nazer2011] where sources, relays and destinations are connected by linear channels with AWGN. For the ease of explanation, we first develop our method for real-valued channels, and then demonstrate how to apply our method to complex-valued channels. An AWGN network with real-valued channels is defined as the following.
\[definition:RealChannelModel\] *(Real-Valued Channel Model)* In an AWGN network, each relay (indexed by $m=1,2,\cdots,M$) observes a noisy linear combination of the transmitted signals through the channel, $$\begin{aligned}
\label{equation:RealChannelModel}
\y_m = \sum_{\ell=1}^L \h_m(\ell)\x_\ell + \z_m,\end{aligned}$$ where $\x_\ell \in \Rbb^n$ with the power constraint $\frac{1}{n}\norm{\x_\ell}^2 \leq P$ is the transmitted codeword from source $\ell$ ($\ell = 1,2,\cdots,L$), $\h_m \in \Rbb^L$ is the channel vector to relay $m$, $\h_m(\ell)$ is the $\ell$-th entry of $\h_m$, $\z_m \in \Rbb^n$ is the noise vector with entries being i.i.d. Gaussian, i.e., $\z_m\!\sim\!\bigN\!\left(\0,\I\right)$, and $\y_m$ is the signal received at relay $m$.
In the sequel, we will focus on one relay and thus ignore the subscript “$m$” in $\h_m$, $\a_m$, etc.
In CF, rather than directly decode the received signal $\y$ as a codeword, a relay first applies to $\y$ an amplifying factor $\alpha$ such that $\alpha\h$ is close to an integer *coefficient vector* $\a$, and tries to decode $\alpha\y$ as an integer linear combination, whose coefficients form $\a$, of the original codewords $\{\x_\ell\}$. The *computation rate* [@Nazer2011] is the maximum transmission rate from the associated sources to a relay such that the integer linear combinations at the relay can be decoded with arbitrarily small error probability. Assume the $\log$ function is with respect to base 2, and define $\log^+(w) \triangleq \max \left( \log(w),0 \right)$. The computation rate can be calculated with Theorem \[theorem:ComputationRate\] from [@Nazer2011].
*(Computation Rate in Real-Valued Channel Model)* \[theorem:ComputationRate\] For a relay with coefficient vector $\a$ in the real-valued channel model defined in Definition \[definition:RealChannelModel\], the following computation rate is achievable,
$$\begin{aligned}
\label{equation:ComputationRate}
\bigR \left( \h, \a \right) =
\frac{1}{2} \log^+
\left( \left(
\norm{\a}^2 -
\frac{P (\h^T\a)^2}{1 + P\norm{\h}^2}
\right)^{-1}\right).
\end{aligned}$$
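For reference, the rate in Theorem \[theorem:ComputationRate\] is straightforward to evaluate numerically; a small NumPy helper (a sketch only, not part of the algorithm developed later) is:

```python
import numpy as np

def computation_rate(h, a, P):
    """Computation rate R(h, a) of Theorem 1 (log base 2) for channel vector h,
    integer coefficient vector a, and power constraint P."""
    h = np.asarray(h, dtype=float)
    a = np.asarray(a, dtype=float)
    denom = np.dot(a, a) - P * np.dot(h, a) ** 2 / (1.0 + P * np.dot(h, h))
    if denom <= 0.0:   # cannot occur for a != 0 (Cauchy-Schwarz); numerical guard
        return np.inf
    return max(0.5 * np.log2(1.0 / denom), 0.0)

# e.g. computation_rate([0.7, 1.3, 2.1], [1, 2, 3], P=10.0)
```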
With the computation rate being the metric, we define the optimal coefficient vector as follows.
*(The Optimal Coefficient Vector)* The optimal coefficient vector ${\a^\star}$ for a channel vector $\h$ is the one that maximizes the computation rate, $$\begin{aligned}
\label{equation:aOptimal}
{\a^\star}= \arg \max_{\a \in \Zbb^L \backslash\{\0\}} \bigR \left( \h, \a \right).
\end{aligned}$$
After a few simple manipulations, the optimization problem stated in can be written in the following quadratic form [@Wei2012], $$\label{equation:IntegerQP}
\begin{aligned}
{\a^\star}=
\arg\min_{\a\in\Zbb^L\backslash\{\0\}}
\a^T\bsG\a,
\end{aligned}$$ where $$\begin{aligned}
\label{equation:G}
\bsG \triangleq
\I - \frac{P}{1 + P\norm{\h}^2} \h\h^T.
\end{aligned}$$
If we take $\bsG$, which is positive definite, as the *Gram matrix* of a lattice $\Lambda$, then the problem turns out to be the SVP in the lattice $\Lambda$. In the next section, we will propose an efficient approximation method based on QP relaxation that gives a suboptimal coefficient vector.
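For intuition, the quadratic form can be explored directly in small dimensions. The sketch below (our own, exponential in $L$ and intended only for tiny examples) builds $\bsG$ and finds the minimizer of $\a^T\bsG\a$ by enumerating integer vectors in a small box.

```python
# Hypothetical brute-force check of the shortest-vector formulation (tiny L only).
import itertools
import numpy as np

def gram_matrix(h, P):
    """G = I - (P / (1 + P * ||h||^2)) * h * h^T, as in the quadratic form above."""
    h = np.asarray(h, dtype=float)
    return np.eye(len(h)) - (P / (1.0 + P * (h @ h))) * np.outer(h, h)

def brute_force_coefficient(h, P, box=4):
    """Minimize a^T G a over nonzero integer vectors with entries in [-box, box]."""
    G = gram_matrix(h, P)
    best, best_val = None, np.inf
    for a in itertools.product(range(-box, box + 1), repeat=len(G)):
        a = np.array(a)
        if not a.any():                 # skip the all-zero vector
            continue
        val = a @ G @ a
        if val < best_val:
            best, best_val = a, val
    return best, best_val

print(brute_force_coefficient([0.1, 1.1, -1.9], P=100.0))
```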
Proposed Method {#section:ProposedMethod}
===============
In this section, we will first derive our method for the real-valued channel model, and then extend the method for the complex-valued channel model.
Preliminaries
-------------
We start by investigating some properties of the problem, which form the basis of our new method.
(Signature Matrix) A signature matrix is a diagonal matrix whose diagonal elements are $\pm1$.
(Signed Permutation Matrix) A signed permutation matrix is a generalized permutation matrix whose nonzero entries are $\pm 1$.
After replacing $-1$’s with $1$’s, a signed permutation matrix becomes a permutation matrix. Obviously, signed permutation matrices are unimodular and orthogonal. Every signed permutation matrix can be expressed as $\S=\P\T$, where $\T$ is a signature matrix, and $\P$ is a permutation matrix.
\[theorem:ProblemTransformation\] If ${\a^\star}$ is the optimal coefficient vector for a channel vector $\h$ with power constraint $P$, then for any signed permutation matrix $\S\in\Zbb^{L\times L}$, $\S{\a^\star}$ is optimal for $\S\h$ with the same power constraint $P$, and $\bigR(\h,{\a^\star})=\bigR(\S\h,\S{\a^\star})$.
We first show $\bigR \left( \h, \a \right) = \bigR \left( \S\h, \S\a \right)$ for any $\h$ and $\a$ with the same power constraint $P$. Since $\S$ is unimodular, $\S\a$ is an integer vector and can be used as a coefficient vector. Since $\S$ is orthogonal, $\S^T\S = \I$. Hence $\norm{\S\h}^2 = \h^T\S^T\S\h = \h^T\h = \norm{\h}^2$, and similarly $\norm{\S\a}^2 = \norm{\a}^2$; moreover, $(\S\h)^T\S\a = \h^T\S^T\S\a = \h^T\a$. According to Theorem \[theorem:ComputationRate\], the computation rate $\bigR \left( \h, \a \right)$ is determined by $P$, $\norm{\h}^2$, $\norm{\a}^2$, and $\h^T\a$. Thus, $\bigR \left( \h, \a \right) = \bigR \left( \S\h, \S\a \right)$.
That ${\a^\star}$ is optimal for $\h$ means that ${\a^\star}$ maximizes $\bigR \left( \h, \a \right)$. Then $\S{\a^\star}$ maximizes $\bigR \left( \S\h, \S\a \right)$ since $\bigR \left( \h, \a \right) = \bigR \left( \S\h, \S\a \right)$ always holds. Therefore, $\S{\a^\star}$ is optimal for $\S\h$ with the same power constraint $P$.
(Nonnegative Ordered Vector) A vector $\h$ is said to be nonnegative ordered if its elements are nonnegative and in nondecreasing order according to their indices.
\[lemma:h2hBar\] For any vector $\h$, there exists a signed permutation matrix $\S$ such that $\S\h$ is nonnegative ordered.
To find such an $\S$ in Lemma \[lemma:h2hBar\], we can simply choose $\S=\P\T$, where $\T$ is a signature matrix that converts all the elements in $\h$ to nonnegative, and $\P$ is a permutation matrix that sorts the elements in $\T\h$ in nondecreasing order.
With Theorem \[theorem:ProblemTransformation\] and Lemma \[lemma:h2hBar\], for any channel vector $\h$, we can first find a signed permutation matrix $\S$ and transform $\h$ to the nonnegative ordered ${\bar{\h}}=\S\h$, then obtain the optimal coefficient vector ${{\bar{\a}}^\star}$ for ${\bar{\h}}$, and finally recover the desired optimal coefficient vector ${\a^\star}=\S^{-1}{{\bar{\a}}^\star}$ for $\h$. In this way, it suffices to focus on solving the problem in for nonnegative ordered channel vectors ${\bar{\h}}$.
\[remark:Transformation\] In implementation, there is no need to use the signed permutation matrix $\S$. It is merely necessary to: 1) record the sign of the elements in $\h$ with a vector $\bst=\fsign{\h}$, and 2) sort $\boldsymbol{\hbar}=\fabs{\h}$ in ascending order as ${\bar{\h}}$ and record the original indices of the elements with a vector $\p$ such that ${\bar{\h}}(\ell)=\boldsymbol{\hbar}(\p(\ell))$, $\ell=1,2,\cdots,L$. After ${{\bar{\a}}^\star}$ for ${\bar{\h}}$ is obtained, ${\a^\star}$ for $\h$ can be recovered with ${\a^\star}(\p(\ell))=\bst(\p(\ell)){{\bar{\a}}^\star}(\ell)$, $\ell=1,2,\cdots,L$.
Given a channel vector as $\h=[-1.9,0.1,1.1]^T$, then $\bst=[-1,1,1]^T$, $\fabs{\h}=[1.9,0.1,1.1]^T$, ${\bar{\h}}=[0.1,1.1,1.9]^T$, and $\p=[2,3,1]^T$. If for certain power $P$, ${{\bar{\a}}^\star}=[0,1,2]^T$, then ${\a^\star}=[-2,0,1]^T$.
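The preprocessing of Remark \[remark:Transformation\] and the recovery step are straightforward to implement; the sketch below (our own helper names, not the paper's code) reproduces the numbers of the example above.

```python
# Hypothetical implementation of the sign/sort preprocessing and the recovery step.
import numpy as np

def to_nonnegative_ordered(h):
    h = np.asarray(h, dtype=float)
    t = np.sign(h)                    # t = sign(h); the convention for zero entries
    t[t == 0] = 1                     # is irrelevant since a*(i) = 0 whenever h(i) = 0
    perm = np.argsort(np.abs(h))      # p: original indices in ascending order of |h|
    return np.abs(h)[perm], t, perm

def recover_coefficients(a_bar, t, perm):
    a = np.zeros_like(a_bar)
    a[perm] = t[perm] * a_bar         # a*(p(l)) = t(p(l)) * a_bar*(l)
    return a

h = np.array([-1.9, 0.1, 1.1])
h_bar, t, perm = to_nonnegative_ordered(h)
print(h_bar, perm)                                              # [0.1 1.1 1.9] [1 2 0]
print(recover_coefficients(np.array([0.0, 1.0, 2.0]), t, perm)) # [-2.  0.  1.]
```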
According to , if ${\a^\star}$ is optimal for $\h$, then $-{\a^\star}$ is also optimal for $\h$. To reduce redundancy, we restrict the optimal coefficient vector ${\a^\star}$ to be the one such that $\h^T{\a^\star}\geq0$ in the following.
\[lemma:aNonnegative\] If all the elements in a channel vector $\h$ are nonnegative, then all the elements in the optimal coefficient vector ${\a^\star}$ are also nonnegative.
Suppose ${\a^\star}(i) < 0$, and define $\a'$ as: $\a'(i) = 0$, and $\a'(\ell) = {\a^\star}(\ell)$, $\forall \ell \neq i$. Obviously, $\norm{\a'} < \norm{{\a^\star}}$, and $\h^T \a' \geq \h^T {\a^\star}\geq 0$. Then according to , $\bigR \left( \h, \a' \right) > \bigR \left( \h, {\a^\star}\right)$, which implies ${\a^\star}$ is not optimal and leads to a contradiction. Thus, all the elements in ${\a^\star}$ must be nonnegative.
\[lemma:a0\] For a channel vector $\h$ and its corresponding optimal coefficient vector ${\a^\star}$, if $\h(i) = 0$, then ${\a^\star}(i) = 0$.
Suppose $\h(i) = 0$, and ${\a^\star}(i) \neq 0$. Define $\a'$ as: $\a'(i) = 0$, and $\a'(\ell) = {\a^\star}(\ell)$, $\forall \ell \neq i$. Obviously, $\norm{\a'} < \norm{{\a^\star}}$, and $\h^T \a' = \h^T {\a^\star}\geq 0$. Then according to , $\bigR \left( \h, \a' \right) > \bigR \left( \h, {\a^\star}\right)$, which implies ${\a^\star}$ is not optimal. Thus, if $\h(i) = 0$, then ${\a^\star}(i) = 0$.
\[lemma:hijEqual\] For a channel vector $\h$ and its corresponding optimal coefficient vector ${\a^\star}$, if $\h(i) = \h(j)$, $i < j$, then ${\a^\star}(i) = {\a^\star}(j)$ or $\fabs{{\a^\star}(i) - {\a^\star}(j)} = 1$.
Without loss of generality, assume ${\a^\star}(i) - {\a^\star}(j) < -1$. Define $\a'$ as: $\a'(i) = {\a^\star}(i)+1$, $\a'(j) = {\a^\star}(j)-1$, and $\a'(\ell) = {\a^\star}(\ell)$, $\forall \ell \notin \{i,j\}$. Obviously, $\norm{\a'} < \norm{{\a^\star}}$, and $\h^T \a' = \h^T {\a^\star}\geq 0$. Then according to , $\bigR \left( \h, \a' \right) > \bigR \left( \h, {\a^\star}\right)$, which implies ${\a^\star}$ is not optimal. Thus, ${\a^\star}(i) - {\a^\star}(j) \geq -1$. Similarly, ${\a^\star}(j) - {\a^\star}(i) \geq -1$. Therefore, ${\a^\star}(i) = {\a^\star}(j)$ or $\fabs{{\a^\star}(i) - {\a^\star}(j)}=~1$.
\[remark:hijEqual\] In Lemma \[lemma:hijEqual\], for the case where $\h(i) = \h(j)$ with $\fabs{{\a^\star}(i) - {\a^\star}(j)} = 1$, $i < j$, we will always set ${\a^\star}(j) = {\a^\star}(i) + 1$ since setting ${\a^\star}(i) = {\a^\star}(j) + 1$ results in the same computation rate. Then, as long as $\h(i) = \h(j)$, $i < j$, it holds that ${\a^\star}(i)\leq{\a^\star}(j)$.
\[theorem:aNonnegativeOrdered\] For a nonnegative ordered channel vector $\h$, the optimal coefficient vector ${\a^\star}$ is also nonnegative ordered.
According to Lemma \[lemma:aNonnegative\], all the elements in ${\a^\star}$ are nonnegative. Suppose ${\a^\star}$ is not nonnegative ordered, then there must exist $i, j $ ($1 \leq i < j \leq L $) such that ${\a^\star}(i) > {\a^\star}(j) \geq 0$. According to Lemma \[lemma:a0\], ${\a^\star}(i) > 0$ implies $\h(i) > 0$. According to Lemma \[lemma:hijEqual\] and Remark \[remark:hijEqual\], ${\a^\star}(i) > {\a^\star}(j)$ implies $\h(i) \neq \h(j)$ and thus $\h(j) > \h(i) > 0$. Then, $\h(i) {\a^\star}(j) + \h(j) {\a^\star}(i) > \h(i) {\a^\star}(i) + \h(j) {\a^\star}(j)$.
Define $\a'$ as: $\a'(i) = {\a^\star}(j)$, $\a'(j) = {\a^\star}(i)$, and $\a'(\ell) = {\a^\star}(\ell)$, $\forall \ell \notin \{i,j\}$. Obviously, $\norm{\a'} = \norm{{\a^\star}}$, and $$\begin{aligned}
\h^T \a'
&= \sum_{\ell = 1}^L \h(\ell) \a'(\ell)
= \sum_{\ell \notin \{i,j\}} \h(\ell) \a'(\ell) + \h(i) \a'(i) + \h(j) \a'(j)\\
&= \sum_{\ell \notin \{i,j\}} \h(\ell) {\a^\star}(\ell) + \h(i) {\a^\star}(j) + \h(j) {\a^\star}(i)\\
&> \sum_{\ell \notin \{i,j\}} \h(\ell) {\a^\star}(\ell) + \h(i) {\a^\star}(i) + \h(j) {\a^\star}(j)
= \sum_{\ell = 1}^L \h(\ell) {\a^\star}(\ell)
= \h^T {\a^\star}\geq 0.\end{aligned}$$ Then according to , $\mathcal{R} \left( \h, \a' \right) > \mathcal{R} \left( \h, {\a^\star}\right)$, which implies ${\a^\star}$ is not optimal. Therefore, ${\a^\star}$ must be nonnegative ordered.
Relaxation to QPs
-----------------
As stated before, it suffices to obtain the optimal coefficient vector for a nonnegative ordered channel vector. Thus, in the following, we will focus on solving the problem in for a nonnegative ordered channel vector ${\bar{\h}}$. We first relax this problem to a series of QPs.
Denote the optimal coefficient vector for ${\bar{\h}}$ as ${{\bar{\a}}^\star}$. According to Theorem \[theorem:aNonnegativeOrdered\], the maximum element in ${{\bar{\a}}^\star}$ is ${{\bar{\a}}^\star}(L)$. Suppose ${{\bar{\a}}^\star}(L)$ is known to be $\bar{a}^\star_L \in \Zbb \backslash \{0\}$, then the problem in can be relaxed as a QP, $$\begin{aligned}
\label{equation:RelaxedQP}
&\underset{\a}{\text{minimize}}
&&\a^T \bsG \a\\
&\text{subject to}
&&\a \in \Rbb^L\\
&
&&\a(L) = \bar{a}_L^\star
\end{aligned}$$ where $\bsG$ is as defined in . The problem is convex since $\bsG$ is positive definite. Denote the solution of this relaxed problem as ${{\bar{\a}}^\dagger}\in \Rbb^L$. The intuition behind this relaxation is that appropriate quantization of the real-valued optimal ${{\bar{\a}}^\dagger}$ with the constraint $\a(L) = \bar{a}_L^\star$ will lead to the integer-valued optimal ${{\bar{\a}}^\star}$ or at least a close-to-optimal one with a high probability.
However, since ${{\bar{\a}}^\star}(L)$ is unknown, we alternatively approximate the problem in by solving a series of QPs, i.e., solving the following QP multiple times for $k = 1,2,\cdots,K$. $$\begin{aligned}
\label{equation:RelaxedQPs}
&\underset{\a}{\text{minimize}}
&&\a^T \bsG \a\\
&\text{subject to}
&&\a \in \Rbb^L\\
&
&&\a(L) = k
\end{aligned}$$ Denote the solution to the above QP with the constraint $\a(L) = k$ as ${{\bar{\a}}^\dagger}_k$. For simplicity, we use $\{s_k\}$ to denote the set with elements being $s_k, k=1,2,\cdots,K$ in the following. As long as $K \geq {{\bar{\a}}^\star}(L)$, the solution ${{\bar{\a}}^\dagger}$ to the QP in will be included in the set of solutions $\{ {{\bar{\a}}^\dagger}_k \}$ to the QPs in . Fortunately, to obtain the solution set $\{ {{\bar{\a}}^\dagger}_k \}$, it is sufficient to solve merely one QP in with $k = 1$, according to the following theorem.
\[theorem:aLinear\] Denote the solution to the QP in with the constraint $\a(L) = k$ as ${{\bar{\a}}^\dagger}_k$, then ${{\bar{\a}}^\dagger}_k = k {{\bar{\a}}^\dagger}_1$.
$$\begin{aligned}
{{\bar{\a}}^\dagger}_k
&= \arg \min_{ \a \in \Rbb^L, \a(L) = k }
\a^T \bsG \a\\
&= k \left( \arg \min_{ \mathbf{t} \in \Rbb^L, \mathbf{t}(L) = 1 }
(k \mathbf{t}^T) \bsG (k \mathbf{t}) \right) \quad ({\rm Let\ } \a = k \mathbf{t}.)\\
&= k \left( \arg \min_{ \mathbf{t} \in \Rbb^L, \mathbf{t}(L) = 1 }
k^2 \mathbf{t}^T \bsG \mathbf{t} \right)\\
&= k \left( \arg \min_{ \mathbf{t} \in \Rbb^L, \mathbf{t}(L) = 1 }
\mathbf{t}^T \bsG \mathbf{t} \right)\\
&= k {{\bar{\a}}^\dagger}_1. \qedhere\end{aligned}$$
The closed-form expression of ${{\bar{\a}}^\dagger}_1$ can be readily obtained by solving a linear system as stated in the following theorem, which is the key for the low complexity of our method.
\[theorem:aBarDagger1\] Let ${{\bar{\a}}^\dagger}_1$ be the optimal solution to the QP in with the constraint $\a(L) = 1$, then $${{\bar{\a}}^\dagger}_1 =
\begin{bmatrix}
\rr\\
1
\end{bmatrix},$$ where $$\label{equation:r}
\rr = - \Bigl( \bsG(1\!:\!L-1,1\!:\!L-1) \Bigr)^{-1}
\bsG(1\!:\!L-1,L).$$
The QP in has only a single linear equality constraint, and is thus particularly simple [@Luenberger2008]. We now derive the closed-form solution with the Lagrange multiplier method. Let $\lambda$ be the Lagrange multiplier associated with the constraint $\a(L) = 1$; then the Lagrangian is $$\mathcal{L}(\a,\lambda) = \a^T \bsG \a + \lambda \left( \a(L) - 1 \right).$$ The optimal solution can be obtained by setting the derivative of the Lagrangian to zero, i.e., $$\begin{aligned}
\frac{\partial}{\partial \a}\mathcal{L}(\a,\lambda)
= (\bsG + \bsG^T) \a +
\begin{bmatrix}
\0\\
\lambda
\end{bmatrix}
= 2 \bsG \a +
\begin{bmatrix}
\0\\
\lambda
\end{bmatrix}
= \0.\end{aligned}$$ Let $\rr = \a(1\!:\!L-1)$, $\lambda = 2 \mu$, and write $\bsG$ and $\a$ as block matrices, then $$\begin{aligned}
\left[
\arraycolsep=1pt\def\arraystretch{1.5}
\begin{array}{c|c}
\bsG(1\!:\!L-1,1\!:\!L-1)
&\bsG(1\!:\!L-1,L)\\
\hline
\bsG(L,1\!:\!L-1)
&\bsG(L,L)
\end{array}
\right]
\begin{bmatrix}
\rr\\
1
\end{bmatrix}
+
\begin{bmatrix}
\0\\
\mu
\end{bmatrix}
= \0.
\end{aligned}$$ In the above equation, observe that $$\begin{aligned}
\left[
\arraycolsep=1pt\def\arraystretch{1.5}
\begin{array}{c|c}
\bsG(1\!:\!L-1,1\!:\!L-1)
&\bsG(1\!:\!L-1,L)
\end{array}
\right]
\begin{bmatrix}
\rr\\
1
\end{bmatrix}
= \0,
\end{aligned}$$ then $$\rr = - \Bigl( \bsG(1\!:\!L-1,1\!:\!L-1) \Bigr)^{-1}
\bsG(1\!:\!L-1,L),$$ and the results follow immediately.
Calculating ${{\bar{\a}}^\dagger}_1$ in Theorem \[theorem:aBarDagger1\] has a complexity order of $O(L^3)$ due to the matrix inversion in the expression of $\bsr$. We note that this complexity order can be reduced to $O(L)$ by the following lemma.
\[lemma:r\] Equation can be expressed in a simpler form as $$\bsr = \frac{\u(L)}{1-\norm{\u(1\!:\!L-1)}^2}\u(1\!:\!L-1),$$ where the “normalized" channel vector $\u$ is defined as $$\label{equation:u}
\u = \sqrt{\frac{P}{1+P\norm{{\bar{\h}}}^2}}{\bar{\h}}.$$
Express $\bsG$ in and $\bsr$ in in terms of $\u$, $$\begin{gathered}
\bsG = \I - \u\u^T,\\
\bsr = \big(\I^{L-1} - \u(1\!:\!L-1)\u^T(1\!:\!L-1)\big)^{-1}\u(L)\u(1\!:\!L-1),\end{gathered}$$ where $\I^{L-1}$ denotes the identity matrix with dimension $L-1$. Since $\big(\I^{L-1} - \u(1\!:\!L-1)\u^T(1\!:\!L-1)\big)\u(1\!:\!L-1) = \big(1-\norm{\u(1\!:\!L-1)}^2\big)\u(1\!:\!L-1)$, the inverse above maps $\u(1\!:\!L-1)$ to $\u(1\!:\!L-1)/\big(1-\norm{\u(1\!:\!L-1)}^2\big)$, and Lemma \[lemma:r\] follows.
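As a quick numerical sanity check (our own code, with an arbitrary example vector), the $O(L)$ closed form of Lemma \[lemma:r\] can be compared against the matrix-inverse expression of Theorem \[theorem:aBarDagger1\]:

```python
# Hypothetical check that the O(L) closed form matches the O(L^3) expression.
import numpy as np

def a_dagger_1(h_bar, P):
    """Closed-form real minimizer of a^T G a subject to a(L) = 1 (Lemma r)."""
    h_bar = np.asarray(h_bar, dtype=float)
    u = np.sqrt(P / (1.0 + P * (h_bar @ h_bar))) * h_bar
    r = (u[-1] / (1.0 - u[:-1] @ u[:-1])) * u[:-1]
    return np.append(r, 1.0)

h_bar, P = np.array([0.1, 1.1, 1.9]), 100.0
G = np.eye(3) - (P / (1.0 + P * (h_bar @ h_bar))) * np.outer(h_bar, h_bar)
r_ref = -np.linalg.solve(G[:-1, :-1], G[:-1, -1])        # matrix-inverse form of eq. (r)
print(np.allclose(a_dagger_1(h_bar, P)[:-1], r_ref))     # True
```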
With Theorems \[theorem:aLinear\] and \[theorem:aBarDagger1\], the $K$ solutions $\{ {{\bar{\a}}^\dagger}_k \}$ to the $K$ QPs in can be easily obtained.
The next step is to quantize the real-valued approximations $\{ {{\bar{\a}}^\dagger}_k \}$ to integer vectors by applying the floor or the ceiling functions to each of the elements. One issue that still remains is how to determine the value of $K$. Intuitively, the larger $K$, the better. Actually, it is sufficient to set $K$ as $$\begin{aligned}
\label{equation:K}
K = \arg\max_{\norm{\floor{{{\bar{\a}}^\dagger}_k}}^2 < 1 + P \norm{{\bar{\h}}}^2} k
\end{aligned}$$ according to the following lemma from [@Nazer2011].
\[lemma:aRange\] For a given channel vector $\h$, the computation rate $\mathcal{R} \left( \h, \a \right)$ is zero if the coefficient vector $\a$ satisfies $$\begin{aligned}
\norm{\a}^2 \geq 1 + P \norm{\h}^2.\end{aligned}$$
\[remark:KPractical\] For high SNR (i.e., large $P$) and large dimensions of ${\bar{\h}}$, $K$ in can be quite large. However, as we will show in the next section, for i.i.d. Gaussian channel entries with high SNR, $K$ can be set to a rather small value without degrading the average computation rate.
In practice, for i.i.d. Gaussian channel entries, we set an upper bound $K_u$ for $K$, which is determined off-line according to the simulation results, such that the simulated average computation rate at 20dB with $K$ being $K_u$ is greater than 99% of that with $K$ being $K_u+1$. We set $K_u$ based on rates at 20dB since the value of $K$ has a greater influence on the rates at larger SNR, and 20dB is the maximum SNR considered in this paper. Then, we set $K$ as the maximum integer that is no greater than $K_u$ while satisfying at the same time, i.e., $$\begin{aligned}
\label{equation:KPractical}
K = \arg\max_{\substack{\norm{\floor{{{\bar{\a}}^\dagger}_k}}^2 < 1 + P \norm{{\bar{\h}}}^2\\
k\leq K_u}} k.
\end{aligned}$$ For implementation, $K$ can be easily determined by using a bi-section search.
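A linear scan version of this rule (our own sketch; the paper uses a bi-section search, which works because the left-hand side of the condition is nondecreasing in $k$ when the entries of ${{\bar{\a}}^\dagger}_1$ are nonnegative) could look as follows.

```python
# Hypothetical selection of K: largest k <= K_u with ||floor(k * a1)||^2 < 1 + P * ||h_bar||^2.
import numpy as np

def choose_K(a1, h_bar, P, K_u=10):
    bound = 1.0 + P * (h_bar @ h_bar)
    K = 1
    for k in range(1, K_u + 1):
        fl = np.floor(k * a1)
        if fl @ fl < bound:
            K = k
        else:
            break      # monotone in k here, so larger k cannot satisfy the bound either
    return K

h_bar = np.array([0.1, 1.1, 1.9])
a1 = np.array([0.05, 0.58, 1.0])   # stand-in for the closed-form solution above
print(choose_K(a1, h_bar, P=100.0))
```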
Quantization
------------
We propose the *successive quantization* algorithm shown in Algorithm \[agorithm:SuccessiveQuantization\] to quantize the $K$ real-valued approximations $\{ {{\bar{\a}}^\dagger}_k \}$ to integer-valued vectors $\{ {{\bar{\a}}^\diamond}_k \}$ that serve as candidates of a suboptimal coefficient vector ${{\bar{\a}}^\diamond}$. For convenience, define $$\begin{aligned}
\label{equation:f}
f(\w) \triangleq \w^T \bsG \w,\end{aligned}$$ where $\w\in\Rbb^L$, and $\bsG$ is defined in with $\h$ being nonnegative ordered. Also, let $\floor{\w}_\ell$ and $\ceil{\w}_\ell$ be the vectors generated from $\w$ by applying the floor and the ceiling operations on the $\ell$-th element only, respectively.
[Algorithm \[agorithm:SuccessiveQuantization\] (successive quantization): initialize ${{\bar{\a}}^\diamond}_k\leftarrow {{\bar{\a}}^\dagger}_k$, then quantize each remaining element to its floor or its ceiling according to the condition in line \[line:FloorCondition\].]
To simplify the inequality condition $f\left(\floor{{{\bar{\a}}^\dagger}_k}_\ell\right) < f\left(\ceil{{{\bar{\a}}^\dagger}_k}_\ell\right)$ in line \[line:FloorCondition\] of Algorithm \[agorithm:SuccessiveQuantization\], we first introduce the following lemma.
\[lemma:FloorCondition\] For the function $f(\w) = \w^T \bsG \w$ defined in where $\bsG^T = \bsG$, the inequality condition $f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ is equivalent to $$\begin{aligned}
\label{equation:FloorCondition}
2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell) > 0.\end{aligned}$$
$f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ implies $\floor{\w}_\ell \neq \ceil{\w}_\ell$, i.e., $\w(\ell)$ is not an integer. Let $\e_\ell \in \Rbb^L$ be the vector with only one nonzero element $\e_\ell(\ell) = 1$, then $\ceil{\w}_\ell = \floor{\w}_\ell + \e_\ell$, and $$\begin{aligned}
&f\left(\ceil{\w}_\ell\right)
= \left(\ceil{\w}_\ell\right)^T \bsG \ceil{\w}_\ell\\
&= \left( \floor{\w}_\ell + \e_\ell \right)^T \bsG \left( \floor{\w}_\ell + \e_\ell \right)\\
&= \left(\floor{\w}_\ell\right)^T \bsG \floor{\w}_\ell
+ \left(\floor{\w}_\ell\right)^T \bsG \e_\ell
+ \e_\ell^T \bsG \floor{\w}_\ell
+ \e_\ell^T \bsG \e_\ell\\
&= f\left(\floor{\w}_\ell\right)
+ \left(\floor{\w}_\ell\right)^T \bsG(:,\ell)
+ \bsG(\ell,:) \floor{\w}_\ell
+ \bsG(\ell,\ell)\\
&= f\left(\floor{\w}_\ell\right) + 2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell).
\end{aligned}$$ Obviously, $f\left(\floor{\w}_\ell\right) < f\left(\ceil{\w}_\ell\right)$ is equivalent to $$\begin{aligned}
2 \left(\floor{\w}_\ell\right)^T \bsG(:,\ell) + \bsG(\ell,\ell) > 0.\tag*{\qedhere}\end{aligned}$$
\[lemma:FloorCondition2\] With Lemma \[lemma:FloorCondition\], the inequality condition $f\left(\floor{{{\bar{\a}}^\dagger}_k}_\ell\right) < f\left(\ceil{{{\bar{\a}}^\dagger}_k}_\ell\right)$ in line \[line:FloorCondition\] of Algorithm \[agorithm:SuccessiveQuantization\] can be simplified as $$\begin{aligned}
2\floor{{{\bar{\a}}^\dagger}_k(\ell)} - 2\left(\left(\floor{{{\bar{\a}}^\dagger}_k}_\ell\right)^T\u\right)\u(\ell) + 1 - \u(\ell)^2 > 0,\end{aligned}$$ where $\u$ is the normalized channel vector as defined in .
The proof is straightforward by writing $\bsG$ in terms of $\u$, and thus is omitted here.
After the quantization, a suboptimal coefficient vector ${{\bar{\a}}^\diamond}$ for ${\bar{\h}}$ is obtained with $$\begin{aligned}
\label{equation:aBarDiamond}
{{\bar{\a}}^\diamond}= \arg \min_{ \a \in \{ {{\bar{\a}}^\diamond}_k \} } \a^T \bsG \a.
\end{aligned}$$
Finally, a suboptimal coefficient vector ${\a^\diamond}$ for the original channel vector $\h$ is recovered from ${{\bar{\a}}^\diamond}$ according to Remark \[remark:Transformation\].
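The following sketch reflects one reading of the quantization and selection stage (the algorithm listing is only summarized above, so the element ordering and update rule here are our assumptions); for clarity it evaluates $f$ from scratch, whereas Lemma \[lemma:FloorCondition2\] allows the floor-versus-ceiling test to be done in $O(1)$ per element.

```python
# Hypothetical successive quantization and candidate selection (not the authors' code).
import numpy as np

def f(w, G):
    return w @ G @ w

def successive_quantize(a_real, G):
    """Quantize one real approximation element by element (floor vs. ceiling)."""
    a = np.asarray(a_real, dtype=float).copy()
    for l in range(len(a) - 1):                   # the last entry is already the integer k
        lo, hi = a.copy(), a.copy()
        lo[l], hi[l] = np.floor(a[l]), np.ceil(a[l])
        a = lo if f(lo, G) < f(hi, G) else hi
    return a.astype(int)

def select_candidate(a1, K, G):
    """Quantize the K scaled approximations and keep the one minimizing a^T G a."""
    candidates = [successive_quantize(k * a1, G) for k in range(1, K + 1)]
    return min(candidates, key=lambda a: f(a, G))
```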
We summarize our proposed *QP relaxation* method in Algorithm \[agrm:QPRApproachOutline\]. The pseudocode is shown in Algorithm \[algorithm:QPRApproachCode\], where the function $[\bar{\w},\p]={\rm sort}(\w)$ sorts the elements in $\w$ in ascending order, returns the sorted vector $\bar{\w}$, and stores the original indices of the elements as the vector $\p$, and the function ${\rm floor}(\w)$ applies the floor operation to each element of $\w$ and returns the resulting integer vector.
1. \[item:outline:Preprocess\] Preprocess $\h$ to the nonnegative ordered ${\bar{\h}}$ with Remark \[remark:Transformation\].
2. \[item:outline:CalculateaBarDagger1\] Calculate ${{\bar{\a}}^\dagger}_1$, with Theorem \[theorem:aBarDagger1\] and Lemma \[lemma:r\].
3. \[item:outline:DetermineK\] Determine $K$ with .
4. \[item:outline:CalculateaBarDaggerk\] Calculate $\{ {{\bar{\a}}^\dagger}_k \}$, i.e., the real-valued approximations of the optimal coefficient vector for ${\bar{\h}}$, using Theorem \[theorem:aLinear\].
5. \[item:outline:Quantize\] Quantize $\{ {{\bar{\a}}^\dagger}_k \}$ to integer-valued vectors $\{ {{\bar{\a}}^\diamond}_k \}$ with Algorithm \[agorithm:SuccessiveQuantization\].
6. \[item:outline:Select\] Select a vector from $\{ {{\bar{\a}}^\diamond}_k \}$ to be a suboptimal coefficient vector ${{\bar{\a}}^\diamond}$ for ${\bar{\h}}$ using .
7. \[item:outline:Recover\] Recover a suboptimal coefficient vector ${\a^\diamond}$ for $\h$ from ${{\bar{\a}}^\diamond}$ according to Remark \[remark:Transformation\].
[Algorithm \[algorithm:QPRApproachCode\] (pseudocode of the QPR method): preprocess $\bst\leftarrow{\rm sign}(\h)$, $({\bar{\h}},\p)\leftarrow{\rm sort}({\rm abs}(\h))$, $b\leftarrow 1 + P\norm{\h}^2$; compute $\u\leftarrow (P/b)^{1/2}{\bar{\h}}$, $\bsr\leftarrow \frac{\u(L)}{1-\norm{\u(1\!:\!L-1)}^2}\u(1\!:\!L-1)$, ${{\bar{\a}}^\dagger}_1(1\!:\!L-1)\leftarrow \bsr$, ${{\bar{\a}}^\dagger}_1(L)\leftarrow 1$; determine $K$ by a bi-section search initialized with $K_l\leftarrow 1$; initialize ${{\bar{\a}}^\diamond}(1\!:\!L-1)\leftarrow \0$, ${{\bar{\a}}^\diamond}(L)\leftarrow 1$, $f_{\min}\leftarrow \norm{{{\bar{\a}}^\diamond}}^2 - \big(({{\bar{\a}}^\diamond})^T\u\big)^2$; then quantize, select, and recover as in Algorithm \[agrm:QPRApproachOutline\].]
Complexity Analysis
-------------------
Here we analyze the complexity of our algorithm, in terms of the number of flops required. Referring to the outline in Algorithm \[agrm:QPRApproachOutline\], the processing of $\h$ in step \[item:outline:Preprocess\] involves recording the signs of the elements and sorting the elements, and takes $O(L\log(L))$ flops. Calculating ${{\bar{\a}}^\dagger}_1$ in step \[item:outline:CalculateaBarDagger1\] has a complexity of $O(L)$. For the bi-section search applied to determine $K$ in step \[item:outline:DetermineK\], the maximum number of loops required to execute is $O(\log(K_u))$, the number of flops in each loop is $O(L)$, and thus the maximum cost is $O(\log(K_u)L)$. Step \[item:outline:CalculateaBarDaggerk\] takes $O(KL)$ flops. By introducing appropriate temporary variables $b$ and $d$ as shown in Algorithm \[algorithm:QPRApproachCode\], the successive quantization of a real-valued approximation ${{\bar{\a}}^\dagger}_k$ can be implemented in an efficient way in $O(L)$ flops. Thus, the complexity of quantizing all the $K$ real-valued approximations is $O(KL)$. Selecting a coefficient vector from the quantized vector set in step \[item:outline:Select\] has a cost of $O(KL)$. Step \[item:outline:Recover\] takes $O(L)$ flops. In summary, the complexity of the method is $O(L(\log(L)+\log(K_u)+K))$.
However, the complexity expression above involves the experiment-based $K_u$, whose exact order with respect to the dimension $L$ is difficult to characterize. As an alternative, we use an upper bound to approximate the cost. According to , it is easy to see that $K$ and $K_u$ are at most of order $O(\sqrt{P\norm{\h}^2})$. Then, the complexity of our method is $O(L(\log(L)+\sqrt{P\norm{\h}^2}))$. We retain the power $P$ in the expression since we may also care about how the complexity varies when the SNR gets large.
In the complexity expression above, since the square root function is strictly concave, it follows from Jensen’s inequality that $\mathbb{E}(\sqrt{\norm{\h}^2})\leq\sqrt{\mathbb{E}(\norm{\h}^2)}$. Specifically, for i.i.d. standard Gaussian channel entries, the expectation of $\norm{\h}^2$ is $L$, and thus the corresponding average complexity of the proposed method becomes $O(L\log(L)+P^{0.5}L^{1.5})$. It is easy to see that the complexity is of order 1.5 with respect to the dimension $L$.
Extension to the Complex-Valued Channel Model
---------------------------------------------
We now consider the complex-valued channel model of the AWGN networks, and demonstrate how to apply the proposed QP relaxation method for complex-valued channels. The complex-valued channel model is defined as follows.
\[definition:ComplexChannelModel\] *(Complex-Valued Channel Model)* In an AWGN network, each relay (indexed by $m=1,2,\cdots,M$) observes a noisy linear combination of the transmitted signals through the channel, $$\begin{aligned}
\label{equation:ComplexChannelModel}
\y_m = \sum_{\ell=1}^L \h_m(\ell)\x_\ell + \z_m,\end{aligned}$$ where $\x_\ell\in\Cbb^n$ with the power constraint $\frac{1}{n}\norm{\x_\ell}^2 \leq P$ is the transmitted codeword from source $\ell$ ($\ell = 1,2,\cdots,L$), $\h_m\in\Cbb^L$ is the channel vector to relay $m$, $\z_m\in\Cbb^n$ is the noise vector with entries being i.i.d. Gaussian, i.e., $\z_m\sim\mathcal{CN}\!\left(\0,\I\right)$, and $\y_m$ is the signal received at relay $m$.
Similar to what we have done for the real-valued channel model, we will focus on one relay, and ignore the subscript “$m$” for notational convenience.
Writing the summation in in the vector product form, becomes $$\begin{aligned}
\label{equation:ComplexChannelModelVectorForm}
\y = [\x_1,\x_2,\cdots,\x_L]\h + \z.\end{aligned}$$ It is well-known that a complex-valued channel model can be written in its real-valued equivalent form. Let $\Re(\w)$ denote the vector composed of the real part of $\w$, and $\Im(\w)$ denote the vector composed of the imaginary part of $\w$. The complex-valued equation has the following real-equivalent form $$\small
\begin{aligned}
\label{equation:ComplexChannelModelRealForm}
[\Re(\y),\Im(\y)]=[\Re(\x_1),\Re(\x_2),\cdots,\Re(\x_L),\Im(\x_1),\Im(\x_2),\cdots,\Im(\x_L)]
\times
\begin{bmatrix}
\Re(\h),\Im(\h)\\
-\Im(\h),\Re(\h)
\end{bmatrix}
+ [\Re(\z),\Im(\z)].
\end{aligned}$$ It is obvious that $$\small
\begin{aligned}
\label{equation:ComplexChannelModelRealForm2}
\Re(\y)&=[\Re(\x_1),\Re(\x_2),\cdots,\Re(\x_L), \Im(\x_1),\Im(\x_2),\cdots,\Im(\x_L)]
\times
\begin{bmatrix}
\Re(\h)\\
-\Im(\h)
\end{bmatrix}
+ \Re(\z),\\
\Im(\y)&=[\Re(\x_1),\Re(\x_2),\cdots,\Re(\x_L), \Im(\x_1),\Im(\x_2),\cdots,\Im(\x_L)]
\times
\begin{bmatrix}
\Im(\h)\\
\Re(\h)
\end{bmatrix}
+ \Im(\z).
\end{aligned}$$ Then we can view $\begin{bmatrix}
\Re(\h)\\
-\Im(\h)
\end{bmatrix}$ and $\begin{bmatrix}
\Im(\h)\\
\Re(\h)
\end{bmatrix}$ as two $2L$-dimensional real-valued channels, and view $\Re(\x_\ell)$ and $\Im(\x_\ell)$ as two independent $n$-dimensional real-valued transmitted codewords. Assume equal power allocation on the real part and the imaginary part of each transmitted codeword, i.e., $\norm{\Re(\x_\ell)}^2=\norm{\Im(\x_\ell)}^2$, then the power constraint of each real-valued transmitted codeword is $\frac{1}{n}\norm{\Re(\x_\ell)}^2\leq\frac{1}{2}P$ and $\frac{1}{n}\norm{\Im(\x_\ell)}^2\leq\frac{1}{2}P$.
Based on the above interpretation, we can apply the proposed QP relaxation method to each of the two $2L$-dimensional channels to find the corresponding coefficient vectors. It should be noted that we only need to find the coefficient vector for one of the $2L$-dimensional channels, which saves half of the computation cost. Let $\a$ be a Gaussian integer vector, and assume the found coefficient vector for $\begin{bmatrix}
\Re(\h)\\
-\Im(\h)
\end{bmatrix}$ is $\begin{bmatrix}
\Re(\a)\\
-\Im(\a)
\end{bmatrix}$. Then, according to Theorem \[theorem:ProblemTransformation\], the coefficient vector for $\begin{bmatrix}
\Im(\h)\\
\Re(\h)
\end{bmatrix}$ is $\begin{bmatrix}
\Im(\a)\\
\Re(\a)
\end{bmatrix}$. In this sense, for each complex-valued channel vector $\h$, we can find a Gaussian integer vector as the best coefficient vector $\a$ using the QP relaxation method.
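For illustration, the embedding into a real-valued channel and the reassembly of the Gaussian-integer coefficient vector can be written as follows (our own sketch; recall that the per-codeword power becomes $P/2$ in the real-valued model).

```python
# Hypothetical helpers for the complex-to-real reduction described above.
import numpy as np

def complex_to_real_channel(h):
    """Embed h in C^L as the 2L-dimensional real channel [Re(h); -Im(h)]."""
    h = np.asarray(h, dtype=complex)
    return np.concatenate([h.real, -h.imag])

def real_to_gaussian_integer(a_real):
    """Map a real coefficient vector of the form [Re(a); -Im(a)] back to a = Re(a) + i Im(a)."""
    L = len(a_real) // 2
    return a_real[:L] - 1j * a_real[L:]

h = np.array([1.2 - 0.4j, -0.3 + 0.9j])
print(complex_to_real_channel(h))        # [ 1.2 -0.3  0.4 -0.9]
```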
Numerical Results {#section:NumericalResults}
=================
[Figure: Average computation rate for $L=4$ using our QPR method with different $K$.]{data-label="figure:K_L4"}
In this section, we present some numerical results to demonstrate the effectiveness and efficiency of our QP relaxation approach. As explained before, finding the coefficient vector for a complex-valued channel can be transformed into finding the coefficient vector for a real-valued channel. Thus, we focus on the real-valued channels here. We consider the case where the entries of the channel vector $\h$ are i.i.d. standard Gaussian, i.e., $\h\sim\mathcal{N}(\0,\I)$. In our simulations, the dimension $L$ ranges from 2 to 16, and the power $P$ ranges from 0dB to 20dB. For a given dimension and a given power, we randomly generate $10000$ instances of the channel vector, apply the QP relaxation method to find the coefficient vectors, and calculate the corresponding average computation rate.
We first show that, as stated in Remark \[remark:KPractical\], for high dimension and large power, the number of real-valued approximations $K$ can be set to a rather small value without noticeably degrading the rate. As shown in Figure \[figure:K\_L4\], for dimension $L=4$ and power $P$ from 0dB to 20dB, the average computation rate quickly converges as $K$ increases from 1 to 4. Further increasing $K$ up to 10 incurs additional computational cost with little improvement in the average computation rate.
With the above observation, it is reasonable to introduce the upper bound $K_u$ for $K$, and adopt the criterion in to determine $K$. $K_u$ can be calculated off-line by simulations prior to applying the method, which incurs no additional processing complexity in real-time. The values of $K_u$ according to the simulation results are listed in Table \[table:Ku\].
  --------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
  $L$          2    3    4    5    6    7    8    9   10   11   12   13   14   15   16
  $K_u$        2    3    4    5    5    5    6    6    6    6    7    6    6    6    4
  --------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----

  : Values of $K_u$ determined from the simulation results.[]{data-label="table:Ku"}
We then show the effectiveness of our method by comparing the average computation rate with those of other existing methods. The methods covered include the following.
- Our QP relaxation (QPR) method that gives the suboptimal solution.
- The branch-and-bound (BnB) method proposed by Richter [*et al.*]{} in [@Richter2012] that provides the optimal solution.
- The method developed by Sahraei and Gastpar in [@Sahraei2014] that finds the optimal solution with an average-case complexity of $O(P^{0.5}L^{2.5})$ for i.i.d. Gaussian channel entries. We refer to this method as the “SG" method for short.
- The LLL method proposed by Sakzad [*et al.*]{} in [@Sakzad2012], which is based on the LLL lattice reduction (LR) algorithm. The parameter $\delta$ in the LLL LR algorithm is set as 0.75 since further increasing $\delta$ towards 1 achieves little gain in the computation rate but requires more computational effort. Although the LLL LR algorithm has known average complexity for some cases [@Daude1994; @Ling2007], its average complexity for our case is unknown, and the worst-case complexity could be unbounded [@Jalden2008].
- The quantized search (QS) method developed by Sakzad [*et al.*]{} in [@Sakzad2012]. The search consists of two phases: 1) an integer $\alpha_0$ between 1 and $\floor{P^{1/2}}$ that provides the maximum rate is selected as the initial value of the amplifying factor $\alpha$; 2) the amplifying factor is then refined by searching in $[\alpha_0-1,\alpha_0+1]$ with a step size 0.1. After the amplifying factor $\alpha$ is determined, the coefficient vector $\a$ is set as $\round{\alpha\h}$. An improved version of the QS method is the quantized exhaustive search (QES) method proposed in [@Sakzad2014], which was developed for complex-valued channels.
- The rounding method that simply sets the coefficient vector by rounding the channel vector to an integer-valued vector.
As shown in Figure \[figure:Rate\], the optimal methods, i.e., the BnB method and the SG method, always provide the highest average computation rates for all dimensions and over the whole SNR regime, as expected. The LLL method provides close-to-optimal average computation rates. Our proposed QPR method also offers close-to-optimal average computation rates for almost all the dimensions and SNR values considered, except that its performance degrades slightly for high dimensions at high SNR as shown in Figure \[figure:Rate\_L16\]. The performance of our QPR method improves slightly compared with the version we presented in [@Zhou2014; @Wen2015]. The reason is that here we initialize the output coefficient vector as $[0,\cdots,0,1]^T$, which is guaranteed to result in a non-zero computation rate, while in the previous version the output coefficient vector could result in a zero computation rate.
Finally, we demonstrate the efficiency of the proposed QPR method by comparing the running time of finding the coefficient vectors for 10000 channel vector samples. The methods considered include those that provide optimal rates and close-to-optimal rates, i.e., the SG method, the BnB method, the LLL method, and our QPR method. The running time varies for different SNR values, and thus we compare the running time with $P$ being 0dB, 10dB and 20dB. As shown in Figure \[figure:Time\], the proposed QPR method is much more efficient than all the other methods, especially for high dimensions. Specifically, the running time of the optimal methods can be an order of magnitude larger than that of the QPR method.
In summary, for i.i.d. Gaussian channel entries, our proposed QPR method offers close-to-optimal average computation rates with a much lower complexity than that of the existing optimal and close-to-optimal methods.
Conclusions {#section:Conclusions}
===========
In this paper, we considered the compute-and-forward network coding design problem of finding the optimal coefficient vector that maximizes the computation rate at a relay, and developed the quadratic programming (QP) relaxation method that finds a high quality suboptimal solution. We first revealed some useful properties of the problem, and relaxed the problem to a series of QPs. We then derived the closed-form solutions of the QPs, which is the key to the efficiency of our method, and proposed a successive quantization algorithm to quantize the real-valued solutions to integer vectors that serve as candidates of the coefficient vector. Finally, the candidate that maximizes the computation rate is selected as the best coefficient vector. For $L$-dimensional channel vectors with i.i.d. Gaussian entries, the average-case complexity of the proposed QP relaxation method is of order 1.5 with respect to the dimension $L$. Numerical results demonstrated that our QP relaxation method offers close-to-optimal computation rates, and is much more computationally efficient than the existing methods that provide the optimal computation rates as well as the LLL method that also provides close-to-optimal computation rates.
[^1]: Baojian Zhou and Wai Ho Mow are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, NT, Hong Kong (e-mail: {bzhouab, eewhmow}@ust.hk). Baojian Zhou was supported by a grant from University Grants Committee of the Hong Kong Special Administrative Region, China (Project No. AoE/E-02/08).
[^2]: Jinming Wen is with the Laboratoire de l’Informatique du Parallélisme, (CNRS, ENS de Lyon, Inria, UCBL), Université de Lyon, Lyon 69007, France (e-mail: jwen@math.mcgill.ca). Jinming Wen was supported in part by ANR through the HPAC project under Grant ANR 11 BS02 013.
---
abstract: 'In this paper, we study the properties of Diophantine exponents $w_n$ and $w_n^{*}$ for Laurent series over a finite field. We prove that for an integer $n\geq 1$ and a rational number $w>2n-1$, there exist a strictly increasing sequence of positive integers $(k_j)_{j\geq 1}$ and a sequence of algebraic Laurent series $(\xi _j)_{j\geq 1}$ such that $\deg \xi _j =p^{k_j}+1$ and $$w_1(\xi _j)=w_1 ^{*}(\xi _j)=\ldots =w_n(\xi _j)=w_n ^{*}(\xi _j)=w$$ for any $j\geq 1$. For each $n\geq 2$, we give explicit examples of Laurent series $\xi $ for which $w_n(\xi )$ and $w_n^{*}(\xi )$ are different.'
address: 'Graduate School of Pure and Applied Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki, 305-8571, Japan'
author:
- Tomohiro Ooto
title: On Diophantine exponents for Laurent series over a finite field
---
Introduction {#sec:intro}
============
Mahler [@Mahler32] and Koksma [@Koksma39] introduced Diophantine exponents which measure the quality of approximation to real numbers. Using the Diophantine exponents, they classified the set $\lR$ of all real numbers. Let $\xi $ be a real number and $n\geq 1$ be an integer. We denote by $w_n(\xi )$ the supremum of the real numbers $w$ which satisfy $$0<|P(\xi )|\leq H(P)^{-w}$$ for infinitely many integer polynomials $P(X)$ of degree at most $n$. Here, $H(P)$ is defined to be the maximum of the absolute values of the coefficients of $P(X)$. We denote by $w_n ^{*}(\xi )$ the supremum of the real numbers $w^{*}$ which satisfy $$0<|\xi -\alpha |\leq H(\alpha )^{-w^{*}-1}$$ for infinitely many algebraic numbers $\alpha $ of degree at most $n$. Here, $H(\alpha )$ is equal to $H(P)$, where $P(X)$ is the minimal polynomial of $\alpha $ over $\lZ$.
We recall some results on Diophantine exponents. It is clear that $w_1(\xi )=w_1 ^{*}(\xi )$ for all real numbers $\xi $. Roth [@Roth55] established that $w_1(\xi )=w_1^{*}(\xi )=1$ for all irrational algebraic real numbers $\xi $. Furthermore, it follows from the Schmidt Subspace Theorem that $$\label{eq:SST}
w_n(\xi )=w_n ^{*}(\xi )=\min \{ n,d-1\}$$ for all $n\geq 1$ and algebraic real numbers $\xi $ of degree $d$. It is known that $$0\leq w_n(\xi )-w_n ^{*}(\xi )\leq n-1$$ for all $n\geq 1$ and real numbers $\xi $ (see Section 3.4 in [@Bugeaud04]). Sprindzuk [@Sprindzuk69] proved that $w_n(\xi )=w_n ^{*}(\xi )=n$ for all $n\geq 1$ and almost all real numbers $\xi $. Baker [@Baker76] proved that for $n\geq 2$, there exists a real number $\xi $ for which $w_n(\xi )$ and $w_n^{*}(\xi )$ are different. More precisely, he proved that the set of all values taken by the function $w_n-w_n^{*}$ contains the set $[0,(n-1)/n]$ for $n\geq 2$. In recent years, this result has been improved. Bugeaud [@Bugeaud12; @Bugeaud042] showed that the set of all values taken by $w_2-w_2^{*}$ is equal to the closed interval $[0,1]$ and the set of all values taken by $w_3-w_3^{*}$ contains the set $[0,2)$. Bugeaud and Dujella [@Bugeaud11] proved that for any $n\geq 4$, the set of all values taken by $w_n-w_n^{*}$ contains the set $\left[ 0,\frac{n}{2}+\frac{n-2}{4(n-1)}\right) $.
Let $p$ be a prime. We can define Diophantine exponents $w_n$ and $w_n^{*}$ over the field $\lQ _p$ of $p$-adic numbers in a similar way to the real case. Analogues of the above results for $p$-adic numbers have been studied (see e.g. Section 9.3 in [@Bugeaud04] and [@Bugeaud15; @Pejkovic12]).
Let $p$ be a prime and $q$ be a power of $p$. Let us denote by $\lF _q$ the finite field of $q$ elements, $\lF _q[T]$ the ring of all polynomials over $\lF _q$, $\lF _q(T)$ the field of all rational functions over $\lF _q$, and $\lF _q((T^{-1}))$ the field of all Laurent series over $\lF _q$. For $\xi \in \lF _q((T^{-1})) \setminus \{ 0\}$, we can write $$\xi = \sum _{n=N}^{\infty }a_n T^{-n},$$ where $N\in \lZ $, $a_n \in \lF _q$ for all $n\geq N$, and $a_N \neq 0$. We define an absolute value on $\lF _q ((T^{-1}))$ by $|0|:=0$ and $|\xi |:=q^{-N}$. This absolute value can be uniquely extended to the algebraic closure of $\lF _q((T^{-1}))$ and we continue to write $|\cdot |$ for the extended absolute value. We call an element of $\lF _q((T^{-1}))$ an [*algebraic Laurent series*]{} if the element is algebraic over $\lF _q(T)$. We can define Diophantine exponents $w_n$ and $w_n^{*}$ for Laurent series over a finite field in a similar way to the real case.
Mahler [@Mahler49] proved that an analogue of the Roth Theorem in this framework does not hold, that is, there exists an algebraic Laurent series $\xi $ such that $w_1(\xi )>1$. Indeed, let $r$ be a power of $p$ and put $\xi :=\sum _{n=0}^{\infty }T^{-r^n}$. Then $\xi $ is an algebraic Laurent series of degree $r$ with $w_1(\xi )=r-1$. After that, several people investigated algebraic Laurent series for which the analogue of the Roth Theorem does not hold (see e.g. [@Chen13; @Firicel13; @Mathan92; @Schmidt00; @Thakur99]). Furthermore, Thakur [@Thakur11; @Thakur13] constructed explicit algebraic Laurent series for which the analogue of for $n=r+1$ does not hold, where $r$ is a power of $p$. For example, for integers $m,n\geq 2$, he constructed algebraic Laurent series $\alpha _{m,n}$, with explicit equations and continued fractions, such that $$\deg \alpha _{m,n}\leq r^m+1,\quad \lim_{n\rightarrow \infty }E_1(\alpha _{m,n})=2,\quad \lim_{n\rightarrow \infty }E_{r+1}(\alpha _{m,n})\geq r^{m-1}+\frac{r-1}{(r+1)r}.$$ Here, $E_n(\xi )$ measures the quality of approximations to $\xi $ by algebraic Laurent series of degree $n$ (see Section \[sec:mainresult\] for the precise definition and relation between $E_n$ and $w_n^{*}$). Since $w_{r+1}^{*}(\alpha _{m,n})+1\geq E_{r+1}(\alpha_{m,n})$, we obtain $$\lim_{n\rightarrow \infty }w_{r+1}^{*}(\alpha _{m,n})\geq r^{m-1}-\frac{r^2+1}{(r+1)r},$$ which implies that an analogue of does not hold for $\alpha _{m,n}$ with sufficiently large $m,n$. In this paper, we investigate the phenomenon that properties of the Diophantine exponents in characteristic zero are different from those in positive characteristic. More precisely, for an integer $n\geq 1$, we construct algebraic Laurent series $\xi $ such that $E_1(\xi )$ is large and $$E_1(\xi ) = \max\{ E_m(\xi )\mid 1\leq m\leq n \} .$$
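To make Mahler's example concrete, here is the standard verification (ours, included for the reader's convenience) that $\xi =\sum _{n=0}^{\infty }T^{-r^n}$ is algebraic of degree at most $r$ over $\lF _q(T)$: since $r$ is a power of $p$, the Frobenius map gives $$\xi ^r=\Bigl( \sum _{n=0}^{\infty }T^{-r^n}\Bigr) ^r =\sum _{n=0}^{\infty }T^{-r^{n+1}}=\xi -T^{-1},$$ so $\xi $ is a root of $TX^r-TX+1$. Moreover, truncating the series after the term $T^{-r^N}$ yields a rational approximation with denominator $T^{r^N}$ and error of absolute value $q^{-r^{N+1}}$, which already gives the lower bound $w_1(\xi )\geq r-1$.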
The author [@Ooto17] showed that there exists $\xi \in \lF _q((T^{-1}))$ for which $w_2(\xi )$ and $w_2^{*}(\xi )$ are different. In this paper, improving the method in [@Ooto17], we solve the problem of whether or not there exists $\xi \in \lF _q((T^{-1}))$ for which $w_n(\xi )$ and $w_n^{*}(\xi )$ are different for any $n\geq 3$.
This paper is organized as follows: We state the main results on the Diophantine exponents $w_n$ and $w_n ^{*}$ for Laurent series over a finite field in Section \[sec:mainresult\]. In Section \[sec:pre\], we prepare some lemmas in order to prove the main results. We collect the proofs of the main results in Section \[sec:mainproof\]. It is well-known that finite automatons relate to algebraic Laurent series. In Section \[sec:conrem\], we give properties of the Diophantine exponents for Laurent series whose coefficients are generated by finite automatons. We also give analogues of the main results for real and $p$-adic numbers.
Main results {#sec:mainresult}
============
In this section, we state the main results about the Diophantine exponents for Laurent series over a finite field and give some problems associated to the main results.
We use the following notation throughout this paper. We denote by $\lfloor x\rfloor $ the integer part of a real number $x$. We use the Vinogradov notation $A\ll B$ if $|A|\leq c |B|$ for some constant $c>0$. We write $A\asymp B$ if $A\ll B$ and $B\ll A$. For a finite word $W$, we put $\overline{W}:=WW\cdots W\cdots $ (infinitely many concatenations of the word $W$). An infinite word ${\bf a}=a_0a_1\cdots $ is called [*ultimately periodic*]{} if there exist a finite word $U$ and a non-empty finite word $V$ such that ${\bf a}=U\overline{V}$. We identify a sequence $(a_n)_{n\geq 0}$ with the infinite word $a_0 a_1 \cdots a_n \cdots $.
We denote by $(\lF _q[T])[X]$ the set of all polynomials in $X$ over $\lF_q[T]$. The [*height*]{} of a polynomial $P(X)\in (\lF _q [T])[X]$, denoted by $H(P)$, is defined to be the maximum of the absolute values of the coefficients of $P(X)$. We denote by $(\lF _q[T])[X]_{\min }$ the set of all non-constant, irreducible, primitive polynomials in $(\lF _q[T])[X]$ whose leading coefficients are monic polynomials in $T$. For $\alpha \in \overline{\lF _q(T)}$, there exists a unique $P(X)\in (\lF _q[T])[X]_{\min }$ such that $P(\alpha )=0$. We call the polynomial $P(X)$ the [*minimal polynomial*]{} of $\alpha $. The [*height*]{} (resp. the [*degree*]{}, the [*inseparable degree*]{}) of $\alpha $, denoted by $H(\alpha )$ (resp. $\deg \alpha $, $\operatorname{insep}\alpha $), is defined to be the height of $P(X)$ (resp. the degree of $P(X)$, the inseparable degree of $P(X)$). We now define the Diophantine exponents for Laurent series over a finite field. Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$. We denote by $w_n(\xi )$ (resp. $w_n^{*}(\xi )$) the supremum of the real numbers $w$ (resp. $w^{*}$) which satisfy $$0<|P(\xi )|\leq H(P)^{-w}\quad (\text{resp.\ } 0<|\xi -\alpha |\leq H(\alpha )^{-w^{*}-1})$$ for infinitely many $P(X)\in (\lF _q[T])[X]$ of degree at most $n$ (resp. $\alpha \in \overline{\lF _q(T)}$ of degree at most $n$). It is clear that $w_1(\xi )=w_1 ^{*}(\xi )$ for all $\xi \in \lF _q((T^{-1}))$.
Let $n\geq 1$ be an integer and let $\xi \in \lF_q((T^{-1}))$ be a Laurent series which is not algebraic of degree at most $n$. We denote by $E_n(\xi )$ the supremum of the real numbers $w$ which satisfy $$0<|\xi -\alpha |\leq H(\alpha )^{-w}$$ for infinitely many algebraic Laurent series $\alpha \in \lF _q((T^{-1}))$ of degree $n$. It is obvious that $$\label{eq:e1}
E_1(\xi )=w_1(\xi )+1=w_1 ^{*}(\xi )+1$$ for all irrational Laurent series $\xi \in \lF _q((T^{-1}))$. It is also obvious that $$\label{eq:e12}
w_n^{*}(\xi )+1\geq \max\{E_m(\xi )\mid 1\leq m\leq n\}$$ for all $\xi \in \lF _q((T^{-1}))$ which are not algebraic of degree at most $n$.
As in the classical continued fraction theory of real numbers, if $\xi \in \lF _q((T^{-1}))$, then we can write $$\xi = a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{\cdots }}},$$ where $a_0, a_n \in \lF _q[T]$ and $\deg a_n\geq 1$ for all $n\geq 1$. We write $\xi =[a_0,a_1,a_2,\ldots ]$. We call $a_0 $ and $a_n $ the [*partial quotients*]{} of $\xi $.
Schmidt [@Schmidt00] and Thakur [@Thakur99] studied the Diophantine exponent $w_1$ for algebraic Laurent series. Schmidt [@Schmidt00] introduced a classification of algebraic Laurent series as follows: Let $\alpha $ be in $\lF _q((T^{-1}))\setminus \lF _q(T)$. We say that $\alpha $ is of [*Class I*]{} (resp. [*Class IA*]{}) if there exist an integer $s\geq 0$ and $R,S,T,U \in \lF _q[T]$ with $RU-ST \neq 0$ (resp. $RU-ST \in \lF _q ^{\times }$) such that $$\alpha =\frac{R \alpha ^{p^s} +S}{T \alpha ^{p^s}+U}.$$ For example, any quadratic Laurent series is of Class IA. Mathan [@Mathan92] proved that the value of $w_1$ for a Laurent series of Class I is rational. However, it is not known whether or not there exists an algebraic Laurent series for which the value of $w_1$ is irrational. Let $r$ be a power of $p$. Schmidt [@Schmidt00] and Thakur [@Thakur99] independently proved that for any rational number $1<w\leq r$, there exists an algebraic Laurent series $\xi \in \lF _q((T^{-1}))$ of degree at most $r+1$ such that $w_1(\xi )=w$. It is well-known that $w_1(\xi )=1$ and $w_1(\eta )\geq 1$ hold for any quadratic Laurent series $\xi \in \lF _q((T^{-1}))$ and irrational Laurent series $\eta \in \lF _q((T^{-1}))$ (see e.g. Theorem 5.1 in [@Ooto17] and Lemma \[lem:alg\]). Therefore, we deduce that the set of all values taken by $w_1$ over the set of all Laurent series of Class IA is equal to the set of all rational numbers greater than or equal to $1$. Chen [@Chen13] refined Schmidt and Thakur’s result by showing that the degree of $\xi $ can be taken to be equal to $r+1$.
We partially extend Chen’s result to $w_n$ and $w_n ^{*}$ for $n\geq 2$.
\[thm:mainalg\] Let $d\geq 1$ be an integer and $w>2d-1$ be a rational number. Then there exist a strictly increasing sequence of positive integers $(k_j)_{j\geq 1}$ and a sequence $(\xi _j)_{j\geq 1}$ such that, for any $j\geq 1$, $\xi _j$ is of Class IA of degree $p^{k_j}+1$, and $$\label{eq:mainlabel}
w_1(\xi _j)=w_1 ^{*}(\xi _j)=\ldots =w_d(\xi _j)=w_d ^{*}(\xi _j)=w.$$
(i). By Lemma \[lem:alg\], we obtain $w\leq p^{k_1}$.
(ii). When $w=p^s$, where $s\geq 1$ is an integer, it is known that there exist Laurent series of Class IA which satisfy . Indeed, let $a\in \lF _q[T]$ be a non-constant polynomial. We define a Laurent series $\xi _a$ of Class IA by $$\xi _a=[a,a^{p^s},a^{p^{2s}},\ldots ].$$ Then it is known that $\xi _a$ is algebraic of degree $p^s+1$ and satisfies $w_1(\xi _a)=p^s$ (see Theorem 1 (1) and Remarks (1) in [@Thakur99]). Therefore, it follows from Lemma \[lem:alg\] that $$w_n(\xi _a)=w_n^{*}(\xi _a)=p^s$$ for all $n\geq 1$.
(iii). By and , we deduce that $$E_1(\xi_j) = \max\{E_n(\xi_j)\mid 1\leq n\leq d\}=w+1,$$ where the $\xi _j$’s are algebraic Laurent series as in Theorem \[thm:mainalg\].
It is known that the values of $w_1$ can be determined through the partial quotients of the continued fraction expansion. In this paper, we extend this result to $w_n$ and $w_n ^{*}$ for all $n\geq 1$ for a certain class of Laurent series. This is the key point of the proof of Theorem \[thm:mainalg\].
As mentioned in Section \[sec:intro\], it is known that $w_n(\xi )=w_n^{*}(\xi )$ for all real algebraic numbers $\xi $ and integers $n\geq 1$. The proof of this result depends on the Schmidt Subspace Theorem which is a generalization of the Roth Theorem. However, analogues of these theorems in positive characteristic do not hold (see Section \[sec:intro\]). Therefore, we address the following problem.
\[prob:normalstar\] Is it true that $$w_n(\xi )=w_n ^{*}(\xi )$$ for an integer $n\geq 1$ and an algebraic Laurent series $\xi $?
Note that Theorem \[thm:mainalg\] gives a partial answer to Problem \[prob:normalstar\]. If we remove the condition that $\xi $ is algebraic, then the answer to Problem \[prob:normalstar\] is not true (see Theorems \[thm:mainconti1\] and \[thm:mainconti2\] below).
We state some corollaries of Theorem \[thm:mainalg\].
\[cor:classcor\] Let $n\geq 1$ be an integer. Then the set of all values taken by $w_n$ [(]{}resp. $w_n ^{*}$[)]{} over the set of all Laurent series of Class IA contains the set of all rational numbers greater than $2n-1$.
We address the following natural problem arising from Corollary \[cor:classcor\].
Let $n\geq 1$ be an integer. Determine the set of all values taken by $w_n$ (resp. $w_n ^{*}$) over the set of all algebraic Laurent series.
Since the sequence of degrees of $\xi _j$ tends to infinity under the conditions of Theorem \[thm:mainalg\], we obtain the following corollary.
\[cor:linind\] Let $d\geq 1$ be an integer and $w>2d-1$ be a rational number. Then there exists a set $\{ \xi _j \mid j\geq 1 \}$ of linearly independent Laurent series of Class IA such that, for any $j\geq 1$ $$w_1(\xi _j)=w_1 ^{*}(\xi _j)=\ldots =w_d(\xi _j)=w_d ^{*}(\xi _j)=w.$$
We obtain the following theorem in a similar method to the proof of Theorem \[thm:mainalg\].
\[thm:realequal\] Let $d\geq 1$ be an integer and $w\geq 2d-1$ be a real number. Then there exist uncountably many $\xi \in \lF _q((T^{-1}))$ such that $$w_1(\xi )=w_1 ^{*}(\xi )=\ldots =w_d(\xi )=w_d ^{*}(\xi )=w.$$
Analogues of Theorem \[thm:realequal\] for real and $p$-adic numbers are already given in [@Bugeaud10; @Bugeaud112].
For $\xi \in \lF _q((T^{-1}))$, it is easily seen that $(w_n(\xi ))_{n\geq 1}$ and $(w_n^{*}(\xi ))_{n\geq 1}$ are increasing sequences with $0\leq w_n(\xi ), w_n^{*}(\xi )\leq +\infty $ for all $n\geq 1$. Therefore, we have $w_n(\xi )=w_n ^{*}(\xi )=0$ and $w_n(\eta )=w_n^{*}(\eta )=1$ for all $n\geq 1, \xi \in \lF _q(T)$, and quadratic Laurent series $\eta \in \lF _q((T^{-1}))$ by Theorem 5.1 in [@Ooto17] and Lemma \[lem:alg\]. It is immediate that for any $n\geq 1, w_n(\xi )=w_n^{*}(\xi )=+\infty $, where $\xi =\sum _{m=1}^{\infty }T^{-m!}$. Hence, we have the following corollary of Theorem \[thm:realequal\].
\[cor:dis\] For an integer $n\geq 1$, the set of all values taken by $w_n$ [(]{}resp. $w_n^{*}$[)]{} contains the set $\{ 0,1\} \cup [2n-1,+\infty ]$. Furthermore, the set of all values taken by $w_1$ [(]{}resp. $w_1^{*}$[)]{} is equal to $\{ 0\}\cup [1,+\infty ]$.
We extend Theorems 1.1 and 1.2 in [@Ooto17].
\[thm:mainconti1\] Let $d\geq 2$ be an integer and $w\geq (3d+2+\sqrt{9d^2+4d+4})/2$ be a real number. Let $a,b\in \lF_q[T]$ be distinct non-constant polynomials. Let $(a_{n,w})_{n\geq 1}$ be the sequence given by $$a_{n,w}=\begin{cases}
b & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
a & \text{otherwise}.
\end{cases}$$ Set $\xi _w:=[0,a_{1,w},a_{2,w},\ldots ]$. Then we have $$\label{eq:mainconti1}
w_n ^{*}(\xi _w)=w-1,\quad w_n(\xi _w)=w$$ for all $2\leq n\leq d$.
\[thm:mainconti2\] Let $d\geq 2$ be an integer, $w\geq 121d^2$ be a real number, and $a,b,c\in \lF_q[T]$ be distinct non-constant polynomials. Let $0<\eta <\sqrt{w}/d$ be a positive number and put $m_i:=\lfloor (\lfloor w^{i+1}\rfloor -\lfloor w^i-1\rfloor )/\lfloor \eta w^i\rfloor \rfloor $ for all $i\geq 1$. Let $(a_{n,w,\eta })_{n\geq 1}$ be the sequence given by $$a_{n,w,\eta }=\begin{cases}
b & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
c & \parbox{250pt}{$\text{if } n \neq \lfloor w^i\rfloor \text{ for all integers }i\geq 0 \text{ and } n=\lfloor w^j\rfloor +m \lfloor \eta w^j\rfloor \text{ for some integers }1\leq m\leq m_j, j\geq 1,$}\\
a & \text{otherwise}.
\end{cases}$$ Set $\xi _{w,\eta }:=[0,a_{1,w,\eta },a_{2,w,\eta },\ldots ]$. Then we have $$\label{eq:mainconti2}
w_n ^{*}(\xi _{w,\eta })=\frac{2 w-2-\eta }{2+\eta },\quad w_n(\xi _{w,\eta })=\frac{2 w-\eta }{2+\eta }$$ for all $2\leq n\leq d$. Hence, we have $$w_n(\xi _{w,\eta })-w_n ^{*}(\xi _{w,\eta })=\frac{2}{2+\eta }$$ for all $2\leq n\leq d$.
Theorems \[thm:mainconti1\] and \[thm:mainconti2\] imply that for each $n\geq 2$, we can explicitly construct Laurent series $\xi $ for which $w_n(\xi )$ and $w_n ^{*}(\xi )$ are different. The general strategies of the proof of Theorems \[thm:mainconti1\] and \[thm:mainconti2\] are the same as those of Theorems 1.1 and 1.2 in [@Ooto17]. The key ingredient is that for $n\geq 3$, if $\xi \in \lF _q((T^{-1}))$ has a dense (in a suitable sense) sequence of very good quadratic approximations, then we can determine $w_n(\xi )$ and $w_n^{*}(\xi )$.
The following corollary is immediate from Theorems \[thm:realequal\], \[thm:mainconti1\], and \[thm:mainconti2\].
\[cor:minusset\] Let $d\geq 2$ be an integer and $\delta $ be in the closed interval $[0,1]$. Then there exist uncountably many $\xi \in \lF _q((T^{-1}))$ such that $w_n(\xi )-w_n ^{*}(\xi )=\delta $ for all $2\leq n\leq d$. In particular, the set of all values taken by $w_d-w_d ^{*}$ contains the closed interval $[0,1]$.
Note that it is already known that the set of all values taken by $w_2-w_2^{*}$ is the closed interval $[0,1]$ in [@Ooto17].
In the last part of this section, we mention a problem associated with Corollaries \[cor:dis\] and \[cor:minusset\].
Let $n\geq 1$ be an integer. Determine the set of all values taken by $w_n$ (resp. $w_n^{*}, w_n-w_n^{*}$).
Preliminaries {#sec:pre}
=============
Let $\xi $ be in $\lF _q((T^{-1}))$ and $n\geq 1$ be an integer. We denote by $\tilde{w}_n(\xi )$ the supremum of the real numbers $w$ which satisfy $$0<|P(\xi )|\leq H(P)^{-w}$$ for infinitely many $P(X) \in (\lF _q[T])[X]_{\min }$ of degree at most $n$.
\[lem:weak\] Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$. Then we have $$w_n (\xi )=\tilde{w}_n (\xi ).$$
See Lemma 5.3 in [@Ooto17].
\[lem:poly\] Let $\alpha ,\beta $ be in $\lF _q ((T^{-1}))$ and $P(X) \in (\lF _q[T])[X]$ be a non-constant polynomial of degree $d$. Let $C\geq 0$ be a real number. Assume that $|\alpha -\beta |\leq C$. Then there exists a positive constant $C_1(\alpha ,C,d)$, depending only on $\alpha ,C,$ and $d$, such that $$\label{eq:poly.differ}
|P(\alpha )-P(\beta )|\leq C_1(\alpha ,C,d)|\alpha -\beta |H(P).$$
It is easily seen that $$|P(\alpha )-P(\beta )|\leq H(P)\max _{1\leq i\leq d} |\alpha ^i-\beta ^i|.$$ By the assumption, we have $\max (C,|\alpha |)=\max (C,|\beta |)$. For any $1\leq i\leq d$, we obtain $$\begin{aligned}
|\alpha ^i-\beta ^i| & = |\alpha -\beta |\left| \sum_{j=0}^{i-1}\alpha ^j \beta ^{i-1-j} \right|
\leq |\alpha -\beta |\max _{0\leq j\leq i-1} |\alpha ^j \beta ^{i-1-j}| \\
& \leq |\alpha -\beta |\max _{1\leq k\leq d}\max (C,|\alpha |)^{k-1}.\end{aligned}$$ Hence, we have \eqref{eq:poly.differ}.
\[lem:differ\] Let $n\geq 1$ be an integer and $\xi $ be in $\lF _q((T^{-1}))$. Then we have $$w_n ^{*}(\xi )\leq w_n(\xi ).$$
See Proposition 5.6 in [@Ooto17].
\[lem:bestrational\] Let $\xi $ be in $\lF _q ((T^{-1})), d\geq 1$ be an integer, and $\theta ,\rho ,\delta $ be positive numbers. Assume that there exists a sequence $(p_j/q_j)_{j\geq 1}$ with $p_j ,q_j \in \lF _q[T], q_j \neq 0, \gcd(p_j, q_j)=1$ for any $j\geq 1$ such that $(|q_j|)_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\limsup _{j\rightarrow \infty } \frac{\log |q_{j+1}|}{\log |q_j|} \leq \theta ,\\
d+\delta \leq \liminf _{j\rightarrow \infty }\frac{-\log |\xi -p_j/q_j|}{\log |q_j|}, \quad \limsup _{j\rightarrow \infty }\frac{-\log |\xi -p_j/q_j|}{\log |q_j|}\leq d+\rho .\end{gathered}$$ Then we have for all $1\leq n\leq d$ $$\label{eq:b.r.a.inequ}
d-1+\delta \leq w_n ^{*}(\xi )\leq w_n(\xi )\leq \max \left( d-1+\rho , \frac{d\theta }{\delta } \right) .$$
Note that Lemma \[lem:bestrational\] is a generalization of an analogue of Lemma 1 in [@Amou91].
Let $0<\iota <\delta $ be a real number. By the assumption, there exists an integer $c_0\geq 1$ such that $$|q_j|\leq |q_{j+1}|\leq |q_j|^{\theta +\iota },\quad
\frac{1}{|q_j|^{d+\rho +\iota }}\leq \left| \xi -\frac{p_j}{q_j} \right| \leq \frac{1}{|q_j|^{d+\delta -\iota }}$$ for all $j\geq c_0$. Since $|\xi -p_j/q_j|\leq 1$ for $j\geq c_0$, we have $|q_j|\max (1,|\xi |)=\max (|p_j|,|q_j|)=H(p_j/q_j)$ for $j\geq c_0$. Therefore, we obtain $$0< \left| \xi -\frac{p_j}{q_j} \right| \leq \frac{\max (1,|\xi |)^{d+\delta }}{H(p_j/q_j)^{d+\delta -\iota }}$$ for $j\geq c_0$. Since $\iota $ is arbitrary, we have $$d-1+\delta \leq w_1 ^{*}(\xi )\leq w_2 ^{*}(\xi )\leq \ldots \leq w_d ^{*}(\xi ).$$ By Lemma \[lem:differ\], it is sufficient to show that $$\label{eq:remain}
w_d(\xi )\leq \max \left( d-1+\rho ,\frac{d\theta }{\delta } \right).$$ Put $c_1:=\max (1,|\xi |)^{d-1}$. Let $P(X) \in (\lF _q[T])[X]_{\min }$ be a polynomial of degree at most $d$ with $H(P)\geq c_1 ^{-1}|q_{c_0}|^{\frac{\delta }{\theta }}$. We first consider the case where $P(p_j/q_j)=0$ for some $j\geq c_0$. Then we can write $P(X)=a_j(q_j X-p_j)$ for some $a_j\in \lF _q$. Therefore, we have $$|P(\xi )|\geq |q_j|^{-d+1-\rho -\iota }\geq H(P)^{-d+1-\rho -\iota }.$$ We now turn to the case where $P(p_j/q_j)\neq 0$ for all $j\geq c_0$. We define an integer $j_0\geq c_0$ by $|q_{j_0}|\leq (c_1 H(P))^{\frac{\theta +\iota }{\delta -\iota }}<|q_{j_0+1}|$. Then we have $$H(P)<c_1 ^{-1}|q_{j_0+1}|^{\frac{\delta -\iota }{\theta +\iota }}\leq c_1 ^{-1}|q_{j_0}|^{\delta -\iota }.$$ It follows from Lemma \[lem:poly\] that $$|P(\xi )-P(p_{j_0}/q_{j_0})|\leq c_1 H(P)\left| \xi -\frac{p_{j_0}}{q_{j_0}}\right| <|q_{j_0}|^{-d}.$$ Since $|P(p_{j_0}/q_{j_0})|\geq |q_{j_0}|^{-d}$, we obtain $$|P(\xi )|=|P(p_{j_0}/q_{j_0})|\geq |q_{j_0}|^{-d}\geq (c_1 H(P))^{-\frac{d(\theta +\iota )}{\delta -\iota }}.$$ Therefore, by Lemma \[lem:weak\], we have $$w_d(\xi )\leq \max \left( d-1+\rho +\iota ,\frac{d(\theta +\iota )}{\delta -\iota } \right) .$$ Since $\iota $ is arbitrary, we obtain \eqref{eq:remain}.
Let $\xi $ be in $\lF_q((T^{-1}))$ and we denote by $[a_0,a_1,a_2, \ldots ]$ the continued fraction expansion of $\xi $. We define sequences $(p_n)_{n\geq -1}$ and $(q_n)_{n\geq -1}$ by $$\begin{cases}
p_{-1}=1,\ p_0=a_0,\ p_n=a_n p_{n-1}+p_{n-2},\ n\geq 1,\\
q_{-1}=0,\ q_0=1,\ q_n=a_n q_{n-1}+q_{n-2},\ n\geq 1.
\end{cases}$$ We call $(p_n/q_n)_{n\geq 0}$ the [*convergent sequence*]{} of $\xi $. We gather fundamental properties of continued fractions in the following lemma.
\[lem:conti.lem\] Let $\xi =[a_0,a_1,a_2,\ldots ]$ be in $\lF _q((T^{-1}))$ and $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $. Then the following hold:
1. $\dfrac{p_n}{q_n}=[a_0,a_1,\ldots ,a_n]\quad (n\geq 0)$, \[enum:quo\]
2. $\gcd(p_n,q_n)=1\quad (n\geq 0)$, \[enum:prime\]
3. $|q_n|=|a_1||a_2|\cdots |a_n|\quad (n\geq 1)$, \[enum:q\]
4. ${\left\lvert \xi -\dfrac{p_n}{q_n}\right\rvert} =\dfrac{1}{{\left\lvert q_n\right\rvert}{\left\lvert q_{n+1}\right\rvert}}=\dfrac{1}{{\left\lvert a_{n+1}\right\rvert}{\left\lvert q_n\right\rvert}^2}\quad (n\geq 0)$, \[enum:differ\]
5. $\xi =\dfrac{\xi _{n+1}p_n+p_{n-1}}{\xi _{n+1}q_n+q_{n-1}}$, where $\xi _{n+1}=[a_{n+1},a_{n+2},\ldots ]\quad (n\geq 0)$. \[enum:par\]
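As a quick sanity check of the recursion defining $(p_n)_{n\geq -1}$, $(q_n)_{n\geq -1}$ and of item (\[enum:q\]) above, the computation can be carried out symbolically. The short Python sketch below is illustrative only, assuming the `sympy` library is available; the partial quotients in $\lF _2[T]$ are arbitrary sample choices.

```python
# Compute convergents p_n/q_n of [a_0, a_1, ...] with partial quotients in F_2[T]
# and check deg q_n = deg a_1 + ... + deg a_n (item (3) of Lemma [lem:conti.lem]).
from sympy import Poly, symbols

T = symbols('T')
GF2 = dict(modulus=2)
a = [Poly(0, T, **GF2),          # a_0
     Poly(T, T, **GF2),          # a_1
     Poly(T**2 + 1, T, **GF2),   # a_2
     Poly(T**3 + T, T, **GF2)]   # a_3

p_prev, p_cur = Poly(1, T, **GF2), a[0]                 # p_{-1}, p_0
q_prev, q_cur = Poly(0, T, **GF2), Poly(1, T, **GF2)    # q_{-1}, q_0
deg_sum = 0
for n in range(1, len(a)):
    p_prev, p_cur = p_cur, a[n] * p_cur + p_prev
    q_prev, q_cur = q_cur, a[n] * q_cur + q_prev
    deg_sum += a[n].degree()
    assert q_cur.degree() == deg_sum        # |q_n| = |a_1| ... |a_n|
    print(n, p_cur.as_expr(), q_cur.as_expr(), q_cur.degree())
```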
The following lemma is a well-known result (see e.g. [@Lasjaunias00; @Thakur11]).
\[lem:conti.w\_1\] Let $\xi =[a_0,a_1,a_2,\ldots ]$ be in $\lF _q((T^{-1}))$ and $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $. Then we have $$w_1 (\xi )= w_1 ^{*}(\xi )=\limsup_{n\rightarrow \infty } \frac{\deg q_{n+1}}{\deg q_n}.$$
We extend Lemma \[lem:conti.w\_1\] by using Lemma \[lem:bestrational\].
\[prop:conti.w\_d\] Let $d\geq 1$ be an integer and $\xi =[a_0,a_1,a_2,\ldots ]$ be in $\lF _q((T^{-1}))$. Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\xi $. Assume that $$\liminf_{n\rightarrow \infty }\frac{\deg q_{n+1}}{\deg q_n} \geq 2 d-1.$$ Then we have $$\label{eq:best.alg}
w_1(\xi )=w_1 ^{*}(\xi )=\ldots =w_d(\xi )=w_d ^{*}(\xi )=\limsup _{n\rightarrow \infty } \frac{\deg q_{n+1}}{\deg q_n}.$$
For $j\geq 1$, put $$A_j:=\frac{\deg q_{j+1}}{\deg q_j}=\frac{\log |q_{j+1}|}{\log |q_j|}.$$ It follows from Lemmas \[lem:differ\] and \[lem:conti.w\_1\] that for all $n\geq 1$, $$\limsup _{j\rightarrow \infty } A_j \leq w_n ^{*}(\xi )\leq w_n(\xi ).$$ By Lemma \[lem:conti.lem\] (\[enum:differ\]), we have $$\frac{-\log |\xi -p_j/q_j|}{\log |q_j|}=1+A_j$$ for all $j\geq 1$. It follows from Lemma \[lem:conti.lem\] (\[enum:prime\]) and (\[enum:q\]) that $q_j \neq 0$ and $\gcd(p_j,q_j)=1$ for all $j\geq 1$. Moreover, the positive integer sequence $(|q_j|)_{j\geq 1}$ is strictly increasing, which implies that it is divergent. Applying Lemma \[lem:bestrational\] with $$\theta =\limsup_{j\rightarrow \infty }A_j,\quad \delta =d,\quad \rho =1+\limsup_{j\rightarrow \infty }A_j-d,$$ we obtain $$w_n ^{*}(\xi )\leq w_n(\xi )\leq \limsup _{j\rightarrow \infty } A_j$$ for all $1\leq n\leq d$. Hence, we have .
Schmidt [@Schmidt00] characterized Laurent series of Class IA by using continued fractions.
\[thm:classIAiff\] Let $\alpha $ be in $\lF _q((T^{-1}))$. Then $\alpha $ is of Class IA if and only if the continued fraction expansion of $\alpha $ is of the form $$\label{eq:IA1}
\alpha =[a_1, a_2, \ldots ,a_t, b_1, b_2, \ldots ],$$ where $t\geq 0$ and $$b_{j+s}=
\begin{cases}\label{eq:IA2}
a b_j ^{p^k} & \text{when } j \text{ is odd}, \\
a^{-1} b_j ^{p^k} & \text{when } j \text{ is even}
\end{cases}$$ for some $a\in \lF _q ^{\times }$, integers $s\geq 1$ and $k\geq 0$.
See Theorem 4 in [@Schmidt00].
Thakur [@Thakur99] studied the ratios of the degrees of the denominators of the convergent sequences for Laurent series of Class IA.
\[thm:supinf\] Let $\alpha \in \lF _q((T^{-1}))$ be as in \eqref{eq:IA1} and \eqref{eq:IA2}, and $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\alpha $. Put $d_i :=\deg b_i$ and $$r_i:=\frac{d_i}{p^k (\sum _{j=1}^{i-1}d_j )+\sum _{j=i}^{s}d_j }.$$ Then we have $$\begin{gathered}
\limsup _{n\rightarrow \infty }\frac{\deg q_{n+1}}{\deg q_n} =1+(p^k-1) \max \{ r_1, \ldots ,r_s \} ,\label{eq:limsup} \\
\liminf _{n\rightarrow \infty }\frac{\deg q_{n+1}}{\deg q_n} =1+(p^k-1) \min \{ r_1, \ldots ,r_s \} . \label{eq:liminf}\end{gathered}$$
We have \eqref{eq:limsup} by Theorem 1 (1) in [@Thakur99] and Lemma \[lem:conti.w\_1\]. Meanwhile, \eqref{eq:liminf} follows in a similar way to the proof of Theorem 1 (1) in [@Thakur99].
We define a valuation $v$ on $\lF_q((T^{-1}))$ by $v(\xi )=-\log _q{\left\lvert\xi \right\rvert}$ for $\xi \in \lF_q ((T^{-1}))$.
\[lem:irrcri\] Let $$P(X)=X^m+\sum_{i=1}^{m}a_iX^{m-i}$$ be a monic polynomial in $(\lF_q((T^{-1})))[X]$. Assume that $v(a_m)$ is a positive integer with $\gcd (v(a_m),m)=1$. Then $P(X)$ is irreducible over $\lF_q((T^{-1}))$ if and only if $v(a_i)/i>v(a_m)/m$ for all $1\leq i\leq m-1$.
See Proposition 2.2 in [@Popescu95].
We give a sufficient condition for determining the degree of a Laurent series of Class IA. The following lemma is inspired by the proof of Theorem 2 in [@Chen13].
\[lem:classiadegree\] Let $\alpha \in \lF_q((T^{-1}))$ be as in \eqref{eq:IA1} and \eqref{eq:IA2}. If $\gcd (\deg b_s,p)=1$, then $\alpha $ is algebraic of degree $p^k+1$.
By Lemma \[lem:conti.lem\] , it is sufficient to show that $\beta =[b_1,b_2,\ldots ]$ is algebraic of degree $p^k+1$. Let $(p_n/q_n)_{n\geq 0}$ be the convergent sequence of $\beta $. Since $$\beta _s=[ab_1^{p^k},a^{-1}b_2^{p^k},\ldots ]=a\beta ^{p^k},$$ it follows from Lemma \[lem:conti.lem\] that $\beta $ is a root of the monic polynomial $$P(X):=X^{p^k+1}-\frac{p_{s-1}}{q_{s-1}}X^{p^k}+\frac{q_{s-2}}{aq_{s-1}}X-\frac{p_{s-2}}{aq_{s-1}}.$$ Put $$Q(X):=X^{p^k}+\left( \beta -\frac{p_{s-1}}{q_{s-1}}\right)\sum_{j=0}^{p^k-1}\beta ^{p^k-1-j}X^j +\frac{q_{s-2}}{aq_{s-1}},$$ then we have $P(X)=(X-\beta )Q(X)$. For $1\leq i\leq p^k$, let $c_i$ be the coefficient of $X^{p^k-i}$ in $Q(X)$. By Lemma \[lem:conti.lem\] and , we deduce that $$\begin{gathered}
v(c_i)=(p^k-i+1)\deg b_1+2\sum_{j=2}^{s}\deg b_j\quad (1\leq i\leq p^k-1),\\
v(c_{p^k})=\deg b_s.\end{gathered}$$ Therefore, by Lemma \[lem:irrcri\], the monic polynomial $Q(X)$ is irreducible over $\lF_q((T^{-1}))$. Since $\beta \notin \lF_q(T)$, we derive that $P(X)$ is irreducible over $\lF_q(T)$.
\[lem:seq\] Let $d\geq 1$ be an integer and $w>2d-1$ be a rational number. Write $w=a/b,$ where $a,b\geq 1$ are integers and $a=p^m a',$ where $a'\geq 1, m\geq 0$ are integers and $\gcd(a',p)=1$. Then there exist a strictly increasing sequence of positive integers $(k_j)_{j\geq 1}$, a sequence of integers $(n_j)_{j\geq 1}$, and a sequence of rational numbers $(u_j)_{j\geq 1}$ such that for any $j\geq 1, n_j\geq 3, u_j=a_j/b_j,$ where $a_j,b_j \geq 1$ are integers, $ \gcd(a_j,p)=1, p^m | b_j,$ and $$\label{eq:minmax}
2 d-1< \min \left\{ w, u_j , \frac{p^{k_j}}{w u _j ^{n_j-2}} \right\} ,\quad \max \left\{ w, u_j , \frac{p^{k_j}}{w u _j ^{n_j-2}} \right\} =w.$$
The proof is by induction on $j$. By assumption, we take an integer $n_1\geq 3$ with $(w/(2d-1))^{n_1-1}> p$. Then we have $\log _p w^{n_1}-\log _p (w(2d-1)^{n_1-1})>1$. This implies that there exists an integer $k_1\geq 1$ such that $$w (2 d-1)^{n_1-1}<p^{k_1}<w^{n_1}.$$ Then we have $$\max \left\{ 2 d-1, \left( \frac{p^{k_1}}{w^2}\right) ^{\frac{1}{n_1-2}}\right\} <\min \left\{ w,\left( \frac{p^{k_1}}{(2 d-1)w} \right) ^{\frac{1}{n_1-2}} \right\} .$$ Let $r\geq 2$ be an integer such that $\gcd(r,p)=1$. By Lemma 2.5.9 in [@Allouche03], the set $\{ r^y/p^x \mid x,y\in \lZ _{\geq 0} \}$ is dense in $\lR_{>0}$. Therefore, we can take a rational number $u_1=a_1/b_1$ such that $a_1,b_1 \in \lZ _{>0}, \gcd (a_1,p)=1, p^m | b_1$, and $$\max \left\{ 2 d-1, \left( \frac{p^{k_1}}{w^2}\right) ^{\frac{1}{n_1-2}} \right\} <u_1<\min \left\{ w,\left( \frac{p^{k_1}}{(2 d-1)w} \right) ^{\frac{1}{n_1-2}} \right\} .$$ Thus, we have \eqref{eq:minmax} when $j=1$. Assume that \eqref{eq:minmax} holds for $j=1,\ldots ,i$. We take an integer $n_{i+1}\geq 3$ with $(w/(2d-1))^{n_{i+1}-1}> p^{k_i}$. In a similar way to the above proof, we can take an integer $k_{i+1}$ with $k_i<k_{i+1}$ and a rational number $u_{i+1}$ which satisfy \eqref{eq:minmax}. This completes the proof.
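The first induction step above is effective: $n_1$, $k_1$ and the interval from which $u_1$ is drawn can be computed directly. The Python sketch below is a rough illustration of that step only; the values of $p$, $d$ and $w$ are sample choices of ours, not data from the lemma.

```python
# Search for n_1 and k_1 with w(2d-1)^{n_1-1} < p^{k_1} < w^{n_1}, then print the open
# interval in which u_1 must lie (illustrative only).
from math import ceil, log

p, d, w = 2, 2, 4.0            # any rational w > 2d - 1
n1 = 3
while (w / (2 * d - 1)) ** (n1 - 1) <= p:
    n1 += 1
k1 = ceil(log(w * (2 * d - 1) ** (n1 - 1), p) + 1e-12)   # smallest admissible power of p
assert w * (2 * d - 1) ** (n1 - 1) < p ** k1 < w ** n1

lower = max(2 * d - 1, (p ** k1 / w ** 2) ** (1 / (n1 - 2)))
upper = min(w, (p ** k1 / ((2 * d - 1) * w)) ** (1 / (n1 - 2)))
print(n1, k1, (lower, upper))  # u_1 is then any rational a_1/b_1 in (lower, upper)
                               # with gcd(a_1, p) = 1 and p^m | b_1
```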
Quadratic Laurent series are characterized by continued fractions as follows:
\[lem:ult\] Let $\xi =[a_0,a_1,\ldots ]$ be in $\lF _q((T^{-1}))$. Then $\xi $ is quadratic if and only if $(a_n)_{n\geq 0}$ is ultimately periodic.
See e.g. Théorème 4 in [@Mathan70 CHAPITRE IV] or Theorems 3 and 4 in [@Chaichana06].
The following lemma is well known and easily verified.
\[lem:height\] Let $P(X)$ be in $(\lF _q[T])[X]$. Assume that $P(X)$ can be factorized as $$P(X)=A\prod_{i=1}^{n} (X-\alpha _i),$$ where $A\in \lF _q[T]$ and $\alpha _i \in \overline{\lF _q(T)}$ for $1\leq i\leq n$. Then we have $$\begin{aligned}
\label{eq:heighteq}
H(P)=|A|\prod_{i=1}^{n} \max (1, |\alpha _i|).\end{aligned}$$ Furthermore, for $P(X), Q(X) \in (\lF _q[T])[X]$, we have $$\begin{aligned}
\label{eq:heighttwo}
H(P Q)=H(P)H(Q).\end{aligned}$$
Let $\alpha \in \overline{\lF _q(T)}$ be a quadratic number. If $\operatorname{insep}\alpha =1$, let $\alpha ' \neq \alpha $ be the Galois conjugate of $\alpha $. If $\operatorname{insep}\alpha =2$, let $\alpha ' =\alpha $.
\[lem:Galois\] Let $\alpha \in \overline{\lF _q(T)}$ be a quadratic number. If $\alpha \neq \alpha '$, then we have $$|\alpha -\alpha '|\geq H(\alpha )^{-1}.$$
This is clear by using the discriminant of the minimal polynomial of $\alpha $ (see e.g. [@Cassels86 Appendix A] for the definition and properties of the discriminant). We refer to Lemma 3.5 in [@Ooto17] for a direct proof.
We recall the Liouville inequalities for Laurent series over a finite field.
\[lem:Lio.inequ1\] Let $ P(X) \in (\lF_q[T])[X]$ be a non-constant polynomial of degree $m$ and $\alpha \in \overline{\lF _q(T)}$ be a number of degree $n$. Assume that $P(\alpha )\not= 0$. Then we have $$|P(\alpha )|\geq H(P)^{-n+1} H(\alpha )^{-m}.$$
See e.g. Lemma 4 in [@Muller93] or Proposition 3.2 in [@Ooto17].
\[lem:Lio.inequ2\] Let $\alpha ,\beta \in \overline{\lF _q(T)}$ be distinct numbers of degree $m$ and $n$, respectively. Then we have $$|\alpha -\beta |\geq H(\alpha )^{-n} H(\beta )^{-m}.$$
See e.g. Korollar 3 in [@Muller93] or Proposition 3.4 in [@Ooto17].
The lemma below is an immediate consequence of Lemmas \[lem:Lio.inequ1\] and \[lem:Lio.inequ2\].
\[lem:alg\] Let $n\geq 1$ be an integer and $\xi $ be an algebraic Laurent series of degree $d$. Then we have $$w_n(\xi ), w_n ^{*}(\xi )\leq d-1.$$
We give a key lemma for the proof of Theorems \[thm:mainconti1\] and \[thm:mainconti2\] as follows:
\[lem:bestquad\] Let $d\geq 2$ be an integer. Let $\xi $ be in $\lF_q((T^{-1}))$ and $\theta ,\rho ,\delta $ be positive numbers. Assume that there exists a sequence $(\alpha _j)_{j\geq 1}$ such that for any $j\geq 1$, $\alpha _j\in \overline{\lF _q(T)}$ is quadratic, $(H(\alpha _j))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\limsup_{j\rightarrow \infty } \frac{\log H(\alpha _{j+1})}{\log H(\alpha _j)}\leq \theta ,\\
d+\delta \leq \liminf_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)},\quad \limsup_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)}\leq d+\rho .\end{gathered}$$ If $2d\theta \leq (d-2+\rho )\delta $, then we have for all $2\leq n\leq d$, $$\label{eq:w_2;1}
d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho .$$ Furthermore, assume that there exist a non-negative number $\varepsilon $ and a positive number $c$ such that for any $j\geq 1, 0<|\alpha _j-\alpha _j '|\leq c$ and $$\label{eq:ipu}
\limsup_{j\rightarrow \infty }\frac{-\log |\alpha _j -\alpha _j '|}{\log H(\alpha _j)}\geq \varepsilon .$$ If $2d\theta \leq (d-2+\delta )\delta $, then we have for all $2\leq n\leq d$, $$\label{eq:w_2;2}
d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi ).$$ Finally, assume that there exists a non-negative number $\chi $ such that $$\label{eq:kai}
\limsup_{i\rightarrow \infty} \frac{-\log |\alpha _i-\alpha _i '|}{\log H(\alpha _i)}\leq \chi .$$ Then we have for all $2\leq n\leq d$, $$\label{eq:w_2;3}
d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi )\leq \chi .$$
Let $0<\iota <\delta $ be a real number. Then there exists a positive integer $c_0$ such that $$\frac{1}{H(\alpha _j)^{d+\rho +\iota }} \leq |\xi -\alpha _j|\leq \frac{1}{H(\alpha _j)^{d+\delta -\iota }},\quad H(\alpha _j)\leq H(\alpha _{j+1})\leq H(\alpha _j)^{\theta +\iota }$$ for all $j\geq c_0$. Since $\iota $ is arbitrary, we have $w_2 ^{*}(\xi )\geq d-1+\delta $. Let $\alpha \in \overline{\lF _q(T)} \setminus \{\alpha _j \mid j\geq 1 \}$ be an algebraic number of degree at most $d$ with $H(\alpha )\geq H(\alpha _{c_0})^{\frac{\delta }{2 \theta }}$.
Assume that $2d\theta \leq (d-2+\rho )\delta $. We define an integer $j_0\geq c_0$ such that $H(\alpha _{j_0})\leq H(\alpha )^{\frac{2(\theta +\iota )}{\delta -\iota }}<H(\alpha _{j_0 +1})$. Since $$H(\alpha )<H(\alpha _{j_0 +1})^{\frac{\delta -\iota }{2(\theta +\iota )}}\leq H(\alpha _{j_0})^{\frac{\delta -\iota }{2}},$$ we obtain $$|\alpha -\alpha _{j_0}|\geq H(\alpha )^{-2}H(\alpha _{j_0})^{-d}>H(\alpha _{j_0})^{-d-\delta +\iota }\geq |\xi -\alpha _{j_0}|$$ by Lemma \[lem:Lio.inequ2\]. We derive that $$\label{eq:llow}
|\xi -\alpha |=|\alpha -\alpha _{j_0}|\geq H(\alpha )^{-2}H(\alpha _{j_0})^{-d}\geq H(\alpha )^{-2-\frac{2 d(\theta +\iota )}{\delta -\iota }},$$ which implies $$w_d ^{*}(\xi )\leq \max \left( d-1+\rho +\iota , 1+\frac{2 d(\theta +\iota )}{\delta -\iota } \right) .$$ Since $\iota $ is arbitrarily small, \eqref{eq:w_2;1} follows.
Next, we assume that $2d\theta \leq (d-2+\delta )\delta $ and there exist a non-negative number $\varepsilon $ and a positive number $c$ such that for any $j\geq 1, 0<|\alpha _j-\alpha _j '|\leq c$ and \eqref{eq:ipu} holds. Since $\delta \leq \rho $, we have \eqref{eq:w_2;1}. By the assumption and \eqref{eq:llow}, the sequence $(\alpha _j)_{j\geq 1}$ is the best approximation to $\xi $ of degree at most $d$, that is, $$\label{eq:llow2}
w_d ^{*}(\xi )=\limsup _{j\rightarrow \infty }\frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)}-1.$$ Therefore, we have $w_2 ^{*}(\xi )=\ldots =w_d ^{*}(\xi )$. In what follows, we show that $\varepsilon \leq w_n(\xi )-w_n ^{*}(\xi )$ for all $2\leq n\leq d$. For any $j\geq 1$, we denote by $P_j(X)=A_j(X-\alpha _j)(X-\alpha _j ')$ the minimal polynomial of $\alpha _j$. Since $|\xi -\alpha _j|\leq 1$ and $|\alpha _j-\alpha _j '|\leq c$ for $j\geq c_0$, we have $$\max (1,|\xi |)\asymp \max (1,|\alpha _j|)\asymp \max (1,|\alpha _j '|)$$ for $j\geq c_0$. It follows from Lemma \[lem:height\] that $H(P_j)\asymp |A_j|$ for $j\geq c_0$. By Lemma \[lem:Galois\], we have $|\xi -\alpha _j|<|\alpha _j-\alpha _j '|$ for $j\geq c_0$, which implies $|\xi -\alpha _j '|=|\alpha _j -\alpha _j '|$ for $j\geq c_0$. Hence, we have $$\label{eq:llow3}
|P_j (\xi )|\asymp H(P_j)|\xi -\alpha _j||\alpha _j-\alpha _j '|$$ for $j\geq c_0$. It follows from \eqref{eq:llow2}, \eqref{eq:llow3}, and \eqref{eq:ipu} that $w_d ^{*}(\xi )+\varepsilon \leq w_2(\xi )$. Thus, we have $\varepsilon \leq w_n(\xi )-w_n ^{*}(\xi )$ for all $2\leq n\leq d$.
Finally, we assume \eqref{eq:kai}. By \eqref{eq:llow3}, we obtain $$\limsup _{j\rightarrow \infty} \frac{-\log |P_j(\xi )|}{\log H(P_j)}\leq w_2 ^{*}(\xi )+\chi .$$ Recall that $C_1(\alpha ,C,d)$ is defined in Lemma \[lem:poly\]. Put $c_1:=\max _{1\leq i\leq d}C_1(\xi ,1,i)$. Let $P(X)\in (\lF _q[T])[X]_{\min }$ be a polynomial of degree at most $d$ with $H(P)\geq c_1 ^{-\frac{1}{2}}H(\alpha _{c_0})^{\frac{\delta }{2\theta }}$ and $P(\alpha _j)\neq 0$ for all $j\geq 1$. We define an integer $j_1\geq c_0$ such that $H(\alpha _{j_1})\leq (c_1 H(P)^2)^{\frac{\theta +\iota }{\delta -\iota }}<H(\alpha _{j_1+1})$. Since $$H(P)<c_1 ^{-\frac{1}{2}}H(\alpha _{j_1+1})^{\frac{\delta -\iota }{2(\theta +\iota )}}\leq c_1 ^{-\frac{1}{2}}H(\alpha _{j_1})^{\frac{\delta -\iota }{2}},$$ we have $$|P(\xi )-P(\alpha _{j_1})|
\leq c_1 H(P)|\xi -\alpha _{j_1}|
< H(\alpha _{j_1})^{-d}H(P)^{-1}
\leq |P(\alpha _{j_1})|$$ by Lemmas \[lem:poly\] and \[lem:Lio.inequ1\]. Therefore, we obtain $$|P(\xi )|=|P(\alpha _{j_1})|\geq H(\alpha _{j_1})^{-d}H(P)^{-1} \geq c_1 ^{-\frac{d(\theta +\iota )}{\delta -\iota }}H(P)^{-1-\frac{2 d(\theta +\iota )}{\delta -\iota }}.$$ Hence, we get $$w_d(\xi )\leq \max \left( w_2 ^{*}(\xi )+\chi , 1+\frac{2 d(\theta +\iota )}{\delta -\iota } \right)$$ by Lemma \[lem:weak\]. Since $\iota $ is arbitrarily small, we have $w_d(\xi )\leq w_2 ^{*}(\xi )+\chi $. This completes the proof.
Proof of main results {#sec:mainproof}
=====================
We take a strictly increasing sequence of positive integers $(k_j)_{j\geq 1}$, a sequence of integers $(n_j)_{j\geq 1}$, and a sequence of rational numbers $(u_j)_{j\geq 1}$ as in Lemma \[lem:seq\]. For $j\geq 1$, we put $$\begin{gathered}
d_{1,j}:=\frac{b_j ^{n_j-2}(a-b)}{p^m}, \quad d_{i,j}:=\frac{a a_j ^{i-2}b_j ^{n_j-i-1}(a_j-b_j)}{p^m} \quad (2\leq i\leq n_j-1), \\
d_{n_j,j}:=\frac{p^{k_j}b b_j ^{n_j-2}-a a_j ^{n_j-2}}{p^m}.\end{gathered}$$ Then we have $d_{i,j}\in \lZ _{>0}$ and $\gcd(d_{n_j,j},p)=1$ for all $j\geq 1, 1\leq i\leq n_j$. Now we take polynomials $A_{1,j},\ldots ,A_{n_j,j}\in \lF _q[T]$ with $\deg A_{i,j}=d_{i,j}$. Put $$\xi _j:=[A_{1,j},\ldots ,A_{n_j,j},A_{1,j}^{p^{k_j}},\ldots ,A_{n_j,j}^{p^{k_j}},A_{1,j}^{p^{2 k_j}},\ldots ]\in \lF _q((T^{-1}))$$ and let $(p_{n,j}/q_{n,j})_{n\geq 0}$ be the convergent sequence of $\xi _j$. By Theorem \[thm:classIAiff\], $\xi _j$ is of Class IA. Therefore, by Lemma \[lem:classiadegree\], we deduce that $\xi _j$ is algebraic of degree $p^{k_j}+1$. For $1\leq i\leq n_j$, we put $$r_{i,j}:=\frac{d_{i,j}}{p^{k_j}(\sum _{\ell =1}^{i-1}d_{\ell ,j})+\sum _{\ell =i}^{n_j}d_{\ell ,j}}.$$ Then a straightforward computation shows that $$\begin{gathered}
r_{1,j}=\frac{a-b}{(p^{k_j}-1)b},\quad r_{i,j}=\frac{a_j-b_j}{(p^{k_j}-1)b_j}\quad (2\leq i\leq n_j-1), \\
r_{n_j,j}=\frac{p^{k_j}b b_j ^{n_j-2}-a a_j ^{n_j-2}}{(p^{k_j}-1)a a_j ^{n_j-2}}.\end{gathered}$$ By Theorem \[thm:supinf\] and Lemma \[lem:seq\], we obtain $$\limsup _{n\rightarrow \infty }\frac{\deg q_{n+1,j}}{\deg q_{n,j}}=w,\quad \liminf _{n\rightarrow \infty }\frac{\deg q_{n+1,j}}{\deg q_{n,j}}>2 d-1.$$ It follows from Proposition \[prop:conti.w\_d\] that $$w_1(\xi _j)=w_1 ^{*}(\xi _j)=\ldots =w_d(\xi _j)=w_d ^{*}(\xi _j)=w.$$ This completes the proof.
Let $(\varepsilon _n)_{n\geq 1}$ be a sequence over the set $\{ 0,1 \}$. We define recursively the sequences $(a_n)_{n\geq 0}, (P_n)_{n\geq -1}$, and $(Q_n)_{n\geq -1}$ by $$\begin{cases}
a_0=0,\ a_1=T+\varepsilon _1,\ a_n=T^{\lfloor (w-1)\deg Q_{n-1}\rfloor}+\varepsilon _n,\ n\geq 2,\\
P_{-1}=1,\ P_0=0,\ P_n=a_n P_{n-1}+P_{n-2},\ n\geq 1,\\
Q_{-1}=0,\ Q_0=1,\ Q_n=a_n Q_{n-1}+Q_{n-2},\ n\geq 1.
\end{cases}$$ Set $\xi _w:=[0,a_1,a_2,\ldots ]$. Then $(P_n/Q_n)_{n\geq 0}$ is the convergent sequence of $\xi _w$. It follows from Lemma \[lem:conti.lem\] (\[enum:q\]) that $$\lim _{n\rightarrow \infty } \frac{\deg Q_{n+1}}{\deg Q_n}=w.$$ Therefore, by Proposition \[prop:conti.w\_d\], we have $$w_1(\xi _w)=w_1 ^{*}(\xi _w)=\ldots =w_d(\xi _w)=w_d ^{*}(\xi _w)=w.$$
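Since $\deg Q_n=\deg a_n+\deg Q_{n-1}$ with $\deg a_n=\lfloor (w-1)\deg Q_{n-1}\rfloor $ for $n\geq 2$, the convergence of the ratios $\deg Q_{n+1}/\deg Q_n$ to $w$ can be observed numerically. The following sketch is purely illustrative and works with the degrees only; the value $w=3.7$ is a sample choice, and any admissible $w$ behaves in the same way.

```python
# Degrees of the denominators Q_n of the construction above; the ratios tend to w.
from math import floor

w = 3.7                      # sample value of w
deg_Q = [0, 1]               # deg Q_0 = 0, deg Q_1 = deg(T + eps_1) = 1
for n in range(2, 12):
    deg_Q.append(floor((w - 1) * deg_Q[-1]) + deg_Q[-1])

ratios = [deg_Q[n + 1] / deg_Q[n] for n in range(1, len(deg_Q) - 1)]
print(ratios)                # approaches w = 3.7
```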
For any $j\geq 1$, we put $$\xi _{w,j}:=[0,a_{1,w},\ldots ,a_{\lfloor w^j\rfloor ,w},\overline{a}].$$ Then $\xi _{w,j}$ is quadratic by Lemma \[lem:ult\]. It follows from the proof of Theorem 1.1 in [@Ooto17] that $(H(\xi _{w,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _w -\xi _{w,j}|}{\log H(\xi _{w,j})}=w,\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,j} -\xi _{w,j} '|}{\log H(\xi _{w,j})}=1,\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,j+1})}{\log H(\xi _{w,j})}\leq w.\end{gathered}$$ By the definition of $w$, we have $2dw\leq (w-2)(w-d)$. Applying Lemma \[lem:bestquad\] with $\delta =\rho =w-d, \varepsilon =\chi =1$, and $\theta =w$, we obtain \eqref{eq:mainconti1} for all $2\leq n\leq d$.
For any $j\geq 1$, we put $$\xi _{w,\eta, j}:=[0,a_{1,w,\eta },\ldots ,a_{\lfloor w^j\rfloor ,w,\eta },\overline{a,\ldots ,a,c}],$$ where the length of the periodic part is $\lfloor \eta w^j\rfloor $. Then $\xi _{w,\eta ,j}$ is quadratic by Lemma \[lem:ult\]. It follows from the proof of Theorem 1.2 in [@Ooto17] that $(H(\xi _{w,\eta ,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta } -\xi _{w,\eta ,j}|}{\log H(\xi _{w,\eta ,j})}=\frac{2 w}{2+\eta },\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta ,j} -\xi _{w,\eta ,j} '|}{\log H(\xi _{w,\eta ,j})}=\frac{2}{2+\eta },\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,\eta ,j+1})}{\log H(\xi _{w,\eta ,j})}\leq w.\end{gathered}$$ A direct computation shows that $$2 d w\leq \left(\frac{2 w}{2+\eta }-2 \right) \left( \frac{2 w}{2+\eta }-d \right) .$$ Applying Lemma \[lem:bestquad\] with $$\delta =\rho =\frac{2 w}{2+\eta }-d,\quad \varepsilon =\chi =\frac{2}{2+\eta },\quad \theta =w,$$ we have \eqref{eq:mainconti2} for all $2\leq n\leq d$.
Further remarks {#sec:conrem}
===============
In this section, we give some theorems associated to the main results.
Relationship between automatic sequences and Diophantine exponents
------------------------------------------------------------------
Let $k\geq 2$ be an integer. We denote by $\Sigma _k$ the set $\{ 0,1,\ldots ,k-1 \}$. A $k$-[*automaton*]{} is defined to be a sextuple $$A=(Q, \Sigma _k, \delta , q_0, \Delta ,\tau ),$$ where $Q$ is a finite set of [*states*]{}, $\delta :Q\times \Sigma _k\rightarrow Q$ is a [*transition function*]{}, $q_0 \in Q$ is an [*initial state*]{}, a finite set $\Delta $ is an [*output alphabet*]{}, and $\tau :Q\rightarrow \Delta $ is an [*output function*]{}. For $q\in Q$ and a finite word $W=w_0 w_1 \cdots w_n$ over $\Sigma _k$, we define $\delta (q,W)$ recursively by $\delta (q,W)=\delta (\delta (q,w_0 w_1\cdots w_{n-1}), w_n)$. For an integer $n\geq 0$, we put $W_n:=w_r w_{r-1}\cdots w_0$, where $\sum _{i=0}^{r}w_i k^i$ is the $k$-ary expansion of $n$. An infinite sequence $(a_n)_{n\geq 0}$ is said to be $k$-[*automatic*]{} if there exists a $k$-automaton $A=(Q, \Sigma _k, \delta , q_0, \Delta ,\tau )$ such that $a_n=\tau (\delta (q_0,W_n))$ for all $n\geq 0$.
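A standard example, not taken from this paper, may help to fix the notation: the Thue-Morse sequence is $2$-automatic, with two states recording the parity of the number of ones read so far. The Python sketch below feeds the digits of $n$ most-significant-first, matching the word $W_n=w_r w_{r-1}\cdots w_0$ above.

```python
# Evaluate a k-automatic sequence from its automaton (illustrative sketch).
def automatic_value(n, k, delta, q0, tau):
    digits = []
    while n > 0:
        digits.append(n % k)
        n //= k
    state = q0
    for d in reversed(digits):      # most significant digit first, as in W_n
        state = delta[(state, d)]
    return tau[state]

# 2-automaton for the Thue-Morse sequence: delta flips the state on reading a 1.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
tau = {0: 0, 1: 1}
print([automatic_value(n, 2, delta, 0, tau) for n in range(16)])
# -> [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```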
Christol, Kamae, Mendes France, and Rauzy [@Christol80] characterized algebraic Laurent series by using finite automata. More precisely, they showed that for a sequence $(a_n)_{n\geq 0}$ over $\lF _q$, the Laurent series $\sum _{n=0}^{\infty }a_n T^{-n}$ is algebraic if and only if $(a_n)_{n\geq 0}$ is $p$-automatic. It is known that for $m\geq 1$ and $k\geq 2$, a sequence $(a_n)_{n\geq 0}$ is $k$-automatic if and only if it is $k^m$-automatic (see Theorem 6.6.4 in [@Allouche03]). Therefore, we obtain the following corollary of Theorem \[thm:mainalg\].
\[cor:pauto\] Let $d,m\geq 1$ be integers and $w>2d-1$ be a rational number. Then there exists a $p^m$-automatic sequence $(a_n)_{n\geq 0}$ over $\lF _q$ such that $$w_1(\xi )=w_1 ^{*}(\xi )=\ldots =w_d(\xi )=w_d ^{*}(\xi )=w,$$ where $\xi =\sum _{n=0}^{\infty }a_n T^{-n}$.
In this subsection, we consider the problem of determining whether or not we can extend Corollary \[cor:pauto\] to $k$-automatic sequences for any integer $k\geq 2$. This is a natural problem in view of Corollary \[cor:pauto\].
Let $k\geq 2$ be an integer. We define a set $S_k$ of rational numbers as follows: $$S_k=\left\{ \frac{k^a}{\ell }{\mathrel{}\middle|\mathrel{}}a, \ell \in \lZ _{\geq 1}\right\} .$$
Bugeaud [@Bugeaud08] proved that for an integer $k\geq 2$ and $w\in S_k$ with $w>2$, there exists a $k$-automatic sequence $(a_n)_{n\geq 0}$ over $\{ 0,2\}$ such that $w_1(\xi )=w-1$, where $\xi =\sum _{n=0}^{\infty }a_n/3^n$. The proof of this result essentially depends on the Folding Lemma and an analogue of Lemma \[lem:conti.w\_1\]. It is known that the Folding Lemma holds for Laurent series over a finite field. For the statement and proof of the Folding Lemma, we refer the readers to [@Poorten92 Proposition 2] and [@Shallit79 the proof of Theorem 1]. We have the following theorem which is similar to Bugeaud’s result.
Let $k\geq 2$ be an integer and $w>2$ be in $S_k$. Then there exists a $k$-automatic sequence $(a_n)_{n\geq 0}$ over $\lF _q$ such that $w_1(\xi )=w-1$, where $\xi =\sum _{n=0}^{\infty }a_n T^{-n}$.
Using Lemma \[lem:bestrational\], we prove the following theorem.
Let $d\geq 1$ be an integer, $k\geq 2$ be an integer, and $w>(2d+1+\sqrt{4d^2+1})/2$ be in $S_k$. Then there exists a $k$-automatic sequence $(a_n)_{n\geq 0}$ over $\lF _q$ such that $$w_1(\xi )=w_1 ^{*}(\xi )=\ldots =w_d(\xi )=w_d^{*}(\xi )=w-1,$$ where $\xi =\sum _{n=0}^{\infty }a_n T^{-n}$.
Note that $(2d-1+\sqrt{4d^2+1})/2$ is greater than $2d-1$ for any $d\geq 1$.
Slightly modifying the proof of Theorem 1.2 in [@Bugeaud152], we deduce that there exists a $k$-automatic sequence $(a_n)_{n\geq 0}$ with $a_n \in \{ 0,1\}$ for all $n\geq 1$ which satisfies the following properties:
- $\displaystyle \frac{2d+1+\sqrt{4d^2+1}}{2}<\frac{n_{j+1}}{n_j}\leq w$ holds for all $j\geq 0$,
- there exist infinitely many $j\geq 0$ such that $\displaystyle \frac{n_{j+1}}{n_j}=w$,
where $$\{ n\in \lZ _{\geq 0}\mid a_n=1\} =:\{ n_0<n_1<n_2<\ldots \} .$$
We put $q_j:=T^{n_j}$ and $p_j:=1+T^{n_j-n_{j-1}}+\cdots +T^{n_j-n_0}$ for any $j\geq 0$. Then we have for any $j\geq 0$, $\gcd(p_j, q_j)=1$ and $p_j/q_j=\sum _{n=0}^{n_j}a_n T^{-n}.$ We put $\xi :=\sum _{n=0}^{\infty }a_n T^{-n}.$ A direct computation shows that $$\frac{-\log |\xi -p_j/q_j|}{\log |q_j|}=\frac{\log |q_{j+1}|}{\log |q_j|}=\frac{n_{j+1}}{n_j}$$ for any $j\geq 0$. By the definition of $w_1$, we obtain $w_1(\xi )\geq w-1$. Applying Lemma \[lem:bestrational\] with $\theta =w, \rho =w-d$, and $\delta =(1+\sqrt{4d^2+1})/2$, we deduce that $$w_1(\xi )=w_1 ^{*}(\xi )=\ldots =w_d(\xi )=w_d ^{*}(\xi )=w-1.$$
Analogues of Theorems \[thm:mainconti1\] and \[thm:mainconti2\] for real and $p$-adic numbers
---------------------------------------------------------------------------------------------
In this subsection, we give analogues of Theorems \[thm:mainconti1\] and \[thm:mainconti2\], which are generalizations of Theorems 4.1 and 4.2 in [@Bugeaud12], and Theorems 1 and 2 in [@Bugeaud15].
\[thm:rmainconti1\] Let $d\geq 2$ be an integer and $w\geq (3d+2+\sqrt{9d^2+4d+4})/2$ be a real number. Let $a,b$ be distinct positive integers. Let $(a_{n,w})_{n\geq 1}$ be a sequence given by $$a_{n,w}=\begin{cases}
b & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
a & \text{otherwise}.
\end{cases}$$ Set the continued fraction $\xi _w:=[0,a_{1,w},a_{2,w},\ldots ]\in \lR $. Then we have $$\label{eq:rmainconti1}
w_n ^{*}(\xi _w)=w-1,\quad w_n(\xi _w)=w$$ for all $2\leq n\leq d$.
\[thm:rmainconti2\] Let $d\geq 2$ be an integer, $w\geq 121d^2$ be a real number, and $a,b,c$ be distinct positive integers. Let $0<\eta <\sqrt{w}/d$ be a positive number and put $m_i:=\lfloor (\lfloor w^{i+1}\rfloor -\lfloor w^i-1\rfloor )/\lfloor \eta w^i\rfloor \rfloor $ for all $i\geq 1$. Let $(a_{n,w,\eta })_{n\geq 1}$ be the sequence given by $$a_{n,w,\eta }=\begin{cases}
b & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
c & \parbox{250pt}{$\text{if } n \neq \lfloor w^i\rfloor \text{ for all integers }i\geq 0, \text{and } n=\lfloor w^j\rfloor +m \lfloor \eta w^j\rfloor \text{ for some integers }1\leq m\leq m_j, j\geq 1,$}\\
a & \text{otherwise}.
\end{cases}$$ Set the continued fraction $\xi _{w,\eta }:=[0,a_{1,w,\eta },a_{2,w,\eta },\ldots ]\in \lR $. Then we have $$\label{eq:rmainconti2}
w_n ^{*}(\xi _{w,\eta })=\frac{2 w-2-\eta }{2+\eta },\quad w_n(\xi _{w,\eta })=\frac{2 w-\eta }{2+\eta }$$ for all $2\leq n\leq d$. Hence, we have $$w_n(\xi _{w,\eta })-w_n ^{*}(\xi _{w,\eta })=\frac{2}{2+\eta }$$ for all $2\leq n\leq d$.
It seems that for each $d\geq 3$, the real numbers $\xi $ defined by Theorems \[thm:rmainconti1\] and \[thm:rmainconti2\] are the first explicit continued fraction examples for which $w_d(\xi )$ and $w_d ^{*}(\xi )$ are different.
We denote by ${\left\lvert\cdot \right\rvert}_p$ the absolute value of $\lQ_p$ normalized to satisfy ${\left\lvertp\right\rvert}_p=p^{-1}$. We recall the definitions of Diophantine exponents $w_n$ and $w_n^{*}$ in $\lQ _p$. For $\xi \in \lQ_p$ and an integer $n\geq 1$, we denote by $w_n(\xi )$ (resp. $w_n^{*}(\xi )$) the supremum of the real numbers $w$ (resp. $w^{*}$) which satisfy $$0<|P(\xi )|_p\leq H(P)^{-w-1}\quad (\text{resp.\ } 0<|\xi -\alpha |_p\leq H(\alpha )^{-w^{*}-1})$$ for infinitely many $P(X)\in \lZ[X]$ of degree at most $n$ (resp. algebraic numbers $\alpha \in \lQ_p$ of degree at most $n$).
\[thm:pmainconti1\] Let $d\geq 2$ be an integer and $w\geq (3d+2+\sqrt{9d^2+4d+4})/2$ be a real number. Let $b$ be a positive integer and $(\varepsilon _j)_{j\geq 0}$ be a sequence in $\{ 0,1\}$. Let $(a_{n,w})_{n\geq 1}$ be a sequence given by $$a_{n,w}=\begin{cases}
b+3 i+2 & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
b+3 i+\varepsilon _i & \text{if } \lfloor w^i \rfloor <n<\lfloor w^{i+1} \rfloor \text{ for some integer } i\geq 0.
\end{cases}$$ Set the Schneider’s $p$-adic continued fraction $\xi _w:=[a_{1,w},a_{2,w},\ldots ]\in \lQ _p$. Then we have $$\label{eq:pmainconti1}
w_n ^{*}(\xi _w)=w-1,\quad w_n(\xi _w)=w$$ for all $2\leq n\leq d$.
\[thm:pmainconti2\] Let $d\geq 2$ be an integer and $w\geq 121d^2$ be a real number. Let $b$ be a positive integer, $(\varepsilon _j)_{j\geq 0}$ be a sequence in $\{ 0,1\}$, and $0<\eta <\sqrt{w}/d$ be a positive number. Let $(a_{n,w,\eta })_{n\geq 1}$ be the sequence given by $$a_{n,w,\eta }=\begin{cases}
b+4 i+3 & \text{if } n=\lfloor w^i \rfloor \text{ for some integer } i\geq 0,\\
b+4 i+2 & \parbox{210pt}{$\text{if } \lfloor w^i \rfloor <n<\lfloor w^{i+1} \rfloor \text{ for some integer } i\geq 0 \text{ and } (n-\lfloor w^i \rfloor )/\lfloor \eta w^i \rfloor \in \lZ ,$}\\
b+4 i+\varepsilon _i & \parbox{210pt}{$\text{if } \lfloor w^i \rfloor <n<\lfloor w^{i+1} \rfloor \text{ for some integer } i\geq 0 \text{ and } (n-\lfloor w^i \rfloor )/\lfloor \eta w^i \rfloor \not\in \lZ .$}
\end{cases}$$ Set the Schneider’s $p$-adic continued fraction $\xi _{w,\eta }:=[a_{1,w,\eta },a_{2,w,\eta },\ldots ]\in \lQ _p$. Then we have $$\label{eq:pmainconti2}
w_n ^{*}(\xi _{w,\eta })=\frac{2 w-2-\eta }{2+\eta },\quad w_n(\xi _{w,\eta })=\frac{2 w-\eta }{2+\eta }$$ for all $2\leq n\leq d$. Hence, we have $$w_n(\xi _{w,\eta })-w_n ^{*}(\xi _{w,\eta })=\frac{2}{2+\eta }$$ for all $2\leq n\leq d$.
The definition and notation of Schneider’s $p$-adic continued fractions can be found in [@Bugeaud15]. It seems that for each $d\geq 3$, the $p$-adic numbers $\xi $ defined by Theorems \[thm:pmainconti1\] and \[thm:pmainconti2\] are the first explicit continued fraction examples for which $w_d(\xi )$ and $w_d ^{*}(\xi )$ are different.
In what follows, we prepare lemmas in order to prove the above theorems. We omit the details of proofs of these lemmas.
We denote by $\lZ[X]_{\min }$ the set of all non-constant, irreducible, primitive polynomials in $\lZ[X]$ whose leading coefficients are positive. For $\xi \in \lR$ and an integer $n\geq 1$, we denote by $\tilde{w}_n(\xi )$ the supremum of the real numbers $w$ which satisfy $$0<|P(\xi )|\leq H(P)^{-w}$$ for infinitely many $P(X) \in \lZ[X]_{\min }$ of degree at most $n$. For $\xi \in \lQ_p$ and an integer $n\geq 1$, we denote by $\tilde{w}_n(\xi )$ the supremum of the real numbers $w$ which satisfy $$0<|P(\xi )|_p\leq H(P)^{-w-1}$$ for infinitely many $P(X) \in \lZ[X]_{\min }$ of degree at most $n$.
Using Gelfond’s Lemma (see Lemma A.3 in [@Bugeaud04]) instead of \eqref{eq:heighttwo}, we obtain an analogue of Lemma \[lem:weak\] for real numbers.
\[lem:weakr\] Let $n\geq 1$ be an integer and $\xi $ be a real number. Then we have $$w_n (\xi )=\tilde{w}_n (\xi ).$$
Using Gelfond’s Lemma and the fact that $|a||a|_p\geq 1$ for non-zero integers $a$, we obtain an analogue of Lemma \[lem:weak\] for $p$-adic numbers.
\[lem:weakp\] Let $n\geq 1$ be an integer and $\xi $ be in $\lQ _p$. Then we have $$w_n (\xi )=\tilde{w}_n (\xi ).$$
Analogues of Lemma \[lem:poly\] for real and $p$-adic numbers follow in a similar way to the proof of Lemma \[lem:poly\].
\[lem:polyr\] Let $\alpha ,\beta $ be real numbers and $P(X) \in \lZ[X]$ be a non-constant polynomial of degree $d$. Let $C\geq 0$ be a real number. Assume that $|\alpha -\beta |\leq C$. Then there exists a positive constant $C_2(\alpha ,C,d)$, depending only on $\alpha ,C,$ and $d$ such that $$|P(\alpha )-P(\beta )|\leq C_2(\alpha ,C,d)|\alpha -\beta |H(P).$$
\[lem:polyp\] Let $\alpha ,\beta $ be in $\lQ _p$ and $P(X) \in \lZ[X]$ be a non-constant polynomial of degree $d$. Let $C\geq 0$ be a real number. Assume that $|\alpha -\beta |_p\leq C$. Then there exists a positive constant $C_3(\alpha ,C,d)$, depending only on $\alpha ,C,$ and $d$ such that $$|P(\alpha )-P(\beta )|_p\leq C_3(\alpha ,C,d)|\alpha -\beta |_p.$$
Lemmas \[lem:weak\], \[lem:poly\], \[lem:height\], and \[lem:Galois\]–\[lem:Lio.inequ2\] are used in the proof of Lemma \[lem:bestquad\]. Analogues of Lemmas \[lem:Galois\]–\[lem:Lio.inequ2\] for real numbers and Lemmas \[lem:Galois\] and \[lem:Lio.inequ2\] for $p$-adic numbers are already known (see p.730 in [@Bugeaud12], Theorem A.1 and Corollary A.2 in [@Bugeaud04], and Lemmas 3.2 and 2.5 in [@Pejkovic12]). Slightly modifying the proof of Lemma 2.5 in [@Pejkovic12], we obtain an analogue of Lemma \[lem:Lio.inequ1\] for $p$-adic numbers. That is, for a non-constant polynomial $P(X)\in \lZ [X]$ of degree $m$ and an algebraic number $\alpha \in \lQ_p$ of degree $n$, $$\label{eq:liop}
P(\alpha )=0 \text{ or } |P(\alpha )|_p\geq cH(P)^{-n}H(\alpha )^{-m},$$ where $c$ is a positive constant depending only on $m$ and $n$. The inequality \eqref{eq:liop} is probably known. However, we were unable to find it in the literature.
Using Lemma A.2 in [@Bugeaud04] instead of \eqref{eq:heighteq}, in addition to analogues of Lemmas \[lem:weak\], \[lem:poly\], and \[lem:Galois\]–\[lem:Lio.inequ2\], we obtain an analogue of Lemma \[lem:bestquad\] for real numbers.
\[lem:bestquadr\] Let $d\geq 2$ be an integer. Let $\xi $ be a real number, and $\theta ,\rho ,\delta $ be positive numbers. Assume that there exists a sequence $(\alpha _j)_{j\geq 1}$ such that for any $j\geq 1$, $\alpha _j\in \lR$ is quadratic, $(H(\alpha _j))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\limsup_{j\rightarrow \infty } \frac{\log H(\alpha _{j+1})}{\log H(\alpha _j)}\leq \theta ,\\
d+\delta \leq \liminf_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)},\quad \limsup_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|}{\log H(\alpha _j)}\leq d+\rho .
\end{gathered}$$ If $2d\theta \leq (d-2+\rho )\delta $, then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho .$$ Furthermore, assume that there exist a non-negative number $\varepsilon $ and a positive number $c$ such that for any $j\geq 1, 0<|\alpha _j-\alpha _j '|\leq c$ and $$\limsup_{j\rightarrow \infty }\frac{-\log |\alpha _j -\alpha _j '|}{\log H(\alpha _j)}\geq \varepsilon .$$ If $2d\theta \leq (d-2+\delta )\delta $, then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi ).$$ Finally, assume that there exists a non-negative number $\chi $ such that $$\limsup_{i\rightarrow \infty} \frac{-\log |\alpha _i-\alpha _i '|}{\log H(\alpha _i)}\leq \chi .$$ Then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi )\leq \chi .$$
\[lem:bestquadp\] Let $d\geq 2$ be an integer. Let $\xi $ be in $\lQ _p$ with $|\xi |_p\leq 1$ and $\theta ,\rho ,\delta $ be positive numbers. Assume that there exists a sequence $(\alpha _j)_{j\geq 1}$ such that for any $j\geq 1$, $\alpha _j\in \lQ_p$ is quadratic, $(H(\alpha _j))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\limsup_{j\rightarrow \infty } \frac{\log H(\alpha _{j+1})}{\log H(\alpha _j)}\leq \theta ,\\
d+\delta \leq \liminf_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|_p}{\log H(\alpha _j)},\quad \limsup_{j\rightarrow \infty } \frac{-\log |\xi -\alpha _j|_p}{\log H(\alpha _j)}\leq d+\rho .
\end{gathered}$$ If $2d\theta \leq (d-2+\rho )\delta $, then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho .$$ Furthermore, assume that there exists a non-negative number $\varepsilon $ such that for any $j\geq 1, 0<|\alpha _j-\alpha _j '|_p\leq 1$ and $$\limsup_{j\rightarrow \infty }\frac{-\log |\alpha _j -\alpha _j '|_p}{\log H(\alpha _j)}\geq \varepsilon .$$ If $2d\theta \leq (d-2+\delta )\delta $, then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi ).$$ Finally, assume that there exists a non-negative number $\chi $ such that $$\limsup_{i\rightarrow \infty} \frac{-\log |\alpha _i-\alpha _i '|_p}{\log H(\alpha _i)}\leq \chi .$$ Then we have for all $2\leq n\leq d$, $$d-1+\delta \leq w_n ^{*}(\xi )\leq d-1+\rho ,\quad \varepsilon \leq w_n(\xi )-w_n ^{*}(\xi )\leq \chi .$$
Note that the proof of Lemma \[lem:bestquadp\] uses analogues of Lemmas \[lem:weak\], \[lem:poly\], and \[lem:Galois\]–\[lem:Lio.inequ2\] and the following: Let $P_j(X)=A_j(X-\alpha _j)(X-\alpha _j')\in \lZ[X]_{\min }$ be the minimal polynomial of $\alpha _j$. Then, by the proof of Lemma 5 in [@Bugeaud15], we have $|A_j|_p=1$ for sufficiently large $j$.
For any $j\geq 2$, we put $$\xi _{w,j}:=[0,a_{1,w},\ldots ,a_{\lfloor w^j\rfloor ,w},\overline{a}]\in \lR.$$ It follows from the proof of Theorem 4.1 in [@Bugeaud11] that $\xi _{w,j}$ are quadratic irrationals, $(H(\xi _{w,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _w -\xi _{w,j}|}{\log H(\xi _{w,j})}=w,\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,j} -\xi _{w,j} '|}{\log H(\xi _{w,j})}=1,\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,j+1})}{\log H(\xi _{w,j})}\leq w.
\end{gathered}$$ Hence, we have \eqref{eq:rmainconti1} by Lemma \[lem:bestquadr\].
For any $j\geq 2$, we put $$\xi _{w,\eta, j}:=[0,a_{1,w,\eta },\ldots ,a_{\lfloor w^j\rfloor ,w,\eta },\overline{a,\ldots ,a,c}]\in \lR,$$ where the length of the periodic part is $\lfloor \eta w^j\rfloor $. It follows from the proof of Theorem 4.3 in [@Bugeaud11] that $\xi _{w,\eta ,j}$ are quadratic irrationals, $(H(\xi _{w,\eta ,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta } -\xi _{w,\eta ,j}|}{\log H(\xi _{w,\eta ,j})}=\frac{2 w}{2+\eta },\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta ,j} -\xi _{w,\eta ,j} '|}{\log H(\xi _{w,\eta ,j})}=\frac{2}{2+\eta },\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,\eta ,j+1})}{\log H(\xi _{w,\eta ,j})}\leq w.
\end{gathered}$$ Hence, we have \eqref{eq:rmainconti2} by Lemma \[lem:bestquadr\].
For any $j\geq 2$, we put $$\xi _{w,j}:=[a_{1,w},\ldots ,a_{\lfloor w^j\rfloor ,w},\overline{a_{\lfloor w^j\rfloor +1,w}}]\in \lQ_p.$$ It follows from the proof of Theorem 1 in [@Bugeaud15] that $\xi _{w,j}$ are quadratic irrationals, $(H(\xi _{w,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _w -\xi _{w,j}|_p}{\log H(\xi _{w,j})}=w,\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,j} -\xi _{w,j} '|_p}{\log H(\xi _{w,j})}=1,\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,j+1})}{\log H(\xi _{w,j})}\leq w.
\end{gathered}$$ Hence, we have \eqref{eq:pmainconti1} by Lemma \[lem:bestquadp\].
For any $j\geq 2$, we put $$\xi _{w,\eta, j}:=[a_{1,w,\eta },\ldots ,a_{\lfloor w^j\rfloor ,w,\eta },\overline{a_{\lfloor w^j\rfloor +1,w,\eta },\ldots ,a_{\lfloor w^j\rfloor +\lfloor \eta w^j\rfloor,w,\eta }}]\in \lQ _p.$$ It follows from the proof of Theorem 2 in [@Bugeaud15] that $\xi _{w,\eta ,j}$ are quadratic irrationals, $(H(\xi _{w,\eta ,j}))_{j\geq 1}$ is a divergent increasing sequence, and $$\begin{gathered}
\lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta } -\xi _{w,\eta ,j}|_p}{\log H(\xi _{w,\eta ,j})}=\frac{2 w}{2+\eta },\quad \lim _{j\rightarrow \infty }\frac{-\log |\xi _{w,\eta ,j} -\xi _{w,\eta ,j} '|_p}{\log H(\xi _{w,\eta ,j})}=\frac{2}{2+\eta },\\
\limsup _{j\rightarrow \infty }\frac{\log H(\xi _{w,\eta ,j+1})}{\log H(\xi _{w,\eta ,j})}\leq w.
\end{gathered}$$ Hence, we have \eqref{eq:pmainconti2} by Lemma \[lem:bestquadp\].
Acknowledgements {#acknowledgements .unnumbered}
----------------
The author would like to express his gratitude to Prof. Shigeki Akiyama for the helpful comments on Lemma \[lem:seq\] and improving the language of this paper. The author is deeply grateful to Prof. Hajime Kaneko for the helpful comments on Corollary \[cor:linind\] and Lemma \[lem:seq\]. The author also wishes to express his thanks to Prof. Dinesh S. Thakur for several helpful comments. The author would like to thank the referee for careful reading of this paper and giving helpful comments.
[9]{} J.-P. Allouche, J. Shallit, [*Automatic Sequences: Theory, Applications, Generalizations*]{}, Cambridge University Press, Cambridge (2003).
M. Amou, [*Approximation to certain transcendental decimal fractions by algebraic numbers*]{}, J. Number Theory **37** (1991), no. 2, 231–241.
R. C. Baker, [*On approximation with algebraic numbers of bounded degree*]{}, Mathematika **23** (1976), no. 1, 18–31.
Y. Bugeaud, [*Approximation by algebraic numbers*]{}, Cambridge Tracts in Mathematics, **160** Cambridge University Press, Cambridge, 2004.
Y. Bugeaud, [*Mahler’s classification of numbers compared with Koksma’s. III*]{}, Publ. Math. Debrecen **65** (2004), no. 3-4, 305–316.
Y. Bugeaud, [*Diophantine approximation and Cantor sets*]{}, Math. Ann. **341** (2008), no. 3, 677–684.
Y. Bugeaud, [*On simultaneous rational approximation to a real number and its integral powers*]{}, Ann. Inst. Fourier (Grenoble) **60** (2010), no. 6, 2165–2182.
Y. Bugeaud, A. Dujella, [*Root separation for irreducible integer polynomials*]{}, Bull. Lond. Math. Soc. **43** (2011), no. 6, 1239–1244.
Y. Bugeaud, N. Budarina, D. Dickinson, H. O’Donnell, [*On simultaneous rational approximation to a $p$-adic number and its integral powers*]{}, Proc. Edinb. Math. Soc. (2) **54** (2011), no. 3, 599–612.
Y. Bugeaud, [*Continued fractions with low complexity: transcendence measures and quadratic approximation*]{}, Compos. Math. **148** (2012), no. 3, 718–750.
Y. Bugeaud, T. Pejković, [*Quadratic approximation in $\lQ _p$*]{}, Int. J. Number Theory **11** (2015), no. 1, 193–209.
Y. Bugeaud, [*Quadratic approximation to automatic continued fractions*]{}, J. Théor. Nombres Bordeaux **27** (2015), no. 2, 463–482.
J. W. S. Cassels, [*Local fields*]{}, London Mathematical Society Student Texts, **3** Cambridge University Press, Cambridge, (1986).
T. Chaichana, V. Laohakosol, A. Harnchoowong, [*Linear independence of continued fractions in the field of formal series over a finite field*]{}, Thai J. Math. **4** (2006), no. 1, 163–177.
H-J. Chen, [*Distribution of Diophantine approximation exponents for algebraic quantities in finite characteristic*]{}, J. Number Theory **133** (2013), no. 11, 3620–3644.
G. Christol, T. Kamae, M. Mendes France, G. Rauzy, [*Suites algébriques, automates et substitutions*]{}, (French) Bull. Soc. Math. France **108** (1980), no. 4, 401–419.
A. Firicel, [*Rational approximations to algebraic Laurent series with coefficients in a finite field*]{}, Acta Arith. **157** (2013), no. 4, 297–322.
J. F. Koksma, [*Über die Mahlersche Klasseneinteilung der transzendenten Zahlen und die Approximation komplexer Zahlen durch algebraische Zahlen*]{}, (German) Monatsh. Math. Phys. **48**, (1939), 176–189.
A. Lasjaunias, [*A survey of Diophantine approximation in fields of power series*]{}, Monatsh. Math. **130** (2000), no. 3, 211–229.
K. Mahler, [*Zur Approximation der Exponentialfunktionen und des Logarithmus, I, II*]{}, (German) J. reine angew. Math., **166**, (1932), 118–150.
K. Mahler, [*On a theorem of Liouville in fields of positive characteristic*]{}, Canadian J. Math. **1**, (1949), 397–400.
B. de Mathan, [*Approximations diophantiennes dans un corps local*]{}, (French) Bull. Soc. Math. France Suppl. Mém. **21** (1970), 93pp.
B. de Mathan, [*Approximation exponents for algebraic functions in positive characteristic*]{}, Acta Arith. **60** (1992), no. 4, 359–370.
R. Müller, [*Algebraische Unabhängigkeit der Werte gewisser Lückenreihen in nicht-archimedisch bewerteten Körpern*]{}, (German) Results Math., **24**, (1993), no. 3–4, 288–297.
T. Ooto, [*Quadratic approximation in $\lF _q((T^{-1}))$*]{}, Osaka J. Math. **54** (2017), no. 1, 129–156.
T. Pejković, [*Polynomial root separation and applications*]{}, PhD Thesis, Université de Strasbourg and University of Zagreb, Strasbourg, 2012.
N. Popescu, A. Zaharescu, [*On the structure of the irreducible polynomials over local fields*]{}, J. Number Theory **52** (1995), no. 1, 98–118.
A. J. van der Poorten, J. Shallit, [*Folded continued fractions*]{}, J. Number Theory **40** (1992), no. 2, 237–250.
K. F. Roth, [*Rational approximations to algebraic numbers*]{}, Mathematika **2** (1955), 1–20.
W. M. Schmidt, [*On continued fractions and Diophantine approximation in power series fields*]{}, Acta Arith. **95** (2000), no. 2, 139–166.
J. Shallit, [*Simple continued fractions for some irrational numbers*]{}, J. Number Theory **11** (1979), 209–217.
V. G. Sprindzuk, [*Mahler’s problem in metric number theory*]{}, Izdat. “Nauka i Tehnika”, Minsk, (Russian). English translation by B. Volkmann, [*Translations of Mathematical Monographs*]{}, **25**, American Mathematical Society, Providence, RI, 1969.
D. S. Thakur, [*Diophantine approximation exponents and continued fractions for algebraic power series*]{}, J. Number Theory **79** (1999), no. 2, 284–291.
D. S. Thakur, [*Higher Diophantine approximation exponents and continued fraction symmetries for function fields*]{}, Proc. Amer. Math. Soc. **139** (2011), no. 1, 11–19.
D. S. Thakur, [*Higher Diophantine approximation exponents and continued fraction symmetries for function fields II*]{}, Proc. Amer. Math. Soc. **141** (2013), no. 8, 2603–2608.
---
abstract: |
In this paper, the evolution of the longitudinal proton structure function has been obtained at small $x$ up to next-to-next-to-leading order using a hard pomeron behaviour. The evolutions of the gluonic and the heavy longitudinal structure functions have been obtained separately and the total contributions have been calculated. The total longitudinal structure function has been compared with the results of the Donnachie-Landshoff (DL) model, the Color Dipole (CD) model, $k_{T}$ factorization and H1 data.
PACS number(s): 12.38.-t, 12.38.Bx, 14.70.Dj
Keywords : [Longitudinal structure function; Gluon distribution; QCD; Small-$x$; Regge-like behavior]{}
author:
- 'G. R. Boroun'
- 'B. Rezaei'
- 'J. K. Sarma'
title: A Phenomenological Solution at Small $x$ to the Longitudinal Structure Function Dynamical Behaviour
---
I. Introduction
---------------
The measurement of the longitudinal structure function $F_{L}(x,Q^{2})$ is of great theoretical importance, since it may allow us to distinguish between different models describing the QCD evolution at small $x$. In deep-inelastic scattering (DIS), the structure function measurements remain incomplete until the longitudinal structure function $F_{L}$ is actually measured \[1\]. The longitudinal structure function in DIS is one of the observables from which the gluon distribution can be unfolded. The dominant contribution to $F_{L}(x,Q^{2})$ at small $x$ comes from the gluon operators. Hence a measurement of $F_{L}(x,Q^{2})$ can be used to extract the gluon structure function and therefore the measurement of $F_{L}$ provides a sensitive test of perturbative QCD (pQCD) \[2-3\].\
The experimental determination of $F_{L}$ is in general difficult and requires a measurement of the inelastic cross section at the same values of $x$ and $Q^{2}$ but for different center-of-mass energies of the incoming beams. This was achieved at the DESY electron-proton collider HERA by changing the proton beam energy with the lepton beam energy fixed. HERA collected $e^{+}p$ collision data with the H1 and ZEUS detectors at a positron beam energy of $27.5~GeV$ and proton beam energies of $920$, $575$ and $460~GeV$, which allowed a measurement of structure functions at $x$ values $5{\times}10^{-6}{\leq}x{\leq}0.02$ and $Q^{2}$ values $0.2~ GeV^{2} {\leq}Q^{2}{\leq}800 ~GeV^{2}$ \[4\].\
Since the longitudinal structure function $F_{L}$ contains rather large heavy flavor contributions in the small-$x$ region, the measurement of these observables tells us about the different schemes used to calculate the heavy quark contribution to the structure function and also about the dependence of parton distribution functions (PDFs) on heavy quark masses \[5\]. For the PDFs we need to use the corresponding massless Wilson coefficients up to next-to-next-to-leading order (NNLO) \[6-14\], but we determine the heavy contributions to the longitudinal structure function at leading order and next-to-leading order by using massive Wilson coefficients in the asymptotic region $Q^{2}{\gg}m_{h}^{2}$, where $m_h$ is the mass of the heavy quark \[15-21\].\
The dominant role at small $x$ is played by gluons, and the basic dynamical quantity is the unintegrated gluon distribution $f(x,Q_{t}^{2})$, where $Q_{t}$ is its transverse momentum. In the leading $\ln(1/x)$ approximation, the Lipatov equation, which takes into account all the $LL(1/x)$ terms, has the following form \[22-27\] $$\begin{aligned}
f(x,Q_{t}^{2})&=&f^{0}(x,Q_{t}^{2})+\frac{N_{c}\alpha_{s}}{\pi}\int_{x}^{1}\frac{dx'}{x'}\int\frac{d^{2}q}{{\pi}q^{2}}[\frac{Q_{t}^{2}}{(\mathbf{q}+\mathbf{Q_{t}})^{2}}\nonumber\\
&&f(x',(\mathbf{q}+\mathbf{Q_{t}})^{2})-f(x',Q_{t}^{2})\Theta(Q_{t}^{2}-q^{2})],\nonumber\\\end{aligned}$$ where $$f(x,Q_{t}^{2}){\equiv}
\frac{{\partial}[xg(x,Q^{2})]}{{\partial}\ln
Q^{2}}|_{Q^{2}=Q_{t}^{2}}.$$ The NLO corrections can be found in \[28-29\]. This equation sums the ladder diagrams with a gluon exchange accompanied by virtual corrections that are responsible for the gluon reggeization. The analytical solution at small $x$ is given by $$\begin{aligned}
f(x,Q_{t}^{2}){\sim}\mathcal{R}(x,Q_{t}^{2})x^{-\lambda_{BFKL}}\end{aligned}$$ where $\lambda_{BFKL}=4\frac{N_{c}\alpha_{s}}{\pi}\ln(2)$ at LO and at NLO it has the following form $$\lambda_{BFKL}=4\frac{N_{c}\alpha_{s}}{\pi}\ln(2)[1-\frac{\alpha_{s}\beta_{0}}{\pi}(\ln2-\frac{\pi^{2}}{16})].$$ The quantity $1+\lambda_{BFKL}$ is equal to the intercept of the so-called BFKL Pomeron. In the phenomenological analysis of the high energy behavior of hadronic as well as photoproduction total cross sections, the value of the intercept is determined as $\alpha_{soft}{\approx}1.08$ (as this is the effective soft Pomeron) \[30\]. In DIS, a second hard Pomeron must be added with a larger intercept $\alpha_{hp}{\approx}1.4$ \[31-35\]; this value has recently decreased to $1.317$, as estimated directly from the data on the unpolarized proton structure function \[36\].\
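For orientation, the size of the intercept can be estimated numerically from the LO and NLO expressions for $\lambda_{BFKL}$ given above. The sketch below is illustrative only: the values $\alpha_{s}=0.2$ and $N_{f}=4$ are assumptions rather than inputs of this paper, and $\beta_{0}$ is taken in the one-loop form quoted later in the text.

```python
# Rough numerical evaluation of the LO and NLO BFKL intercepts quoted above
# (illustrative; alpha_s and N_f are assumed sample values).
from math import log, pi

N_c, N_f = 3, 4
T_f = 0.5 * N_f
beta0 = 11 * N_c / 3 - 4 * T_f / 3         # one-loop beta-function coefficient
alpha_s = 0.2                              # assumed typical small-x value

lam_LO = 4 * N_c * alpha_s / pi * log(2)
lam_NLO = lam_LO * (1 - alpha_s * beta0 / pi * (log(2) - pi ** 2 / 16))
print(lam_LO, lam_NLO)                     # LO ~ 0.53; the NLO factor lowers it
```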
It is tempting, however, to explore the possibility of obtaining approximate analytic solutions of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) \[37-39\] equations themselves in the restricted domain of small-$x$ at least. Approximate solutions of the DGLAP equations have been reported \[40-48\] with considerable phenomenological success.\
The objective of this paper is the calculation of the evolution equation of $F_{L}$ using a hard Pomeron behavior at small $x$ from LO up to NNLO. Therefore we concentrate on the Pomeron in our calculations, as good fits to the data show that the gluon distribution and the singlet structure function need a model with a hard Pomeron. Our paper is organized as follows: in section $I$, which is the introduction, we describe the background in short. Section $II$ is the theory, where we discuss the non-singlet, light and heavy parts of the longitudinal structure function separately in detail. Section $III$ contains the results and the discussion of the results. Section $IV$ gives the overall conclusions in brief. Lastly, in appendix A and appendix B we give the required explicit forms of the splitting functions and coefficient functions, respectively.\
II. Theory
----------
In perturbative QCD, the longitudinal structure function can be written as \[6,11-16\] $$\begin{aligned}
x^{-1}F_{L}&=&C_{L,ns}{\otimes}q_{ns}+<e^{2}>(C_{L,q}{\otimes}q_{s}+C_{L,g}{\otimes}g)\nonumber\\
&&+x^{-1}F^{heavy}_{L}\end{aligned}$$ where ${\otimes}$ denotes the common convolution in the $N_{f}=3$ light quark flavor sector, and $q_{i}$ and $g$ represent the number distributions of quarks and gluons, respectively, in the fractional hadron momentum. $q_{s}$ stands for the flavour-singlet quark distribution, $q_{s}=\sum_{u,d,s}(q+\overline{q})$, and $q_{ns}$ is the corresponding non-singlet combination. The average squared charge ($=\frac{2}{9}$ for light quarks) is represented by $<e^{2}>$. The symbol ${\otimes}$ represents the standard Mellin convolution and is given by $$\begin{aligned}
A(x){\otimes}B(x)=\int_{x}^{1}\frac{dy}{y}A(y)B(\frac{x}{y}).\end{aligned}$$ The perturbative expansion of the coefficient functions can be written \[11-14\] as $$\begin{aligned}
C_{L,a}(\alpha_{s},x)=\sum_{n=1}(\frac{\alpha_{s}}{4\pi})^{n}c^{(n)}_{L,a}(x).\end{aligned}$$ In Eq.7, the superscript of the coefficients on the right-hand side represents the order in $\alpha_{s}$ and not, as for the splitting functions, the ‘$m$’ in $N^{m}LO$ \[11-13\]. According to Eq.5, we display the individual contributions separately and then discuss their evolutions as $$F^{total}_{L}=F^{ns}_{L}+F^{light}_{L}(=F^{q}_{L}+F^{g}_{L})+F^{heavy}_{L}.$$
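The convolutions entering the above decomposition can be evaluated numerically once the coefficient functions and the distributions are specified. The short sketch below illustrates only the integral defining the Mellin convolution in Eq.6; the functions `A` and `B` are toy choices of ours, not the actual Wilson coefficients or parton distributions used in this paper.

```python
# Numerical Mellin convolution (A otimes B)(x) = int_x^1 (dy/y) A(y) B(x/y),
# evaluated with the trapezoidal rule (illustrative sketch with toy inputs).
import numpy as np

def mellin_convolution(A, B, x, n_points=2000):
    y = np.linspace(x, 1.0, n_points)
    f = A(y) * B(x / y) / y
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

A = lambda y: y * (1.0 - y)        # toy "coefficient function"
B = lambda z: z ** (-0.3)          # toy "parton distribution" with a small-x rise
print(mellin_convolution(A, B, 1e-3))
```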
[**A)**]{} [**Evolution of non-singlet longitudinal structure function:**]{}\
The non-singlet longitudinal structure function $F_{L,ns}$ obtained from the connection between the quark coefficient function $C_{L,ns}$ and quark distribution $q_{ns}$ is given by \[49\], $$\begin{aligned}
\mathcal{F}^{ns}_{L}(x,Q^{2}){\equiv}~x^{-1}F^{ns}_{L}(x,Q^{2})=C_{L,ns}{\otimes}q_{ns}(x,Q^{2}).\end{aligned}$$ By differentiating Eq.9 with respect to $Q^{2}$ by means of the evolution equations for $a_{s}=\frac{\alpha_{s}}{4\pi}$ and $q_{ns}(x,Q^{2})$, $$\begin{aligned}
\frac{da_{s}}{d{\ln}Q^{2}}&=&\beta(a_{s})=-\beta_{0}a_{s}^{2}-\beta_{1}a_{s}^{3}-\beta_{2}a_{s}^{4}-..,\end{aligned}$$
where $$\begin{aligned}
\beta_{0}&=&\frac{11}{3}N_{c}-\frac{4}{3}T_{f},\nonumber\\
\beta_{1}&=&\frac{34}{3}N_{c}^{2}-\frac{20}{3}N_{c}T_{f}-4C_{F}T_{f},\nonumber\\
\beta_{2}&=&\frac{2857}{54}N_{C}^{3}+2C_{F}^{2}T_{f}-\frac{205}{9}C_{F}N_{C}T_{f}\nonumber\\
&&-\frac{1415}{27}N_{C}^{2}T_{f}+\frac{44}{9}C_{F}T_{f}^{2}+\frac{158}{27}N_{C}T_{f}^{2},\end{aligned}$$ are the one-loop, two-loop and three-loop corrections to the QCD $\beta$-function and $$\begin{aligned}
N_{C}=3,~ ~C_{F}=\frac{N_{C}^{2}-1}{2N_{C}}=\frac{4}{3},~~
T_{f}=T_{R}N_{f}=\frac{1}{2}N_{f},\end{aligned}$$ where $N_{C}$ is the number of colors and $N_{f}$ is the number of active flavors,
and $$\begin{aligned}
\frac{dq_{ns}}{d{\ln}Q^{2}}=P_{ns}{\otimes}q_{ns}.\end{aligned}$$ The non-singlet evolution equation for the longitudinal structure function at large $x$ is obtained as
$$\begin{aligned}
\frac{d\mathcal{F}^{ns}_{L}(x,Q^{2})}{d{\ln}Q^{2}}&=&\{P_{ns}(a_{s})+\beta(a_{s})\frac{d}{da_{s}}{\ln}c_{L,ns}(a_{s})\}{\otimes}\mathcal{F}^{ns}_{L}\nonumber\\
&&=K_{L,ns}{\otimes}\mathcal{F}^{ns}_{L}(x,Q^{2}).\end{aligned}$$
\
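The beta-function coefficients quoted in Eqs.10-12 can also be evaluated numerically; the short sketch below (the flavour numbers are chosen purely for illustration) reproduces the familiar values.

```python
def beta_coefficients(nf, nc=3.0):
    """One-, two- and three-loop QCD beta-function coefficients, Eqs.10-12."""
    cf = (nc ** 2 - 1.0) / (2.0 * nc)
    tf = 0.5 * nf                                  # T_f = T_R * N_f with T_R = 1/2
    beta0 = 11.0 / 3.0 * nc - 4.0 / 3.0 * tf
    beta1 = 34.0 / 3.0 * nc ** 2 - 20.0 / 3.0 * nc * tf - 4.0 * cf * tf
    beta2 = (2857.0 / 54.0 * nc ** 3 + 2.0 * cf ** 2 * tf - 205.0 / 9.0 * cf * nc * tf
             - 1415.0 / 27.0 * nc ** 2 * tf + 44.0 / 9.0 * cf * tf ** 2
             + 158.0 / 27.0 * nc * tf ** 2)
    return beta0, beta1, beta2

if __name__ == "__main__":
    for nf in (3, 4, 5):                           # illustrative flavour numbers
        print(nf, beta_coefficients(nf))
```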
[**B)**]{} [**Evolution of light longitudinal structure function:**]{}\
Now, we present our evolution equation for the light longitudinal structure function in electromagnetic DIS at three loops, where ‘light’ refers to the common $u,d,s$ quarks and their antiquarks, together with the gluon distribution, as \[5-16, 49-53, 11-12\] $$\begin{aligned}
F^{light}_{L}(x,Q^{2})&=&C_{L,q}{\otimes}F^{s}_{2}(x,Q^{2})+<e^{2}>C_{L,g}{\otimes}G(x,Q^{2})\nonumber\\
&&=\sum_{n=1}(\frac{\alpha_{s}}{4\pi})^{n}[c^{(n)}_{L,q}(x){\otimes}F^{s}_{2}(x,Q^{2})\nonumber\\
&&+<e^{2}>c^{(n)}_{L,g}(x){\otimes}G(x,Q^{2})]\nonumber\\
&&{\equiv}\sum_{n=1}F^{(n),light}_{L}(x,Q^{2}),\end{aligned}$$ where $F_{2}^{s}$ refers to the singlet structure function and $G(=xg)$ is the gluon distribution function. The singlet part of the Wilson coefficients is decomposed into a non-singlet and a pure-singlet contribution, $$\begin{aligned}
c^{(n)}_{L,q}=c^{(n)}_{L,ns}+c^{(n)}_{L,ps}.\end{aligned}$$ We start by differentiating Eq.15 with respect to ${\ln}Q^{2}$ as
$$\begin{aligned}
\frac{{\partial}F^{light}_{L}(x,Q^{2})}{{\partial}\ln
Q^{2}}&=&\sum_{n=1}n\frac{{d}{\ln}\alpha_{s}}{{{d}\ln
Q^{2}}}[(\frac{\alpha_{s}}{({4\pi})})^{n}[c^{(n)}_{L,q}(x){\otimes}F^{s}_{2}(x,Q^{2})+<e^{2}>c^{(n)}_{L,g}(x){\otimes}G(x,Q^{2})]]\nonumber\\
&&+\sum_{n=1}(\frac{\alpha_{s}}{4\pi})^{n}
[c^{(n)}_{L,q}(x){\otimes}\frac{{\partial}F^{s}_{2}(x,Q^{2})}{{\partial}\ln
Q^{2}}+<e^{2}>c^{(n)}_{L,g}{\otimes}\frac{{\partial}G(x,Q^{2})}{{\partial}\ln
Q^{2}}]\nonumber\\
&&=\frac{{d}{\ln}\alpha_{s}}{{{d}\ln Q^{2}}}[\sum_{n=1}
n{\times}F^{(n),light}_{L}(x,Q^{2})]+\sum_{n=1}(\frac{\alpha_{s}}{4\pi})^{n}
[c^{(n)}_{L,q}(x){\otimes}\frac{{\partial}F^{s}_{2}(x,Q^{2})}{{\partial}\ln
Q^{2}}+<e^{2}>c^{(n)}_{L,g}{\otimes}\frac{{\partial}G(x,Q^{2})}{{\partial}\ln
Q^{2}}].\nonumber\\\end{aligned}$$
The general mathematical structure of the DGLAP equation is \[37-39\] $$\begin{aligned}
\frac{{\partial}xf(x,Q^{2})}{{\partial}\ln Q^{2}}=
P(x,\alpha_{s}(Q^{2})){\otimes}xf(x,Q^{2}).\end{aligned}$$ The perturbative expansion of the kernels and of the beta function at LO up to NNLO are respectively $$\begin{aligned}
P(x,\alpha_{s})&=&(\frac{\alpha_{s}}{2\pi})P^{(LO)}(x)+(\frac{\alpha_{s}}{2\pi})^{2}P^{(NLO)}(x)\nonumber\\
&&+(\frac{\alpha_{s}}{2\pi})^{3}P^{(NNLO)}(x)+... ~.\end{aligned}$$ The DGLAP evolution equations for the singlet quark structure function and the gluon distribution are given by \[37-39,50-51\]
$$\frac{{\partial}G(x,Q^{2})}{{\partial}{\ln}Q^{2}}=P_{gg}(x,\alpha_{s}(Q^{2})){\otimes}
G(x,Q^{2})+P_{gq}(x,\alpha_{s}(Q^{2})){\otimes} F_{2}^{s}(x,Q^{2})$$
and
$$\frac{{\partial}F_{2}^{s}(x,Q^{2})}{{\partial}{\ln}Q^{2}}=
P_{qq}(x,\alpha_{s}(Q^{2})){\otimes}
F_{2}^{s}(x,Q^{2})+2n_{f}P_{qg}(x,\alpha_{s}(Q^{2})){\otimes}
G(x,Q^{2}).$$
After substituting Eqs.20 and 21 in Eq.17 and using Eq.15, we cannot find a closed evolution equation for the singlet longitudinal structure function, because it contains both the singlet and the gluon distributions. However, at small values of $x$ ($x \leq 10^{-3}$), the gluon contribution to the light $F_{L}$ structure function dominates over the singlet and non-singlet contributions \[49\]. Therefore $F^{light}_{L} {\rightarrow} F^{g}_{L}$ and we have the gluonic longitudinal structure function as $$\begin{aligned}
F^{g}_{L}(x,Q^{2})&=&\sum_{n=1}(\frac{\alpha_{s}}{4\pi})^{n}<e^{2}>c^{(n)}_{L,g}(x){\otimes}G(x,Q^{2})\nonumber\\
&&{\equiv}\sum_{n=1}F^{(n),g}_{L}(x,Q^{2}).\end{aligned}$$ By differentiating this equation with respect to $Q^{2}$ by means of the evolution equations for $\alpha_{s}(Q^{2})$ and $G(x,Q^{2})$ according to Eq.20 at small-$x$ and assuming gluon distribution is dominant, we find that $$\begin{aligned}
\frac{{\partial}F^{g}_{L}(x,Q^{2})}{{\partial}\ln
Q^{2}}&=&\frac{{d}{\ln}\alpha_{s}}{{d}\ln
Q^{2}}[\sum_{n=1}n{\times}F^{(n),g}_{L}(x,Q^{2})]\nonumber\\
&&+P_{gg}{\otimes}F^{g}_{L}(x,Q^{2}).\end{aligned}$$ The explicit forms of the splitting functions up to third order are given in Appendix A. Eq.23 leads to the gluonic longitudinal evolution equation at small $x$, which can be solved using the hard-Pomeron behavior of the gluon distribution function up to NNLO. This issue is the subject of the next section.\
[**C)**]{} [**Evolution of heavy longitudinal structure function:**]{}\
One of the important areas of research at accelerators is the study of heavy flavor production. Heavy flavors can be produced in electron-positron, hadron-hadron, photon-hadron and lepton-hadron interactions. We concentrate on the last, and in particular on electron-proton collisions, in which heavy flavor production is investigated experimentally at HERA. In pQCD calculations the production of heavy quarks at HERA (and recently at the LHC) proceeds dominantly via direct boson-gluon fusion (BGF), where the photon interacts indirectly with a gluon in the proton through the exchange of a heavy quark pair \[54-61\]. The data for heavy quark (c, b) production in the BGF dynamics have been theoretically described in the fixed-flavor-number factorization scheme by fully predictive fixed-order perturbation theory. According to the recent HERA measurements, the charm contribution to the structure function at small $x$ is a large fraction of the total, approximately $30\%$ ($1\%$) of the total for the charm (bottom) quarks, respectively. This behavior is directly related to the growth of the gluon distribution at small-$x$. Since gluons couple only through the strong interaction, they are not directly probed in DIS. Therefore, the study of charm production in deep inelastic electron-proton scattering probes the gluon indirectly via the $g{\rightarrow}q\bar{q}$ transition, through the reaction $$\begin{aligned}
e^{-}(l_{1})+P(p){\rightarrow}e^{-}(l_{2})+C(p_{1})\overline{C}(p_{2})+X,\end{aligned}$$ where $X$ stands for any final hadronic state.\
We now derive our master formula for the evolution of $F_{L}^{c}$ at small values of $x$, which has the advantage of being independent of the gluon distribution function. In the range of small $x$, where only the gluon and quark-singlet contributions matter and the non-singlet contributions are negligibly small, we have \[18\] $$\begin{aligned}
F_{L}^{c}(x,Q^{2})=\sum_{a}\sum_{l}C_{L,a}^{l}(x,Q^{2}){\otimes}xf_{a}^{l}(x,Q^{2}),\end{aligned}$$ with parton label $a=g, q, \overline{q}$, where $q$ generically denotes the light-quark flavours and $l = \pm $ labels the usual $+$ and $-$ linear combinations of the gluon and quark-singlet contributions. Here $C^{l}_{L,a}(x,Q^{2})$ is the DIS coefficient function, which can be calculated perturbatively in the parton model of QCD (Appendix B). A further simplification is obtained by neglecting the contributions due to incoming light quarks and antiquarks in Eq.25, which is justified because they vanish at LO and are numerically suppressed at NLO for small values of $x$. Therefore, Eq.25 at small values of $x$ can be rewritten as $$\begin{aligned}
F_{L}^{c}(x,Q^{2})&=&C_{L,g}^{c}(x,Q^{2}){\otimes}G(x,Q^{2})\nonumber\\
&&{\equiv}\sum_{n=1}F^{(n),c}_{L}(x,Q^{2}),\end{aligned}$$ where $n$ is the order of $\alpha_{s}$. Exploiting the derivatives of the charm longitudinal structure function with respect to ${\ln}Q^{2}$ and inserting the DGLAP evolution equation, we find that $$\begin{aligned}
\frac{{\partial}F^{c}_{L}(x,Q^{2})}{{\partial}\ln
Q^{2}}&=&\frac{{d}{\ln}\alpha_{s}}{{d}\ln
Q^{2}}\sum_{n=1}[n{\times}F^{(n),c}_{L}(x,Q^{2})]\nonumber\\
&&+P_{gg}{\otimes}F^{c}_{L}(x,Q^{2})+\frac{d{\ln}C_{L,g}^{c}}{d{\ln
}Q^{2}}{\otimes}F^{c}_{L}(x,Q^{2}).\nonumber\\\end{aligned}$$
III. Results and Discussions
----------------------------
According to the last subsections, we can determine the gluonic longitudinal structure function up to NNLO and the charm longitudinal structure function up to NLO. The small-$x$ region of DIS offers a unique possibility to explore the Regge limit of pQCD. Phenomenologically, the Regge pole approach to DIS implies that the charm structure function is a sum of powers of $x$. The simplest fit to the small-$x$ data corresponds to $F^{c}_{L}(x,Q^{2})=f_{c}x^{-\lambda}$, which is controlled by Pomeron exchange at small $x$. HERA data show that this behavior follows that of the gluon distribution at small $x$, since $g{\rightarrow}c\overline{c}$. In this limit the gluon distribution becomes large, so its contribution to the evolution of the parton distributions becomes dominant. Therefore the gluon distribution rises rapidly at small $x$, that is $xg(x,Q^{2})= f_{g}x^{-\lambda}$, where $\lambda$ corresponds to the hard-Pomeron intercept \[30-33,62\]. Inserting the small-$x$ asymptotic behavior of the gluon distribution and the charm structure function into the evolution equations of the gluonic longitudinal structure function and the charm longitudinal structure function respectively (Eqs.23, 27), the evolution of the longitudinal structure function at small-$x$ can be found as $$\begin{aligned}
F_{L}(x,Q^{2})|_{x{\rightarrow}0}&=&F_{L}^{light}(x,Q^{2})({\rightarrow}F^{g}_{L}(x,Q^{2}))+F^{c}_{L}(x,Q^{2})\nonumber\\
&=&\sum_{n=1}F^{(n),g}_{L}(x,Q_{0}^{2})I^{(n)}_{g}\nonumber\\
&+&\sum_{n=1}F^{(n),c}_{L}(x,Q_{0}^{2})I^{(n)}_{c},\end{aligned}$$ where $$\begin{aligned}
I^{(n)}_{g}&=&\exp\Big(\int_{Q_{0}^{2}}^{Q^{2}}{d}\ln
Q^{2}(\sum_{n=1}n\frac{{d}{\ln}\alpha_{s}}{{d}\ln
Q^{2}}+P_{gg}{\otimes}x^{\lambda})\Big),\nonumber\\\end{aligned}$$ and $$\begin{aligned}
I^{(n)}_{c}&=&\exp\Big(\int_{Q_{0}^{2}}^{Q^{2}}{d}\ln
Q^{2}(\sum_{n=1}n\frac{{d}{\ln}\alpha_{s}}{{d}\ln
Q^{2}}+P_{gg}{\otimes}x^{\lambda}\nonumber\\
&&+\frac{{d}{\ln}C_{L,g}^{c}}{{d}\ln
Q^{2}}{\otimes}x^{\lambda})\Big).\end{aligned}$$
Simplifying Eqs. (29) and (30) we get the compact forms
$$\begin{aligned}
I^{(n)}_{g}&=&
\prod_{n=1}\Big(\frac{{\alpha_{s}}_{n}(Q^2)}{{\alpha_{s}}_{n}(Q_{0}^2)}\Big)^{n}
{\cdot} \Big( \frac{Q^2}{Q_{0}^{2}}
\Big)^{P_{gg}{\otimes}x^{\lambda}},\end{aligned}$$
and
$$\begin{aligned}
I^{(n)}_{c}&=&
\prod_{n=1}\Big(\frac{{\alpha_{s}}_{n}(Q^2)}{{\alpha_{s}}_{n}(Q_{0}^2)}\Big)^{n}
{\cdot} \Big(\frac{Q^2}{Q_{0}^{2}}
\Big)^{P_{gg}{\otimes}x^{\lambda}} \nonumber\\
&& {\cdot} \Big( \frac{C_{L,g}^{c}(Q^2)}{C_{L,g}^{c}(Q_{0}^2)}
\Big)
{\otimes} x^{\lambda}.\end{aligned}$$
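To make the passage from Eqs.29-30 to these compact forms explicit (treating $P_{gg}{\otimes}x^{\lambda}$ and the coefficient-function term as approximately constant in $\ln Q^{2}$ over the integration range, which appears to be the approximation behind Eqs.31-32), each term of the exponent integrates as $$\begin{aligned}
\int_{Q_{0}^{2}}^{Q^{2}}{d}\ln Q^{2}\;n\frac{{d}{\ln}\alpha_{s}}{{d}\ln Q^{2}}=n\,{\ln}\frac{\alpha_{s}(Q^{2})}{\alpha_{s}(Q_{0}^{2})},\qquad
\int_{Q_{0}^{2}}^{Q^{2}}{d}\ln Q^{2}\,\big(P_{gg}{\otimes}x^{\lambda}\big)\simeq\big(P_{gg}{\otimes}x^{\lambda}\big)\,{\ln}\frac{Q^{2}}{Q_{0}^{2}},\end{aligned}$$ so that exponentiation gives the factors $\big(\alpha_{s}(Q^{2})/\alpha_{s}(Q_{0}^{2})\big)^{n}$ and $\big(Q^{2}/Q_{0}^{2}\big)^{P_{gg}{\otimes}x^{\lambda}}$ of Eq.31; the additional $C_{L,g}^{c}$ term in Eq.30 produces the extra ratio appearing in Eq.32.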
In Figs.1-3, we present the small-$x$ behavior of the $F_{L}$ structure function according to the evolution equation (28) as a function of $x$ for $Q^{2}=12$, $45$ and $120~GeV^{2}$. On the left-hand side of these figures, we present the heavy contribution $F_{L}^{c}$, the gluonic contribution $F_{L}^{g}$ and the total $F_{L}^{Total}$ (heavy + gluonic) of the longitudinal structure function together with the results of the DL \[30-33,62\] and CD \[63\] models. On the right-hand side, $F_{L}^{Total}$ is presented with the H1 data \[54\] with total error and the on-shell limit of the $k_{T}$ factorization \[64\], where the transverse momentum of the gluon $k^{2}$ is much smaller than the virtuality of the photon ($k^{2}{<<}Q^{2}$); this is consistent with collinear factorization, as the $k_{T}$ factorization formula can be determined from the inclusive cross section in the dipole representation. In all cases the longitudinal structure function $F_L$ increases towards smaller $x$ at fixed $Q^2$. We compared our $F_{L}^{g}$ results with those of the DL model and our $F_{L}^{c}$ results with those of the CD model. Our $F_{L}^{c}$ results are somewhat higher than those of the CD model at all $Q^2$. Although our $F_{L}^{g}$ results are also somewhat higher than those of the DL model, the difference decreases as $Q^2$ increases and the two almost coincide at $120~GeV^2$. On the other hand, the H1 data are well described by our results as well as by the $k_{T}$ factorization results, although our results lie slightly above those of the $k_{T}$ factorization. As $Q^2$ increases, our results come into better agreement with the data.\
In Figs. 4-5, we present the same results for the $F_{L}$ structure function as a function of $Q^2$ at the small-$x$ values $x=0.001$ and $x=0.0004$. In all cases the longitudinal structure function $F_L$ increases towards higher $Q^2$ at fixed $x$ and towards smaller $x$ at fixed $Q^2$. Our $F_{L}^{g}$ results are very close to the DL results, especially at $x=0.001$, while our $F_{L}^{c}$ results are slightly higher than the CD results. It is also observed that the heavy longitudinal structure function $F_{L}^{c}$ grows faster than the gluonic longitudinal structure function $F_{L}^{g}$, and the two approach each other towards higher $Q^2$ values. Comparing the $Q^2$-evolution of the total (heavy + gluonic) longitudinal structure function $F_{L}^{Total}$ with the results of the $k_{T}$ factorization, our results are comparable to the H1 data, especially at the higher value $x=0.001$, but are somewhat higher at the smaller value $x=0.0004$ and higher $Q^2$, whereas the $k_{T}$ factorization results describe the data better in all cases.\
In our calculations, we use the DL model for the gluon distribution and set the running coupling constant according to Table 1. For the heavy contribution to the longitudinal structure function, we choose $m_{c}=1.3~GeV$ and the renormalization scale $<\mu^{2}>=4m^{2}_{c}+Q^{2}/2$.\
IV. Conclusions
---------------
In conclusion, we have observed that the hard-Pomeron behaviour describes the dynamical behaviour of the longitudinal structure function, adding the heavy quark effects to the light flavours at small-$x$. In all figures our results for the longitudinal structure function $F_{L}(x,Q^{2})$ increase towards smaller $x$ and higher $Q^2$, which is consistent with pQCD expectations and reflects the rise of the gluon and charm (heavy) distributions inside the proton in this region. Our results for the gluonic and charm (heavy) longitudinal structure functions do not exactly coincide with the results of the DL and CD models respectively, as ours are somewhat higher in both cases; while $F_{L}^{g}$ is more or less comparable with the DL model results, $F_{L}^{c}$ is somewhat higher than the CD model results. Our total results $F_{L}^{Total}$, like those of the $k_{T}$ factorization, lie well within the data range. Lastly, one important conclusion is that the charm (heavy) contribution to the total longitudinal structure function is considerable and cannot be neglected, especially in the smaller-$x$ and higher-$Q^2$ region.
Appendix A
----------
The explicit forms of the first-, second- and third-order splitting functions are respectively \[49-52\] $$\begin{aligned}
P^{\rm LO}_{gg}(z)&=&2C_{A}(\frac{z}{(1-z)_{+}}+\frac{(1-z)}{z}+z(1-z))\nonumber\\
&&+\delta(1-z)\frac{(11C_{A}-4N_{f}T_{R})}{6},\end{aligned}$$ where the ‘plus’ distribution is defined by $$\int_{0}^{1} dz \frac{f(z)}{(1-z)_{+}}=\int_{0}^{1}dz
\frac{f(z)-f(1)}{1-z},\nonumber$$
$$\begin{aligned}
P_{gg}^{\rm NLO}&=&C_FT_F(-16+8z+\frac{20}{3}z^2+\frac{4}{3z}\nonumber\\
&&-(6+10z)\ln z-(2+2z)\ln ^{2}z)\nonumber\\
&&+C_AT_F(2-2z+\frac{26}{9}(z^2-z^{-1})\nonumber\\
&&-\frac{4}{3}(1+z)\ln z-\frac{20}{9}P_{gg}(z))\nonumber\\
&&+C_A^2(\frac{27}{2}(1-z)+\frac{67}{9}(z^2-z^{-1})\nonumber\\
&&-(\frac{25}{3}-\frac{11}{3}z+\frac{44}{3}z^2)\ln z\nonumber\\
&&+4(1+z)\ln ^{2}z+2P_{gg}(-z)S_2(z)\nonumber\\
&&+(\frac{67}{9}-4\ln z\ln(1-z)\nonumber\\
&&+\ln ^{2}z-\frac{\pi^2}{3})P_{gg}(z)),\nonumber\\\end{aligned}$$
where $$\begin{aligned}
P_{gg}(z)=\frac{1}{(1-z)_{+}}+\frac{1}{z}-2+z(1-z),\nonumber\end{aligned}$$ and the function $
S_2(z)=\int_{\frac{z}{1+z}}^{\frac{1}{1+z}}\frac{dy}{y}{\ln}(\frac{1-y}{y})$ is defined in terms of the dilogarithm function as $$\begin{aligned}
S_{2}(z)=-2Li_{2}(-z)+\frac{1}{2}\ln ^{2}z-2\ln
z\ln(1+z)-\frac{\pi^{2}}{6},\nonumber\end{aligned}$$
and
$$\begin{aligned}
P_{gg}^{\rm NNLO}&=&2643.521D0+4425.894\delta(1-z)\nonumber\\
&&+3589L1-20852+3968z-3363z^2\nonumber\\
&&+4848z^3+L0L1(7305+8757L0)\nonumber\\
&&+274.4L0-7471L0^2+72L0^3-144L0^4\nonumber\\
&&+14214z^{-1}+2675.8z^{-1}L0\nonumber\\
&&+N_f(-412.172D0-528.723\delta(1-z)\nonumber\\
&&-320L1-350.2+755.7z-713.8z^2\nonumber\\
&&+559.3z^3+L0L1(26.15-808.7L0)\nonumber\\
&&+1541L0+491.3L0^2+\frac{832}{9}L0^3\nonumber\\
&&+\frac{512}{27}L0^4+182.96z^{-1}\nonumber\\
&&+157.27z^{-1}L0)+N_f^2(-\frac{16}{9}D0\nonumber\\
&&+6.4630\delta(1-z)-13.878+153.4z\nonumber\\
&&-187.7z^2+52.75z^3\nonumber\\
&&-L0L1(115.6-85.25z+63.23L0)\nonumber\\
&&-3.422L0+9.680L0^2-\frac{32}{27}L0^3\nonumber\\
&&-\frac{680}{243}z^{-1}),\end{aligned}$$
where $L0=\ln z,~~ L1=\ln(1-z)$ and $D0=\frac{1}{(1-z)_{+}}$.\
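The dilogarithm identity for $S_{2}(z)$ quoted above can be checked numerically; the following sketch is purely a consistency check (it relies on scipy, and uses the relation $\mathrm{Li}_{2}(w)=\mathrm{spence}(1-w)$ of scipy.special.spence) comparing the integral definition with the closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence

def S2_integral(z):
    """S_2(z) = int_{z/(1+z)}^{1/(1+z)} dy/y ln((1-y)/y), by direct quadrature."""
    integrand = lambda y: np.log((1.0 - y) / y) / y
    value, _ = quad(integrand, z / (1.0 + z), 1.0 / (1.0 + z))
    return value

def S2_dilog(z):
    """Closed form: -2 Li_2(-z) + (1/2) ln^2 z - 2 ln z ln(1+z) - pi^2/6."""
    li2_minus_z = spence(1.0 + z)                  # Li_2(-z) = spence(1 - (-z))
    return (-2.0 * li2_minus_z + 0.5 * np.log(z) ** 2
            - 2.0 * np.log(z) * np.log(1.0 + z) - np.pi ** 2 / 6.0)

if __name__ == "__main__":
    for z in (0.1, 0.3, 0.7):
        print(z, S2_integral(z), S2_dilog(z))      # the two columns should agree
```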
Appendix B
----------
Here $C^{c}_{L,g}$ is the charm longitudinal coefficient function in the LO and NLO analysis, given by $$\begin{aligned}
C_{L,g}(z,\zeta)&{\rightarrow}&C^{(0)}_{L,g}(z,\zeta)+a_{s}(\mu^{2})[C_{L,g}^{(1)}(z,\zeta)\\\nonumber
&&+\overline{C}_{L,g}^{(1)}(z,\zeta)\ln\frac{\mu^{2}}{m_{c}^{2}}].\end{aligned}$$ In the LO analysis, this coefficient can be found in Refs.\[15-16\], as $$\begin{aligned}
C^{(0)}_{g,L}(z,\zeta)=-4z^{2}{\zeta}\ln\frac{1+\beta}{1-\beta}+2{\beta}z(1-z),\end{aligned}$$ where $\beta^{2}=1-\frac{4z\zeta}{1-z}$ and $\mu$ is the mass factorization scale, which has been set equal to the renormalization scale, $\mu^{2}=4m_{c}^{2}$ or $\mu^{2}=4m_{c}^{2}+Q^{2}$; in the NLO analysis we can use the compact form of these coefficients according to Refs.\[17-21\].\
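A minimal numerical sketch of the LO coefficient above follows (illustrative only; we take $\zeta=m_{c}^{2}/Q^{2}$, which is the conventional meaning of this variable but is stated here as an assumption, and the numerical values of $m_{c}$ and $Q^{2}$ are chosen purely for illustration):

```python
import math

def c0_Lg(z, zeta):
    """LO gluon coefficient function C^(0)_{g,L}(z, zeta) from the formula above."""
    beta_sq = 1.0 - 4.0 * z * zeta / (1.0 - z)
    if beta_sq <= 0.0:
        return 0.0                     # beyond the kinematic limit in z the phase space closes
    beta = math.sqrt(beta_sq)
    return (-4.0 * z ** 2 * zeta * math.log((1.0 + beta) / (1.0 - beta))
            + 2.0 * beta * z * (1.0 - z))

if __name__ == "__main__":
    zeta = 1.3 ** 2 / 12.0             # illustrative: m_c = 1.3 GeV, Q^2 = 12 GeV^2
    for z in (1e-4, 1e-3, 1e-2, 1e-1):
        print(z, c0_Lg(z, zeta))
```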
  Order   $\alpha_{s}(M_{Z}^{2})$   $\Lambda_{QCD}$ (MeV)
  ------- ------------------------- -----------------------
  LO      0.130                     220
  NLO     0.119                     323
  NNLO    0.1155                    235

  : Table 1: Values of $\alpha_{s}(M_{Z}^{2})$ and $\Lambda_{QCD}$ used at LO, NLO and NNLO.
**References**\
1. A. Gonzalez-Arroyo, C. Lopez, and F.J. Yndurain, Phys. Lett. B98- 218(1981).\
2. A. M. Cooper-Sarkar, G. Ingelman, K. R. Long, R. G. Roberts, and D. H. Saxon, Z. Phys. C39- 281(1988).\
3. R. G. Roberts, The structure of the proton, (Cambridge University Press 1990)Cambridge.\
4. Aaron, F. D., et al., (H1 Collaboration), Phys. Lett. B665- 139(2008).\
5. A. D. Martin, W. J. Stirling, R. S. Thorne, G. Watt, Eur. Phys. J. C70- 51(2010).\
6. S. Moch and A. Vogt, JHEP0904- 081(2009).\
7. R. S. Thorne, Phys. Lett.B418- 371(1998).\
8. R. S. Thorne, arXiv:hep-ph/0511351 (2005).\
9. A. D. Martin, W. J. Stirling, R. S.Thorne, Phys. Lett. B 635- 305(2006).\
10. A. D. Martin, W. J. Stirling, R. S.Thorne, Phys. Lett. B636- 259(2006).\
11. S. Moch, J. Vermaseren and A. Vogt, Nucl. Phys. B688- 101(2004).\
12. S. Moch, J. Vermaseren and A. Vogt, Nucl. Phys. B691- 129(2004).\
13. Moch, J. Vermaseren and A. Vogt, Phys. Lett. B606- 123(2005).\
14. M. Gluck, C. Pisano and E. Reya, Phys. Rev. D77- 074002 (2008).\
15. M. Gluck, E. Reya and A. Vogt, Z. Phys. C67- 433(1995).\
16. M. Gluck, E. Reya and A. Vogt, Eur. Phys. J. C5- 461(1998).\
17. E. Laenen, S. Riemersma, J. Smith and W. L. van Neerven, Nucl. Phys. B392- 162(1993).\
18. A. Y. Illarionov, B. A. Kniehl and A. V. Kotikov, Phys. Lett.B663- 66(2008).\
19. S. Catani, M. Ciafaloni and F. Hautmann, Preprint CERN-Th.6398/92, in Proceeding of the Workshop on Physics at HERA (Hamburg)2-690(1991).\
20. S. Catani and F. Hautmann, Nucl. Phys. B427- 475(1994).\
21. S. Riemersma, J. Smith and W. L. van Neerven, Phys. Lett. B347- 143(1995).\
22. E. A. Kuraev, L. N. Lipatov and V. S. Fadin, Sov.Phys.JETP44- 443(1976).\
23. E. A. Kuraev, L. N. Lipatov and V. S. Fadin, Sov. Phys. JETP45- 199(1977).\
24. Y. Y. Balitsky and L. N. Lipatov, Sov. Journ. Nucl. Phys.28- 822(1978).\
25. L.V.Gribov, E.M.Levin and M.G.Ryskin, Phys.Rep.100- 1(1983).\
26. J. Kwiecinski, arXiv:hep-ph/9607221 (1996).\
27. J. Kwiecinski, A.D.Martin and P.J.Sutton, Phys.Rev.D44- 2640(1991).\
28. K.Kutak and A.M.Stasto, Eur.Phys.J.C41- 343(2005).\
29. S.Bondarenko, arXiv:hep-ph/0808.3175(2008).\
30. A. Donnachie and P. V. Landshoff, Phys. Lett. B296- 257(1992).\
31. A. Donnachie and P. V. Landshoff, Phys. Lett. B437- 408(1998 ).\
32. A. Donnachie and P. V. Landshoff, Phys. Lett. B550- 160(2002 ).\
33. P.V.Landshoff, arXiv:hep-ph/0203084(2002).\
34. P. Desgrolard, M. Giffon, E. Martynov and E. Predazzi, Eur. Phys. J. C18- 555(2001).\
35. P. Desgrolard, M. Giffon and E. Martynov, Eur. Phys. J. C7- 655(1999).\
36. A.A.Godizov, Nucl.Phys.A927 36(2014).\
37. Yu. L. Dokshitzer, Sov. Phys. JETP46- 641(1977).\
38. G. Altarelli and G. Parisi, Nucl. Phys. B126- 298(1977).\
39. V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys.15- 438(1972).\
40. M. B. Gay Ducati and V. P. B. Goncalves, Phys. Lett. B390- 401(1997).\
41. K. Pretz, Phys.Lett.B311- 286(1993).\
42. K. Pretz, Phys. Lett. B332- 393(1994).\
43. A. V. Kotikov, arXiv:hep-ph/9507320(1995).\
44. G. R. Boroun, JETP, Vol. 106,4- 701(2008).\
45. G. R. Boroun and B. Rezaei, Eur. Phys. J. C72- 2221(2012).\
46. G. R. Boroun and B. Rezaei, Eur. Phys. J. C73- 2412(2013).\
47. M. Devee, R Baishya and J. K. Sarma, Eur. Phys.J. C72- 2036(2012).\
48. R. Baishya, U. Jamil and J. K. Sarma, Phys. Rev. D79- 034030(2009).\
49. S. Moch and A. Vogt, JHEP0904- 081(2009).\
50. R. K. Ellis , W. J. Stirling and B. R. Webber, QCD and Collider Physics(Cambridge University Press)(1996).\
51. C. Pisano, arXiv:hep-ph/0810.2215 (2008).\
52. A. Retey, J. Vermaseren , Nucl. Phys. B604- 281(2001).\
53. B. Lampe, E. Reya, Phys. Rep.332- 1(2000).\
54. V. Andreev et al. \[H1 Collaboration\], Accepted by Eur. Phys. J. C, arXiv:hep-ex/1312.4821 (2013).\
55. N. Ya. Ivanov, Nucl. Phys. B814- 142(2009).\
56. N. Ya. Ivanov, Eur. Phys. J. C59- 647(2009).\
57. I. P. Ivanov and N. Nikolaev, Phys. Rev. D65- 054004(2002).\
58. G. R. Boroun and B. Rezaei, Nucl. Phys. B857-143(2012).\
59. G. R. Boroun and B. Rezaei, Eur. Phys. Lett100- 41001(2012).\
60. G. R. Boroun and B. Rezaei, Nucl. Phys. A929- 119(2014).\
61. G.R.Boroun, Nucl. Phys. B884- 684(2014).\
62. R. D. Ball and P. V. Landshoff, J. Phys. G26- 672(2000).\
63. N. N. Nikolaev and V. R. Zoller, Phys. Lett. B509- 283(2001).\
64. K. Golec-Biernat and A. M. Stasto, Phys. Rev. D80 014006(2009).\
65. A. D. Martin, R. G. Roberts, W. J. Stirling, R. S. Thorne, Phys. Lett. B531- 216(2002).\
66. A. D. Martin, R. G. Roberts, W. J. Stirling, R. S. Thorne, Eur. Phys. J. C23- 73(2002).\
67. A. D. Martin, R. G. Roberts, W. J. Stirling, R. S. Thorne, Phys. Lett. B604- 61(2004).\
![[*left*]{}: Dynamical light and heavy contributions to the total $F_{L}$ at small $x$ for $Q^{2}=12~GeV^{2}$ in the NNLO analysis, compared with the DL \[30-33,62\] and CD \[63\] models respectively.\
[*right*]{}: The total $F_{L}$ compared with $k_{T}$ factorization \[64\] and H1 data \[54\] with total error.[]{data-label="Fig1"}](Fig1){width="50.00000%"}
![As in Fig. 1 but for $Q^{2}=45~GeV^{2}$. []{data-label="Fig2"}](Fig2){width="50.00000%"}
![As in Fig. 1 but for $Q^{2}=120~GeV^{2}$. []{data-label="Fig3"}](Fig3){width="50.00000%"}
![[*left*]{}: Dynamical light and heavy contributions to the total $F_{L}$ as a function of $Q^{2}$ at $x=0.001$ in the NNLO analysis, compared with the DL \[30-33,62\] and CD \[63\] models respectively.\
[*right*]{}: The total $F_{L}$ compared with $k_{T}$ factorization \[64\] and H1 data \[54\] with total error.[]{data-label="Fig4"}](Fig4){width="50.00000%"}
![As in Fig. 4 but for $x=0.0004$. []{data-label="Fig5"}](Fig5){width="50.00000%"}
---
abstract: 'A rainbow neighbourhood of a graph $G$ with respect to a proper colouring ${\mathcal{C}}$ of $G$ is the closed neighbourhood $N[v]$ of a vertex $v$ in $G$ such that $N[v]$ contains vertices from all colour classes of $G$ with respect to ${\mathcal{C}}$. The number of vertices in $G$ which yield a rainbow neighbourhood of $G$ is called its rainbow neighbourhood number. In this paper, we show that all results known so far about the rainbow neighbourhood number of a graph $G$ implicitly refer to the minimum number of vertices which yield rainbow neighbourhoods with respect to a minimum proper colouring in which the colours are allocated in accordance with the rainbow neighbourhood convention. Relaxing this convention allows a maximum rainbow neighbourhood number of a graph $G$ to be determined. We also establish that the minimum and maximum rainbow neighbourhood numbers are each unique, and therefore constant, for a given graph.'
author:
- 'Johan Kok$^\ast$, Sudev Naduvath$^\dagger$'
- Orville Buelban
title: Reflection on Rainbow Neighbourhood Numbers
---
**Keywords:** rainbow neighbourhood, rainbow neighbourhood number. **Mathematics Subject Classification 2010:** 05C15, 05C38, 05C75, 05C85.
Introduction
============
For general notation and concepts in graphs and digraphs see [@BM; @FH; @DBW]. Unless mentioned otherwise all graphs $G$ are simple, connected and finite graphs.
A *proper vertex colouring* of a graph $G$ with a set of distinct colours $\mathcal{C}= \{c_1,c_2,c_3,\dots,c_\ell\}$, denoted $c:V(G) \mapsto \mathcal{C}$, is an assignment of colours to the vertices of $G$ such that no two adjacent vertices have the same colour. The cardinality of a minimum proper colouring of $G$ is called the *chromatic number* of $G$, denoted by $\chi(G)$. We call such a colouring a *$\chi$-colouring* or a [*chromatic colouring*]{} of $G$.
When a vertex colouring is considered with colours of minimum subscripts, the colouring is called a [*minimum parameter colouring*]{}. Unless stated otherwise, we consider minimum parameter colour sets throughout this paper. The colour class of $G$ with respect to a colour $c_i$ is the set of all vertices of $G$ having the colour $c_i$ and the cardinality of this colour class is denoted by $\theta(c_i)$.
In this paper, while $\chi$-colouring the vertices of a graph $G$, we follow the convention that we colour the maximum possible number of vertices of $G$ with $c_1$, then the maximum possible number of the remaining uncoloured vertices with colour $c_2$, and proceed in this way until the last colour $c_{\chi(G)}$ is also assigned to some vertices. This convention is called the *rainbow neighbourhood convention* (see [@KSJ; @KS1]). Such a colouring is called a $\chi^-$-colouring.
For the main part of this paper the notation remains as found in the literature [@KSJ; @KS1; @KS2; @SKSK; @SSKK]. Later we introduce an appropriate change in subsection 2.1.
Recall that the closed neighbourhood $N[v]$ of a vertex $v \in V(G)$ which contains at least one vertex from each colour class of $G$ with respect to the chromatic colouring is called a *rainbow neighbourhood* of $G$. We say that the vertex $v$ yields a rainbow neighbourhood. The number of rainbow neighbourhoods in $G$ (the number of vertices which yield rainbow neighbourhoods) is called the *rainbow neighbourhood number* of $G$, denoted by $r_\chi(G)$.
We recall the following important results on the rainbow neighbourhood number for certain graphs provided in [@KSJ].
\[Thm-1.1\] [[@KSJ]]{} For any graph $G$ of order $n$, we have $\chi(G) \leq r_\chi(G) \leq n$.
\[Thm-1.2\] [[@KSJ]]{} For any bipartite graph $G$ of order $n$, $r_\chi(G)=n$.
We observe that if a graph $G$ of order $n$ admits a chromatic colouring such that, for every $v \in V(G)$, the star subgraph with centre $v$ and the vertices of its open neighbourhood $N(v)$ as pendant vertices contains at least one vertex of each colour, then $r_\chi(G)=n$. Certainly, examining this property for an arbitrary graph is complex.
\[Lem-1.3\] [[@KSJ]]{} For any graph $G$, the graph $G'= K_1+G$ has $r_\chi(G')=1+r_\chi(G)$.
Uniqueness of Rainbow Neighbourhood Number
==========================================
Since a $\chi^-$-colouring does not necessarily ensure a unique colour allocation to the vertices, the question arises whether or not the rainbow neighbourhood number is unique (a constant) for any minimum proper colouring with colour allocation in accordance to the rainbow neighbourhood convention. The next theorem answers in the affirmative.
\[Thm-2.1\] Any graph $G$ with minimum proper colouring as per the rainbow neighbourhood convention has a unique minimum rainbow neighbourhood number $r_\chi(G)$.
First, consider any minimum proper colouring, say $\mathcal{C} = \{\underbrace{c_{x}, c_y, c_w,\ldots,c_z}_{\chi(G)-entries}\}$, and assume the colours are ordered (or labeled) in some context. Now, applying the rainbow neighbourhood convention according to this ordering, beginning with $c_x$, then $c_y$, and so on up to $c_z$, the colouring maximises the allocation of identical colours and therefore minimises the number of vertices whose closed neighbourhoods contain at least one vertex of each colour with respect to a $\chi$-colouring. To see this, assume that the colour of some vertex, say $c(v) = c_\ell$, can be interchanged with the colour of a vertex $u$, $c(u) = c_t$, so as to cause one or more vertices, not necessarily distinct from $v$ or $u$, to no longer yield a rainbow neighbourhood. Observe that neither vertex $v$ nor vertex $u$ could have yielded a rainbow neighbourhood initially: otherwise $N[v]$ would contain at least one vertex of each colour and, after the interchange, would have $c_t$ adjacent to itself; similarly, $c_\ell$ would be adjacent to itself in $N[u]$. In both cases we have a contradiction to a proper colouring.
So, $c_\ell \notin c(N[u])$ and $c_t \notin c(N[v])$. Hence, the interchange can only result in additional vertices yielding rainbow neighbourhoods, and not in a reduction of the number of vertices yielding rainbow neighbourhoods. Similar reasoning for all pairs of vertices involved in a pairwise colour exchange leads to the same conclusion. This settles the claim that $r_\chi(G)$ is a minimum.
Furthermore, without loss of generality a minimum parameter colouring set may be considered to complete the proof.
If we relax connectedness, it follows that the null graph (edgeless graph) on $n \geq 1$ vertices, denoted by $\mathfrak{N}_n$, has $\chi(\mathfrak{N}_n) = 1$ and $r_\chi(\mathfrak{N}_n) = n$, which is unique and therefore a constant over all $\chi$-colourings. Immediate induction shows this is true $\forall~n \in {\mathbb{N}}$. From Theorem \[Thm-1.2\], it follows that the same result holds for graphs $G$ with $\chi(G)=2$. Hence, the result holds for all graphs with $\chi(G)= 1$ or $\chi(G)=2$. Assume that it holds for all graphs $G$ with $3 \leq \chi(G) \leq k$. Consider any graph $H$ with $\chi(H)=k+1$. At least one such graph certainly exists, for example any $G+K_1$ for which $\chi(G)=k$.
Consider the set of vertices $\mathcal{C}_{k+1} =\{v_i \in V(H):c(v_i) = c_{k+1}\}$. Consider the induced subgraph $H' = \langle V(H)-\mathcal{C}_{k+1}\rangle$. Clearly, all vertices $u_i \in V(H')$ which yielded rainbow neighbourhoods in $H$ also yield rainbow neighbourhoods in $H'$ with $\chi(H')=k$. Note that $r_\chi(H')$ is a unique number hence, is a constant. It may also differ from $r_\chi(H)$ in any way, that is, greater or less. It is possible that a vertex which did not yield a rainbow neighbourhood in $H$ could possibly yield such in $H'$.
For any vertex $v_j \in \mathcal{C}_{k+1}$ construct $H'' = H' \diamond v_j$ such that every edge $v_ju_m \in E(H'')$ has $u_m\in V(H')$ and $v_ju_m \in E(H)$. Now $c(v_j) = c_{k+1}$ remains and either $v_j$ yields a rainbow neighbourhood in $H''$ together with those in $H'$, or it does not. Consider $H''$ and, by iteratively constructing $H''', H'''', \cdots, H^{''''\dots '~(|\mathcal{C}|+1)-times},\ \forall~v_i \in \mathcal{C}_{k+1}$, with reasoning similar to that in the case of $H' \diamond v_j$, the result follows for all graphs $H$ with $\chi(H) = k+1$. Note that any vertex $u_\ell$ which yielded a rainbow neighbourhood in $H'$ but not in $H$ cannot yield one in $H$ following the iterative reconstruction of $H$. Therefore, the result holds for all graphs with $\chi(G)=n,\ n \in {\mathbb{N}}$.
Maximum rainbow neighbourhood number $r^+_\chi(G)$ permitted by a minimum proper colouring
------------------------------------------------------------------------------------------
If the allocation of colours is only in accordance with a minimum proper colouring then different numbers of vertices can yield rainbow neighbourhoods in a given graph.
**Example:** It is known that $r_\chi(C_n) = 3$, $n \geq 3$ if and only if $n$ is odd. Consider the vertex labeling of a cycle to be consecutively and clockwise, $v_1,v_2,v_3,\dots ,v_n$. So for, the cycle $C_7$ with rainbow neighbourhood convention colouring, $c(v_1)=c_1,c(v_2)=c_2,c(v_3)=c_1,c(v_4)=c_2,c(v_5)=c_1,c(v_6)=c_2$ and $c(v_7)=c_3$ it follows that vertices $v_1,v_6,v_7$ yield rainbow neighbourhoods. By recolouring vertex $v_4$ to $c(v_4)=c_3$ a minimum proper colouring is permitted with vertices $v_1,v_3,v_5,v_6,v_7$ yielding rainbow neighbourhoods in $C_7$. It is easy to verify that this recolouring (not unique) provides a maximum number of rainbow neighbourhoods in $C_7$.
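The two colourings of $C_7$ in this example can be checked mechanically; the short sketch below (written for this illustration only, not part of the original argument) counts the vertices of a coloured cycle whose closed neighbourhoods see all colours used.

```python
def rainbow_yielding_vertices(n, colouring):
    """Vertices of the cycle C_n whose closed neighbourhood contains every colour used."""
    all_colours = set(colouring)
    yielding = []
    for v in range(n):
        closed_nbhd = {colouring[v], colouring[(v - 1) % n], colouring[(v + 1) % n]}
        if closed_nbhd == all_colours:
            yielding.append(v + 1)      # 1-based labels v_1, ..., v_n
    return yielding

if __name__ == "__main__":
    # c(v_1),...,c(v_7) per the rainbow neighbourhood convention: three yielding vertices
    convention = [1, 2, 1, 2, 1, 2, 3]
    # recolour v_4 with c_3: still a proper colouring, now five vertices yield
    recoloured = [1, 2, 1, 3, 1, 2, 3]
    print(rainbow_yielding_vertices(7, convention))   # expect [1, 6, 7]
    print(rainbow_yielding_vertices(7, recoloured))   # expect [1, 3, 5, 6, 7]
```

Running it reproduces $r_\chi(C_7)=3$ for the convention colouring and the five yielding vertices of the recoloured one.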
A review of the results known to the authors shows that, thus far, the minimum rainbow neighbourhood number has been defined only implicitly [@KSJ; @KS1; @KS2; @SKSK; @SSKK]. It is proposed that henceforth the notation $r^-_\chi(G)$ replaces $r_\chi(G)$ and that $r_\chi(G)$ only refers to the number of rainbow neighbourhoods found for a given minimum proper colouring allocation. Therefore, $r^-_\chi(G)=\min\{r_\chi(G): \text{over all permissible colour allocations}\}$ and $r^+_\chi(G)=\max\{r_\chi(G):\text{over all permissible colour allocations}\}$. For any null graph, for any graph $G$ with $\chi(G) = 2$, and for the complete graphs $K_n$, it follows that $r^-_\chi(\mathfrak{N}_n) = r^+_\chi(\mathfrak{N}_n)$, $r^-_\chi(G) = r^+_\chi(G)$ and $r^-_\chi(K_n) = r^+_\chi(K_n)$.
\[Thm-2.2\] Any graph $G$ has a minimum proper colouring which permits a unique maximum (therefore, constant) rainbow neighbourhood number, $r^+_\chi(G)$.
Similar to the proof of Theorem \[Thm-2.1\].
\[Prop-2.3\] For cycle $C_n$, $n$ is odd and $\ell = 0,1,2,\dots$:
1. $r^+_\chi(C_{7+4\ell}) = 3 +2(\ell + 1)$ and,
2. $r^+_\chi(C_{9+4\ell}) = 3 +2(\ell + 1)$.
Consider the conventional vertex labeling of a cycle $C_n$ to be consecutively and clockwise, $v_1,v_2,v_3,\dots ,v_n$. It can easily be verified that $C_3$, $C_5$ have $r^-_\chi(C_3) = r^+_\chi(C_3) = r^-_\chi(C_5) = r^+_\chi(C_5) = 3$. For $C_5$ let $c(v_1) = c_1,c(v_2)=c_2,c(v_3)=c_1, c(v_4)=c_2~and~c(v_5)=c_3$. To obtain $C_7$ and without loss of generality, insert two new vertices $v'_1,v'_2$ clockwise into the edge $v_1v_2$. Colour the new vertices $c(v'_1) = c_2,c(v'_2)=c_3$. Clearly, a minimum proper colouring is permitted in doing such and it is easy to verify that, $r^+_\chi(C_7) = 5$ in that vertices $v'_1,v_2$ yield additional rainbow neighbourhoods. By re-labeling the vertices of $C_7$ conventionally and repeating the exact same procedure to obtain $C_9$, thereafter $C_{11}$, thereafter $C_{13}$, $\cdots$, both parts (i) and (ii) follow through mathematical induction.
Determining the parameter $r^+_\chi(G)$ seems to be complex, and a seemingly insignificant derivative $G'$ of a graph $G$ can result in a significant difference between $r^+_\chi(G)$ and $r^+_\chi(G')$. The next proposition serves as an illustration of this observation. We recall that a sunlet graph $S_n$ on $2n$ vertices, $n \geq 3$, is obtained by attaching a pendant vertex to each vertex of the cycle $C_n$.
\[Prop-2.4\] For sunlet $S_n$, $n$ is odd and $\ell = 0,1,2,\dots$:
1. $r^+_\chi(S_{7+4\ell}) = n$ and
2. $r^+_\chi(S_{9+4\ell}) = n$.
Colour the induced cycle $C_n$ of the sunlet graph $S_n$ similar to that found in the proof of Proposition \[Prop-2.3\]. Clearly, with respect to the induced cycle, each vertex $v_i$ on $C_n$ has $c(N[v_i])_{v_i \in \langle V(C_n)\rangle}$ either $\{c_1,c_2,c_3\}$ or $\{c_1,c_2\}$. It is trivially true that each corresponding pendant vertex can be coloured either $c_1$, or $c_2$ or $c_3$ to ensure that $c(N[v_i])_{v_i \in V(S_n)} = \{c_1,c_2,c_3\}$. Hence, the $n$ cycle vertices each yields a rainbow neighbourhood.
There are different definitions of a sun graph in the literature. The *empty-sun graph*, denoted by $S^{\bigodot}_n=C_n\bigodot K_1$, is the graph obtained by attaching an additional vertex $u_i$ with edges $u_iv_i$ and $u_iv_{i+1}$ for each edge $v_iv_{i+1} \in E(C_n)$, $1\leq i \leq n-1$, and similarly a vertex $u_n$ for the edge $v_nv_1$.
\[Prop-2.5\] For an empty-sun graph $S^{\bigodot}_n$, $n \geq 3$, $r^+_\chi(S^{\bigodot}_n) = 2n$.
For odd $n$, the result is a direct consequence of the minimum proper colouring required in the proof of Proposition \[Prop-2.4\]. For even $n$, let $c(u_i) = c_3$, $\forall i$. Clearly, all vertices in $V(S^{\bigodot}_n)$ yield a rainbow neighbourhood.
Conclusion
==========
Note that for many graphs, $r^-_\chi(G)=r^+_\chi(G)$. Despite this observation, it is of interest to characterise those graphs for which $r^-_\chi(G) \neq r^+_\chi(G)$. From Proposition \[Prop-2.3\], it follows that for certain families of graphs, such as the odd cycles, the value $r^+_\chi(C_n)$ can be made arbitrarily large whilst $r^-_\chi(C_n)=3$.
[ An efficient algorithm for the allocation of colours in accordance with a minimum proper colouring to obtain $r^+_\chi(G)$ is not known. ]{}
Recalling that the clique number $\omega(G)$ is the order of the largest maximal clique in $G$, the next corollary follows immediately from Theorem 1.1.
Any graph $G$ of order $n$ has, $\omega(G) \leq r^-_\chi(G)$.
[ For weakly perfect graphs (as well as for perfect graphs) it is known that $\omega(G) = \chi(G)$. If weakly perfect graphs for which $\chi(G) = r^-_\chi(G)$ can be characterised, the powerful result, $\omega(G) = r^-_\chi(G)$ will be concluded. ]{}
[ An efficient algorithm to find a minimum proper colouring in accordance with the rainbow neighbourhood convention has not been found yet. ]{}
The next lemma may assist in a new direction of research in respect of the relation between degree sequence of a graph $G$ and, $r^-_\chi(G)$ and $r^+_\chi(G)$.
\[Lem-3.2\] If a vertex $v \in V(G)$ yields a rainbow neighbourhood in $G$ then $d_G(v) \geq \chi(G) -1$.
The proof is straightforward: if $v$ yields a rainbow neighbourhood, then $N[v]$ contains at least one vertex from each of the $\chi(G)$ colour classes, and since $v$ itself accounts for only one of these colours, $v$ must have at least $\chi(G)-1$ neighbours.
Lemma \[Lem-3.2\] motivates the next probability corollary.
A vertex $v\in V(G)$ possibly yields a rainbow neighbourhood in $G$ if and only if $d_G(v) \geq \chi(G) -1$.
These few problems indicate there exists a wide scope for further research.
[99]{}
J.A. Bondy and U.S.R. Murty, *Graph theory*, Springer, New York, (2008).
F. Harary, **Graph theory**, Narosa Publ., New Delhi, (2001).
J. Kok, S. Naduvath and M.K. Jamil, Rainbow neighbourhood number of graphs, [*preprint*]{}, arXiv: 1703.01089 \[math.GM\]
J. Kok and S. Naduvath, Rainbow neighbourhood equate number of graphs, [*communicated*]{}.
J. Kok and S. Naduvath, An Essay on compônentă analysis of graphs, [*preprint*]{}, arXiv: 1705.02097 \[math.GM\].
C. Susanth, S.J. Kalayathankal, N.K. Sudev and J. Kok, A note on the rainbow neighbourhood number of graphs, [*Nat. Acad. Sci. Lett.*]{}, to appear.
S. Naduvath, C. Susanth, S.J. Kalayathankal and J. Kok, Some new results on the rainbow neighbourhood number of graphs, [*Nat. Acad. Sci. Lett.*]{}, to appear.
D.B. West, *Introduction to graph theory*, Prentice-Hall India., New Delhi, (2001).
---
abstract: |
In this paper, we consider $\alpha$-harmonic functions in the half space $\mathbb{R}^n_+$: $$\left\{\begin{array}{ll}
(-{\mbox{$\bigtriangleup$}})^{\alpha/2} u(x)=0,~u(x)>0, & \qquad x\in\mathbb{R}^n_+, \\
u(x)\equiv0, & \qquad x\notin\mathbb{R}^{n}_{+}.
\end{array}\right.
\label{1}$$ We prove that all solutions of (\[1\]) have to assume the form $$u(x)=\left\{\begin{array}{ll}Cx_n^{\alpha/2}, & \qquad x\in\mathbb{R}^n_+, \\
0, & \qquad x\notin\mathbb{R}^{n}_{+},
\end{array}\right.
\label{2}$$ for some positive constant $C$.
author:
- Wenxiong Chen Congming Li Lizhi Zhang Tingzhi Cheng
title: 'A Liouville theorem for $\alpha$-harmonic functions in $\mathbb{R}^n_+$'
---
[**Key words**]{} The fractional Laplacian, $\alpha$-harmonic functions, uniqueness of solutions, Liouville theorem, Poisson representation.\
\
\
Introduction
============
The fractional Laplacian in $R^n$ is a nonlocal pseudo-differential operator, assuming the form $$(-\Delta)^{\alpha/2} u(x) = C_{n,\alpha} \, \lim_{\epsilon {{\mbox{$\rightarrow$}}}0} \int_{\mathbb{R}^n\setminus B_{\epsilon}(x)} \frac{u(x)-u(z)}{|x-z|^{n+\alpha}} dz,
\label{Ad7}$$ where $\alpha$ is any real number between $0$ and $2$. This operator is well defined in $\cal{S}$, the Schwartz space of rapidly decreasing $C^{\infty}$ functions in $\mathbb{R}^n$. In this space, it can also be equivalently defined in terms of the Fourier transform $$\widehat{(-\Delta)^{\alpha/2} u} (\xi) = |\xi|^{\alpha} \hat{u} (\xi),$$ where $\hat{u}$ is the Fourier transform of $u$. One can extend this operator to a wider space of distributions.
Let $$L_{\alpha}=\{u: \mathbb{R}^n\rightarrow \mathbb{R} \mid \int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+\alpha}} \, d x <\infty\}.$$
Then in this space, we define $(-\Delta)^{\alpha/2} u$ as a distribution by $$< (-\Delta)^{\alpha/2}u(x), \phi> \, = \, \int_{\mathbb{R}^n} u(x) (-\Delta)^{\alpha/2} \phi (x) dx , \;\;\; \forall \, \phi \in C_0^{\infty}(\mathbb{R}^n) .$$
Let $$\mathbb{R}^n_+ = \{x=(x_1, \cdots, x_n) \mid x_n > 0 \}$$ be the upper half space. We say that $u$ is $\alpha$-harmonic in the upper half space if $$\int_{\mathbb{R}^n} u(x) (-\Delta)^{\alpha/2} \phi (x) dx = 0, \;\;\; \forall \, \phi \in C_0^{\infty}(\mathbb{R}^n_+) .$$
In this paper, we consider the Dirichlet problem for $\alpha$-harmonic functions $$\left\{\begin{array}{ll}
(-{\mbox{$\bigtriangleup$}})^{\alpha/2} u(x)=0,~u(x)>0, & \qquad x\in\mathbb{R}^n_+, \\
u(x)\equiv0, & \qquad x\notin\mathbb{R}^{n}_{+},
\end{array}\right.
\label{1.1}$$ It is well-known that $$u(x)=\left\{\begin{array}{ll}Cx_n^{\alpha/2}, & \qquad x\in\mathbb{R}^n_+, \\
0, & \qquad x\notin\mathbb{R}^{n}_{+},
\end{array}\right.$$ is a family of solutions for problem (\[1.1\]) with any positive constant $C$.
A natural question is: [*Are there any other solutions?*]{}
Our main objective here is to answer this question and prove
Let $0<\alpha<2$, $u\in L_{\alpha}$. Assume $u$ is a solution of $$\left\{\begin{array}{ll}
(-{\mbox{$\bigtriangleup$}})^{\alpha/2} u(x)=0,~u(x)>0, & \qquad x\in\mathbb{R}^n_+, \\
u(x)\equiv0, & \qquad x\notin\mathbb{R}^{n}_{+}.
\end{array}\right.$$ then $$u(x)=\left\{\begin{array}{ll}Cx_n^{\alpha/2}, & \qquad x\in\mathbb{R}^n_+, \\
0, & \qquad x\notin\mathbb{R}^{n}_{+},
\end{array}\right.$$ for some positive constant $C$. \[mthm1\]
We will prove this theorem in the next section.
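Before giving the proof, the statement can be illustrated numerically in the one-dimensional analogue of (\[1.1\]) (a sketch only, and not part of the proof: it uses the equivalent symmetric second-difference form of the fractional Laplacian in one dimension and drops the positive normalising constant $C_{1,\alpha}$, so only the vanishing of the result matters):

```python
import numpy as np
from scipy.integrate import quad

def frac_laplacian_halfline(x, alpha, u):
    """Evaluate ∫_0^∞ [2u(x) - u(x+h) - u(x-h)] / h^{1+alpha} dh at a point x > 0.

    This is the symmetric form of (-Δ)^{α/2} u(x) in one dimension, up to the
    positive constant C_{1,α}, which is dropped here.
    """
    integrand = lambda h: (2.0 * u(x) - u(x + h) - u(x - h)) / h ** (1.0 + alpha)
    # split at h = x, where u(x - h) ceases to be smooth, and allow an infinite tail
    val1, _ = quad(integrand, 0.0, x)
    val2, _ = quad(integrand, x, np.inf)
    return val1 + val2

if __name__ == "__main__":
    alpha = 1.2
    u = lambda t: np.maximum(t, 0.0) ** (alpha / 2.0)   # the candidate solution x_+^{α/2}
    for x in (0.5, 1.0, 2.0):
        print(x, frac_laplacian_halfline(x, alpha, u))   # numerically close to zero
```

The printed values are numerically consistent with zero for every $x>0$, as the theorem asserts for this family of solutions.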
The Proof of the Liouville Theorem
==================================
In this section, we prove Theorem \[mthm1\]. The main ideas are the following.
We first obtain the Poisson representation of the solutions. We show that for $|x-x_r|<r$
$$u(x)=\int_{|y-x_r|>r}P_r(x-x_r,y-x_r)u(y)dy,
\label{P}$$
where $x_r=(0, \cdots,0,r)$ , and $P_r(x-x_r,y-x_r)$ is the Poisson kernel for $|x-x_r|<r$ : $$\begin{aligned}
&&P_{r}(x-x_r,y-x_r)\nonumber\\&=&\left\{\begin{array}{ll}
\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}} \sin\frac{\pi\alpha}{2}
\left[\frac{r^{2}-|x-x_r|^{2}}{|y-x_r|^{2}-r^{2}}\right]^{\frac{\alpha}{2}}\frac{1}{|x-y|^{n}},\qquad& |y-x_r|>r,\\
0,& \text{elsewhere}.
\end{array}
\right.\end{aligned}$$
Then, for each fixed $x\in\mathbb{R}^n_+$, we evaluate first derivatives of $u$ by using (\[P\]). Letting $r \rightarrow \infty$, we derive $$\frac{\partial u}{\partial x_i}(x)=0,~~~~~i=1,2,\cdots,n-1.$$ and $$\frac{\partial u}{\partial x_n}(x)=\frac{\alpha}{2x_n}u(x).$$ These yield the desired results.
In the following, we use $C$ to denote various positive constants.
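The kernel $P_r$ above integrates to one over $|y-x_r|>r$ for each fixed $|x-x_r|<r$ (the standard normalisation of the $\alpha$-Poisson kernel of a ball); the following one-dimensional sketch (illustrative only, with $n=1$, $r=1$, the interval centred at the origin, and a few values of $\alpha$ chosen arbitrarily) checks this numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def poisson_kernel_1d(x, y, r, alpha):
    """alpha-Poisson kernel of the interval (-r, r): the formula above with n = 1."""
    const = gamma(0.5) / np.pi ** 1.5 * np.sin(np.pi * alpha / 2.0)   # = sin(pi*alpha/2)/pi
    return const * ((r ** 2 - x ** 2) / (y ** 2 - r ** 2)) ** (alpha / 2.0) / abs(x - y)

if __name__ == "__main__":
    r, x = 1.0, 0.4                                     # any |x| < r
    for alpha in (0.5, 1.0, 1.5):
        left, _ = quad(lambda y: poisson_kernel_1d(x, y, r, alpha), -np.inf, -r)
        right, _ = quad(lambda y: poisson_kernel_1d(x, y, r, alpha), r, np.inf)
        print(alpha, left + right)                      # each total should be close to 1
```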
[*Step 1.*]{}
In this step, we obtain the Poisson representation (\[P\]) for the solutions of (\[1.1\]).
Let $$\hat{u}(x)=\left\{
\begin{array}{ll}
\int_{|y-x_r|>r}P_r(x-x_r,y-x_r)u(y)dy, & \qquad |x-x_r|<r, \\
u(x), & \qquad |x-x_r|\geq r.
\end{array}
\right.
\label{max}$$ We will prove that $\hat{u}$ is $\alpha$ -harmonic in $B_r(x_r) $. The proof is similar to that in [@CL]. It is quite long and complex, hence for reader’s convenience, we will present it in the next section.
Let $w(x)=u-\hat{u}$ , then $$\left\{
\begin{array}{ll}
(-{\mbox{$\bigtriangleup$}})^{\alpha/2}w(x)=0, & \qquad |x-x_r|<r, \\
w(x)\equiv0, & \qquad |x-x_r|\geq r.
\end{array}
\right.$$
To show that $w\equiv0$, we employ the following Maximum Principle.
(Silvestre, [@Si]) Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$, and assume that $v$ is a lower semi-continuous function on $\overline{\Omega}$ satisfying $$\left\{\begin{array}{ll}
(-{\mbox{$\bigtriangleup$}})^{\frac{\alpha}{2}}v\geq0, &\mbox{in } \Omega,\\
v\geq0,&\mbox{on } \mathbb{R}^{n}\backslash\Omega.
\end{array}
\right.$$ then $v\geq0$ in $\Omega$.
Applying this lemma to both $v=w$ and $v=-w$, we conclude that $$w(x)\equiv0.$$ Hence $$\hat{u}(x)\equiv u(x).$$ This verifies (\[P\]).
[*Step 2.*]{}
We will show that for each fixed $x\in\mathbb{R}^n_+$ , $$\frac{\partial u}{\partial x_i}(x)=0,~~~~~i=1,2,\cdots,n-1.
\label{b1}$$ and $$\frac{\partial u}{\partial x_n}(x)=\frac{\alpha}{2x_n}u(x).
\label{b2}$$
From (\[b1\]), we conclude that $u(x)=u(x_n)$, and this, together with (\[b2\]), immediately implies $$u(x)=Cx_n^{\alpha/2},$$ therefore $$u(x)=\left\{
\begin{array}{ll}
Cx_n^{\alpha/2}, & \qquad x\in\mathbb{R}^n_+, \\
0, & \qquad x\notin\mathbb{R}^n_+.
\end{array}
\right.$$ And this is what we want to derive.
Now, what is left is to prove (\[b1\]) and (\[b2\]). Through an elementary calculation, one can derive that, for $i=1,2,\cdots,n-1,$ $$\begin{aligned}
\frac{\partial u}{\partial x_i}(x)&=&\int_{|y-x_r|>r}\left(\frac{-\alpha x_i}{r^2-|x-x_r|^2}+\frac{n(y_i-x_i)}{|y-x|^2}\right)P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\int_{|y-x_r|>r}\frac{-\alpha x_i}{r^2-|x-x_r|^2}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\&\quad&+\int_{|y-x_r|>r}\frac{n(y_i-x_i)}{|y-x|^2}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&:=&I_1+I_2.\label{b3}\end{aligned}$$
For each fixed $x\in B_r(x_r)\subset\mathbb{R}^n_+$ and for any given $\epsilon >0$ , we have $$\begin{aligned}
|I_1|&=&\left|\int_{|y-x_r|>r}\frac{-\alpha x_i}{r^2-|x-x_r|^2}P_r(x-x_r,y-x_r)u(y)dy\right|\nonumber\\
&\leq&\int_{|y-x_r|>r}\left|\frac{-\alpha x_i}{r^2-|x-x_r|^2}\right|P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\left|\frac{\alpha x_i}{r^2-|x-x_r|^2}\right|\int_{|y-x_r|>r}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\left|\frac{\alpha x_i}{r^2-|x-x_r|^2}\right|u(x)\nonumber\\
&\leq&\frac{C}{r}\nonumber\\
&<&\epsilon,~~~\text{for sufficiently large ~$r$~.}\label{b0}\end{aligned}$$
Here and below, the letter $C$ stands for various positive constants.
It is more delicate to estimate $I_2$. For each $R>0$, we divide the region $|y-x_r|>r$ into two parts: one inside the ball $|y|<R$ and one outside the ball.
$$\begin{aligned}
|I_2|&=&\left|\int_{|y-x_r|>r}\frac{n(y_i-x_i)}{|y-x|^2}P_r(x-x_r,y-x_r)u(y)dy\right|\nonumber\\
&\leq&\int_{|y-x_r|>r}\left|\frac{n(y_i-x_i)}{|y-x|^2}\right|P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\int_{\substack{|y-x_r|>r\\|y|>R}}\left|\frac{n(y_i-x_i)}{|y-x|^2}\right|P_r(x-x_r,y-x_r)u(y)dy\nonumber\\&\quad&+\int_{\substack{|y-x_r|>r\\|y|\leq R}}\left|\frac{n(y_i-x_i)}{|y-x|^2}\right|P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&:=&I_{21}+I_{22}.\label{b4}\end{aligned}$$
For the $\epsilon>0$ given above, when $|y|>R$ , we can easily derive $$\left|\frac{n(y_i-x_i)}{|y-x|^2}\right|\leq\frac{n}{|y-x|}\leq\frac{C}{R}<\epsilon,$$ for sufficiently large $R$. Fix this $R$, then $$\begin{aligned}
I_{21}&<&\int_{\substack{|y-x_r|>r\\|y|>R}}\epsilon P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&\leq&\epsilon\int_{|y-x_r|>r}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\epsilon u(x)\nonumber\\
&\leq&C\epsilon.\label{b5}\end{aligned}$$
To estimate $I_{22}$, we employ the expression of the Poisson kernel. $$\begin{aligned}
I_{22}&=& C \int_{\substack{|y-x_r|>r\\|y|\leq R}}\left[\frac{r^2-|x-x_r|^2}{|y-x_r|^2-r^2}\right]^{\alpha/2}\frac{u(y)}{|x-y|^n}\left|\frac{n(y_i-x_i)}{|y-x|^2}\right|dy\nonumber\\
&\leq& C \int_{\substack{|y-x_r|>r\\|y|\leq R}}\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}\frac{u(y)}{|x-y|^n}\frac{1}{|y-x|}dy\nonumber\\
&\leq&C\int_{\substack{|y-x_r|>r\\|y|\leq R}}\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}\frac{u(y)}{|x-y|^{n+1}}dy\nonumber\\
&=&C\int_{\substack{|y-x_r|>r\\|y|\leq R, \, y_n>0}}\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}\frac{u(y)}{|x-y|^{n+1}}dy\nonumber\\
&\leq&C_R \int_{\substack{|y-x_r|>r\\|y|\leq R, \, y_n>0}}\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}\frac{1}{|x-y|^{n+1}}dy.\label{b6}\end{aligned}$$
Here we have used the fact that the $\alpha$-harmonic function $u$ is bounded in the region $$D_{R,r}=\left\{y=(y',y_n)\left|\right.|y-x_r|>r,~|y|<R,~y_n>0\right\}.$$ The bound depends on $R$ but is independent of $r$, since $D_{R, r_1} \subset D_{R, r_2}$ for $r_1>r_2$. For each such fixed open domain $D_{R,r}$, the bound on the $\alpha$-harmonic function $u$ can be derived from the interior smoothness (see, for instance, [@BKN] and [@FW]) and the estimate up to the boundary (see [@RS]).
Set $y=(y',y_n), \sigma=|y'|$, for fixed $x$ and sufficiently large $r$, we have
$$\begin{aligned}
\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}&\leq&\frac{Cr^{\alpha/2}}{|2y_nr-|y|^2|^{\alpha/2}}\nonumber\\&=&\frac{Cr^{\alpha/2}}{|\sigma^2-2y_nr+y_n^2|^{\alpha/2}}\nonumber\\&=&\frac{Cr^{\alpha/2}}{|(y_n-r)^2+\sigma^2-r^2|^{\alpha/2}}.\label{b7}\end{aligned}$$
and $$\frac{1}{|x-y|^{n+1}}\leq\frac{C}{(1+|y|)^{n+1}}\leq\frac{C}{(1+|y'|)^{n+1}}=\frac{C}{(1+\sigma)^{n+1}}.\label{b8}$$
For convenience of estimate, we amplify the domain $D_{R,\,r}$ a little bit. Define $$\hat{D}_{R,\,r}=\left\{y=(y',y_n)\in\mathbb{R}^n_+\left|\right. |y-x_r|>r,~|y'|\leq R, 0<y_n<\bar{y}_n \right\}.$$ Here $\bar{y}_n$ satisfies $$(\bar{y}_n-r)^2+\sigma^2-r^2=0, \label{b9}$$ so that $\bar{y}=(y',\bar{y_n})\in\partial\hat{D}_{R,\,r}\cap\partial B_r(x_r)$. Then it is easy to see that $$D_{R,\,r}\subset \hat{D}_{R,\,r}
\label{tian}$$
From (\[b9\]), for sufficiently large $r$ (much larger than $R$ ), we have $$\bar{y}_n= r - \sqrt{r^2-\sigma^2}.$$ Set $$y_n=r-s\sqrt{r^2-\sigma^2}.\label{b10}$$ Then for $0<y_n<\bar{y}_n$, $$1<s<\frac{r}{\sqrt{r^2-\sigma^2}},\label{b11}$$ and $$dy_n=-\sqrt{r^2-\sigma^2}ds.\label{b12}$$
Continuing from the right side of (\[b6\]), we integrate in the direction of $y_n$ first and then with respect to $y'$. Taking $r$ sufficiently large (much larger than $R$), by (\[b6\]), (\[b7\]), (\[b8\]), (\[tian\]), (\[b10\]), (\[b11\]), and (\[b12\]), we derive $$\begin{aligned}
I_{22}&\leq&C\int_{\hat{D}_{R,\,r}}\left[\frac{2x_nr-|x|^2}{2y_nr-|y|^2}\right]^{\alpha/2}\frac{1}{|x-y|^{n+1}}dy\nonumber\\
&\leq&C\int_{|y'|<R}\int_0^{\bar{y}_n}\frac{r^{\alpha/2}}{|(y_n-r)^2+\sigma^2-r^2|^{\alpha/2}}dy_n\frac{1}{(1+\sigma)^{n+1}}dy'\nonumber\\
&\leq&C\int_0^R\int_0^{\bar{y}_n}\frac{r^{\alpha/2}}{|(y_n-r)^2+\sigma^2-r^2|^{\alpha/2}}dy_n\frac{1}{(1+\sigma)^{n+1}}\sigma^{n-2}d\sigma \qquad \label{b13}\\
&\leq&C\int_0^R\int_1^{\frac{r}{\sqrt{r^2-\sigma^2}}}\frac{r^{\alpha/2}}{[(s\sqrt{r^2-\sigma^2})^2-(r^2-\sigma^2)]^{\alpha/2}}\left|\sqrt{r^2-\sigma^2}\right|ds\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&=&C\int_0^Rr^{\alpha/2}(r^2-\sigma^2)^{\frac{1-\alpha}{2}}\int_1^{\frac{r}{\sqrt{r^2-\sigma^2}}}\frac{1}{(s^2-1)^{\alpha/2}}ds\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&\leq&C\int_0^Rr^{\alpha/2}(r^2-\sigma^2)^{\frac{1-\alpha}{2}}\int_1^{\frac{r}{\sqrt{r^2-\sigma^2}}}\frac{1}{(s-1)^{\alpha/2}}ds\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\label{b14}\\
&\leq&C\int_0^Rr^{\alpha/2}r^{1-\alpha}\int_1^{\frac{r}{\sqrt{r^2-\sigma^2}}}\frac{1}{(s-1)^{\alpha/2}}ds\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\label{b15}\\
&=&C\int_0^Rr^{1-\alpha/2}\left(\frac{r}{\sqrt{r^2-\sigma^2}}-1\right)^{1-\alpha/2}\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&=&C\int_0^Rr^{1-\alpha/2}\left(\frac{\sigma^2}{\left(r+\sqrt{r^2-\sigma^2}\right)\sqrt{r^2-\sigma^2}}\right)^{1-\alpha/2}\frac{\sigma^{n-2}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&\leq&C\int_0^Rr^{1-\alpha/2}\left(\frac{1}{r^2}\right)^{1-\alpha/2}\frac{\sigma^{n-2+2-\alpha}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&=&Cr^{\alpha/2-1}\int_0^R\frac{\sigma^{n-\alpha}}{(1+\sigma)^{n+1}}d\sigma\nonumber\\
&\leq&Cr^{\alpha/2-1}\nonumber\\
&=&\frac{C}{r^{1-\alpha/2}}.\label{b16}\end{aligned}$$
In the above, we derived (\[b13\]) by letting $|y'|=\sigma$ . (\[b14\]) is valid because $$\begin{aligned}
\frac{1}{(s^2-1)^{\alpha/2}}&=&\frac{1}{(s+1)^{\alpha/2}}\frac{1}{(s-1)^{\alpha/2}}\nonumber\\&\leq&\frac{1}{(1+1)^{\alpha/2}}\frac{1}{(s-1)^{\alpha/2}}\nonumber\\&=&\frac{1}{2^{\alpha/2}}\frac{1}{(s-1)^{\alpha/2}}\nonumber\\&\leq&\frac{1}{(s-1)^{\alpha/2}}.\nonumber\end{aligned}$$ Since $R$ is fixed and $\sigma^2\leq R^2$ , when $r$ is sufficiently large ( much larger than $R$ ), we have $r^2-\sigma^2>0$, and the value of $(r^2-\sigma^2)^{\frac{1-\alpha}{2}}$ can be dominated by $(r^2)^{\frac{1-\alpha}{2}}~$(i.e. $r^{1-\alpha}$), this verifies (\[b15\]).
For the $\epsilon>0$ given above and the fixed $R$ , since $0<\alpha<2$ , then by (\[b16\]) we can easily get $$I_{22}\leq C\frac{1}{r^{1-\alpha/2}}<\epsilon,\label{b17}$$ for sufficiently large $r$ .
From (\[b3\]), (\[b0\]), (\[b4\]), (\[b5\]), and(\[b17\]), we derive $$\left|\frac{\partial u}{\partial x_i}(x)\right|<C\epsilon,$$ for sufficiently large $R$ and much larger $r$.
The fact that $\epsilon$ is arbitrary implies $$\left|\frac{\partial u}{\partial x_i}(x)\right| = 0.$$ This proves (\[b1\]).
Now, let’s prove (\[b2\]). Similarly, for fixed $x\in B_r(x_r)\subset\mathbb{R}^n_+$ , through an elementary calculation, one can derive that $$\begin{aligned}
\frac{\partial u}{\partial x_n}(x)&=&\int_{|y-x_r|>r}\left(\frac{\alpha (r- x_n)}{r^2-|x-x_r|^2}+\frac{n(y_n-x_n)}{|y-x|^2}\right)P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\int_{|y-x_r|>r}\frac{\alpha (r- x_n)}{r^2-|x-x_r|^2}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\&\quad&+\int_{|y-x_r|>r}\frac{n(y_n-x_n)}{|y-x|^2}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&:=&J_1+J_2.\label{cc1}\end{aligned}$$
Similarly to $I_2$, for sufficiently large $r$, we can also derive $$|J_2|\leq C\epsilon,$$ for any $\epsilon>0$ . That is $$J_2\rightarrow0, \quad\text{as}~~r\rightarrow\infty.\label{cc2}$$
Now we estimate $J_1$. $$\begin{aligned}
J_1&=&\frac{\alpha (r- x_n)}{r^2-|x-x_r|^2}\int_{|y-x_r|>r}P_r(x-x_r,y-x_r)u(y)dy\nonumber\\
&=&\frac{\alpha (r- x_n)}{2x_nr-|x|^2}u(x).\end{aligned}$$ It follows that $$J_1\rightarrow\frac{\alpha}{2x_n}u(x),\quad \text{as}~~r\rightarrow\infty.\label{cc4}$$
By (\[cc1\]), (\[cc2\]), and(\[cc4\]), for each fixed $x\in B_r(x_r)\subset\mathbb{R}^n_+$, letting $r\rightarrow\infty$ , we arrive at $$\frac{\partial u}{\partial x_n}(x)=\frac{\alpha}{2x_n}u(x).$$ This verifies (\[b2\]), and hence completes the proof of Theorem \[mthm1\].\
$\hat{u}(x)$ is $\alpha$-harmonic in $B_r(x_{r})$
===================================================
In this section, we prove
$\hat{u}(x)$ defined by (\[max\]) in the previous section is $\alpha$-harmonic in $B_r(x_{r})$. \[thm3.1\]
The proof consists of two parts. First we show that $\hat{u}$ is $\alpha$-harmonic in the average sense (Lemma \[lem3.1\]), and then we show that it is $\alpha$-harmonic (Lemma \[lem3.2\]).
Let $$\varepsilon^{(r)}_{\alpha}(x)=\left\{\begin{array}{ll}
0, &|x|<r.\\
\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}} \sin\frac{\pi\alpha}{2}\frac{r^\alpha}{(|x|^2-r^2)^{\frac{\alpha}{2}}|x|^n}, &|x|>r.
\end{array}
\right.$$
We say that $u$ is $\alpha$-harmonic in the average sense (see [@L]) if for small $r$, $$\varepsilon^{(r)}_{\alpha}\ast u(x)=u(x).$$
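The constant in the definition of $\varepsilon^{(r)}_{\alpha}$ is precisely the one which makes $\varepsilon^{(r)}_{\alpha}$ a probability kernel; this normalisation is used again in the proof of Lemma \[lem3.2\] below. For the reader's convenience (the computation is classical and not part of the argument), passing to polar coordinates and substituting $|x|=rs$ and then $s^2=1/v$ gives $$\int_{|x|>r}\varepsilon^{(r)}_{\alpha}(x)\,dx
=\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}}\sin\frac{\pi\alpha}{2}\;\omega_{n-1}\int_1^\infty\frac{ds}{s(s^2-1)^{\frac{\alpha}{2}}}
=\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}}\sin\frac{\pi\alpha}{2}\cdot\frac{2\pi^{\frac{n}{2}}}{\Gamma(n/2)}\cdot\frac{1}{2}B\!\left(\tfrac{\alpha}{2},1-\tfrac{\alpha}{2}\right)=1,$$ where $\omega_{n-1}$ denotes the surface area of the unit sphere in $\mathbb{R}^n$ and we used $B(\tfrac{\alpha}{2},1-\tfrac{\alpha}{2})=\pi/\sin\tfrac{\pi\alpha}{2}$.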
Let $$\begin{aligned}
&&P_{r}(x-x_r,y-x_r)\nonumber\\&=&\left\{\begin{array}{ll}
\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}} \sin\frac{\pi\alpha}{2}
\left[\frac{r^{2}-|x-x_r|^{2}}{|y-x_r|^{2}-r^{2}}\right]^{\frac{\alpha}{2}}\frac{1}{|x-y|^{n}},\qquad& |y-x_r|>r,|x-x_r|<r\\
0,& \text{elsewhere}.
\end{array}
\right.\end{aligned}$$
Let $u(x)$ be any measurable function outside $B_r(x_{r})$ for which $$\int_{\mathbb{R}^n}\frac{|u(z)|}{(1+|z-x_{r}|)^{n+\alpha}}dz<\infty.
\label{d1}$$ Let $$\hat{u}(x)=\left\{\begin{array}{ll}
\int_{|y-x_{r}|>r}P_{r}(y-x_{r},x-x_{r})u(y)dy,& |x-x_{r}|<r,\\
u(x),& |x-x_{r}|\geq r.
\end{array}
\right.$$ Then $\hat{u}(x)$ is $\alpha$-harmonic in the average sense in $B_{r}(x_{r})$, i.e. for sufficiently small $\delta$, we have $$(\varepsilon_\alpha^{(\delta)}\ast\hat{u})(x)=\hat{u}(x),\quad |x-x_{r}|<r,$$ where $\ast$ is the convolution. \[lem3.1\]
**Proof.**
The outline is as follows.
*i)* Approximate $u$ by a sequence of smooth, compactly supported functions $\{u_k\}$, such that $u_k(x) {{\mbox{$\rightarrow$}}}u(x)$ and $$\int_{|z-x_{r}|>r}\frac{|u_k(z)-u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^\frac{\alpha}{2}}dz {{\mbox{$\rightarrow$}}}0.
\label{d2}$$
This is possible under our assumption (\[d1\]).
*ii)* For each $u_k$, find a signed measure $\nu_k$ such that $supp\,\nu_k\subset B^c_r(x_{r})$ and $$u_k(x)=U_{\alpha}^{\nu_k}(x), \;\; |x-x_{r}|>r.$$ Then $$\hat{u}_k(x)=U_{\alpha}^{\nu_k}(x), \quad |x-x_{r}|<r.$$
*iii)* It is easy to see that $\hat{u}_k(x)$ is $\alpha$-harmonic in the average sense for $|x-x_{r}|<r$. That is, for each fixed small $\delta>0$, $$(\varepsilon_\alpha^{(\delta)}\ast\hat{u}_k)(x)=\hat{u}_k(x).$$ By showing that as $k {{\mbox{$\rightarrow$}}}\infty$ $$\varepsilon_\alpha^{(\delta)}\ast\hat{u}_k {{\mbox{$\rightarrow$}}}\varepsilon_\alpha^{(\delta)}\ast\hat{u},$$ and $$\hat{u}_k {{\mbox{$\rightarrow$}}}\hat{u},$$ we arrive at $$(\varepsilon_\alpha^{(\delta)}\ast\hat{u})(x)=\hat{u}(x),\quad |x-x_{r}|<r.$$
Now we carry out the details.
*i)* There are several ways to construct such a sequence $\{u_k\}$. One is to use the mollifier. Let $$u|_{B_k(x_{r})}(x)=\left\{\begin{array}{ll}
u(x),&|x-x_{r}|<k,\\
0,&|x-x_{r}|\geq k,
\end{array}
\right.$$ and $$J_\epsilon(u|_{B_k(x_{r})})(x)=\int_{\mathbb{R}^n}j_\epsilon(x-y)u|_{B_k(x_{r})}(y)dy.$$
For any $\delta>0$, let $k$ be sufficiently large (larger than $r$) such that $$\int_{|z-x_{r}|\geq k}
\frac{|u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz<\frac{\delta}{2}.$$ For each such $k$, choose $\epsilon_k$ such that $$\int_{B_{k+1}\backslash B_r}
\frac{|u_k(z)-u|_{B_k(x_{r})}(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz
<\frac{\delta}{2},$$ where $u_k=J_{\epsilon_k}(u|_{B_k(x_{r})})$. It then follows that $$\begin{aligned}
&\quad&\int_{|z-x_{r}|>r}\frac{|u_k(z)-u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz\\
&\leq&\int_{B_{k+1}(x_{r})\backslash B_r(x_{r})}\frac{|u_k(z)-u|_{B_k(x_{r})}(z)|+
|u|_{B_k(x_{r})}(z)-u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz\\
&&+\int_{|z-x_{r}|>k+1}\frac{|u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz\\
&=&\int_{B_{k+1}(x_{r})\backslash B_r(x_{r})}\frac{|u_k(z)-u|_{B_k(x_{r})}(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz+
\int_{|z-x_{r}|\geq k}\frac{|u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}dz\\
&<&\frac{\delta}{2}+\frac{\delta}{2}=\delta.\end{aligned}$$
Therefore, as $k {{\mbox{$\rightarrow$}}}\infty$, $$\int_{|z-x_{r}|>r}\frac{|u_k(z)-u(z)|}{|z-x_{r}|^n(|z-x_{r}|^2-r^2)^\frac{\alpha}{2}}dz {{\mbox{$\rightarrow$}}}0.
\label{w1}$$
*ii)* For each $u_k$, there exists a signed measure $\psi_k$ such that $$u_k(x)=U_{\alpha}^{\psi_k}(x).$$ Indeed, let $\psi_k(x)= C (-{\mbox{$\bigtriangleup$}})^{\frac{\alpha}{2}}u_k(x)$, then $$\begin{aligned}
U_{\alpha}^{\psi_k}(x)&=&\int_{\mathbb{R}^n}\frac{C}{|x-y|^{n-
\alpha}}(-{\mbox{$\bigtriangleup$}})^{\frac{\alpha}{2}}u_k(y)dy\\
&=&\int_{\mathbb{R}^n}(-{\mbox{$\bigtriangleup$}})^{\frac{\alpha}{2}}\left[\frac{C}{|x-y|^{n-
\alpha}}\right]u_k(y)dy\label{d3}\\
&=&\int_{\mathbb{R}^n}\delta(x-y)u_k(y)dy=u_k(x).
\end{aligned}$$ Here we have used the fact that $\frac{C}{|x-y|^{n-\alpha}}$ is the fundamental solution of $(-{\mbox{$\bigtriangleup$}})^{\alpha/2}$.
Let $\psi_k|_{B_r(x_{r})}$ be the restriction of $\psi_k$ to $B_r(x_{r})$ and set $$\tilde{\psi}_k(y)=\int_{|x-x_{r}|<r}P_r(y-x_r, x-x_r)\psi_k|_{B_r(x_{r})}(x)dx.$$ Then we have $$U_{\alpha}^{\tilde{\psi}_k}(x)=U_{\alpha}^{\psi_k|_{B_r(x_{r})}}(x), \;\; |x-x_{r}|> r,$$ and $supp\,\tilde{\psi}_k\subset B^c_r(x_{r}).$ Here we use the fact (see (1.6.12$'$) [@L]) that $$\frac{1}{|z-x|^{n-\alpha}}=\int_{|y-x_{r}|>r}\frac{P_r(y-x_r,x-x_r)}{|z-y|^{n-\alpha}}dy,\qquad
\:|x-x_{r}|<r,\: |z-x_{r}|>r.
\label{Po}$$
Let $\nu_k=\psi_k-\psi_k|_{B_r(x_{r})}+\tilde{\psi}_k$, then $supp\,\nu_k\subset B^c_r(x_{r})$, and $$U_{\alpha}^{\nu_k}(x)=U_{\alpha}^{\psi_k}(x)+U_{\alpha}^{\tilde{\psi}_k}(x)
-U_{\alpha}^{\psi_k|_{B_r(x_{r})}}(x)=U_{\alpha}^{\psi_k}(x),\quad |x-x_{r}|>r.$$ That is $$u_k(x) = U_{\alpha}^{\nu_k}(x) ,\quad |x-x_{r}|>r.$$
Again by (\[Po\]), we deduce $$\hat{u}_k(x) = U_{\alpha}^{\nu_k}(x) , \quad |x-x_{r}|<r.$$ In this case $\hat{u}_k$ is $\alpha$-harmonic (in the sense of average) in the region $|x-x_{r}|<r$ (see [@L]).
*iii)* For each fixed $x$, we first have $$\hat{u}_k(x) {{\mbox{$\rightarrow$}}}\hat{u}(x).$$ In fact, by (\[w1\]), $$\begin{aligned}
\hat{u}_k(x)-\hat{u}(x)&=&\int_{|y-x_{r}|>r}P_r(y-x_{r},x-x_{r})[u_k(y)-u(y)]dy\\
&=&C\int_{|y-x_{r}|>r}\frac{(r^2-|x-x_{r}|^2)^{\frac{\alpha}{2}}[u_k(y)-u(y)]}{(
|y-x_{r}|^2-r^2)^{\frac{\alpha}{2}}|x-y|^n}dy\\&{{\mbox{$\rightarrow$}}}&0.\end{aligned}$$ Next, we show that, for each fixed $\delta>0$ and fixed $x$, $$(\varepsilon_\alpha^{(\delta)}\ast\hat{u}_k)(x) {{\mbox{$\rightarrow$}}}(\varepsilon_\alpha^{(\delta)}\ast\hat{u})(x).
\label{d8}$$
Indeed, $$\begin{aligned}
&&(\varepsilon_\alpha^{(\delta)}\ast\hat{u}_k)(x)- (\varepsilon_\alpha^{(\delta)}\ast\hat{u})(x)\\
&=&C\int_{|y-x|>\delta}\frac{\delta^\alpha[\hat{u}_k(y)-
\hat{u}(y)]}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy\\
&=&C\{\int_{\substack{|y-x|>\delta\\ |y-x_{r}|<r-\eta}}
\frac{\delta^\alpha[\hat{u}_k(y)-
\hat{u}(y)]}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy\\
&&+\int_{\substack{|y-x|>\delta \\r-\eta<|y-x_{r}|<r}}
\frac{\delta^\alpha[\hat{u}_k(y)-
\hat{u}(y)]}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy\\
&&+
\int_{\substack{|y-x|>\delta \\|y-x_{r}|>r}}
\frac{\delta^\alpha[\hat{u}_k(y)-
\hat{u}(y)]}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy\}\\
&=&C(I_1+I_2+I_3).\end{aligned}$$
For each fixed $x$ with $|x-x_{r}|<r$, choose $\delta$ and $\eta$ such that $$B_\delta(x)\cap B^c_{r-2\eta}(x_{r})=\emptyset.$$
It follows from (\[w1\]) that as $k {{\mbox{$\rightarrow$}}}\infty$, $$I_3=\int_{\substack{|y-x|>\delta \\
|y-x_{r}|>r}}\frac{\delta^\alpha[u_k(y)-
u(y)]}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy~ {{\mbox{$\rightarrow$}}}~ 0.$$
$$\begin{aligned}
I_2&=&\int_{\substack{ |y-x|>\delta \\
r-\eta<|y-x_{r}|<r}}
\frac{\delta^\alpha \int_{|z-x_{r}|>r}P_r(z-x_{r},y-x_{r})[u_k(z)-u(z)]dz}
{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n}dy\\
&=&C\delta^\alpha\int_{|z-x_{r}|>r}\frac{u_k(z)-u(z)}{(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}
\int_{\substack{ |y-x|>\delta \\
r-\eta<|y-x_{r}|<r}}
\frac{(r^2-|y-x_{r}|^2)^{\frac{\alpha}{2}}dy}
{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n|z-y|^n}dz\\
&=&C\delta^\alpha\int_{|z-x_{r}|>r}\frac{u_k(z)-u(z)}{(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}
\cdot I_{21}(x,z)dz.\end{aligned}$$
Note that in the ring $r-\eta<|y-x_{r}|<r$ we have $$|x-y|>\eta+\delta.$$ It then follows that $$\begin{aligned}
&&I_{21}(x,z)\nonumber\\
&\leq&\frac{1}{(2\eta\delta+\eta^2)^{\frac{\alpha}{2}}(\eta+\delta)^n}
\int_{r-\eta<|y-x_{r}|<r}\frac{(r^2-|y-x_{r}|^2)^{\frac{\alpha}{2}}dy}{|z-y|^n}
\nonumber\\
&=&C\int_{r-\eta}^r (r^2-\tau^2)^{\frac{\alpha}{2}}
\left\{\int_{S_\tau} \frac{1}{|z-y|^n} d\sigma_y\right\}d\tau\nonumber\\
&=&C\int_{r-\eta}^r (r^2-\tau^2)^{\frac{\alpha}{2}}
\left\{\int_0^\pi \frac{\omega_{n-2}(\tau\sin\theta)^{n-2}\tau d\theta}
{(\tau^2+|z-x_{r}|^2-2\tau|z-x_{r}|\cos\theta)^{\frac{n}{2}}}\right\}d\tau\nonumber\\
&=&C\int_{r-\eta}^r (r^2-\tau^2)^{\frac{\alpha}{2}}\frac{1}{\tau^n}
\int_0^\pi \frac{\tau^{n-1}\sin^{n-2}\theta d\theta}{(
(\frac{|z-x_{r}|}{\tau})^2-2\frac{|z-x_{r}|}{\tau}\cos\theta+1)^{\frac{n}{2}}}d\tau
\label{d5}\\
&=&C\int_{r-\eta}^r \frac{(r^2-\tau^2)^{\frac{\alpha}{2}}}{\tau}
\frac{d\tau}{(\frac{|z-x_{r}|}{\tau})^{n-2}((\frac{|z-x_{r}|}{\tau})^2-1)}
\int_{0}^\pi\sin^{n-2}\beta d\beta \label{d6}\\
&<&\frac{Cr^{n-1}}{|z-x_{r}|^{n-2}}\int_{r-\eta}^r
\frac{(r^2-\tau^2)^{\frac{\alpha}{2}}}{|z-x_{r}|^2-\tau^2} d\tau\nonumber\\
&=&\frac{Cr^{n-1}}{|z-x_{r}|^{n-2}}\cdot J.\nonumber\end{aligned}$$ In the above, to derive (\[d6\]) from (\[d5\]), we have made the following substitution (see the Appendix in [@L]): $$\frac{\sin\theta}{\sqrt{(\frac{|z-x_{r}|}{\tau})^2-2\frac{|z-x_{r}|}{\tau}\cos\theta+1}}
=\frac{\sin\beta}{\frac{|z-x_{r}|}{\tau}}.$$
To estimate the last integral $J$, we consider
*(a)* For $r<|z-x_{r}|<r+1$, $$J\leq\int_{r-\eta}^r\frac{(r+\tau)^{\frac{\alpha}{2}-1}}{(r-\tau)^{1-\frac{\alpha}{2}}}d\tau\leq C_{\alpha,r}.$$
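To spell out the last bound: since $\alpha/2-1<0$ we have $(r+\tau)^{\frac{\alpha}{2}-1}\leq r^{\frac{\alpha}{2}-1}$, so $$\int_{r-\eta}^r\frac{(r+\tau)^{\frac{\alpha}{2}-1}}{(r-\tau)^{1-\frac{\alpha}{2}}}\,d\tau\leq r^{\frac{\alpha}{2}-1}\int_{r-\eta}^r(r-\tau)^{\frac{\alpha}{2}-1}\,d\tau=\frac{2}{\alpha}\,r^{\frac{\alpha}{2}-1}\eta^{\frac{\alpha}{2}}<\infty.$$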
*(b)* For $|z-x_{r}|\geq r+1$, obviously, $$J\sim\frac{1}{|z-x_{r}|^2}, \mbox{ for $|z-x_{r}|$ large}.\nonumber$$
In summary, $$I_{21}(x,z)\sim\left\{\begin{array}{ll}
1, &\mbox{ for $|z-x_{r}|$ near $r$},\\
\frac{1}{|z-x_{r}|^n},&\mbox{ for $|z-x_{r}|$ large}.
\end{array}
\right.
\nonumber$$
Therefore, by (\[w1\]), as $k {{\mbox{$\rightarrow$}}}\infty$, $$I_2=\delta^\alpha\int_{|z-x_{r}|>r}\frac{u_k(z)-u(z)}{(|z-x_{r}|^2 -r^2)^{\alpha/2}} I_{21}(x,z)dz ~{{\mbox{$\rightarrow$}}}~ 0.$$
Now what remains is to estimate $$I_1=\delta^\alpha\int_{|z-x_{r}|>r}\frac{u_k(z)-u(z)}{(|z-x_{r}|^2-r^2)^{\frac{\alpha}{2}}}
I_{11}(x,z)dz,$$ where $$I_{11}(x,z)=\int_{\substack{|y-x|>\delta \\
|y-x_{r}|< r-\eta}}
\frac{(r^2-|y-x_{r}|^2)^{\frac{\alpha}{2}}dy}
{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|x-y|^n|z-y|^n}.$$
$$\begin{aligned}
I_{11}(x,z)
&\leq&\frac{r^\alpha}{\delta^n}\int_{\substack{ |y-x|>\delta \\
|y-x_{r}|< r-\eta}}
\frac{dy}{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}|z-y|^n}\\
&\leq&\frac{r^\alpha}{\delta^n(|z-x_{r}|-r+\eta)^n}
\int_{\delta<|y-x|<2r}\frac{dy}
{(|x-y|^2-\delta^2)^{\frac{\alpha}{2}}}\\
&=&\frac{r^\alpha}{\delta^n(|z-x_{r}|-r+\eta)^n}
\int_\delta^{2r}
\frac{\omega_{n-1}\tau^{n-1}d\tau}{(\tau^2-\delta^2)^{\frac{\alpha}{2}}}\\
&\leq&\frac{C}{|z-x_{r}|^n}.\end{aligned}$$
By (\[w1\]), as $k {{\mbox{$\rightarrow$}}}\infty$, we have $I_1 {{\mbox{$\rightarrow$}}}0.$ This verifies (\[d8\]) and hence completes the proof.
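As an aside (not needed for the argument), the kernel $P_r$ used throughout this section is normalised so that $\int_{|y-x_{r}|>r}P_r(x-x_r,y-x_r)\,dy=1$ for each fixed $x$ with $|x-x_{r}|<r$; this is the classical normalisation of the $\alpha$-Poisson kernel of the ball. A minimal numerical sketch of this fact, with arbitrarily chosen test values $n=1$, $\alpha=1$, $r=1$, $x=0.3$ and $x_r=0$:

```python
# Sanity check: the alpha-Poisson kernel of the ball integrates to 1 outside the ball.
# The endpoint singularity (y^2 - r^2)^(-alpha/2) at |y| = r is integrable, so
# scipy's adaptive quadrature handles it.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, alpha, r, x = 1, 1.0, 1.0, 0.3                       # test values (assumptions)
const = gamma(n / 2) / np.pi ** (n / 2 + 1) * np.sin(np.pi * alpha / 2)

def P(y):
    return const * ((r**2 - x**2) / (y**2 - r**2)) ** (alpha / 2) / abs(x - y) ** n

total = quad(P, r, np.inf)[0] + quad(P, -np.inf, -r)[0]
print(total)                                            # approximately 1.0
```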
$$\lim_{r\rightarrow0}\frac{1}{r^\alpha}\left[u(x)-\varepsilon^{(r)}_{\alpha}\ast u(x)\right]
=c(-{\mbox{$\bigtriangleup$}})^\frac{\alpha}{2}u(x).
\label{c0}$$
where $c=\frac{\Gamma(n/2)}{\pi^{\frac{n}{2}+1}} \sin\frac{\pi\alpha}{2}$. \[lem3.2\]
**Proof.** $$\begin{aligned}
&&\frac{1}{r^\alpha}\left[u(x)-\varepsilon^{(r)}_{\alpha}\ast u(x)\right]\nonumber\\
&=&\frac{1}{r^\alpha}u(x)- c\int_{|y-x|>r}\frac{u(y)}{(|x -y|^2-r^2)^{\frac{\alpha}{2}}|x-y|^n}dy\nonumber\\
&=&c\int_{|y-x|>r}
\frac{u(x)-u(y)}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}|x-y|^n}dy.
\label{c1}\end{aligned}$$ Here we have used the property that $$\int_{|y-x|>r}\varepsilon^{(r)}_{\alpha}(x-y)\,dy=1.$$
Compare (\[c1\]) with $$(-{\mbox{$\bigtriangleup$}})^\frac{\alpha}{2}u(x)=\lim_{r\rightarrow0}\int_{|y-x|>r}
\frac{u(x)-u(y)}{|x-y|^{\alpha+n}}dy.$$ One may expect that $$\lim_{r\rightarrow0}\int_{|y-x|>r}
\frac{u(x)-u(y)}{|x-y|^{\alpha+n}}dy=\lim_{r\rightarrow0}\int_{|y-x|>r}
\frac{u(x)-u(y)}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}|x-y|^n}dy.$$
Indeed, consider $$\begin{aligned}
&&\int_{|y-x|>r}
\frac{u(x)-u(y)}{|x-y|^{n}}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy\nonumber\\
&=&\int_{r<|y-x|<1}\frac{u(x)-
u(y)}{|x-y|^{n}}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy\nonumber\\
&&+\int_{|y-x|\geq1}\frac{u(x)-
u(y)}{|x-y|^{n}}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy\nonumber\\
&=&I_{1}+I_{2}.\label{c2}\end{aligned}$$
It is easy to see that as $r\rightarrow0$, $I_2$ tends to zero. In fact, the same conclusion holds for $I_1$: $$\begin{aligned}
I_1&=&\int_{r<|y-x|<1}\frac{\nabla u(x)(y-x)+O(|y-x|^2)}{|x-
y|^{n}}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy\nonumber\\
\label{c2.5}\\
&\leq&C\int_{r<|y-x|<1}\frac{|x-
y|^2}{|x-y|^n}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy\label{c3}\\
&=&C\int_r^1\frac{\tau^2}{\tau^n}\left(\frac{1}{(\tau^
2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{\tau^{\alpha}}\right)\tau^{n-1}d\tau\label{c4}\\
&\leq&C\int_1^\infty\left(\frac{1}{r^\alpha(s^
2-1)^{\frac{\alpha}{2}}}
-\frac{1}{r^\alpha s^\alpha}\right)sr^2ds\label{c5}\\
&=&Cr^{2-\alpha}\int_1^\infty\left(\frac{s^\alpha-(s^
2-1)^{\frac{\alpha}{2}}}{(s^
2-1)^{\frac{\alpha}{2}}s^\alpha}\right)sds.\label{c6}\end{aligned}$$ Equation (\[c2.5\]) follows from the Taylor expansion. Due to symmetry, we have $$\int_{r<|y-x|<1}\frac{\nabla u(x)(y-x)}{|x-
y|^{n}}\left(\frac{1}{(|x-y|^2-r^2)^{\frac{\alpha}{2}}}
-\frac{1}{|x-y|^{\alpha}}\right)dy=0$$ and get (\[c3\]). By letting $|y-x|=\tau$ and $\tau=rs$ respectively, one obtains (\[c4\]) and (\[c5\]). It is easy to see that the integral in (\[c6\]) converges near 1. To see that it also converges near infinity, we estimate $$s^\alpha-(s^2-1)^{\frac{\alpha}{2}}.$$
Let $f(t)=t^{\alpha/2}$. By the *mean value theorem*,
$$\begin{aligned}
f(s^2)-f(s^2-1)
&=&f^\prime(\xi)(s^2-(s^2-1))\\
&=&\frac{\alpha}{2}\xi^{\frac{\alpha}{2}-1}\sim s^{\alpha-2}, \mbox{ for $s$ sufficiently large.}\end{aligned}$$
This implies that $$\frac{s^\alpha-(s^2-1)^{\frac{\alpha}{2}}}{(s^2-1)^{\frac{\alpha}{2}}s^\alpha}s
\sim\frac{s^{\alpha-2}s}{(s^2-1)^{\frac{\alpha}{2}}s^\alpha}
\sim\frac{1}{s^{1+\alpha}}.$$ Now it is obvious that (\[c6\]) converges near infinity. Thus we have $$\int_1^\infty\left(\frac{s^\alpha-(s^
2-1)^{\frac{\alpha}{2}}}{(s^
2-1)^{\frac{\alpha}{2}}s^\alpha}\right)sds<\infty.$$ Since $0<\alpha<2$, as $r\rightarrow0$, (\[c6\]) goes to zero, i.e. $I_1$ converges to zero. Together with (\[c1\]) and (\[c2\]), we get (\[c0\]). This proves the lemma.
[CL]{}
K. Bogdan, T. Kulczycki, A. Nowak, Gradient estimates for harmonic and q-harmonic functions of symmetric stable processes, Illinois J. Math. 46(2002)541-556.
W. Chen and Y. Li, A new Liouville theorem for the fractional Laplacian, submitted to Nonlinear Analysis A, 2014.
M. Fall, Entire s-harmonic functions are affine, 2014, arXiv:1407.5934.
M. Fall and T. Weth, Monotonicity and nonexistence results for some fractional elliptic problems in the half space, arXiv:1309.7230.
N. S. Landkof, Foundations of modern potential theory, Springer-Verlag Berlin Heidelberg, New York, 1972. Translated from the Russian by A. P. Doohovskoy, Die Grundlehren der mathematischen Wissenschaften, Band 180.
X. Ros-Oton and J. Serra, The Dirichlet problem for the fractional Laplacian: regularity up to the boundary, arXiv:1207.5985v1.
L. Silvestre, Regularity of the obstacle problem for a fractional power of the Laplace operator, Comm. Pure Appl. Math. 60(2007) 67-112.
[*Authors’ Addresses and E-mails:*]{}
Wenxiong Chen
Department of Mathematical Sciences
Yeshiva University
New York, NY, 10033 USA
wchen@yu.edu
Congming Li
Department of Applied Mathematics
University of Colorado
Boulder, CO, USA
cli@colorado.edu
Lizhi Zhang
School of Mathematics and Information Science
Henan Normal University
azhanglz@163.com
Tingzhi Cheng
Department of Mathematics
Shanghai JiaoTong University
nowitzki1989@126.com.
---
abstract: 'We recently generalised the lattice permutation condition for Young tableaux to Kronecker tableaux and hence calculated a large new class of stable Kronecker coefficients labelled by co-Pieri triples. In this extended abstract we discuss important families of co-Pieri triples for which our combinatorics simplifies drastically.'
author:
- 'C. Bowman'
- 'M. De Visscher'
- 'J. Enyang'
bibliography:
- 'bib.bib'
title: |
The lattice permutation condition\
for Kronecker tableaux\
(Extended abstract)
---
Introduction
============
Perhaps the last major open problem in the complex representation theory of symmetric groups is to describe the decomposition of a tensor product of two simple representations. The coefficients describing the decomposition of these tensor products are known as the [*Kronecker coefficients*]{} and they have been described as ‘perhaps the most challenging, deep and mysterious objects in algebraic combinatorics’. Much recent progress has focussed on the stability properties enjoyed by Kronecker coefficients.
Whilst a complete understanding of the Kronecker coefficients seems out of reach, the purpose of this work is to attempt to understand the [*stable*]{} Kronecker coefficients in terms of oscillating tableaux. Oscillating tableaux hold a distinguished position in the study of tensor product decompositions [@MR1035496; @MR3090983; @MR2264927] but surprisingly they have never before been used to calculate Kronecker coefficients of symmetric groups. In this work, we see that the oscillating tableaux defined as paths on the graph given in \[brancher\] (which we call Kronecker tableaux) provide bases of certain modules for the partition algebra, $P_s(n)$, which is closely related to the symmetric group. We hence add a new level of structure to the classical picture — this extra structure is the key to our main result: the co-Pieri rule for stable Kronecker coefficients.
$$\begin{tikzpicture}[scale=0.4]
\begin{scope} \draw (0,3) node { $\scalefont{0.5}\varnothing$ };
\draw (-3,0) node { $ \frac{1}{2}$ }; \draw (-3,3) node { $ 0$ };
\draw (-3,-3) node { $ 1$ }; \draw (-3,-6) node { $ 1\frac{1}{2}$ };
\draw (-3,-9) node { $ 2$ }; \draw (-3,-12) node { $ 2\frac{1}{2}$ };
\draw (-3,-15) node { $ 3$ };
\draw (0,0) node { $\scalefont{0.5}\varnothing$ };
\draw (0,-3) node { \text{ $\scalefont{0.5}\varnothing$ }} ; \draw (+3,-3) node {
$
\scalefont{0.4}\yng(1)$
} ;
\draw (0,-6) node { \text{ $\scalefont{0.5}\varnothing$ }};
\draw (3,-6) node { $ \,
\scalefont{0.5}\yng(1) $ };
\draw[<-] (0.0,1) -- (0,2); \draw[->] (0.0,-0.75) -- (0,-2.25);
\draw[->] (0.5,-0.75) -- (2.5,-2.25); \draw[->] (2.5,-3.75) -- (0.5,-5.25); \draw[->] (0,-3.75) -- (0,-5.25); \draw[->] (3,-3.75) -- (3,-5.25);
\draw[->] (0,-6.75) -- (0,-8.25); \draw[->] (03,-6.75) -- (3,-8.25);
\draw[->] (0.5,-6.75) -- (2.5,-8.25); \draw[->] (4,-6.75) -- (8,-8.25); \draw[->] (3.5,-6.75) -- (5,-8.25);
\draw (+0,-9) node { $\scalefont{0.5}\varnothing$ };
\draw (+3,-9) node { $
\scalefont{0.4}\yng(1) $ } ;
\draw (+6,-9) node { $
\scalefont{0.4}\yng(2)$ } ;
\draw (+9,-9) node { $
\scalefont{0.4}\yng(1,1)$ } ;
\draw[<-] (0,-11.25) -- (0,-9.75); \draw[<-] (03,-11.25) -- (3,-9.75); \draw[<-] (06,-11.25) -- (6,-9.75); \draw[<-] (9,-11.25) -- (9,-9.75);
\draw[<-] (0.5,-11.25) -- (2.5,-9.75); \draw[<-] (4,-11.25) -- (8,-9.75); \draw[<-] (3.5,-11.25) -- (5,-9.75);
\draw (+0,-12) node { $\scalefont{0.5}\varnothing$ };
\draw (+3,-12) node { $
\scalefont{0.4}\yng(1)$ } ;
\draw (+6,-12) node { $
\scalefont{0.4}\yng(2)$ } ;
\draw (+9,-12) node { $
\scalefont{0.4}\yng(1,1)$ } ;
\draw[->] (0,-12.75) -- (0,-14.25); \draw[->] (03,-12.75) -- (3,-14.25); \draw[->] (06,-12.75) -- (6,-14.25); \draw[->] (9,-12.75) -- (9,-14.25);
\draw[->] (0.5,-12.75) -- (2.5,-14.25); \draw[->] (4,-12.75) -- (8,-14.25); \draw[->] (3.5,-12.75) -- (5,-14.25);
\draw (+0,-15) node { $\scalefont{0.5}\varnothing$ };
\draw (+3,-15) node { $
\scalefont{0.4}\yng(1) $ } ;
\draw (+6,-15) node { $
\scalefont{0.4}\yng(2)$ } ;
\draw (+9,-15) node { $
\scalefont{0.4}\yng(1,1)$ } ;
\draw[->] (6.5,-12.75) -- (12,-14.25); \draw[->] (7,-12.75) -- (14.75,-14.25);
\draw[->] (9.5,-12.75) -- (15.25,-14.25); \draw[->] (10,-12.75) -- (17.7,-14.1);
\draw (12,-15) node { $
\scalefont{0.4}\yng(3)$ } ;
\draw (15,-15) node { $
\scalefont{0.4}\yng(2,1)$ } ;
\draw (18,-15) node { $
\scalefont{0.4}\yng(1,1,1)$ } ;
\end{scope}\end{tikzpicture}$$
\[brancher\]
A momentary glance at the graph given in \[oscillate\] reveals a very familiar subgraph: namely Young’s graph (with each level doubled up). The stable Kronecker coefficients labelled by triples from this subgraph are well-understood — the values of these coefficients can be calculated via a tableaux counting algorithm known as the Littlewood–Richardson rule [@LR34]. This rule has long served as the hallmark for our understanding of Kronecker coefficients. The Littlewood–Richardson rule was discovered as a rule of two halves (as we explain below). In [@BDE] we succeed in generalising one half of this rule to all Kronecker tableaux, and thus solve one half of the stable Kronecker problem. Our main result unifies and vastly generalises the work of Littlewood–Richardson [@LR34] and many other authors [@RW94; @Rosas01; @ROSAANDCO; @BWZ10; @MR2550164]. Most promisingly, our result counts explicit homomorphisms and thus works on a structural level above any description of a family of Kronecker coefficients since those first considered by Littlewood–Richardson [@LR34].
In more detail, given a triple of partitions $(\lambda,\nu, \mu)$ with $|\mu|=s$, we have an associated skew $P_s(n)$-module spanned by the Kronecker tableaux from $\lambda$ to $\nu$ of length $s$, which we denote by $\Delta_s(\nu \setminus\lambda )$. For $\lambda=\varnothing$ and $n{\geqslant}2s$ these modules provide a complete set of non-isomorphic $P_s(n)$-modules (and we drop the partition $\varnothing$ from the notation). The stable Kronecker coefficients are then interpreted as the dimensions, $$\label{dagger}\tag{$\dagger$}
\overline{g}(\lambda,\nu,\mu)
= \dim_{\mathbb{Q}}( {\operatorname{Hom}}_{ P_{s}(n)}( \Delta_{s}(\mu), \Delta_s(\nu \setminus\lambda ) ) )$$for $n{\geqslant}2s$. Restricting to the Young subgraph, or equivalently to a triple $(\lambda,\nu,\mu)$ of so-called [*maximal depth*]{} such that $|\lambda| + |\mu| = |\nu|$, these modules specialise to the usual simple and skew modules for symmetric groups; hence the multiplicities $\overline{g}(\lambda,\nu,\mu)$ are the Littlewood–Richardson coefficients. We hence recover the well-known fact that the Littlewood–Richardson coefficients appear as the subfamily of stable Kronecker coefficients labelled by triples of maximal depth. The tableaux counted by the Littlewood–Richardson rule satisfy 2 conditions: the [*semistandard*]{} and [*lattice permutation*]{} conditions. In [@BDE] we generalise the lattice permutation condition to Kronecker tableaux.
Let $(\lambda,\nu,\mu)$ be a [co-Pieri]{} triple or a triple of maximal depth. Then the stable Kronecker coefficient $\overline{g}(\lambda, \nu, \mu)$ is given by the number of semistandard Kronecker tableaux of shape $\nu\setminus\lambda$ and weight $\mu$ whose reverse reading word is a lattice permutation.
The observant reader will notice that the statement above describes the Littlewood–Richardson coefficients uniformly as part of a far broader family of stable Kronecker coefficients (and is the first result in the literature to do so). Whilst the classical Pieri rule (describing the semistandardness condition for Littlewood–Richardson tableaux) is elementary, it served as a first step towards understanding the full Littlewood–Richardson rule; indeed Knutson–Tao–Woodward have shown that the Littlewood–Richardson rule follows from the Pieri rule by associativity [@taoandco]. We hope that our generalisation of the co-Pieri rule (the lattice permutation condition for Kronecker tableaux) will prove equally useful in the study of stable Kronecker coefficients.
The definition of [semistandard Kronecker tableaux]{} naturally generalises the classical notion of semistandard Young tableaux as certain “orbits" of paths on the branching graph given in \[brancher\] (see Section 1.2 and Definition \[sstrd\]). The [lattice permutation condition]{} is identical to the classical case once we generalise the dominance order to all steps in the branching graph $\mathcal{Y}$ to define the reverse reading word of a semistandard Kronecker tableau (see \[sec:latticed\]).
**Examples of co-Pieri triples.** The definition of [*co-Pieri triples* ]{} is given in [@BDE Theorem 4.12] and can appear quite technical at first reading; we present a few special cases here.
- $\lambda$ and $\nu$ are one-row partitions and $\mu$ is arbitrary. This family has been extensively studied over the past thirty years and there are many distinct combinatorial descriptions of some or all of these coefficients [@RW94; @Rosas01; @ROSAANDCO; @BWZ10; @MR2550164], none of which generalises.
- the two skew partitions $\lambda \ominus (\lambda \cap \nu)$ and $\nu \ominus (\lambda \cap \nu)$ have no two boxes in the same column and $|\mu| = \max \{|\lambda \ominus (\lambda \cap \nu)| , |\nu \ominus (\lambda \cap \nu)|\}$. It is easy to see that if, in addition, $(\lambda, \nu, \mu)$ is a triple of maximal depth, then this case specialises to the classical co-Pieri triples.
- $\lambda = \nu = (dl,d(l-1), \ldots , 2d,d)$ for any $l,d{\geqslant}1$ and $|\mu| {\leqslant}d$.
In this extended abstract we have chosen to focus primarily on case $(i)$ as these triples carry many of the tropes of general co-Pieri triples (but with significant simplifications which serve to make this abstract more approachable) and because case $(i)$ should be familiar to many readers due to its many appearances in the literature.
The partition algebra and Kronecker tableaux {#sec2}
=============================================
\[sec:standard\]
The combinatorics underlying the representation theory of the partition algebras and symmetric groups is based on partitions. A [*partition*]{} $\lambda $ of $n$, denoted $\lambda \vdash n$, is defined to be a sequence of weakly decreasing non-negative integers which sum to $n$. We let $\varnothing$ denote the unique partition of 0. Given a partition, $\lambda=(\lambda _1,\lambda _2,\dots )$, the associated [*Young diagram*]{} is the set of nodes $[\lambda]=\left\{(i,j)\in\mathbb{Z}_{>0}^2\ \left|\ j{\leqslant}\lambda_i\right.\right\}.$ We define the length, $\ell(\lambda)$, of a partition $\lambda$, to be the number of non-zero parts. Given $\lambda = (\lambda_1,\lambda_2, \ldots,\lambda_{\ell} )$ a partition and $n$ an integer, define $\lambda_{[n]}=(n-|\lambda|, \lambda_1,\lambda_2, \ldots,\lambda_{\ell}).$ Given $\lambda_{[n]} $ a partition of $n$, we say that the partition has [*depth*]{} equal to $|\lambda|$.
The partition algebra is generated as an algebra by the elements $s_{k,k+1}$, $p_{k+1/2}$ ($1{\leqslant}k{\leqslant}r-1$) and $p_k$ ($1{\leqslant}k{\leqslant}r$) pictured below modulo a long list of relations. One can visualise any product in this algebra as simply being given by concatenation of diagrams, modulo some surgery to remove closed loops [@BDE]. $${s}_{k,k+1}=
\begin{minipage}{34mm}\scalefont{0.8}\begin{tikzpicture}[scale=0.45]
\draw (0,0) rectangle (6,3);
\foreach \x in {0.5,1.5,...,5.5}
{\fill (\x,3) circle (2pt);
\fill (\x,0) circle (2pt);}
\draw (2.5,-0.49) node {$k$};
\draw (2.5,+3.5) node {$\overline{k}$};
\begin{scope} \draw (0.5,3) -- (0.5,0);
\draw (5.5,3) -- (5.5,0);
\draw (2.5,3) -- (3.5,0);
\draw (4.5,3) -- (4.5,0);
\draw (3.5,3) -- (2.5,0);
\draw (1.5,3) -- (1.5,0);
\end{scope}
\end{tikzpicture}\end{minipage}
p_{k+1/2}
=\begin{minipage}{34mm}\scalefont{0.8}\begin{tikzpicture}[scale=0.45]
\draw (0,0) rectangle (6,3);
\foreach \x in {0.5,1.5,...,5.5}
{\fill (\x,3) circle (2pt);
\fill (\x,0) circle (2pt);}
\draw (2.5,-0.49) node {$k$};
\draw (2.5,+3.5) node {$\overline{k}$};
\begin{scope} \draw (0.5,3) -- (0.5,0);
\draw (5.5,3) -- (5.5,0);
\draw (1.5,3) -- (1.5,0);
\draw (3.5,0) arc (0:180:.5 and 0.5);
\draw (2.5,3) arc (180:360:.5 and 0.5);
\draw (2.5,3) -- (2.5,0);
\draw (4.5,3) -- (4.5,0);
\end{scope}
\end{tikzpicture}\end{minipage}
p_k
=\begin{minipage}{34mm}\scalefont{0.8}\begin{tikzpicture}[scale=0.45]
\draw (0,0) rectangle (6,3);
\foreach \x in {0.5,1.5,...,5.5}
{\fill (\x,3) circle (2pt);
\fill (\x,0) circle (2pt);}
\draw (2.5,-0.49) node {$k$};
\draw (2.5,+3.5) node {$\overline{k}$};
\begin{scope} \draw (0.5,3) -- (0.5,0);
\draw (5.5,3) -- (5.5,0);
\draw (4.5,0) -- (4.5,3);
\draw (1.5,3) -- (1.5,0);
\draw (3.5,3) -- (3.5,0);
\end{scope}
\end{tikzpicture}\end{minipage}
$$
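To make the description of the product as concatenation, modulo surgery to remove closed loops, concrete, the following minimal Python sketch multiplies two partition diagrams. It is our own illustration rather than code from [@BDE]; the encoding of a diagram as a set partition of pairs $(\text{row},i)$, and the choice of which factor is drawn on top, are assumptions made purely for this sketch.

```python
# A partition diagram on r strands is a set partition of the 2r vertices
# {('top', i), ('bot', i) : 1 <= i <= r}, given as a list of frozensets
# (singleton blocks must be listed explicitly).  multiply(d1, d2) glues the
# bottom row of d1 to the top row of d2, counts the closed loops lying entirely
# in the glued middle row, and returns (loops, resulting diagram); the product
# in P_r(n) is then n**loops times the resulting diagram.

def multiply(d1, d2):
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        parent[find(u)] = find(v)

    for block in d1:                      # d1 keeps its top row; its bottom row becomes 'mid'
        verts = [('top', i) if row == 'top' else ('mid', i) for (row, i) in block]
        for v in verts:
            union(verts[0], v)
    for block in d2:                      # d2's top row becomes 'mid'; it keeps its bottom row
        verts = [('mid', i) if row == 'top' else ('bot', i) for (row, i) in block]
        for v in verts:
            union(verts[0], v)

    classes = {}
    for v in parent:
        classes.setdefault(find(v), set()).add(v)

    loops, blocks = 0, []
    for cls in classes.values():
        outer = frozenset(v for v in cls if v[0] != 'mid')
        if outer:
            blocks.append(outer)
        else:
            loops += 1                    # a component contained in the middle row is a closed loop
    return loops, sorted(blocks, key=sorted)

# The generator p_1 on r = 2 strands: the vertices 1 and 1-bar are singletons, strand 2 passes through.
p1 = [frozenset({('top', 1)}), frozenset({('bot', 1)}), frozenset({('top', 2), ('bot', 2)})]
print(multiply(p1, p1))   # one closed loop and the diagram p_1 back, reflecting p_1 * p_1 = n * p_1
```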
Define the branching graph $\mathcal{Y}$ as follows. For $k\in {{{\mathbb Z}_{{\geqslant}0}}}$, we denote by $\mathscr{P}_{{\leqslant}k}$ the set of partitions of degree less than or equal to $k$. Now the sets of vertices on the $k$th and $(k+1/2)$th levels of $\mathcal{Y}$ are given by $${\mathcal{Y}}_{k}= \{ (\lambda,k-|\lambda|) \mid \lambda \in \mathscr{P}_{{\leqslant}k}\}
\qquad
{\mathcal{Y}}_{k+1/2} =
\{ (\lambda,k-|\lambda|) \mid \lambda \in \mathscr{P}_{{\leqslant}k}\}.$$ The edges of $\mathcal{Y}$ are as follows,
- for $(\lambda,l) \in {\mathcal{Y}}_k$ and $(\mu,m) \in {\mathcal{Y}}_{k+1/2}$ there is an edge $(\lambda,l) \to(\mu,m)$ if $\mu = \lambda$, or if $\mu $ is obtained from $\lambda $ by removing a box in the $i$th row for some $i{\geqslant}1$; we write $\mu =\lambda- \varepsilon_0$ or $\mu =\lambda- \varepsilon_i$, respectively.
- for $(\lambda,l) \in {\mathcal{Y}}_{k+1/2}$ and $(\mu,m) \in {\mathcal{Y}}_{k+1}$ there is an edge $(\lambda,l) \to(\mu,m)$ if $\mu = \lambda$, or if $\mu $ is obtained from $\lambda $ by adding a box in the $i$th row for some $i{\geqslant}1$; we write $\mu =\lambda+ \varepsilon_0$ or $\mu =\lambda+ \varepsilon_i$, respectively.
When it is convenient, we decorate each edge with the index of the node that is added or removed when reading down the diagram. The first few levels of $\mathcal{Y}$ are given in Figure \[brancher\]. When no confusion is possible, we identify $(\lambda,l) \in \mathcal{Y}_{k}$ with the partition $\lambda$.
Given $\lambda \in \mathscr{P}_{r-s} \subseteq \mathcal{Y}_{r-s}$ and $\nu \in \mathscr{P}_ {{\leqslant}r} \subseteq \mathcal{Y}_{r}$, we define a [*standard Kronecker tableau*]{} of shape $ \nu \setminus \lambda $ and degree $s$ to be a path ${\mathsf{t}}$ of the form $$\label{genericpath}
\lambda = {\mathsf{t}}(0) \to {\mathsf{t}}(\tfrac{1}{2}) \to {\mathsf{t}}(1)\to \dots \to {\mathsf{t}}(s-\tfrac{1}{2})\to {\mathsf{t}}(s) = \nu,$$ in other words ${\mathsf{t}}$ is a path in $\mathcal{Y}$ which begins at $\lambda$ and terminates at $\nu$. We let ${\mathrm{Std}}_s(\nu \setminus \lambda)$ denote the set of all such paths. If $\lambda = \emptyset \in \mathcal{Y}_0$ then we write ${\mathrm{Std}}_r(\nu)$ instead of ${\mathrm{Std}}_r(\nu \setminus \emptyset)$. Given ${\mathsf{s}},{\mathsf{t}}$ two standard Kronecker tableaux of degree $s$, we write ${\mathsf{s}}\trianglerighteq {\mathsf{t}}$ if ${\mathsf{s}}(k)\trianglerighteq {\mathsf{t}}(k)$ for all $0{\leqslant}k{\leqslant}s$.
We can think of a path as either the sequence of partitions or the sequence of boxes removed and added. We usually prefer the latter case and record these boxes removed and added pairwise. For a pair $(-\varepsilon_p,+\varepsilon_q)$ we call this an add or remove step if $p=0$ or $q=0$ respectively (because the effect of this step is to add or remove a box) and we call this a dummy step if $p=q$ (as we end up at the same partition as we started); we write $a(q)$ or $r(p)$ for an add or remove step and $d(p)$ for a dummy step. Many examples are given below, in particular the reader should compare the paths of Example \[example3\] with those depicted in the central diagram in Figure \[maximaldepth\]. We let ${\mathsf{t}}^\lambda$ denote the most dominant element of ${\mathrm{Std}}_{r-s}(\lambda)$, namely that of the form: $$\underbrace{
d(0) \circ d(0) \circ
\dots
\circ d(0) }_{r-s-|\lambda|}
\circ
\underbrace{
a(1)
\circ
\dots\circ
a(1)}_{\lambda_1}\circ
\underbrace{
a(2)
\circ
\dots\circ
a(2)}_{\lambda_2}\circ
\cdots$$ Given $\lambda
\in\mathscr{P}_{r-s} \subseteq \mathcal{Y}_{r-s}$ and $\nu \in\mathscr{P}_{{\leqslant}r} \subseteq \mathcal{Y}_r$, define the [*skew cell module*]{} $$\Delta_s(\nu\setminus\lambda) = {\rm Span}\{ {\mathsf{t}}^\lambda \circ {\mathsf{s}}\mid {\mathsf{s}}\in {\mathrm{Std}}_s(\nu\setminus\lambda)\}$$ with the action of $P_s(n)\hookrightarrow P_{r-s}(n) \otimes P_s(n)\hookrightarrow P_r(n)$ given as in [@BDE Section 2.3]. If $\lambda =\varnothing$, then we simply denote this module by $\Delta_s(\nu)$. Let $\lambda\in \mathscr{P}_{r-s}$, $\mu\in \mathscr{P}_s$ and $\nu \in \mathscr{P}_{{\leqslant}r}$. Then we are able to define the stable Kronecker coefficients (even if this is not their usual definition) to be the multiplicities $$\overline{g}(\lambda,\nu,\mu)
= \dim_{\mathbb{Q}}( {\operatorname{Hom}}_{ P_{s}(n)}( \Delta_{s}(\mu), \Delta_s(\nu \setminus\lambda ) ) )$$ for all $n{\geqslant}2s$. When $s=|\nu|-|\lambda|$, the (skew) cell modules for partition algebras specialise to the usual Specht modules of the symmetric groups and we hence easily see that these stable coefficients coincide with the classical Littlewood–Richardson coefficients.
The action of the partition algebra
===================================
Understanding the action of the partition algebra on skew modules is difficult in general. In this section, we show that this can be done to some extent in the cases of interest to us. We have assumed that $|\mu|=s$, therefore the ideal $P_s(n)p_r P_s(n)\subset P_s(n)$ annihilates $\Delta_s(\mu)$ and this motivates the following definition.
We define the [Dvir radical]{} of the skew module $\Delta_s(\nu\setminus\lambda)$ by $${\sf DR}_s(\nu\setminus\lambda) = \Delta_s(\nu\setminus\lambda)P_s(n)p_rP_s(n)
\subseteq \Delta_s(\nu\setminus\lambda)$$ and set $$\Delta^0_s(\nu\setminus\lambda)=
\Delta_s(\nu\setminus\lambda) /{\sf DR}_s(\nu\setminus\lambda).$$ If $s=|\nu|-|\lambda|$, then set $ {\mathrm{Std}}^0_s(\nu\setminus\lambda) = {\mathrm{Std}}_s(\nu\setminus\lambda).
$ If $\lambda$ and $\nu$ are one-row partitions, then set $ {\mathrm{Std}}^0_s(\nu\setminus\lambda) \subseteq {\mathrm{Std}}_s(\nu\setminus\lambda)
$ to be the subset of paths, ${\mathsf{s}}$, whose steps are of the form $$r(1)=(-1,+0) \qquad d(1)=(-1,+1) \qquad a(1)=(-0,+1)$$ and such that the total number of boxes removed in ${\mathsf{s}}$ is less than or equal to $|\lambda|$.
Fix ${\mathsf{t}}\in {{\mathrm{Std}}}_r( \nu )$ and $1{\leqslant}k <r$ and suppose that $${\mathsf{t}}{(k-1)} \xrightarrow{-t} {\mathsf{t}}(k-\tfrac{1}{2}) \xrightarrow{+u} {\mathsf{t}}(k) \xrightarrow{-v} {\mathsf{t}}(k+\tfrac{1}{2}) \xrightarrow{+w} {\mathsf{t}}(k+1).$$ We define $ {\mathsf{t}}_{k \leftrightarrow k+1}\in {\mathrm{Std}}_r(\nu)$ to be the tableau, if it exists, determined by $ {\mathsf{t}}_{k \leftrightarrow k+1}(l) ={\mathsf{t}}(l) $ for $l\neq k, k \pm \tfrac{1}{2} $ and $${\mathsf{t}}_{k \leftrightarrow k+1} {(k-1)} \xrightarrow{-v} {\mathsf{t}}_{k \leftrightarrow k+1}{(k-\tfrac{1}{2})} \xrightarrow{+w}
{\mathsf{t}}_{k \leftrightarrow k+1}{(k)} \xrightarrow{-t} {\mathsf{t}}_{k \leftrightarrow k+1}(k+\tfrac{1}{2}) \xrightarrow{+u} {\mathsf{t}}_{k \leftrightarrow k+1}(k+1).$$ Let $(\lambda,\nu,s) $ be such that $s=|\nu|-|\lambda|$, or $\lambda$ and $\nu$ are both one-row partitions, then $\Delta^0_s(\nu\setminus\lambda)$ is free as a ${{\mathbb Z}}$-module with basis $$\{{\mathsf{t}}\mid {\mathsf{t}}\in {\mathrm{Std}}^0_s(\nu\setminus\lambda)\}$$ and the $P_s(n)$-action on $\Delta_s^0(\nu\setminus\lambda)$ is as follows: $$\label{co-case}
({\mathsf{t}}+ {\sf DR_s(\nu\setminus\lambda)} ) s_{k,k+1} =
\begin{cases}
{{\mathsf{t}}_{k\leftrightarrow k+1}} + {\sf DR_s(\nu\setminus\lambda)} &\text{if ${{\mathsf{t}}_{k\leftrightarrow k+1}}$ exists} \\
- {\mathsf{t}}+\sum_{{\mathsf{s}}\rhd {\mathsf{t}}} r_{{\mathsf{s}}{\mathsf{t}}}{\mathsf{s}}+ {\sf DR_s(\nu\setminus\lambda)} &\text{otherwise}
\end{cases}$$ for $1{\leqslant}k < s$ and $ ({\mathsf{t}}+ {\sf DR_s(\nu\setminus\lambda)} ) p_{k,k+1} =
0$ and $
( {\mathsf{t}}+ {\sf DR_s(\nu\setminus\lambda)} ) p_{k} =
0
$ for $1{\leqslant}k {\leqslant}s$. The coefficients $r_{{\mathsf{s}}{\mathsf{t}}}\in \mathbb{Q}$ are given in [@BDE Theorem 2.9].
The set ${\mathrm{Std}}^0_3((3,3)\setminus (2,1))$ has two elements $${\mathsf{t}}_1=a(1) \circ a(2)\circ a(2)
\qquad
{\mathsf{t}}_2=a(2) \circ a(1)\circ a(2).$$ These are depicted on the lefthand-side of \[actioner\]. We have that $$s_{1,2}=\left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right)
\qquad
s_{2,3}=\left(\begin{array}{cc}1 & -1 \\0 & -1\end{array}\right).$$
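As a quick check (this verification is ours and not part of the original text), both matrices square to the identity and they satisfy the braid relation $$s_{1,2}\,s_{2,3}\,s_{1,2}=s_{2,3}\,s_{1,2}\,s_{2,3}=\left(\begin{array}{cc}-1 & 0 \\-1 & 1\end{array}\right),$$ so they define a two-dimensional representation of $\mathfrak{S}_3$. Since $s=|\nu|-|\lambda|$ here, this module is the skew Specht module labelled by $(3,3)/(2,1)$, which is isomorphic to ${\sf S}(2,1)$.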
$$\begin{tikzpicture}[scale=0.55]
\path (0,0)edge[decorate] node[left] {$-0$} (0,-2);
\path (0,-2)edge[decorate] node[right] {$+2$} (2,-4);
\path (0,-2)edge[decorate] node[left] {$+1$} (-2,-4);
\path (-2,-6)edge[decorate] node[left] {$-0$} (-2,-4);
\path (2,-6)edge[decorate] node[right] {$-0$} (2,-4);
\path (-2,-6)edge[decorate] node[left] {$+2$} (0,-8);
\path (2,-6)edge[decorate] node[right] {$+1$} (0,-8);
\path (0,0-8)edge[decorate] node[left] {$-0$} (0,-2-8);
\path (0,-2-8)edge[decorate] node[left] {$+2$} (0,-4-8);
\fill[white] (0,0) circle (17pt);
\begin{scope}
\fill[white] (0,0) circle (17pt);
\draw (0,0) node {$ \scalefont{0.4}\yng(2,1) $ };
\fill[white] (0,-2) circle (17pt); \draw (0,-2) node{$ \scalefont{0.4}\yng(2,1)$ };
\fill[white] (-2,-4) circle (17pt); \draw (-2,-4) node{$ \scalefont{0.4}\yng(3,1) $ };
\fill[white] (2,-4) circle (17pt); \draw (2,-4) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (-2,-6) circle (17pt); \draw (-2,-6) node{$ \scalefont{0.4}\yng(3,1)$ };
\fill[white] (2,-6) circle (17pt); \draw (2,-6) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (0,-8) circle (17pt); \draw (0,-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (0,-2-8) circle (17pt); \draw (0,-2-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (0,-4-8) circle (17pt); \draw (0,-4-8) node{$ \scalefont{0.4}\yng(3,3) $ };
\end{scope}
\end{tikzpicture} \qquad\qquad
\begin{tikzpicture}[scale=0.55]
\path (0,0)edge[decorate] node[left] {$-1$} (-2,-2);
\path (0,0)edge[decorate] node[left] {$-0$} (2,-2);
\path (-3,-4)edge[decorate] node[left] {$+0$} (-2,-2);
\path (0,-4)edge[decorate] node[left] {$+1$} (-2,-2);
\path (3,-4)edge[decorate] node[right] {$+1$} (2,-2);
\path (0,-4)edge[decorate] node[left] {$+0$} (2,-2);
\path (-3,-8)edge[decorate] node[left] {$-0$} (-2,-10);
\path (0,-8)edge[decorate] node[left] {$-1$} (-2,-10);
\path (3,-8)edge[decorate] node[right] {$-1$} (2,-10);
\path (0,-8)edge[decorate] node[left] {$-0$} (2,-10);
\path (-3,-4)edge[decorate] node[left] {$-1$} (-3,-6);
\path (-3,-4)edge[decorate] node[left] {$-0$} (0,-6);
\path (0,-4)edge[decorate] node[left] {$-0$} (3,-6);
\path (0,-4)edge[decorate] node[left] {$-1$} (0,-6);
\path (3,-4)edge[decorate] node[right] {$-1$} (3,-6);
\path (-3,-8)edge[decorate] node[left] {$+1$} (-3,-6);
\path (-3,-8)edge[decorate] node[left] {$+0$} (0,-6);
\path (0,-8)edge[decorate] node[left] {$+0$} (3,-6);
\path (0,-8)edge[decorate] node[left] {$+1$} (0,-6);
\path (3,-8)edge[decorate] node[right] {$+1$} (3,-6);
\path (2,-2-8)edge[decorate] node[right] {$+0$} (0,-4-8);
\path (-2,-2-8)edge[decorate] node[left] {$+1$} (0,-4-8);
\fill[white] (0,0) circle (17pt);
\begin{scope}
\fill[white] (0,0) circle (18pt);
\draw (0,0) node {$ \scalefont{0.4}\yng(4) $ };
\fill[white] (-2,-2) circle (17pt); \draw (-2,-2) node{$ \scalefont{0.4}\yng(1,1,1)$ };
\fill[white] (2,-2) circle (17pt); \draw (2,-2) node{$ \scalefont{0.4}\yng(4)$ };
\fill[white] (-3,-4) circle (17pt); \draw (-3,-4) node{$ \scalefont{0.4}\yng(1,1,1)$ };
\fill[white] (0,-4) circle (17pt); \draw (0,-4) node{$ \scalefont{0.4}\yng(4)$ };
\fill[white] (3,-4) circle (17pt); \draw (3,-4) node{$ \scalefont{0.4}\yng(5)$ };
\fill[white] (-3,-6) circle (17pt); \draw (-3,-6) node{$ \scalefont{0.4}\yng(2)$ };
\fill[white] (0,-6) circle (17pt); \draw (0,-6) node{$ \scalefont{0.4}\yng(1,1,1)$ };
\fill[white] (3,-6) circle (17pt); \draw (3,-6) node{$ \scalefont{0.4}\yng(4)$ };
\fill[white] (-3,-8) circle (17pt); \draw (-3,-8) node{$ \scalefont{0.4}\yng(1,1,1)$ };
\fill[white] (0,-8) circle (17pt); \draw (0,-8) node{$ \scalefont{0.4}\yng(4)$ };
\fill[white] (3,-8) circle (17pt); \draw (3,-8) node{$ \scalefont{0.4}\yng(5)$ };
\fill[white] (-2,-2-8) circle (17pt); \draw (-2,-2-8) node{$ \scalefont{0.4}\yng(1,1,1)$ };
\fill[white] (2,-2-8) circle (17pt); \draw (2,-2-8) node{$ \scalefont{0.4}\yng(4)$ };
\fill[white] (0,-4-8) circle (17pt); \draw (0,-4-8) node{$ \scalefont{0.4}\yng(4) $ };
\end{scope}
\end{tikzpicture}$$
\[example3\] The set ${\mathrm{Std}}_3^0((4)\setminus(4))$ consists of the 7 oscillating tableaux $$\begin{array}{ccccc}
{\mathsf{s}}_1= r(1)\circ d(1)\circ a(1) &
{\mathsf{s}}_2= d(1)\circ r(1)\circ a(1) &
{\mathsf{s}}_3= r(1)\circ a(1) \circ d(1)
\\
{\mathsf{s}}_4= a(1)\circ r(1)\circ d(1) &
{\mathsf{s}}_5= d(1)\circ a(1) \circ r(1) &
{\mathsf{s}}_6= a(1)\circ d(1)\circ r(1) &
\\
& {\mathsf{s}}_7=d(1)\circ d(1) \circ d(1)
\end{array}$$ pictured in \[actioner\]. We have that $$s_{1,2}=
\left(\begin{array}{ccccccc}
\cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot & \cdot \\
\cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 \\\end{array}\right)
\qquad
s_{2,3}=
\left(\begin{array}{ccccccc}
\cdot & \cdot & 1 & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\
1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\
\cdot & 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & 1 & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 1 \\\end{array}\right)$$ It is not difficult to see that this module decomposes as follows $$\Delta^0_3((4)\setminus(4))= 2 \Delta^0_3((3))
\oplus 2 \Delta^0_3((2,1))
\oplus \Delta^0_3((1^3)).$$
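The count of seven, and the dimension count $2\cdot\dim {\sf S}(3)+2\cdot\dim {\sf S}(2,1)+\dim {\sf S}(1^3)=2+4+1=7$ implicit in this decomposition, are easy to confirm by brute force. The following short Python sketch is our own illustration; the encoding of a one-row path as a word in the letters r, d, a is an assumption made just for this check.

```python
# Brute-force check of Example [example3]: for the one-row partitions
# lambda = nu = (4), an element of Std^0_3((4) \ (4)) is a word of length 3 in
# the steps r(1), d(1), a(1).  We simulate the length of the single row and keep
# the words that never try to remove a box from an empty row and that end back
# at 4.  (The requirement that at most |lambda| = 4 boxes are removed is
# automatic for words of length 3.)
from itertools import product

effect = {"r": -1, "d": 0, "a": +1}    # net effect of each step on the row length

count = 0
for word in product("rda", repeat=3):
    row, valid = 4, True
    for step in word:
        if step in "rd" and row == 0:  # r(1) and d(1) both need a box to remove
            valid = False
            break
        row += effect[step]
    if valid and row == 4:
        count += 1

print(count)   # 7, matching the tableaux s_1, ..., s_7 listed above
```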
Semistandard Kronecker tableaux {#sec:semistandard}
===============================
For any $(\lambda,\nu,s) \in {\mathscr{P}_{r-s}}\times {\mathscr{P}_{{\leqslant}r}} \times {{\mathbb Z}}_{> 0} $ and any $\mu \vdash s$ we have $$\overline g( \lambda, \nu,\mu) = \dim_{\mathbb{Q}}{\operatorname{Hom}}_{P_s(n)}(\Delta_s(\mu), \Delta_s^0(\nu\setminus\lambda) ) = \dim_{{\mathbb{Q}}} {\operatorname{Hom}}_{{\mathbb{Q}}\mathfrak{S}_s}({\sf S}(\mu), \Delta_s^0(\nu \setminus \lambda)),$$ where ${\mathbb{Q}}\mathfrak{S}_s$ is viewed as the quotient of $P_s(n)$ by the ideal generated by $p_r$. Now for each $\mu = (\mu_1, \mu_2, \ldots , \mu_l) \vdash s$ we have an associated Young permutation module ${\sf M} (\mu) = {\mathbb{Q}}\otimes_{\mathfrak{S}_\mu} {\mathbb{Q}}\mathfrak{S}_s$ where $\mathfrak{S}_\mu = \mathfrak{S}_{\mu_1} \times \mathfrak{S}_{\mu_2}\times \dots\times \mathfrak{S}_{\mu_l} \subseteq \mathfrak{S}_s$. As a first step towards understanding the stable Kronecker coefficients, it is natural to consider $$\dim_{{\mathbb{Q}}} {\operatorname{Hom}}_{\mathfrak{S}_s}({\sf M} (\mu), \Delta^0_s(\nu \setminus \lambda) )$$ and to attempt to construct a basis in terms of semistandard (Kronecker) tableaux.
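For orientation we recall the classical case this is modelled on (a standard fact, not specific to [@BDE]): when $\lambda=\varnothing$ and $|\nu|=s$, Young's rule states that $\dim_{\mathbb{Q}}{\operatorname{Hom}}_{\mathfrak{S}_s}({\sf M}(\mu),{\sf S}(\nu))$ equals the number of semistandard Young tableaux of shape $\nu$ and weight $\mu$. For example ${\sf M}(2,1)\cong{\sf S}(3)\oplus{\sf S}(2,1)$, matching the unique semistandard Young tableaux of shapes $(3)$ and $(2,1)$ and weight $(2,1)$.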
\[sstrd\] Let $(\lambda,\nu,s) \in \mathscr{P}_{r-s} \times \mathscr{P}_{{\leqslant}r} \times \mathbb{N}$ be a pair of one-row partitions or a triple of maximal depth. Let $\mu = (\mu_1, \mu_2, \ldots , \mu_l)\vdash s$ and let ${\mathsf{s}}, {\mathsf{t}}\in {\mathrm{Std}}^0_s(\nu \setminus \lambda)$.
1. For $1{\leqslant}k <s$ we write ${\mathsf{s}}\overset{k}{\sim} {\mathsf{t}}$ if ${\mathsf{s}}= {\mathsf{t}}_{k\leftrightarrow k+1}$.
2. We write ${\mathsf{s}}\overset{\mu}{\sim} {\mathsf{t}}$ if there exists a sequence of standard Kronecker tableaux ${\mathsf{t}}_1, {\mathsf{t}}_2, \ldots , {\mathsf{t}}_d \in {\mathrm{Std}}^0_s(\nu\setminus\lambda)$ such that $${\mathsf{s}}= {\mathsf{t}}_{1} \overset{k_1}{\sim} {\mathsf{t}}_{2} , \
{\mathsf{t}}_{2} \overset{k_2}{\sim} {\mathsf{t}}_{3} , \ \dots \ ,
{\mathsf{t}}_{d-1}\overset{k_{d-1}}{\sim} {\mathsf{t}}_{d}
={\mathsf{t}}$$ for some $k_1,\dots, k_{d-1}\in \{1, \ldots , s-1\} \setminus
\{ [\mu]_c \mid c = 1, \ldots , l-1 \}$. We define a [tableau of weight]{} $\mu$ to be an equivalence class of tableau under $\overset{\mu}{\sim} $, denoted $[{\mathsf{t}}]_\mu = \{ {\mathsf{s}}\in {\mathrm{Std}}^0_s(\nu\setminus \lambda) \, |\, {\mathsf{s}}\overset{\mu}{\sim} {\mathsf{t}}\}$.
3.  We say that a Kronecker tableau, $[{\mathsf{t}}]_\mu$, of shape $\nu\setminus \lambda$ and weight $\mu$ is [semistandard]{} if for any ${\mathsf{s}}\in [{\mathsf{t}}]_\mu$ and any $ k \not \in \{[\mu]_c \mid c = 1, \ldots , l-1 \}$ the tableau ${\mathsf{s}}_{k\leftrightarrow k+1}$ exists. We let ${\mathrm{SStd}}_s^0(\nu\setminus \lambda, \mu)$ denote the set of semistandard Kronecker tableaux of shape $\nu\setminus \lambda$ and weight $\mu$.
To represent these semistandard Kronecker tableaux graphically, we will add ‘frames’ corresponding to the composition $\mu$ on the set of paths ${\mathrm{Std}}_s^0(\nu \setminus \lambda)$ in $\mathcal{Y}$. For ${\mathsf{t}}=(-\varepsilon_{i_1}, + \varepsilon_{j_1}, \ldots , -\varepsilon_{i_s}, + \varepsilon_{j_s})$ we say that the integral step $(-\varepsilon_{i_k}, + \varepsilon_{j_k})$ belongs to the $c$th frame if $[\mu]_{c-1} < k{\leqslant}[\mu]_c$. Thus for ${\mathsf{s}}, {\mathsf{t}}\in {\mathrm{Std}}_s^0(\nu\setminus \lambda)$ we have that ${\mathsf{s}}\overset{\mu}{\sim} {\mathsf{t}}$ if and only if ${\mathsf{s}}$ is obtained from ${\mathsf{t}}$ by permuting integral steps within each frame (as in Figures \[anewfigforintro\] and \[maximaldepth\]).
\[YOUNGSRULE\] Let $(\lambda, \nu, s)$ be a co-Pieri triple and $\mu\vdash s$. We define $ \varphi_{\mathsf{T}}({{\mathsf{t}}^\mu}) = \sum_{{\mathsf{s}}\in {\mathsf{T}}}{\mathsf{s}}$ for ${\mathsf{T}}\in {\mathrm{SStd}}_s^0(\nu \setminus \lambda, \mu)$. Then $ {\operatorname{Hom}}_{\mathfrak{S}_s}({\sf M}(\mu), \Delta_s^0(\nu\setminus \lambda))$ has ${{\mathbb Z}}$-basis $ \{\varphi_{\mathsf{T}}\mid {\mathsf{T}}\in {\mathrm{SStd}}_s^0(\nu \setminus \lambda, \mu)\}$.
*(Figure, graphics not reproduced: two diagrams of paths in $\mathcal{Y}$ whose integral steps are grouped into dashed frames labelled "1st frame, 2 steps in", "2nd frame, 2 steps in" and "3rd frame, 1 step in"; the lefthand diagram runs through one-row partitions from $(4)$ to $(5)$, the righthand diagram through partitions from $(7,5,1,1)$ to $(5,3,3)$.)*
\[semiexam2\] Let $\lambda =(4)$, $\nu =(5)$, $s=5$ and $\mu=(2,2,1) \vdash {5}$. An example of a semistandard tableau, ${\mathsf{V}}$, of shape $\nu\setminus \lambda$ and weight $\mu$ is given by the rightmost diagram in Figure \[anewfigforintro\]. The semistandard tableau ${\mathsf{V}}$ is an orbit consisting of the following four standard tableaux $$\begin{aligned}
&{\mathsf{v}}_1= r(1) \circ d(1) \circ d(1) \circ a(1) \circ a(1)
\ \quad \
{\mathsf{v}}_2= d(1) \circ r(1) \circ d(1) \circ a(1) \circ a(1)
\ \quad \ \\
& {\mathsf{v}}_3= r(1) \circ d(1) \circ a(1) \circ d(1) \circ a(1)
\ \quad \
{\mathsf{v}}_4= d(1) \circ r(1) \circ a(1) \circ d(1) \circ a(1)
\end{aligned}$$ We have a corresponding homomorphism $
\varphi_{\mathsf{V}}\in {\operatorname{Hom}}_{\mathfrak{S}_s}({\sf M}(2,2,1), \Delta_s((5)\setminus (4)))
$ given by $$\varphi_{\mathsf{V}}({\mathsf{t}}^{(2,2,1)})={\mathsf{v}}_1+{\mathsf{v}}_2+{\mathsf{v}}_3+{\mathsf{v}}_4.$$
The classical picture for semistandard Young tableaux
-----------------------------------------------------
We now wish to illustrate how our Definition \[sstrd\] and the familiar visualisation of a semistandard Young tableau coincide for triples of maximal depth. Given $\lambda \vdash {r-s}, \nu \vdash {r}, \mu = (\mu_1, \mu_2, \ldots , \mu_\ell ) \vdash s$ such that $\lambda \subseteq \nu$, a Young tableau of shape $\nu\ominus \lambda$ and weight $\mu$ in the classical picture is visualised as a filling of the boxes of $[\nu\ominus \lambda]$ with the entries $$\underbrace{1, \dots, 1}_{\mu_1}, \underbrace{2,\dots, 2}_{\mu_2},
\ldots, \underbrace{\ell ,\dots, \ell }_{\mu_\ell }$$ so that they are weakly increasing along the rows and columns. One should think of this classical picture of a Young tableau of weight $\mu$ simply as a diagrammatic way of encoding an $\mathfrak{S}_\mu$-orbit of standard Young tableaux as follows. Let ${\mathsf{s}}$ be a standard Young tableau of shape $\nu\ominus \lambda$ and let $\mu$ be a partition. Then define $\mu({\mathsf{s}})$ to be the Young tableau of weight $\mu$ obtained from ${\mathsf{s}}$ by replacing each of the entries $[\mu]_{c-1} < i {\leqslant}[\mu]_c$ in ${\mathsf{s}}$ by the entry $c$ for $ c {\geqslant}1$. We identify a Young tableau, ${\mathsf{S}}$, of weight $\mu$ with the set of standard Young tableaux, $\mu^{-1}({\mathsf{S}})=\{{\mathsf{s}}\mid \mu({\mathsf{s}})={\mathsf{S}}\}$.
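For example, take $\mu=(2,2,1)$ and let ${\mathsf{s}}$ be the standard Young tableau of shape $(3,2)$ with rows $1,2,4$ and $3,5$. Replacing the entries $1,2$ by $1$, the entries $3,4$ by $2$ and the entry $5$ by $3$ gives the tableau $\mu({\mathsf{s}})$ with rows $1,1,2$ and $2,3$.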
In either picture, a Young tableau of weight $\mu$ is merely a picture which encodes an $\mathfrak{S}_\mu$-orbit of standard Young tableaux. We picture a Young tableau, ${\mathsf{S}}$, of weight $\mu$ as the orbit of paths $\mu^{-1}({\mathsf{S}})$ in the branching graph with a frame to record the partition $\mu$.
A tableau of weight $\mu$ in the classical picture would be said to be semistandard if and only if the entries are strictly increasing along the columns. In our picture, this is equivalent to condition 3 of Definition \[sstrd\].
$$\begin{tikzpicture}[scale=0.6]
\draw[white] [decorate,decoration={brace,amplitude=6pt},xshift=6pt,yshift=0pt] (-3.85,-7.9)-- (-3.85,-0.1) node [right,black,midway,yshift=-0.2cm,xshift=-2cm]{\text{1st frame}}
node [right,black,midway,yshift=0.2cm,xshift=-2cm]{\text{2 steps in}} ;
\draw[white] [decorate,decoration={brace,amplitude=6pt},xshift=6pt,yshift=0pt] (-3.85,-7.9-8)-- (-3.85,-0.1-8) node [right,black,midway,yshift=-0.2cm,xshift=-2cm]{\text{2nd frame}}
node [right,black,midway,yshift=0.2cm,xshift=-2cm]{\text{2 steps in}} ;
\draw[white] [decorate,decoration={brace,amplitude=6pt},xshift=6pt,yshift=0pt] (-3.85,-7.9-8-4)-- (-3.85,-0.1-8-8) node [right,black,midway,yshift=-0.2cm,xshift=-2cm]{\text{3rd frame}}
node [right,black,midway,yshift=0.2cm,xshift=-2cm]{\text{1 step in}} ;
\clip(-3.6,0.5) rectangle (3.6,-20.8);
\path (0,0)edge[decorate] node[left] {$-0$} (0,-2);
\path (0,-2)edge[decorate] node[right] {$+2$} (2,-4);
\path (0,-2)edge[decorate] node[left] {$+1$} (-2,-4);
\path (-2,-6)edge[decorate] node[left] {$-0$} (-2,-4);
\path (2,-6)edge[decorate] node[right] {$-0$} (2,-4);
\path (-2,-6)edge[decorate] node[left] {$+2$} (0,-8);
\path (2,-6)edge[decorate] node[right] {$+1$} (0,-8);
\path (0,0-8)edge[decorate] node[left] {$-0$} (0,-2-8);
\path (0,-2-8)edge[decorate] node[right] {$+3$} (2,-4-8);
\path (0,-2-8)edge[decorate] node[left] {$+2$} (-2,-4-8);
\path (-2,-6-8)edge[decorate] node[left] {$-0$} (-2,-4-8);
\path (2,-6-8)edge[decorate] node[right] {$-0$} (2,-4-8);
\path (-2,-6-8)edge[decorate] node[left] {$+3$} (0,-8-8);
\path (2,-6-8)edge[decorate] node[right] {$+2$} (0,-8-8);
\path (0,-16)edge[decorate] node[left] {$-0$} (0,-18);
\path (0,-20)edge[decorate] node[left] {$+3$} (0,-18);
\draw[dashed] (-3.5,0) rectangle (3.5,-8);
\draw[dashed] (-3.5,-8)--(-3.5,-16)--(3.5,-16)--(3.5,-8) ;
\draw[dashed] (-3.5,-16)--(-3.5,-20)--(3.5,-20)--(3.5,-16) ;
\fill[white] (0,0) circle (17pt);
\begin{scope}
\fill[white] (0,0) circle (17pt);
\draw (0,0) node {$ \scalefont{0.4}\yng(2,1) $ };
\fill[white] (0,-2) circle (17pt); \draw (0,-2) node{$ \scalefont{0.4}\yng(2,1)$ };
\fill[white] (-2,-4) circle (17pt); \draw (-2,-4) node{$ \scalefont{0.4}\yng(3,1) $ };
\fill[white] (2,-4) circle (17pt); \draw (2,-4) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (-2,-6) circle (17pt); \draw (-2,-6) node{$ \scalefont{0.4}\yng(3,1)$ };
\fill[white] (2,-6) circle (17pt); \draw (2,-6) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (0,-8) circle (17pt); \draw (0,-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (0,-16) circle (17pt); \draw (0,-16) node{$ \scalefont{0.4}\yng(3,3,1)$ };
\fill[white] (0,-2-8) circle (17pt); \draw (0,-2-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (-2,-4-8) circle (17pt); \draw (-2,-4-8) node{$ \scalefont{0.4}\yng(3,3) $ };
\fill[white] (2,-4-8) circle (17pt); \draw (2.1,-4-8.1) node{$ \scalefont{0.4}\yng(3,2,1) $ };
\fill[white] (-2,-6-8) circle (17pt); \draw (-2,-6-8) node{$ \scalefont{0.4}\yng(3,3)$ };
\fill[white] (2,-6-8) circle (17pt); \draw (2.1,-6-8.1) node{$ \scalefont{0.4}\yng(3,2,1)$ };
\fill[white] (0,-18) circle (17pt); \draw (0,-18) node{$ \scalefont{0.4}\yng(3,3,1)$ };
\fill[white] (0,-20) circle (17pt); \draw (0,-20) node{$ \scalefont{0.4}\yng(3,3,2)$ };
\end{scope}
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[scale=0.6]
\clip(-3.6,0.5) rectangle (3.6,-20.8);
\path (0,0)edge[decorate] node[left] {$-0$} (0,-2);
\path (0,-2)edge[decorate] node[right] {$+2$} (2,-4);
\path (0,-2)edge[decorate] node[left] {$+1$} (-2,-4);
\path (-2,-6)edge[decorate] node[left] {$-0$} (-2,-4);
\path (2,-6)edge[decorate] node[right] {$-0$} (2,-4);
\path (-2,-6)edge[decorate] node[left] {$+2$} (0,-8);
\path (2,-6)edge[decorate] node[right] {$+1$} (0,-8);
\path (0,0-8)edge[decorate] node[left] {$-0$} (0,-2-8);
\path (0,-2-8)edge[decorate] node[left] {$+3$} (0,-4-8);
\path (0,-6-8)edge[decorate] node[left] {$-0$} (0,-4-8);
\path (0,-6-8)edge[decorate] node[left] {$+3$} (0,-8-8);
\path (0,-16)edge[decorate] node[left] {$-0$} (0,-18);
\path (0,-20)edge[decorate] node[left] {$+2$} (0,-18);
\draw[dashed] (-3.5,0) rectangle (3.5,-8);
\draw[dashed] (-3.5,-8)--(-3.5,-16)--(3.5,-16)--(3.5,-8) ;
\draw[dashed] (-3.5,-16)--(-3.5,-20)--(3.5,-20)--(3.5,-16) ;
\fill[white] (0,0) circle (17pt);
\begin{scope}
\fill[white] (0,0) circle (17pt);
\draw (0,0) node {$ \scalefont{0.4}\yng(2,1) $ };
\fill[white] (0,-2) circle (17pt); \draw (0,-2) node{$ \scalefont{0.4}\yng(2,1)$ };
\fill[white] (-2,-4) circle (17pt); \draw (-2,-4) node{$ \scalefont{0.4}\yng(3,1) $ };
\fill[white] (2,-4) circle (17pt); \draw (2,-4) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (-2,-6) circle (17pt); \draw (-2,-6) node{$ \scalefont{0.4}\yng(3,1)$ };
\fill[white] (2,-6) circle (17pt); \draw (2,-6) node{$ \scalefont{0.4}\yng(2,2) $ };
\fill[white] (0,-8) circle (17pt); \draw (0,-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (0,-16) circle (17pt); \draw (0,-16) node{$ \scalefont{0.4}\yng(3,3,1)$ };
\fill[white] (0,-2-8) circle (17pt); \draw (0,-2-8) node{$ \scalefont{0.4}\yng(3,2)$ };
\fill[white] (0,-4-8) circle (17pt); \draw (0,-4-8) node{$ \scalefont{0.4}\yng(3,2,1) $ };
\fill[white] (0,-6-8) circle (17pt); \draw (0,-6-8.1) node{$ \scalefont{0.4}\yng(3,2,2)$ };
\fill[white] (0,-18) circle (17pt); \draw (0,-18) node{$ \scalefont{0.4}\yng(3,2,2)$ };
\fill[white] (0,-20) circle (17pt); \draw (0,-20) node{$ \scalefont{0.4}\yng(3,3,2)$ };
\end{scope}
\end{tikzpicture}$$
\[semiexam1\] Let $\lambda =(2,1) $, $\nu =(3,3,2)$ and $s=5$. Then $(\lambda, \nu,s)$ is a triple of maximal depth. Take $\mu=(2,2,1) \vdash {5}$. The semistandard tableau ${\mathsf{U}}$ is an orbit consisting of the following four standard tableaux $$\begin{aligned}
&{\mathsf{u}}_1= a(1) \circ a(2) \circ a(2) \circ a(3) \circ a(3)
\ \quad \
{\mathsf{u}}_2= a(2) \circ a(1) \circ a(2) \circ a(3) \circ a(3)
\ \quad \ \\
& {\mathsf{u}}_3= a(1) \circ a(2) \circ a(3) \circ a(2) \circ a(3)
\ \quad \
{\mathsf{u}}_4= a(2) \circ a(1) \circ a(3) \circ a(2) \circ a(3)
\end{aligned}$$ pictured as follows $$\scalefont{0.9}
\Yboxdim{12pt}
\mu^{-1} \left( \; \Yvcentermath1 \young(\ \ 1,\ 12,23)\ \right) =
\left\{
\; \Yboxdim{12pt} \Yvcentermath1 \young(\ \ 1,\ 23,45)
\ \ , \ \
\Yboxdim{12pt} \Yvcentermath1 \young(\ \ 2,\ 13,45)
\ \ , \ \
\Yboxdim{12pt} \Yvcentermath1 \young(\ \ 2,\ 14,35)
\ \ , \ \
\Yboxdim{12pt} \Yvcentermath1 \young(\ \ 1,\ 24,35)
\ \right\}.
$$ We have a corresponding homomorphism $
\varphi_{\mathsf{U}}\in {\operatorname{Hom}}_{\mathfrak{S}_s}({\sf M}(2,2,1), \Delta_s((3,3,2)\setminus (2,1)))
$ given by $$\varphi_{\mathsf{U}}({\mathsf{t}}^{(2,2,1)})={\mathsf{u}}_1+{\mathsf{u}}_2+{\mathsf{u}}_3+{\mathsf{u}}_4.$$ Compare this orbit sum over 4 tableaux with the picture in Figure \[maximaldepth\] and the statement of Theorem \[YOUNGSRULE\].
Let $\lambda =(2,1) $, $\nu =(3,3,2)$ and $s=5$. Then $(\lambda, \nu,s)$ is a triple of maximal depth. Take $\mu=(2,2,1) \vdash {5}$. The full list of semistandard tableaux (pictured in the classical fashion) is as follows $$\Yvcentermath1 \young(\ \ 1,\ 12,23)
\quad
\Yvcentermath1 \young(\ \ 1,\ 13,22)
\quad
\Yvcentermath1 \young(\ \ 1,\ 22,13)
\quad
\Yvcentermath1 \young(\ \ 2,\ 13,12)$$ The first two of these semistandard tableaux are pictured in our diagrammatic fashion in Figure \[maximaldepth\].
Latticed Kronecker tableaux {#sec:latticed}
===========================
We now provide the main result of the paper, namely we combinatorially describe $$\overline{g}(\lambda, \nu, \mu) = \dim {\operatorname{Hom}}_{\mathfrak{S}_s}({\sf S}(\mu), \Delta_s^0(\nu\setminus \lambda))$$ for $(\lambda, \nu, \mu)$ a triple of maximal depth or such that $\lambda$ and $\nu$ are both one-row partitions. One can think of a path ${\mathsf{t}}\in {\mathrm{Std}}_s(\nu\setminus\lambda)$ as a sequence of partitions; or equivalently, as the sequence of boxes added and removed. We shall refer to a pair of steps, $(-\varepsilon_a,+\varepsilon_b)$, between consecutive integral levels of the branching graph as an [integral step]{} in the branching graph. We define [types]{} of integral step (move-up, dummy, move-down) in the branching graph of $P_r(n)$ and order them as follows, $$\begin{array}{ccccccccc}
&\text{move-up } & &\text{dummy } & &\text{move-down }
&
\\
& (-\varepsilon_p, + \varepsilon_q)&< & (-\varepsilon_t, + \varepsilon_t)
&< &(-\varepsilon_u, + \varepsilon_v)
\end{array}$$ for $p>q$ and $u< v$; we refine this to a total order as follows,
1. we order $(-\varepsilon_p, + \varepsilon_q)< (-\varepsilon_{p'}, + \varepsilon_{q'}) $ if $q<q'$ or $q=q'$ and $p>p'$;
2. we order $(-\varepsilon_t, + \varepsilon_t) < (-\varepsilon_{t'}, + \varepsilon_{t'}) $ if $t>t'$;
3. we order $(-\varepsilon_u, + \varepsilon_v)< (-\varepsilon_{u'}, + \varepsilon_{v'})$ if $u>u'$ or $u=u'$ and $v<v'$.
We sometimes let $a(i):={m{\downarrow}}(0,i)$ (respectively $r(i):={m{\uparrow}}(i,0)$) and think of this as [adding]{} (respectively [removing]{}) a box. We start with any standard tableau ${\mathsf{s}}\in {\mathrm{Std}}_s^0(\nu \setminus \lambda)$ and any $\mu = (\mu_1, \mu_2, \ldots , \mu_l)\vdash s$. Write $${\mathsf{s}}= (-\varepsilon_{i_1},
+\varepsilon_{j_1},
-\varepsilon_{i_2},
+\varepsilon_{j_2},
\dots
, -\varepsilon_{i_s},
+\varepsilon_{j_s}).$$ Recall from the previous section that, to each integral step $(-\varepsilon_{i_k}, + \varepsilon_{j_k})$ in ${\mathsf{s}}$, we associate its frame $c$, that is the unique positive integer such that $[\mu]_{c-1} < k {\leqslant}[\mu]_c.$
\[jdfhklssdhjhlashlfs\] We encode the integral steps of ${\mathsf{s}}$ and their frames in a $2\times s$ array, denoted by $\omega_\mu ({\mathsf{s}})$ (called the $\mu$-reverse reading word of ${\mathsf{s}}$) as follows. The first row of $\omega_\mu({\mathsf{s}})$ contains all the integral steps of ${\mathsf{s}}$ and the second row contains their corresponding frames. We order the columns of $\omega_\mu({\mathsf{s}})$ increasingly using the ordering on integral steps given in Definition 2.5. For two equal integral steps we order the columns so that the frame numbers are weakly decreasing. Given ${\mathsf{S}}\in {\mathrm{SStd}}_s^0(\nu\setminus \lambda, \mu)$, it is easy to see that $\omega_\mu ({\mathsf{s}})=\omega_\mu ({\mathsf{t}})$ for any pair ${\mathsf{s}},{\mathsf{t}}\in {\mathsf{S}}$ and so we define the $\mu$-reverse reading word, $\omega({\mathsf{S}})$, of ${\mathsf{S}}$ in the obvious fashion. For ${\mathsf{S}}\in {\mathrm{SStd}}_s^0(\nu\setminus \lambda, \mu)$ we write $$\omega({\mathsf{S}}) = (\omega_1({\mathsf{S}}), \omega_2({\mathsf{S}}))$$ where $\omega_1({\mathsf{S}})$ (respectively $\omega_2({\mathsf{S}})$) is the first (respectively second) row of $\omega({\mathsf{S}})$. Note that $\omega_2({\mathsf{S}})$ is a sequence of positive integers such that $i$ appears precisely $\mu_i$ times, for $i{\geqslant}1$.
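The construction of $\omega_\mu({\mathsf{s}})$ can also be carried out mechanically; the sketch below (ours, with the total order on integral steps supplied by hand as a rank function, since only adding-a-box steps occur in the example) reproduces the reading word of Example \[semiexam3\].

```python
# Illustration only: the mu-reverse reading word of a path, given its
# time-ordered list of integral steps and a rank encoding the total order.
from itertools import accumulate

def reading_word(steps, mu, rank):
    bounds = list(accumulate(mu))
    frames = [next(c + 1 for c, b in enumerate(bounds) if k + 1 <= b)
              for k in range(len(steps))]
    # sort by the order on integral steps, breaking ties by decreasing frame
    cols = sorted(zip(steps, frames), key=lambda sf: (rank[sf[0]], -sf[1]))
    return [s for s, _ in cols], [f for _, f in cols]

rank = {'a(1)': 0, 'a(2)': 1, 'a(3)': 2}          # a(1) < a(2) < a(3)
w1, w2 = reading_word(['a(1)', 'a(2)', 'a(2)', 'a(3)', 'a(3)'], [2, 2, 1], rank)
print(w1)   # ['a(1)', 'a(2)', 'a(2)', 'a(3)', 'a(3)']
print(w2)   # [1, 2, 1, 3, 2]
```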
\[semiexam3\] For $\lambda=(2,1)$ and $\nu=(3,3,2)$, the steps taken in the semistandard tableau ${\mathsf{U}}$ of Figure \[maximaldepth\] are $$a(1), a(2), a(2), a(3), a(3)$$ We record the steps according to the dominance ordering for the partition algebra ($a(1)< a(2) < a(3)$) and refine this by recording the frame in which these steps occur backwards, as follows $$\omega({\mathsf{U}})=
\left(\begin{array}{cccccccccccc}
a(1)&a(2)&a(2)&a(3) &a(3)
\\
1& 2& 1 & 3 &2
\end{array}\right).$$ For $\lambda=(4)$ and $\nu=(5)$, the steps taken in the semistandard tableau ${\mathsf{V}}$ on the right of Figure \[anewfigforintro\] are $$r(1), d(1), d(1), a(1), a(1).$$ We record the steps according to the dominance ordering for the partition algebra ($r(1)< d(1) < a(1)$) and we refine this by recording the frame in which these steps occur backwards, as follows $$\omega({\mathsf{V}})=\left(\begin{array}{cccccccccccc}
r(1)&d(1)&d(1)&a(1) &a(1)
\\
1& 2& 1 & 3 &2
\end{array}\right)$$ and notice that $\omega_2({\mathsf{U}})=\omega_2({\mathsf{V}})$. We leave it as an exercise for the reader to verify that the rightmost tableau depicted in Figure \[anewfigforintro\] has reading word $$\left(\begin{array}{cccccccccccc}
\ r(4) \ & \ r(1) \ & \ r(1) \ &m{\downarrow}(2,3) &m{\downarrow}(2,3)
\\
1& 2& 1 & 2&3
\end{array}\right).$$
For ${\mathsf{S}}\in {\mathrm{SStd}}_s^0(\nu\setminus \lambda, \mu)$ we say that its reverse reading word $\omega({\mathsf{S}})$ is a lattice permutation if $\omega_2({\mathsf{S}})$ is a string composed of positive integers, in which every prefix contains at least as many positive integers $i$ as integers $i+1$ for $i{\geqslant}1$. We define ${\mathrm{Latt}}_s^0(\nu \setminus \lambda, \mu)$ to be the set of all ${\mathsf{S}}\in {\mathrm{SStd}}_s^0(\nu\setminus \lambda, \mu)$ such that $\omega({\mathsf{S}})$ is a lattice permutation. For any co-Pieri triple $(\lambda, \nu, s)$ and any $\mu\vdash s$ we have $$\overline{g}(\lambda, \nu, \mu) = \dim_{{\mathbb{Q}}}{\operatorname{Hom}}_{\mathfrak{S}_s}({\sf S}(\mu), \Delta_s^0(\nu\setminus \lambda)) = |{\mathrm{Latt}}_s^0(\nu\setminus \lambda, \mu)|.$$
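The lattice permutation condition on $\omega_2({\mathsf{S}})$ is a simple prefix check; a minimal sketch (ours) is as follows.

```python
# Sketch of the lattice-permutation test on omega_2(S): every prefix must contain
# at least as many i's as (i+1)'s, for all i >= 1.
from collections import Counter

def is_lattice(word):
    counts = Counter()
    for c in word:
        counts[c] += 1
        if c > 1 and counts[c] > counts.get(c - 1, 0):
            return False
    return True

print(is_lattice([1, 2, 1, 3, 2]))   # True  (the word omega_2(U) above)
print(is_lattice([1, 2, 2, 1, 3]))   # False (prefix 1,2,2 has more 2's than 1's)
```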
For example, we have that $$\overline{g}((2,1), (3,3,2), (2,2,1))= 1 =
\overline{g}((4),(4),(2,2,1))$$ and that the corresponding homomorphisms are constructed in Examples \[semiexam2\] and \[semiexam1\]. That these semistandard tableaux satisfy the lattice permutation property is checked in Example \[semiexam3\]. Verifying that these are the only semistandard tableaux satisfying the lattice permutation property is left as an exercise for the reader. Similarly, one can check that $ \overline{g}((7,5,1^2), (6,3,3), (2,2,1))= 1 $.
The (non-stable) Kronecker coefficients are also indexed by partitions. As we increase the size of the first row of each of the indexing partitions of the Kronecker coefficients, we obtain a weakly increasing sequence of coefficients; the limiting values of these sequences are the stable Kronecker coefficients which have been the focus of this paper. The non-stable Kronecker coefficients labelled by two 2-line partitions can be written as an alternating sum of at most 4 stable Kronecker coefficients labelled by two 1-line partitions [@BDE Proposition 7.6]. (In fact, any non-stable Kronecker coefficient can be written as an alternating sum of stable Kronecker coefficients.) This should be compared with the existing descriptions of Kronecker coefficients labelled by two 2-line partitions [@RW94; @Rosas01] which also involve alternating sums with at most 4 terms.
The advantages of our description are that $(1)$ ours is the first description that generalises to other stable Kronecker coefficients (and in particular the first description of any family of Kronecker coefficients subsuming the Littlewood–Richardson coefficients) and $(2)$ it counts explicit homomorphisms and therefore works on a higher structural level than all other descriptions of stable Kronecker coefficients since those first considered by Littlewood and Richardson [@LR34].
---
abstract: 'A variety of machine learning models have been proposed to assess the performance of players in professional sports. However, they have only a limited ability to model how player performance depends on the game context. This paper proposes a new approach to capturing game context: we apply Deep Reinforcement Learning (DRL) to learn an action-value Q function from 3M play-by-play events in the National Hockey League (NHL). The neural network representation integrates both continuous context signals and game history, using a possession-based LSTM. The learned Q-function is used to value players’ actions under different game contexts. To assess a player’s overall performance, we introduce a novel Goal Impact Metric (GIM) that aggregates the values of the player’s actions. Empirical evaluation shows that GIM is consistent throughout a season and correlates highly with standard success measures and future salary.'
author:
- Guiliang Liu
- |
Oliver Schulte\
Simon Fraser University, Burnaby, Canada\
gla68@sfu.ca, oschulte@cs.sfu.ca
bibliography:
- 'ijcai18.bib'
title:
- |
Deep Reinforcement Learning in Ice Hockey\
for Context-Aware Player Evaluation
---
Introduction: Valuing Actions and Players
=========================================
With the advancement of high frequency optical tracking and object detection systems, more and larger event stream datasets for sports matches have become available. There is increasing opportunity for large-scale machine learning to model complex sports dynamics. Player evaluation is a major task for sports modeling that draws attention from both fans and team managers, who want to know which players to draft, sign or trade. Many models have been proposed [@Buttrey2011; @Macdonald2011; @decroos2018actions; @kaplan]. The most common approach has been to quantify the value of a player’s action, and to evaluate players by the total value of the actions they took [@Schuckers2013; @mchale2012development].
However, traditional sports models assess only the actions that have immediate impact on goals (e.g. shots), but not the actions that lead up to them (e.g. pass, reception). Moreover, action values are assigned taking into account only a limited context of the action. But in realistic professional sports, the relevant context is very complex, including game time, position of players, score and manpower differential, etc. Recently, Markov models have been used to address these limitations. [@Routley2015a] used states of a Markov Game Model to capture game context and compute a Q function, representing the chance that a team scores the next goal, for [*all*]{} actions. [@Cervone2014a] applied a competing risk framework with a Markov chain to model game context, and developed EPV, a point-wise conditional value similar to a Q function, for each action. The Q-function concept offers two key advantages for assigning values to actions [@schulte2017markov; @decroos2018actions]: 1) All actions are scored on the same scale by looking ahead to expected outcomes. 2) Action values reflect the match context in which they occur. For example, a late check near the opponent’s goal generates different scoring chances than a check at other locations and times.
![Ice Hockey Rink. **Ice hockey** is a fast-paced team sport, where two teams of skaters must shoot a puck into their opponent’s net to score goals. []{data-label="fig:ice-hockey-rink"}](Ice-Hockey-Rink-marked.png){width="45.00000%"}
The states in the previous Markov models represent only a partial game context in the real sports match, but nonetheless the models assume full observability. Also, they pre-discretized input features, which leads to loss of information. In this work, we utilize a deep reinforcement learning (DRL) model to learn an action-value Q function for capturing the current match context. The neural network representation can easily incorporate continuous quantities like rink location and game time. To handle partial observability, we introduce a possession-based Long Short Term Memory (LSTM) architecture that takes into account the current play history. Unlike most previous work on active reinforcement learning (RL), which aims to compute [*optimal strategies*]{} for complex continuous-flow games [@littlestone; @Mnih2015], we solve a prediction (not control) problem in the passive learning (on policy) setting [@Sutton1998]. [*We use RL as a behavioral analytics tool for real human agents, not to control artificial agents.*]{}
Given a Q-function, the [*impact*]{} of an action is the change in Q-value due to the action. Our novel Goal Impact Metric (GIM) aggregates the impact of all actions of a player. To our knowledge, this is the first player evaluation metric based on DRL. The GIM metric measures both players’ offensive and defensive contribution to goal scoring. For player evaluation, similar to clustering, ground truth is not available. A common methodology [@Routley2015a; @Pettigrew2015] is to assess the predictive value of a player evaluation metric for standard measures of success. Empirical comparison between 7 player evaluation metrics finds that 1) given a complete season, GIM correlates the most with 12 standard success measures and is the most temporally consistent metric, 2) given partial game information, GIM generalizes best to future salary and season total success.
Related Work
============
We discuss the previous work most related to our approach.
[*Deep Reinforcement Learning.*]{} Previous DRL work has focused on [*control*]{} in continuous-flow games, not prediction [@Mnih2015]. Among these papers, [@littlestone] use a very similar network architecture to ours, but with a fixed trace length parameter rather than our possession-based method. @littlestone find that for partially observable control problems, the LSTM mechanism outperforms a memory window. Our study confirms this finding in an on policy prediction problem.
[*Player Evaluation.*]{} Albert et al. [-@schwartz] provide several up-to-date survey articles about evaluating players. A fundamental difficulty for action value counts in continuous-flow games is that they traditionally have been restricted to goals and actions immediately related to goals (e.g. shots). The Q-function solves this problem by using lookahead to assign values to all actions.
[*Player Evaluation with Reinforcement Learning.*]{} Using the Q-function to evaluate players is a recent development [@schulte2017markov; @Cervone2014a; @Routley2015a]. @schulte2017markov discretized location and time coordinates and applied dynamic programming to learn a Q-function. Discretization leads to loss of information, undesirable spatio-temporal discontinuities in the Q-function, and generalizes poorly to unobserved parts of the state space. For basketball, @Cervone2014a defined a player performance metric based on an expected point value model that is equivalent to a Q-function. Their approach assumes complete observability (of all players at all times), while our data provide partial observability only.
Task Formulation and Approach
=============================
Player evaluation (the “Moneyball” problem) is one of the most studied tasks in sports analytics. Players are rated by their observed performance over a set of games. Our approach to evaluating players is illustrated in Figure \[fig:control-flow\]. Given dynamic game tracking data, we apply Reinforcement Learning to estimate the [*action value*]{} function ${Q^{}({s},{a})}$, which assigns a value to action ${a}$ given game state ${s}$. We define a new player evaluation metric called **Goal Impact Metric (GIM)** to value each player, based on the aggregated impact of their actions, which is defined in Section 6 below. Player evaluation is a descriptive task rather than a predictive generalization problem. As game event data does not provide a ground truth rating of player performance, our experiments assess player evaluation as an unsupervised problem in Section 7.
![System Flow for Player Evaluation[]{data-label="fig:control-flow"}](control-flow.png){width="50.00000%"}
[GID=GameId, PID=playerId, GT=GameTime, TID=TeamId, MP=Manpower, GD=Goal Difference, OC = Outcome, S=Succeed, F=Fail, P = Team Possess puck, H=Home, A=Away, H/A=Team who performs action, TR = Time Remain, PN = Play Number, D = Duration]{}
Play Dynamic in NHL
===================
We utilize a **dataset** constructed by SPORTLOGiQ using computer vision techniques. The data provide information about [*game events*]{} and [*player actions*]{} for the entire 2015-2016 NHL (largest professional ice hockey league) season, which contains 3,382,129 events, covering 30 teams, 1140 games and 2,233 players. Table \[table:example-of-dataset\] shows an excerpt. The data track events around the puck, and record the identity and actions of the player in possession, with space and time stamps, and features of the game context. The table utilizes adjusted spatial coordinates where negative numbers refer to the defensive zone of the acting player, positive numbers to his offensive zone. Adjusted X-coordinates run from -100 to +100, Y-coordinates from 42.5 to -42.5, and the origin is at the ice center as in Figure \[fig:ice-hockey-rink\]. We augment the data with derived features in Table \[table:derived-features\] and list the complete feature set in Table \[table:feature-of-dataset\].
We apply the Markov Game framework [@Littman1994] to learn an action value function for NHL play. Our notation for RL concepts follows [@Mnih2015]. There are two agents ${\it{Home}}$ resp. ${\it{Away}}$ representing the home resp. away team. The **reward**, represented by goal vector ${g}_t$, is a 1-of-3 indicator vector that specifies which team scores (${\it{Home}},{\it{Away}},{\it{Neither}}$). An **action** ${a}_t$ is one of 13 types, including shot, block, assist, etc., together with a mark that specifies the team executing the action, e.g. $\it{Shot}({\it{Home}})$. An **observation** is a feature vector $\features_{t}$ for discrete time step $t$ that specifies a value for the 10 features listed in Table \[table:feature-of-dataset\]. We use the complete sequence ${s}_{t} \equiv (\features_t,{a}_{t-1},\features_{t-1},\ldots,\features_0)$ as the state representation at time step $t$ [@Mnih2015], which satisfies the Markov property.
We divide NHL games into [**goal-scoring episodes**]{}, so that each episode 1) begins at the beginning of the game, or immediately after a goal, and 2) terminates with a goal or the end of the game. A **Q-function** represents the conditional probability of the event that the home resp. away team [*scores the goal at the end of the current episode*]{} (denoted ${\it{goal}}_{{\it{Home}}}=1$ resp. ${\it{goal}}_{{\it{Away}}}=1$), or neither team does (denoted ${\it{goal}}_{{\it{Neither}}}=1$):
$$Q_{{\it{team}}}({s},{a}) = P({\it{goal}}_{{\it{team}}}=1|{s}_{t}={s},{a}_{t}={a})$$
where ${\it{team}}$ is a placeholder for one of ${\it{Home}},{\it{Away}},{\it{Neither}}$. This $Q$-function represents [*the probability that a team scores the next goal*]{}, given current play dynamics in the NHL (cf. @schulte2017markov [@Routley2015a]). Different $Q$-functions for different expected outcomes have been used to capture different aspects of NHL play dynamics, such as match win [@Pettigrew2015; @kaplan; @Routley2015a] and penalties [@Routley2015a]. For player evaluation, the next-goal Q function has three advantages. 1) The next-goal reward captures what a coach expects from a player. For example, if a team is ahead by two goals with one minute left in the match, a player’s actions have negligible effect on final match outcome. Nonetheless professionals should keep playing as well as they can and maximize the scoring chances for their own team. 2) The $Q$-values are easy to interpret, since they model the probability of an event that is a relatively short time away (compared to final match outcome). 3) Increasing the probability that a player’s team scores the next goal captures both offensive and defensive value. For example, a defensive action like blocking a shot decreases the probability that the other team will score the next goal, thereby increasing the probability that the player’s own team will score the next goal.
![Our design is a 5-layer network with 3 hidden layers. Each hidden layer contains 1000 nodes, which utilize a relu activation function. The first hidden layer is the LSTM layer, the remaining layers are fully connected. Temporal-difference learning looks ahead to the next goal, and the LSTM memory traces back to the beginning of the play (the last possession change).[]{data-label="fig:DP-look-trace"}](DP-lstm-model-structure.png){width="50.00000%"}
Learning Q values with DP-LSTM Sarsa
====================================
We take a function approximation approach and learn a neural network that represents the $Q$-function ($Q_{{\it{team}}}({s},{a})$).
Network Architecture
--------------------
Figure \[fig:DP-look-trace\] shows our model structure. Three output nodes represent the estimates ${\hat{Q}_{{\it{Home}}}({s},{a})}$, ${\hat{Q}_{{\it{Away}}}({s},{a})}$ and ${\hat{Q}_{{\it{Neither}}}({s},{a})}$. Output values are normalized to probabilities. The $\hat{Q}$-functions for each team share weights. The network architecture is a Dynamic LSTM that takes as inputs a current sequence ${s}_{t}$, an action ${a}_{t}$ and a dynamic trace length ${\it{tl}}_{t}$.[^2]
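A minimal Keras sketch of this architecture (our own reconstruction, not the authors’ released code; the per-step input width, the use of padding plus masking to realize the dynamic trace length, and the activation of the recurrent layer are assumptions) is:

```python
# Sketch of the DP-LSTM Q-network: an LSTM over the padded play history,
# two fully connected relu layers, and a 3-way softmax over (Home, Away, Neither).
import tensorflow as tf

MAX_TRACE, FEATURES = 10, 23          # 23 is an assumed per-step input width

def build_q_network():
    inputs = tf.keras.Input(shape=(MAX_TRACE, FEATURES))
    x = tf.keras.layers.Masking(mask_value=0.0)(inputs)   # dynamic trace length via padding
    x = tf.keras.layers.LSTM(1000)(x)
    x = tf.keras.layers.Dense(1000, activation="relu")(x)
    x = tf.keras.layers.Dense(1000, activation="relu")(x)
    q = tf.keras.layers.Dense(3, activation="softmax")(x)  # Q_Home, Q_Away, Q_Neither
    return tf.keras.Model(inputs, q)

model = build_q_network()
model.summary()
```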
![Temporal Projection of the method. For each team, and each game time, the graph shows the chance the that team scores the next goal, as estimated by the model. Major events lead to major changes in scoring chances, as annotated. The network also captures smaller changes associated with [*every*]{} action under different game contexts. []{data-label="fig:value-ticker"}](Temporal-Projection-three-normalized.png){width="50.00000%"}
Weight Training
---------------
We apply an on-policy Temporal Difference (TD) prediction method [@Sutton1998 Ch.6.4] to estimate $Q_{{\it{team}}}({s},{a})$ for the NHL play dynamics observed in our dataset. Weights $\theta$ are optimized by minibatch gradient descent via backpropagation. We used batch size 32 (determined experimentally). The Sarsa gradient descent update at time step $t$ is based on a squared-error loss function:
$$\begin{aligned}
\mathcal{L}_{t}(\theta_t) & = \operatorname{\mathbb{E}}[({g}_{t} + {\hat{Q}_{}({s}_{t+1},{a}_{t+1}, \theta_t)} - {\hat{Q}_{}({s}_t,{a}_t, \theta_t)})^{2}] \\
\theta_{t+1} & = \theta_{t} + \alpha \nabla_{\theta} \mathcal{L}(\theta_{t})\end{aligned}$$
where ${g}$ and $\hat{Q}$ are for a single team. LSTM training requires setting a [*trace length*]{} ${\it{tl}}_{t}$ parameter. This key parameter controls how far back in time the LSTM propagates the error signal from the current time at the input history. Team sports like Ice Hockey show a turn-taking aspect where one team is on the offensive and the other defends; one such turn is called a [*play*]{}. We set ${\it{tl}}_{t}$ to the number of time steps from current time $t$ to the beginning of the current [*play*]{} (with a maximum of 10 steps). A play ends when the possession of puck changes from one team to another. Using possession changes as break points for temporal models is common in several continuous-flow sports, especially basketball [@Cervone2014a; @omidiran2011new]. We apply Tensorflow to implement training; our source code is published on-line.[^3] *Illustration of Temporal Projection.* Figure \[fig:value-ticker\] shows a value ticker [@Decroos2017; @Cervone2014a] that represents the evolution of the Q function from the $3^{rd}$ period of a match between the Blue Jackets (Home team) and the Penguins (Away team), Nov. 17, 2015. The figure plots values of the three output nodes. We highlight critical events and match contexts to show the context-sensitivity of the Q function. High scoring probabilities for one team decrease those of its opponent. The probability that neither team scores rises significantly at the end of the match.
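For concreteness, one on-policy Sarsa gradient step for the squared-error loss above can be sketched as follows (illustrative only; the batching, padding and feature pipeline are omitted, as is the treatment of terminal steps, where the target reduces to ${g}_t$; the optimizer and learning rate are assumptions):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # assumed optimizer and rate

def sarsa_step(model, hist_t, hist_t1, g_t):
    # TD target: g_t + Q(s_{t+1}, a_{t+1}); g_t is the observed 1-of-3 goal vector
    target = g_t + model(hist_t1)
    with tf.GradientTape() as tape:
        q_now = model(hist_t)
        loss = tf.reduce_mean(tf.reduce_sum((tf.stop_gradient(target) - q_now) ** 2, axis=1))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return float(loss)
```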
Player Evaluation
=================
In this section, we define our novel Goal Impact Metric and give an example player ranking.
Player Evaluation Metric
------------------------
Our $Q$-function concept provides a novel AI-based definition for assigning a value to an action. Like [@schulteapples], we measure the quality of an action by how much it changes the expected return of a player’s team. Whereas the scoring chance at a time measures the value of a state, and therefore depends on the previous efforts of the entire team, the change in value measures directly the impact of an action by a specific player. In terms of the Q-function, this is the [*change in Q-value*]{} due to a player’s action. This quantity is defined as the action’s [*impact*]{}. The impact can be visualized as the difference between successive points in the Q-value ticker (Figure \[fig:value-ticker\]). For our specific choice of Next Goal as the reward function, we refer to the [*goal impact*]{} of an action. The total impact of a player’s actions is his **Goal Impact Metric** (GIM). The formal equations are:
$$\begin{aligned}
{\it{impact}^{{\it{team}}}({s}_{t},{a}_{t})} & = {Q^{{\it{team}}}({s}_{t},{a}_{t})}-{Q^{{\it{team}}}({s}_{t-1},{a}_{t-1})} \label{eq:impact} \\
GIM^{i}({D}) & = \sum_{{s},{a}} {n^{i}_{{D}}({s},{a})} \times {\it{impact}^{{\it{team}}_{i}}({s},{a})}\end{aligned}$$
where ${D}$ indicates our dataset, ${\it{team}}_{i}$ denotes the team of player $i$, and ${n^{i}_{{D}}({s},{a})}$ is the number of times that player $i$ was observed to perform action ${a}$ at ${s}$. Because it is the sum of differences between subsequent Q values, the GIM metric inherits context-sensitivity from the Q function.
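Given per-event Q-values, GIM is a simple aggregation; a toy sketch (ours, with invented field names and numbers) is:

```python
# Sketch of computing impact and GIM from a scored play-by-play stream.
# Each event carries the acting player and the model's Q-value for his team
# before and after the event (field names are ours, not the dataset's).
from collections import defaultdict

def goal_impact_metric(events):
    gim = defaultdict(float)
    for ev in events:
        impact = ev["q_team_t"] - ev["q_team_t_minus_1"]   # change in Q-value for the acting team
        gim[ev["player_id"]] += impact
    return dict(gim)

events = [
    {"player_id": "player_A", "q_team_t": 0.41, "q_team_t_minus_1": 0.37},
    {"player_id": "player_A", "q_team_t": 0.45, "q_team_t_minus_1": 0.41},
]
print(goal_impact_metric(events))   # {'player_A': 0.08}
```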
Rank Players with GIM
---------------------
Table \[table:top-20-ranking\] lists the top-20 highest impacts players, with basic statistics. All these players are well-known NHL stars. Taylor Hall tops the ranking although he did not score the most goals. This shows how our ranking, while correlated with goals, also [*reflects the value of other actions by the player.*]{} For instance, we find that the total number of passes performed by Taylor Hall is exceptionally high at 320. Our metric can be used to [*identify undervalued players.*]{} For instance, Johnny Gaudreau and Mark Scheifele drew salaries below what their GIM rank would suggest. Later they received a $\$5M+ $ contract for the 2016-17 season.
Empirical Evaluation
====================
We describe our comparison methods and evaluation methodology. Similar to clustering problems, there is [*no ground truth*]{} for the task of player evaluation. To assess a player evaluation metric, we follow previous work [@Routley2015a; @Pettigrew2015] and compute its correlation with statistics that directly measure success like Goals, Assists, Points, Play Time (Section 7.2). There are two justifications for comparing with [*success measures*]{}. (1) These statistics are generally recognized as important measures of a player’s strength, because they indicate the player’s ability to contribute to game-changing events. So a comprehensive performance metric ought to be related to them. (2) The success measures are often forecasting targets for hockey stakeholders, so a good player evaluation metric should have predictive value for them. For example, teams would want to know how many points an offensive player will contribute. To evaluate the ability of the GIM metric for generalizing from past performance to future success, we report two measurements: How well the GIM metric predicts a total season success measure from a sample of matches only (Section 7.3), and how well the GIM metric predicts the future salary of a player in subsequent seasons (Section 7.4). Mapping performance to salaries is a practically important task because it provides an objective standard to guide players and teams in salary negotiations [@idson2000team].
Comparison Player Evaluation Metrics {#sec:metrics}
------------------------------------
We compare GIM with the following player evaluation metrics to show the advantage of 1) modeling game context 2) incorporating continuous context signal 3) including history.
Our first baseline method **Plus-Minus (+/-)** is a commonly used metric that measures how the presence of a player influences the goals of his team [@Macdonald2011]. The second baseline method **Goal-Above-Replacement (GAR)** estimates the difference of team’s scoring chances when the target player plays, vs. replacing him or her with an average player [@gerstenberg2014wins]. **Win-Above-Replacement (WAR)**, our third baseline method, is the same as GAR but for winning chances [@gerstenberg2014wins]. Our fourth baseline method **Expected Goal (EG)** weights each shot by the chance of it leading to a goal. These four methods consider only very limited game context. The last baseline method **Scoring Impact (SI)** is the most similar method to GIM based on Q-values. But Q-values are learned with pre-discretized spatial regions and game time [@schulte2017markov]. As a lesion method, we include **GIM-T1**, where we set the maximum trace length of LSTM to 1 (instead of 10) in computing GIM. This comparison assesses the importance of including enough history information. [*Computing Cost.*]{} Compared to traditional metrics like +/-, learning a Q-function is computationally demanding (over 5 million gradient descent steps on our dataset). However, after the model has been trained off-line, the GIM metric can be computed quickly with a single pass over the data. [*Significance Test.*]{} To assess whether GIM is significantly different from the other player evaluation metrics, we perform paired t-tests over all players. The null hypothesis is rejected with respective p-values: $1.1*10^{-186}$, $7.6*10^{-204}$, $8*10^{-218}$, $3.9*10^{-181}$, $4.7*10^{-201}$ and $1.3*10^{-05}$ for PlusMinus, GAR, WAR, EG, SI and GIM-T1, which shows that GIM values are very different from other metrics’ values.
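Such paired t-tests can be reproduced in outline with SciPy (toy data below; the real inputs are the per-player values of the two metrics being compared):

```python
# Paired t-test between two player evaluation metrics over the same players.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
gim = rng.normal(10.0, 3.0, size=500)            # GIM per player (toy values)
plus_minus = rng.normal(0.0, 5.0, size=500)      # Plus-Minus per player (toy values)
t_stat, p_value = ttest_rel(gim, plus_minus)
print(t_stat, p_value)
```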
Season Totals: Correlations with standard Success Measures {#sec:season-total}
----------------------------------------------------------
In the following experiment, we compute the correlation between player ranking metrics and success measures over the entire season. Table \[table:all-correlation\] shows the correlation coefficients of the comparison methods with 14 standard success measures: Assist, Goal, Game Winning Goal (GWG), Overtime Goal (OTG), Short-handed Goal (SHG), Power-play Goal (PPG), Shots (S), Point, Short-handed Point (SHP), Power-play Point (PPP), Face-off Win Percentage (FOW), Points Per Game (P/GP), Time On Ice (TOI) and Penalty Minute (PIM). These are all commonly used measures available from the NHL official website (www.nhl.com/stats/player). [*GIM achieves the highest correlation in 12 out of 14 success measures.*]{} For the remaining two (TOI and PIM), GIM is comparable to the highest. Together, the Q-based metrics GIM, GIM-T1 and SI show the highest correlations with success measures. EG is only the fourth best metric, because it considers only the expected value of shots without look-ahead. The traditional sports analytics metrics correlate poorly with almost all success measures. This is evidence that AI techniques that provide fine-grained expected action value estimates lead to better performance metrics. With the neural network model, GIM can handle continuous input without pre-discretization. This prevents the loss of game context information and explains why both GIM and GIM-T1 perform better than SI in most success measures. The higher correlation of GIM compared to GIM-T1 also demonstrates the value of game history. In terms of absolute correlations, GIM achieves high values, except for the very rare events OTG, SHG, SHP and FOW. Another exception is Penalty Minutes (PIM), which, interestingly, show a positive correlation with all player evaluation metrics, although penalties are undesirable. We hypothesize that better players are more likely to receive penalties, because they play more often and more aggressively.
Round-by-Round Correlations: Predicting Future Performance From Past Performance {#sec:round-by-round}
--------------------------------------------------------------------------------
A sports season is commonly divided into **rounds**. In round $n$, a team or player has finished $n$ games in a season. For a given performance metric, we measure the correlation between (i) its value computed [*over the first $n$ rounds*]{}, and (ii) the value of the three main success measures, assists, goals, and points, computed [*over the entire season*]{}. This allows us to assess how quickly different metrics acquire predictive power for the final season total, so that future performance can be predicted from past performance. We also evaluate the [*auto-correlation*]{} of a metric’s round-by-round total with its own season total. The auto-correlation is a measure of temporal consistency, which is a desirable feature [@Pettigrew2015], because generally the skill of a player does not change greatly throughout a season. Therefore a good performance metric should show temporal consistency.
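The round-by-round correlations can be computed as follows (a sketch with synthetic data; the real inputs are per-round metric values and season-total success measures):

```python
# Correlate the metric accumulated over the first n rounds with a season total.
import numpy as np

def round_by_round_corr(metric_by_round, season_total):
    # metric_by_round: (players x rounds) array of per-round metric values
    cumulative = np.cumsum(metric_by_round, axis=1)
    return [np.corrcoef(cumulative[:, n], season_total)[0, 1]
            for n in range(metric_by_round.shape[1])]

rng = np.random.default_rng(1)
per_round = rng.normal(size=(300, 82))            # 300 players, 82 rounds (toy)
season_goals = per_round.sum(axis=1) + rng.normal(size=300)
print(round_by_round_corr(per_round, season_goals)[:5])
```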
We focused on the expected value metrics EG, SI, GIM-T1 and GIM, which had the highest correlations with success in Table \[table:all-correlation\]. Figure \[fig:round-by-round-correlation\] shows metrics’ round-by-round correlation coefficients with assists, goals, and points. The bottom right shows the auto-correlation of a metric’s round-by-round total with its own season total. [*GIM is the most stable metric*]{} as measured by auto-correlation: after half the season, the correlation between the round-by-round GIM and the final GIM is already above 0.9.
We find both GIM and GIM-T1 eventually dominate the predictive value of the other metrics, which shows the advantages of modeling sports game context without pre-discretization. And possession-based GIM also dominates GIM-T1 after the first season half, which shows the value of including play history in the game context. But how quickly and how much the GIM metrics improve depends on the specific success measure. For instance, in Figure \[fig:round-by-round-correlation\], GIM’s round-by-round correlation with Goal (top right graph) dominates by round 10, while others require a longer time.
Future Seasons: Predicting Players’ Salary {#sec:salary}
------------------------------------------
In professional sports, a team will give a comprehensive evaluation to players before deciding their contract. The more value players provide, the larger contract they will get. Accordingly, a good performance metric should be positively related to the amount of players’ [*future*]{} contract. The NHL regulates when players can renegotiate their contracts, so we focus on players receiving a new contract following the games in our dataset (2015-2016 season).
Table \[table:correlation-player-contract\] shows the metrics’ correlations with the amount of players’ contract over all the players who obtained a new contract during the 2016-17 and 2017-18 NHL seasons. Our GIM score achieves the highest correlation in both seasons. This means that the metric can serve as an objective basis for contract negotiations. The scatter plots of Figure \[fig:scatter-player-contract\] illustrate GIM’s correlation with amount of players’ future contract. In the 2016-17 season (left), we find many underestimated players in the right bottom part, with high GIM but low salary in their new contract. It is interesting that the percentage of players who are undervalued in their new contract decreases in the next season (from $32/258$ in 2016-17 season to $8/125$ in 2017-2018 season). This suggests that GIM provides an early signal of a player’s value after one season, while it often takes teams an additional season to recognize performance enough to award a higher salary.
Conclusion and Future Work
==========================
We investigated Deep Reinforcement Learning (DRL) for professional sports analytics. We applied DRL to learn complex spatio-temporal NHL dynamics. The trained neural network provides a rich source of knowledge about how a team’s chance of scoring the next goal depends on the match context. Based on the learned action values, we developed an innovative context-aware performance metric GIM that provides a comprehensive evaluation of NHL players, taking into account [*all*]{} of their actions. In our experiments, GIM had the highest correlation with most standard success measures, was the most temporally consistent metric, and generalized best to players’ future salary. Our approach applies to similar continuous-flow sports games with rich game contexts, like soccer and basketball. A limitation of our approach is that players get credit only for recorded individual actions. An influential approach to extend credit to all players on the rink has been based on regression [@Macdonald2011; @Thomas2013]. A promising direction for future work is to combine Q-values with regression.\
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by an Engage Grant from the National Sciences and Engineering Council of Canada, and a GPU donation from NVIDIA Corporation.
[^2]: We experimented with a single-hidden layer, but weight training failed to converge.
[^3]: https://github.com/Guiliang/DRL-ice-hockey
---
abstract: 'Radiative and collisional constants of excited atoms contain the matrix elements of the dipole transitions, and when these are blocked one can expect a number of interesting phenomena in radiation-collisional kinetics. In recent astrophysical studies of IR emission spectra a gap was revealed in the radiation emitted by Rydberg atoms ($RA$) with values of the principal quantum number of $n\approx10$. In the presence of external electric fields a rearrangement of $RA$ emission spectra can be associated with manifestations of the Stark effect. The threshold for electric field ionization of $RA$ is $E\approx3\cdot10^{4}$ V/cm for states with $n>10$. This means that the emission of $RA$ with $n\ge10$ is effectively blocked for such fields. In the region of lower electric field intensities the double Stark resonance (or Förster resonance) becomes a key player. On this basis it is established that static magnetic or electric fields may strongly affect the radiative constants of optical transitions in the vicinity of the Förster resonance, resulting, for instance, in an order of magnitude reduction of the intensity of some lines. It is then shown in this work that in the atmospheres of celestial objects the lifetimes of comparatively long-lived $RA$ states and the intensities of the corresponding radiative transitions can be associated with the effects of dynamic chaos via collisional ionization. The Förster resonance allows us to manipulate the random walk of the Rydberg electron ($RE$) in the manifold of quantum levels and hence change the excitation energies of $RA$, which leads to anomalies in the IR spectra.'
address:
- 'Department of physics, Saint-Petersburg University, Ulianovskaya 1, 198504, St.Petersburg, Petrodvorets, Russia'
- 'University of Belgrade,Institute of Physics, P. O. Box 57, 11001 Belgrade,Serbia'
- 'Laser Centre, University of Latvia, Zellu Str. 8, LV-1002 Riga, Latvia'
author:
- 'N.N.Bezuglov'
- 'A.N.Klyucharev'
- 'A.A.Mihajlov'
- 'V. A. Srećković'
title: 'Anomalies in radiation-collisional kinetics of Rydberg atoms induced by the effects of dynamical chaos and the double Stark resonance'
---
Rydberg atoms, emission and absorption; 32.80.Rm, 33.80.Rv, 95.55.Qf
Introduction
============
The aim of this work is to show that in the atmospheres of celestial objects lifetimes of comparatively long-lived states of the Rydberg atoms ($RA$) and intensities of corresponding radiative transitions can be associated with the effects of dynamic chaos via collisional ionization. Inelastic atom-atom collisions in the thermal range of energy (chemi-ionization processes) involving Rydberg atoms ($RA$) and leading to the formation of molecular and atomic ions $$\label{eq:RA}
RA + A = \left\{
\begin{array}{lll}
A_{2}^{+} + e, \\
\\
\displaystyle{ A^{+} + A + e }
\end{array}
\right .$$ are traditionally considered in the astrophysical literature as an alternative to elastic $RA + A$ collisions (see, e.g., @kly10). This is especially true for hydrogen atoms (the solar photosphere) and helium atoms (helium-rich cool white dwarfs stars) [@mih11a]. Paper @kly07 deals with the processes of chemi-ionization involving a sodium atom that seems to influence the processes in the atmosphere of Jupiter’s satellite Io. The theory and the experiment related to the Rydberg atoms [@mih12; @oke12; @sre13] and especially as an extreme case Rydberg matter with a high degree of ionization and consisting of Rydberg atoms are discussed in literature (see, e. g., @hol06).
As chemi-ionization processes affect the optical characteristics and profiles of spectral lines emitted in the atmospheres of space objects, they are more and more consistently included in the models of atmospheres of cooler stars and stellar atmospheres [@mih11b; @gne09].
It has long been believed that in the absence of collisions (excluding optical transitions) the selectivity of the primary optical transition is preserved. This proposition was questioned in 1988 (see, e.g., [@kly07]).
The information obtained by research examining the above-mentioned processes can be briefly reduced to a few statements:
1. The application of the dipole resonance ionization mechanism is well established within the framework of Fermi model of inelastic collisions $RA + A$ (Fig. \[fig:fig1\]). This is confirmed by comparing the results of theoretical calculations and experimental results for hydrogen atoms and hydrogen-like alkali atoms with the values of effective quantum numbers $5 \le n \le 30$.
2. Paper [@kly07] drew attention to the need for the chemi-ionization model involving $RA$ to take into account the multiplicity of quasi-crossings of the initial term of the Rydberg molecule ($RA + A$) with a network of terms related to the neighboring states of a selectively excited atom (Fig. \[fig:fig2\]). As a result of the latter circumstance the application of the classical Landau-Zener theory determining the ionization process (1) for $R \ge R_{c}$ is limited and the connection between the measured rate constant and the initial excited state of $RA$ is questionable.
3. One of the direct ways to solve this contradiction is associated with the use of the dynamic chaos model in the model of atom-atom collisions [@kly10].
4. It has been noted that there is an experimentally observed gap in the spectrum of the infrared radiation of white dwarfs. This gap corresponds to optical transitions in $RA$ with $n \approx 10$. According to [@gne09], this effect can be explained by the following: taking into account the collision-induced processes of light absorption, the relativistic quantum defect of “vacuum polarization,” the influence of strong magnetic fields ($B \ge 10^5$ G), and the Stark effect with the electric field intensity $E \ge 10^{6}$ V$\cdot$cm$^{-1}$.
It is known that the behavior of a Rydberg electron ($RE$) in the electric field and in the magnetic field is different. The energy of $RE$ interaction with the external electric field of $E$ intensity is $$\label{eq:We}
W_{E} \approx n^{2}\cdot E$$ For $RE$ in a strong magnetic field with induction $B$ the similar expression can be written as: $$\label{eq:Wb}
W_{B} \approx \frac{1}{8}\alpha^{2}n^{4}B^{2}$$ where $W_{B}$ is the energy of the $RE$ diamagnetic interaction with the magnetic field and $\alpha$ is the fine structure constant. Hence: $$\label{eq:WeWb}
\frac{W_{E}}{W_{B}} \approx n^{-2}\frac{E}{B^{2}}$$ Due to the quadratic dependence (\[eq:WeWb\]) on $n$, the behavior of $RE$ in strong fields is different from the case of atoms in lower excited states. Discrete terms of $RA$ under the influence of an external electric field pass into the continuous spectrum (neglecting the tunneling effect) at $$\label{eq:En}
E(n) = \frac{1}{16n^{4}} \, \textrm{a.u.}$$ For states with $n = 10 - 12$, this corresponds to $E \le 30$ kV$\cdot$cm$^{-1}$, where “blocking” of optical transitions from upper excited states begins.
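A quick numerical check of Eq. (\[eq:En\]) (a rough classical estimate that neglects tunneling; the atomic-unit-to-laboratory conversion factor for the field is standard) can be done as follows:

```python
# E(n) = 1/(16 n^4) a.u., converted with 1 a.u. of electric field ~ 5.142e9 V/cm.
AU_FIELD_V_PER_CM = 5.142e9

def ionization_field_kV_cm(n):
    return (1.0 / (16.0 * n ** 4)) * AU_FIELD_V_PER_CM / 1e3

for n in (10, 11, 12):
    print(n, round(ionization_field_kV_cm(n), 1), "kV/cm")
# n = 10 gives ~32 kV/cm and n = 12 gives ~16 kV/cm, consistent with the ~30 kV/cm scale quoted above
```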
The imposition of a magnetic field, as opposed to an electric one, does not lead directly to the ionization of $RA$. When $B \le 2\cdot10^7$ G, it, on the contrary, strengthens the binding between $RE$ and the atomic core (the case of a “weak” magnetic field). In this case, $n$ remains a “good” quantum number for the excited atom, and the mixing of excited states is much smaller than the $RE$ binding energy. In a strong magnetic field, when the diamagnetic interaction exceeds the Coulomb interaction, one can speak of a “free” electron in the external magnetic field. This, in particular, leads to the observation of Landau resonances in the absorption of light.
The scale of the interparticle interaction in a pseudo-bound complex (cold particles + photon) becomes comparable to the de Broglie wavelength ($T \le 10^{-3}$ K), while the energy transferred in the interparticle collisions is less than the energy transmitted to them by photons. The first experiments that started the above work used cesium $RA$ with $n = 10 \div 12$ in the electric field with the intensity of a few hundred V$\cdot$cm$^{-1}$. Interestingly, it is to these values of the principal quantum number that the maximum of cross-sections of associative ionization ($AI$) of hydrogen and hydrogen-like alkali atoms corresponds. For the sake of completeness of the topic of “blocking” optical transitions in external fields, let us remember that the physical literature has discussed the influence of external fields on the optical properties of the excited atom since 1896 (the Zeeman effect).
Rydberg atoms. Approximation of stochastic dynamics. {#Section RA}
====================================================
Diffusion approach in the collision kinetics problems. {#Section 2RA}
------------------------------------------------------
The possibility of using “diffusion” kinetics in a single atom-atom collision involving Rydberg atoms ($RA$) was first considered in [@dev88]. Its authors drew on the fact that it is hardly possible to account for the multiplicity of quasi-crossings of terms of the Rydberg collision complex ($RA + A$) with the terms of the nearest molecular states $A_{2}^{*}$, if, in addition, one takes into account the phenomenon of the Coulomb condensation of terms (Fig. \[fig:fig2\]). At the same time, the change in the energy of a Rydberg electron ($RE$) in one such avoided crossing at sufficiently large internuclear distances in complex $\Delta\varepsilon$ is the value comparable with the distance between the adjacent energy levels of $RA$ of the order $n^{-3}$, where $n$ is the effective value of the principal quantum number of the excited atom. Thus, the relationship between $\Delta\varepsilon$ and the binding energy of $RE$, $\varepsilon_{0} = 1/(2n^{2})$ is $\Delta\varepsilon/\varepsilon_{0} = n^{-1} \ll 1$. This makes it possible to apply the diffusion approach to chemi-ionization processes in thermal and subthermal collisions. As opposed to the well-known Pitaevskii diffusion model in multiple particle collisions, in the latter case we deal with the diffusion of $RE$ in the web of the terms of energy states of Rydberg complexes in single atom-atom collisions. As a result, the originally single selective term of a quasimolecule is transformed into a conical “bundle” at $\varepsilon_{0} = 1/(2n^2)$ (Fig.\[fig:fig3\]). When simulating the process leading to ionization (\[eq:RA\]), let us limit ourselves to the adiabatic approximation of particle motion and the quasi-classical representation of a single trajectory. The terms corresponding to the main terms of ion $A_{2}^{+}$ and molecule $A_{2}^{*}$ are split into two components $\Sigma_{u}$ , $\Sigma_{g}$ and $\Lambda_{u}$, $\Lambda_{g}$ correspondingly (Fig.\[fig:fig2\]). Hereinafter the atomic system of units is used, unless otherwise stated. The value $R=R_{0}$ corresponds to the beginning of autoionization of the complex at $R < R_{0}$. A more recent paper @bez02 analyzed transient kinetic equations describing the stochastic diffusion of $RE$ in energy space when $R < R_{0}$. It was supposed that in a quasi-monochromatic field emerging in process (\[eq:RA\]) according to the model of the dipole resonance mechanism, there are internal nonlinear dynamic resonances caused by the coincidence of overtones of $RE$ rotation speed in the Keplerian orbit with the frequency of recharging $\Delta R$. As a result, the $RE$ trajectory motion becomes unstable, and quasi-molecular complex $(RA + A)$, as a consequence of the $RE$ orbit instability with respect to small perturbations, goes into the $K$-system or the dynamic chaos mode. The $RE$ motion becomes a random “walk” on the web of quasi-molecular terms, and the calculation of the $RE$ time distribution function throughout the states with different values of $n$ is reduced to the solution of Kolmogorov-Fokker-Planck equation. Besides, in the problems of this kind one should basically take into account the mixing of states with different values of the orbital quantum number $l$. The key condition for the emergence of global dynamic chaos is overlapping of $n$-adjacent nonlinear resonances of finite width.
The development of such a situation in a variable microwave electric field, the applicability of the quasi-classical description of the stochastic dynamics of $RE$, and the difference between the occurring ionization process and multiphoton or tunnel ionization, were demonstrated (experimentally and theoretically) in the 1980s (see, e.g., [@leo78]). Prohibition of global dynamic chaos mode in atomic (molecular) systems is primarily associated with the dimension of such systems. In the case of a time-varying external disturbance, chaos is possible even for a one-dimensional Hamiltonian system of a hydrogen atom in a microwave electric field. Average time $\tau_{eff}(n)$ required for an $RE$ to achieve the ionization threshold in the stochastic diffusion mode is: $$\label{eq:teff}
\tau_{eff}(n) = \frac{\omega_{L}^{4/3}}{0.65\cdot F^{2}}n^{3}\left(\frac{1}{n}-\frac{n_{c}}{2n^{2}}\right)$$ where $\omega_{L}$ is the microwave field frequency, $n_{c}$ is the value of the $RA$ principal quantum number $n$ separating the region of chaotic ($n > n_{c}$) and regular ($n < n_{c}$) $RE$ motion and $F$ is the intensity of the microwave field (see Fig.\[fig:fig4\]). The value of the diffusion coefficient at which $RE$ at the initial time in the state with energy $\varepsilon = -1/(2n^{2})$ reaches the ionization threshold ($n = \infty$), is $$\label{eq:Dn}
D_{n} = 0.65\cdot F^{2}n^{3}\omega_{L}^{-4/3}$$
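Equations (\[eq:teff\]) and (\[eq:Dn\]) are easily evaluated numerically; in the sketch below the microwave field amplitude $F$, frequency $\omega_{L}$ and chaos border $n_{c}$ are assumed values chosen only to display the strong $n$-dependence (all quantities in atomic units):

```python
# Illustrative evaluation of the diffusion-ionization time and diffusion coefficient.
def tau_eff(n, n_c, F, omega_L):
    return omega_L ** (4.0 / 3.0) / (0.65 * F ** 2) * n ** 3 * (1.0 / n - n_c / (2.0 * n ** 2))

def diffusion_coefficient(n, F, omega_L):
    return 0.65 * F ** 2 * n ** 3 * omega_L ** (-4.0 / 3.0)

F, omega_L, n_c = 1e-9, 1e-6, 40          # assumed parameters, atomic units
for n in (50, 60, 80):
    print(n, tau_eff(n, n_c, F, omega_L), diffusion_coefficient(n, F, omega_L))
```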
Specific features of RE stochastic diffusion under the “double Stark resonance” - Förster resonance. {#Section 3RA}
====================================================================================================
Processes associated with the emergence of global chaos for atoms other than hydrogen atoms can be conveniently simulated within the framework of the Sommerfeld model of the excited atom, which describes the $RE$ motion in the potential $$\label{eq:Ur}
U(r) = -\frac{1}{r}+\frac{\alpha}{2r^2}.$$ A dimensionless quantity $\langle N \rangle$ can serve as a convenient parameter for quantitative assessment of the manifestation of global dynamic chaos. This quantity is the number of periods of $RE$ rotation in the Keplerian orbit required for its departure into the ionization continuum, $\langle N\rangle \approx \tau_{diff}\cdot T_{0}^{-1}$. Here $T_{0} = 2\pi n^{3}$ is the $RE$ rotation time in the Keplerian orbit. A decrease of $\langle N \rangle$ when the Sommerfeld parameter $\alpha(l) = 3(l^{2} - 0.25^{2})$ changes should indicate an intensification of the process of chaotic diffusion, and vice versa.
From the standpoint of quantum mechanics, diffusion ionization of an $RA$ in an external field can be considered as an analogue of multiphoton ionization. A well-known related phenomenon, the Cooper minimum of photoionization cross-sections, occurs when the corresponding dipole matrix elements vanish. According to the Seaton criterion [@sea83], this corresponds to a half-integer value of the difference of the quantum defects (in our case, the difference of $\alpha$) for the two neighboring states involved in the transition. The Förster resonance is one of the manifestations of nonlinear effects in optical transitions between Rydberg atoms in relatively weak electric fields [@bet88]. This resonance occurs when a level of the $l$-series lies exactly midway between two levels of the neighboring ($l$-1) or ($l$+1) series (Fig.\[fig:fig5\]). In particular, such a configuration of levels corresponds to the two-photon resonance for the transitions in a highly excited alkali atom $\{l+1,n\}\rightarrow\{l,n\}\rightarrow\{l+1,n-1\}$ ($\{p,d\}$-series) and $\{l-1,n\}\rightarrow\{l,n\}\rightarrow\{l-1,n-1\}$ ($\{s,p\}$-series).
The configuration of the highly excited levels of alkali metals for the $\{s,p\}$- and $\{p,d\}$-series is close to the Förster-resonance structure, which makes it possible to achieve the resonance at electric field strengths of only a few V/cm. The Förster resonance has received increased attention in the literature recently, since it is regarded as a promising tool for manipulating atoms in a laser field and for solving practical problems in quantum information science.
In what follows we focus on the blocking of dipole matrix elements, which is immediately relevant to the kinetics of dynamic chaos. The level configuration under Förster-resonance conditions is similar to the level diagram of a three-dimensional quantum oscillator: its dipole matrix elements are non-zero only for transitions between adjacent levels, while other “long” optical transitions are blocked in the $(RA-A)$ quasimolecular complex. It is known that the widths of nonlinear dynamic resonances depend on the values of the dipole matrix elements; blocking the latter therefore blocks the dynamic chaos mode (Fig.\[fig:fig6\]).
The effectiveness of the $RE$ random walk across the energy levels in the region of Coulomb condensation can be controlled by external constant electric fields. At a Förster resonance this results in repopulation of Rydberg atomic states and in intensification of light absorption in the infrared range. Note that the model calculations above do not take into account the possibility of $l$-mixing processes. The concentration of Rydberg atoms, and consequently the intensity of the corresponding emission and absorption of light, is determined, among other parameters, by the effective lifetime of the excited state $\tau_{eff}$. In the atmospheres of space objects the value of $\tau_{eff}$ may be related to the collisional diffusion ionization processes discussed above.
Conclusions
===========
Starting from the fact that the Förster resonance makes it possible to manipulate the random walk of the $RE$ over the manifold of quantum levels, with redistribution of the excitation energy of the $RA$ and the occurrence of accompanying anomalies in the IR spectra, in this work it is shown that in the atmospheres of celestial objects the intensities of the corresponding radiative transitions can be associated with the effects of dynamic chaos via collisional ionization. There is no doubt that modern investigations of Rydberg atoms will lead to new approaches in atomic physics and its practical applications.
Acknowledgments {#acknowledgments .unnumbered}
===============
The work was carried out within the EU FP7 Centre of Excellence FOTONIKA-LV and with partial support from the EU FP7 IRSES Project COLIMA. The authors are also thankful to the Ministry of Education, Science and Technological Development of the Republic of Serbia for the support of this work within the projects 176002 and III4402.
[12]{}
[Beterov]{}, I. M., [Ryabtsev]{}, I. I., [Fateev]{}, N. V., 1988. [Observation of the two-photon dynamics Stark effect in the three-levels Rydberg atoms.]{} Sov. Phys. JETP. 48, 195–198.
[Bezuglov]{}, N. N., [Borodin]{}, V. M., [Klyucharev]{}, A. N., [Matveev]{}, A. A., 2002. [Stochastic dynamics of a Rydberg electron during a single atom-atom ionizing collision.]{} Russian Journ. of Phys. Chem. 76, 527–542.
[Devdariani]{}, A. Z., [Klyucharev]{}, A. N., [Penkin]{}, N. P., [Sebyakin]{}, Y. N., 1988. [Diffusional approach to the process of collisional ionization of excited atoms]{}. Optics and Spectroscopy 64, 425–426.
[Gnedin]{}, Y. N., [Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., [Sakan]{}, N. M., [Sre[ć]{}kovi[ć]{}]{}, V. A., [Zakharov]{}, M. Y., [Bezuglov]{}, N. N., [Klycharev]{}, A. N., 2009. [Rydberg atoms in astrophysics]{}. New Astron. Rev. 53, 259–265.
[Holmlid]{}, L., 2006. [The alkali metal atmospheres on the Moon and Mercury: Explaining the stable exospheres by heavy Rydberg Matter clusters]{}. Planet. Space Sci. 54, 101–112.
[Klyucharev]{}, A. N., [Bezuglov]{}, N. N., [Matveev]{}, A. A., [Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., [Dimitrijevi[ć]{}]{}, M. S., 2007. [Rate coefficients for the chemi-ionization processes in sodium- and other alkali-metal geocosmical plasmas]{}. New Astron. Rev. 51, 547–562.
[Klyucharev]{}, A. N., [Bezuglov]{}, N. N., [Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., Nov. 2010. [Influence of inelastic Rydberg atom-atom collisional process on kinetic and optical properties of low-temperature laboratory and astrophysical plasmas]{}. Journal of Physics Conference Series 257 (1), 012027.
[Leopold]{}, J. G., [Percival]{}, I. C., Oct. 1978. [Microwave Ionization and Excitation of Rydberg Atoms]{}. Phys. Rev. Lett. 41, 944–947.
Mihajlov, A., Sre[ć]{}kovi[ć]{}, V., Ignjatovi[ć]{}, L. M., Klyucharev, A., 2012. The chemi-ionization processes in slow collisions of rydberg atoms with ground state atoms: Mechanism and applications. J. Clust. Sci. 23 (1), 47–75.
[Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., [Sre[ć]{}kovi[ć]{}]{}, V. A., [Dimitrijevi[ć]{}]{}, M. S., Mar. 2011. [Chemi-ionization in Solar Photosphere: Influence on the Hydrogen Atom Excited States Population]{}. ApJS 193, 2.
[Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., [Sre[ć]{}kovi[ć]{}]{}, V. A., [Dimitrijevi[ć]{}]{}, M. S., 2011. [The Influence of Chemi-Ionization and Recombination Processes on Spectral Line Shapes in Stellar Atmospheres]{}. Balt. Astron. 20, 566–571.
[O'Keeffe]{}, P., [Bolognesi]{}, P., [Avaldi]{}, L., [Moise]{}, A., [Richter]{}, R., [Mihajlov]{}, A. A., [Sre[ć]{}kovi[ć]{}]{}, V. A., [Ignjatovi[ć]{}]{}, L. M., 2012. [Experimental and theoretical study of the chemi-ionization in thermal collisions of Ne Rydberg atoms]{}. Phys. Rev. A. 85 (5), 052705.
[Seaton]{}, M. J., 1983. [Quantum defect theory]{}. Rep. Prog. Phys. 46, 167–225.
[Sre[ć]{}kovi[ć]{}]{}, V. A., [Mihajlov]{}, A. A., [Ignjatovi[ć]{}]{}, L. M., [Dimitrijevi[ć]{}]{}, M. S., Apr. 2013. [Excitation and deexcitation processes in atom-Rydberg atom collisions in helium-rich white dwarf atmospheres]{}. A&A 552, A33.
---
abstract: 'The Relativistic Heavy Ion Collider (RHIC) was built to re-create and study in the laboratory the extremely hot and dense matter that filled our entire universe during its first few microseconds. Its operation since June 2000 has been extremely successful, and the four large RHIC experiments have produced an impressive body of data which indeed provide compelling evidence for the formation of thermally equilibrated matter at unprecedented temperatures and energy densities – a “quark-gluon plasma (QGP)”. A surprise has been the discovery that this plasma behaves like an almost perfect fluid, with extremely low viscosity. Theorists had expected a weakly interacting gas of quarks and gluons, but instead we seem to have created a strongly coupled plasma liquid. The experimental evidence strongly relies on a feature called “elliptic flow” in off-central collisions, with additional support from other observations. This article explains how we probe the strongly coupled QGP, describes the ideas and measurements which led to the conclusion that the QGP is an almost perfect liquid, and shows how they tie relativistic heavy-ion physics into other burgeoning fields of modern physics, such as strongly coupled Coulomb plasmas, ultracold systems of trapped atoms, and superstring theory.'
address:
- '$^1$Department of Physics, The Ohio State University, Columbus, OH 43210, USA'
- '$^2$CERN, Physics Department, Theory Division, CH-1211 Geneva 23, Switzerland'
author:
- 'Ulrich Heinz$^{1,2}$'
title: 'The strongly coupled quark-gluon plasma created at RHIC '
---
Introduction: what is a QGP, and how to create it {#sec1}
=================================================
Initially, the Universe didn’t look at all like it does today: until about 10$\mu$s after the Big Bang, it was too hot and dense to allow quarks and gluons to form hadrons, and the entire Universe was filled with a thermalized plasma of deconfined quarks, antiquarks, gluons and leptons (and other, heavier particles at even earlier times) – a Quark-Gluon Plasma (QGP). After hadronization of this QGP, it took another 3 minutes until the first small nuclei were formed from protons and neutrons (primordial nucleosynthesis and chemical freeze-out), another 400,000 years until atomic nuclei and electrons could combine to form electrically neutral atoms, thereby making the Universe transparent and liberating the Cosmic Microwave Background (kinetic decoupling of photons and thermal freeze-out), and another 13 billion years or so for creatures to evolve with sufficient intelligence to contemplate all of this.
With high-energy nuclear collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory or at the Large Hadron Collider (LHC) at CERN, one attempts to recreate the conditions of the Early Universe in the laboratory, by heating nuclear matter back up to temperatures above the quark-gluon deconfinement temperature. The hot and dense fireballs created in these collisions undergo thermalization, cooling by expansion, hadronization, chemical and thermal freeze-out in a pattern that strongly resembles the evolution of the Universe after the Big Bang. For this reason heavy-ion collisions are often called “Little Bangs”. A major difference is, however, the much smaller size of the Little Bang and its much (by about 18 orders of magnitude) faster dynamical evolution when compared with the Big Bang. This complicates the theoretical analysis of the experimental observations.
The main stages of the Little Bang are: 1.) Two disk-like nuclei approach each other, Lorentz contracted along the beam direction by a factor $\gamma=E_\mathrm{beam}/M$ (where $E_\mathrm{beam}$ is the beam energy per nucleon and $M=0.94$GeV is the nucleon mass, such that $\gamma\approx
110$ for heavy ions at RHIC and $\approx 3000$ at the LHC at their respective top energies). 2.) After impact, hard collisions with large momentum transfer $Q \gg 1$GeV between quarks, antiquarks or gluons (partons) inside the nucleons of the two nuclei produce secondary partons with large transverse momenta $p_T$ at early times $\sim 1/Q \sim 1/p_T$. 3.) Soft collisions with small momentum exchange $Q\lesssim 1$GeV produce many more particles somewhat later and thermalize the QGP after about 1fm/$c$. The resulting thermalized QGP fluid expands hydrodynamically and cools approximately adiabatically. 4.) The QGP converts to a gas of hadrons; the hadrons continue to interact quasi-elastically, further accelerating the expansion and cooling of the fireball until thermal freeze-out. The chemical composition of the hadron gas is fixed during the hadronization process and remains basically unchanged afterwards. After thermal decoupling, unstable hadrons decay and the stable decay products stream freely towards the detector.
By studying the behaviour of the matter created in the Little Bangs we can explore the phase structure and phase diagram of strongly interacting matter. Where the proton-proton collision program at the LHC aims at an understanding of the elementary degrees of freedom and fundamental forces at the shortest distances, the heavy-ion program focusses on the condensed matter aspects of bulk material whose constituents interact with each other through these forces. The difference between this kind of condensed matter physics and the traditional one is that in our case the fundamental interaction is mediated by the strong rather than the electromagnetic force. The coupling strength of the strong interaction gets bigger rather than smaller at large distances, leading us to expect a completely new type of phase structure. Indeed, strongly interacting matter appears to behave like a liquid (“quark soup”) at high temperature and like a gas (“hadron resonance gas”) at low temperature, contrary to intuition. On the other hand, the QGP state, with its unconfined color charges, has similarities with electrodynamic plasmas whose dynamical behaviour is controlled by the presence of unconfined electric charges. For example, both feature Debye screening of (color) electric fields. The main differences are QGP temperatures that are about a factor 1000 higher, and particle densities that are about a factor $10^9$ larger, than their counterparts in the hottest and densest electrodynamic plasmas. Accordingly, quark-gluon plasmas are intrinsically relativistic, and magnetic fields play everywhere an equally important role as electric fields. The nonlinear effects resulting from the non-Abelian structure of the strong interaction manifest themselves most importantly in the magnetic sector.
The equilibrium properties of a QGP with small net baryon density can be computed from first principles using Lattice QCD. Such calculations exhibit a smooth but rapid cross-over phase transition from a QGP above $T_c$ to a hadron gas below $T_c$ around critical temperature $T_c= 173\pm15$MeV, corresponding to a critical energy density $e_c\simeq 0.7$GeV/fm$^3$. For $T\gtrsim2\,T_c$, the QGP features an equation of state $p(e)$ corresponding to an ideal gas of massless partons, $p\approx\frac{1}{3}e$, with sound speed $c_s=
\sqrt{\frac{\partial p}{\partial e}}=\sqrt{\frac{1}{3}}$. But both the normalized energy density $e/T^4$ and pressure $p/T^4$ remain about 15-20% below the corresponding Boltzmann limits for a non-interacting massless parton gas. At face value, this relatively small deviation from the ideal gas appears to support the idea (long held by theorists before the RHIC era) that, because of “asymptotic freedom” (the QCD coupling “constant” decreases logarithmically with increasing energy), the QGP is a weakly interacting system of quarks, antiquarks and gluons that can be treated perturbatively. However, RHIC experiments proved this expectation to be quite wrong. Instead the QGP was found to behave like an almost perfect liquid, requiring it to be a [*strongly coupled plasma*]{}.
This experimental surprise is based on three key observations: 1.) [**Large elliptic flow:**]{} The anisotropic collective flow of the fireball matter measured in non-central heavy-ion collisions is huge and essentially exhausts the upper theoretical limit predicted by ideal fluid dynamics. 2.) [**Heavy quark collective flow:**]{} Even heavy charm and bottom quarks are observed to be dragged along by the collective expansion of the fireball and exhibit large elliptic flow. 3.) [**Jet quenching:**]{} Fast partons plowing through the dense fireball matter, even if they are very heavy such as charm and bottom quarks, lose large amounts of energy, leading to a strong suppression of hadrons with large $p_T$ when compared with expectations based on a naive extrapolation of proton-proton collision data, and the quenching of hadronic jets.
As I will try to explain in the rest of this review, none of these three observations can be understood without assuming some sort of strong coupling among the plasma constituents. Further, I will show that there is rather compelling evidence that we are indeed dealing with a “plasma” state where color charges are deconfined. The strongly-coupled nature of the QGP manifests itself in its collectivity and its transport properties, seen in non-equilibrium situations such as those generated in heavy-ion collisions. It is much more difficult to extract directly from its bulk equilibrium properties which are studied in lattice QCD.
Statements made and conclusions drawn in this overview are based on a large body of experimental data and comparisons with models for the dynamical evolution of heavy-ion collisions. Unfortunately, the assigned space does not permit me to show these here. For the most important plots and a comprehensive list of references I refer the interested reader to recent reviews by Müller and Nagle [@Muller:2006ee] and by Shuryak [@Shuryak:2008eq]. For some newer developments I will provide references to the original papers.
Collective flow – the “Bang” {#sec2}
============================
The primary observables in heavy-ion collisions are (i) hadron momentum spectra and yield ratios, from which we can extract the chemical composition, temperature, and collective flow pattern (including anisotropies) of the fireball when it finally decouples, (ii) the energy distributions of hard direct probes such as jets created by hadronizing fast partons, hadrons containing heavy charm and bottom quarks, and directly emitted electromagnetic radiation, from which we can extract information on the fireball temperature and density at earlier times before it decouples, and (iii) two-particle momentum correlations from which we can extract space-time information about the size and shape of the fireball at decoupling (hadron correlations) or at earlier times (photon correlations). Except for photon correlations which are hard to measure, good experimental data exist now for all of these observables. To learn about the existence and properties of the quark-hadron phase transition itself one needs to study event-by-event fluctuations around the statistical average of the event ensemble; this subject is still in its infancy, both experimentally and theoretically.
I will first discuss the collective flow patterns observed in heavy-ion collisions at RHIC. Collective flow is driven by pressure gradients and thus provides access to the equation of state $p(e)$ ($p$ is the thermodynamic pressure, $e$ is the energy density, and the net baryon density $n_B$ is small enough at RHIC energies that its influence on the EOS can be neglected). The key equation is $${\dot u}^\mu=\frac{\nabla^\mu p}{e+p} = \frac{c_s^2}{1+c_s^2}
\frac{\nabla^\mu e}{e},$$ valid for ideal fluids, which shows that the acceleration of the fluid is controlled by the speed of sound $c_s=\sqrt{\frac{\partial p}{\partial e}}$ (reflecting the stiffness of the EOS $p(e)$) which determines the fluid’s reaction to the normalized pressure or energy density gradients. (The dot in ${\dot u}^\mu \equiv (u\cdot\partial)u^\mu$ denotes the time derivative in the fluid’s local rest frame (LRF), and $\nabla^\mu$ is the spatial gradient in that frame. For non-ideal (viscous) fluids, additional terms appear on the r.h.s., depending on spatial velocity gradients in the LRF multiplied by transport coefficients (shear and bulk viscosity).)
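To spell out the intermediate step behind the second equality (assuming, for this illustration only, an equation of state of the simple form $p=c_s^2\,e$ with constant $c_s^2$, which is approximately the case for the QGP well above $T_c$): $$\nabla^\mu p = c_s^2\,\nabla^\mu e,\qquad e+p=(1+c_s^2)\,e
\quad\Longrightarrow\quad
{\dot u}^\mu=\frac{\nabla^\mu p}{e+p}=\frac{c_s^2}{1+c_s^2}\,\frac{\nabla^\mu e}{e}.$$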
Lattice QCD data show that $c_s^2$ decreases from about $\frac{1}{3}$ at $T>2T_c$ by almost a factor 10 close to $T_c$, rising again to around $0.16-0.2$ in the hadron gas (HG) phase below $T_c$. The finally observed collective flow transverse to the beam direction reflects a (weighted) average of the history of $c_s^2(T)$ along the cooling trajectory explored by the fireball medium. Different aspects of the final flow pattern weight this history differently. Whereas the azimuthally averaged “radial flow” receives contributions from all expansion stages, due to persistent (normalized) pressure gradients between the fireball interior and the outside vacuum, flow anisotropies, in particular the strong “elliptic flow” seen in non-central collisions, are generated mostly during the hot early collision stages. They are driven by spatial anisotropies of the pressure gradients due to the initial spatial deformation of the nuclear reaction zone (see Fig. 1),
but this deformation decreases with time as a result of anisotropic flow, since the matter accelerates more rapidly, due to larger pressure gradients, in the direction where the fireball was initially shorter. With the disappearance of pressure gradient anisotropies the driving force for flow anisotropies vanishes, and due to this “self-quenching” effect the elliptic flow saturates early. If the fireball expansion starts at sufficiently high initial temperature, it is possible that all elliptic flow is generated before the matter reaches $T_c$ and hadronizes. In this case (which we expect to be realized in heavy-ion collisions at the LHC) elliptic flow is a clean probe of the EOS of the QGP phase.
Elliptic flow is measured as the second Fourier coefficient of the azimuthal angle distribution of the final hadrons in the transverse plane: $$v_2(y,p_T;b) = \langle \cos(2\phi_p)\rangle
= \frac{\int d\phi_p \cos(2\phi_p) \frac{dN}{dy\,p_T\,dp_T\,d\phi_p}(b)}
{\int d\phi_p \frac{dN}{dy\,p_T\,dp_T\,d\phi_p}(b)}.$$ Here $y=\frac{1}{2}\ln[(E+p_L)/(E-p_L)]$ is the rapidity of the particles, and $b$ the impact parameter of the collision. Each particle species has its own elliptic flow coefficient, characterizing the elliptic azimuthal deformation of its momentum distribution. $v_2$ has been measured as a function of transverse momentum $p_T$ for a variety of hadron species with different masses, ranging from the pion ($m_\pi=140$MeV) to the $\Omega$ hyperon ($m_\Omega=1672$MeV). For some hadron species $v_2$ has been measured out to $p_T>10$GeV where the spectrum has decayed by more than 7 orders of magnitude from the yield measured at low $p_T$! More than 99% of all hadrons have $p_T<2$GeV; in this domain the data show excellent agreement with ideal fluid dynamical predictions, including the hydrodynamically predicted rest mass dependence of $v_2$ (at the same $p_T$, heavier hadrons show less elliptic flow). Ideal fluid dynamics thus gives a good description of the collective behaviour of the bulk of the fireball matter.
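As a simple illustration of how this Fourier coefficient is extracted in practice, the following sketch estimates $v_2$ from a synthetic sample of azimuthal angles drawn from $dN/d\phi_p\propto 1+2v_2\cos(2\phi_p)$; it assumes the reaction plane is exactly known and ignores event-plane reconstruction and non-flow correlations, which real analyses must deal with.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic particle sample with a known elliptic flow coefficient
v2_true = 0.06
phi = rng.uniform(-np.pi, np.pi, 200_000)
# accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*phi)
accept = rng.uniform(0.0, 1.0 + 2 * v2_true, phi.size) < 1.0 + 2 * v2_true * np.cos(2 * phi)
phi = phi[accept]

# v_2 = <cos(2*phi_p)> with respect to the (here exactly known) reaction plane
v2_est = np.mean(np.cos(2 * phi))
print(f"true v2 = {v2_true}, estimated v2 = {v2_est:.4f}")
```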
It must be emphasized that the ideal hydrodynamic prediction of $v_2(p_T)$ is essentially parameter free: All model parameters (initial conditions and decoupling temperature) are fixed in central collisions where $v_2=0$, and the only non-trivial input for non-central collisions is the initial geometric source eccentricity as a function of impact parameter. Originally, one computed this eccentricity from a geometric Glauber model, and in this case one finds that the experimental data fully exhaust the theoretical prediction from ideal fluid dynamics, leaving very little room for viscosity which would reduce the theoretical value for the elliptic flow. This is the cornerstone of the “perfect fluid” paradigm for the QGP that has emerged from the RHIC data. However, recently suggested alternate models for the initial state, for example the Color Glass Condensate (CGC) model, can give initial eccentricities that are up to 30% larger than the Glauber model values. Furthermore, the first ideal fluid calculations used an incorrect chemical composition during the late hadronic stage of the collision; once corrected, this increased the theoretical prediction for $v_2(p_T)$ for pions by another 30% or so. If both of these effects are included, the measured $v_2(p_T)$ reaches only about 2/3 of the ideal fluid limit, opening some room for viscous effects in the fireball fluid.
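For reference, the "initial geometric source eccentricity" mentioned above is usually quantified as $\epsilon=\langle y^2{-}x^2\rangle/\langle y^2{+}x^2\rangle$ of the initial transverse distribution. The toy sketch below evaluates it for an almond-shaped region modelled, purely for illustration, as an anisotropic Gaussian; Glauber or CGC calculations replace this toy profile by the actual participant or gluon-field distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy almond-shaped overlap zone: anisotropic Gaussian, shorter along x (impact-parameter direction)
sigma_x, sigma_y = 2.0, 3.0                      # assumed widths in fm
x = rng.normal(0.0, sigma_x, 100_000)
y = rng.normal(0.0, sigma_y, 100_000)

# Spatial eccentricity of the initial distribution
eps = np.mean(y**2 - x**2) / np.mean(y**2 + x**2)
eps_analytic = (sigma_y**2 - sigma_x**2) / (sigma_y**2 + sigma_x**2)
print(f"sampled eccentricity = {eps:.3f} (analytic value for this profile: {eps_analytic:.3f})")
```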
At RHIC energies, not all of the flow anisotropy is created before hadronization, especially in noncentral collisions and away from midrapidity where the initial temperatures are not as high as in central collisions near midrapidity. Recent studies which treat this late hadronic stage microscopically rather than as an ideal fluid have exhibited large viscous effects in the hadron gas phase [@Hirano:2005xf]. If these are taken into account in the theoretical description, one finds that they reduce the elliptic flow and compensate for the $\sim 30\%$ increase of $v_2$ due to non-equilibrium hadron chemistry mentioned above. With Glauber initial conditions one is thus again left with almost no room for QGP viscosity, whereas CGC initial conditions allow the QGP fluid to have some non-zero viscosity.[^1]
Recent theoretical progress in the formulation and simulation of relativistic hydrodynamics for [*viscous*]{} fluids (see [@Heinz:2008qm] for a summary) has provided us with a tool that permits us to answer the question how large the QGP viscosity might be. This breakthrough is based on second-order Israel-Stewart theory and variations thereof which avoids longstanding problems of violations of causality in relativistic Navier-Stokes theory which includes only first-order gradients of the local thermodynamic variables. This is still largely work in progress, and published results based on this approach do not yet include all the physical ingredients known to be relevant for a quantitative prediction of elliptic flow. Nonetheless, a first heroic effort has been made by Luzum and Romatschke [@Luzum:2008cw] to use this new approach to limit the shear viscosity to entropy ratio $\eta/s$ from experimental elliptic flow data. Their work does not include non-equilibrium chemistry in the late hadronic stage (which would increase the calculated $v_2$), nor does it subtract effects from increased viscosity during that stage (which would reduce $v_2$). These two effects work against each other, and it may therefore not be too presumptuous to try to extract an upper limit for $\eta/s$ from their results, shown in Fig. 8 of Ref. [@Luzum:2008cw]. Even with all the uncertainties shown in that Figure (Glauber vs. CGC and a claimed 20% uncertainty in the normalization of the experimental data), it looks like $\frac{\eta}{s}>3 \left(\frac{\eta}{s}\right)_\mathrm{min}$ (where $\left(\frac{\eta}{s}\right)_\mathrm{min}=\frac{1}{4\pi}\approx
0.08$ is a conjectured absolute lower limit on the specific shear viscosity for [*any*]{} fluid, derived for strongly coupled conformally invariant supersymmetric field theories using the AdS/CFT correspondence [@Kovtun:2004de]) is difficult to accommodate. (The authors quote a conservative upper limit of $\eta/s<0.5$.) Since all known classical fluids have $\eta/s \gg 1$ at all temperatures, even at their boiling points where $\eta/s$ typically reaches a minimum [@Kovtun:2004de], this makes the QGP the most perfect fluid of any studied so far in the laboratory. (Recent studies indicate that ultracold atoms in the unitary limit (infinite scattering length) may come in a close second [@Rupak:2007vp].)
Primordial hadrosynthesis – measuring $T_c$ {#sec3}
===========================================
The observed hadron yields (better: hadron yield ratios, from which the hard-to-measure fireball volume drops out) tell us about the chemical composition of the fireball when it finally decouples. It turns out that the hadron yield ratios measured in Au+Au collisions at RHIC can be described extremely well using a thermal model with just two parameters: a chemical decoupling temperature $T_\mathrm{chem}=163\pm4$MeV and a small baryon chemical potential $\mu_B=24\pm4$MeV. In central and semi-central collisions the phase space for strange quarks is fully saturated – if one generalizes the thermal fit to include a strangeness saturation factor $\gamma_s$ one finds $\gamma_s=0.99\pm0.07$. In peripheral collisions with less than 150 participating nucleons, $\gamma_s$ is found to drop, approaching a value around 0.5 in $pp$ collisions, reflecting the well-known strangeness suppression in such collisions. This suppression is completely removed in central Au+Au collisions. In contrast to $\gamma_s$, the chemical decoupling temperature $T_\mathrm{chem}$ is found to be completely independent of collision centrality. So, at freeze-out, all Au+Au collisions are well described by a thermalized hadron resonance gas in [*relative*]{} chemical equilibrium with respect to all types of inelastic, identity-changing hadronic reactions, as long as the total number of strange valence quark-antiquark pairs (which in peripheral collisions is suppressed below its [*absolute*]{} equilibrium value) is conserved.
Two aspects of this observation are, at first sight, puzzling: (i) The measured chemical decoupling temperature is, within errors, consistent with the critical temperature for hadronization of a QGP predicted by lattice QCD. If hadrons formed at $T_c$ and then stopped interacting inelastically with each other at $T_\mathrm{chem}\approx T_c$, how was there ever enough time in the continuously expanding and cooling fireball for their abundances to reach chemical equilibrium? (ii) If chemical equilibrium among the hadrons is controlled by inelastic scattering between them, the decoupling of hadron abundances is controlled by a competition between the microscopic inelastic scattering rate and the macroscopic hydrodynamic expansion rate. Since the expansion rate depends on the fireball radius and thereby on the impact parameter of the Au+Au collision, the chemical decoupling temperature (which controls the density and thus the scattering rate) should also depend on collision centrality [@Heinz:2007in]. How can the measured $T_\mathrm{chem}$ then be independent of centrality?
There is only one explanation that resolves both puzzles simultaneously: The hadrons are born into a maximum entropy state by complicated quark-gluon dynamics during hadronization, and after completion of this process the hadronic phase is so dilute that inelastic hadronic collisions (other than resonance scattering that only affects the particles’ momenta but not the abundances of finally observed stable decay products) can no longer compete with dilution by expansion, freezing the chemical composition. This maximum entropy state cannot be distinguished from the chemical equilibrium state with the same temperature and chemical potential that would eventually be reached by microscopic inelastic scattering if the fireball medium were held in a box. The measured temperature $T_\mathrm{chem}=T_c$ is, however, not established by hadronic scattering processes, but characterizes the energy density at which the quarks and gluons coalesce into hadrons, independent of the local expansion rate of the fluid in which this happens. The microscopic dynamics itself that leads to the observed maximum entropy state is not describable in terms of well-defined hadronic degrees of freedom but involves effective degrees of freedom which control the microscopic physics of the hadronization phase transition.
The absence of inelastic hadronic scattering processes after completion of hadronization is fortunate since it opens a window onto the phase transition itself, allowing us to measure $T_c$ through the hadron yields in spite of the fact that the hadrons continue to scatter quasi-elastically, maintaining approximate thermal (but not chemical!) equilibrium for the momentum distributions down to much lower thermal decoupling temperatures around 100MeV. This “kinetic decoupling temperature” can be extracted from the measured momentum distributions and [*is*]{} found to depend on collision centrality, as predicted by hydrodynamics [@Heinz:2007in].
JET: Jet Emission Tomography of the QGP {#sec5}
=======================================
As explained so far, RHIC collisions show strong evidence for fast thermalization of the momenta of the fireball constituents throughout at least the earlier part of the fireball expansion (until hadronization) and for chemical equilibrium at hadronization. After hadronization, chemical equilibrium is immediately broken at $T_c$, and thermal equilibration becomes gradually less efficient until the momenta finally decouple, too, at $T_\mathrm{therm}\sim 100\,\mathrm{MeV} < T_\mathrm{chem}$.
What causes the fast thermalization during the early expansion stage? We can use fast partons, created in primary collisions between quarks or gluons from the two nuclei, to probe the early dense stage of the medium. Such hard partons, emitted with high transverse momenta $p_T$, fragment into a spray of hadrons in the direction of the parton, forming what’s called a [*jet*]{}. The rate for creating such jets can be factored into a hard parton-parton cross section, described by perturbative QCD, a soft structure function describing the probability to find a parton to scatter off inside a nucleon within the colliding nuclei, and a soft fragmentation function describing the fragmentation of the scattered parton into hadrons. The structure and fragmentation functions are universal and can be measured in deep-inelastic $ep$ scattering (DIS) and in $pp$ collisions. Nuclear modifications of the structure function can be measured in DIS of electrons on nuclei. Jets thus form a calibrated, self-generated probe which can be used to explore the fireball medium tomographically. The medium will affect the hard parton along the path from its production point to where it exits the fireball. If the parton is sufficiently energetic, it will exit the medium before it can begin to fragment into hadrons. The difference in jet production rates or, more generally, in the rates for producing high-$p_T$ hadrons from jet fragmentation in Au+Au and $pp$ collisions can be calculated in terms of the density of scatterers in the medium, multiplied with a perturbatively calculable cross section (if the parton has sufficiently high energy to justify a perturbative approach), and integrated along the path of the jet. This integrated product of density times cross section characterizes the opacity of the medium. Since the probe is colored and interacts through color exchange, it is sensitive to the density of color charges resolvable at the scale of its Compton wavelength; that density will be higher in a color-deconfined QGP than in a cold nucleus where quarks and gluons are hidden away inside the nucleons.
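A minimal numerical sketch of the opacity integral described above: the mean number of scatterings suffered by the parton is $\int n(l)\,\sigma\,dl$ along its path. The Gaussian density profile, cross section and path below are placeholder values chosen for illustration, not fitted numbers.

```python
import numpy as np

# Toy medium seen by a fast parton traversing the fireball (all numbers are assumptions)
n0 = 10.0     # central density of colored scatterers, fm^-3
R = 3.0       # fm, transverse width of the density profile
sigma = 0.3   # fm^2 (= 3 mb), assumed parton-medium cross section

l = np.linspace(-10.0, 10.0, 2001)        # path-length coordinate in fm
n = n0 * np.exp(-l**2 / (2 * R**2))       # density along the straight-line path

# Opacity = path integral of density times cross section = mean number of scatterings
dl = l[1] - l[0]
opacity = np.sum(n * sigma) * dl
print(f"opacity ~ {opacity:.1f} scatterings along this path")
```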
The procedure is similar to the familiar Positron Emission Tomography (PET) in medicine – therefore I call it “JET”. The main difference is that the positron emitting source used in PET is injected externally into the medium to be probed while the jets used in heavy-ion collisions are created internally together with the medium.
One of the first results from RHIC, right after the discovery of strong elliptic flow, was the experimental confirmation of the theoretical prediction that QGP formation should result in a strong suppression of high-$p_T$ hadrons compared to appropriately scaled $pp$ collisions. The observed suppression amounts to a factor 5-6, almost independent of $p_T$ in the region $4<p_T<15$GeV/$c$. This is close to $A^{1/3}$ for $A\sim 200$ and suggests a surface/volume effect, i.e. that high-$p_T$ hadrons are predominantly emitted from the fireball surface while fast partons created in the fireball interior lose so much energy before exiting that they no longer contribute to high-$p_T$ hadron production. Indeed, angular correlations between a high-$p_T$ leading hadron selected from the collision and other energetic hadrons, indicative of a fragmenting jet, strongly support such a picture: Whereas in $pp$ and d+Au collisions, where fast sideward-moving partons escape from the narrow “fireball medium” almost instantaneously (if one can even use the notion of a “medium” in that case), one observes two such correlation peaks, separated by $180^\circ$ and corresponding to the pair of hard partons created back-to-back in the primary hard scattering, one sees only one such peak in central Au+Au collisions, in the direction of the fast trigger hadron. This can be understood if one assumes that the trigger hadron stems from the fragmentation of the outgoing partner of a hard parton pair created near the surface of the fireball, which exits the medium soon after creation, while its inward-travelling partner at $180^\circ$ loses most of its energy before exiting the fireball on the other side and no longer contributes energetic hadrons correlated with the trigger hadron. Still, the energy initially carried by that partner should show up near $180^\circ$ relative to the trigger hadron, and it does: While the away-side correlations of energetic hadrons with the trigger one are [*depleted*]{}, the away-side correlations between “soft” hadrons ($p_T<1.5$GeV/$c$) and the trigger hadron are [*enhanced*]{}. The energy lost by the fast parton travelling away from the trigger hadron and through the medium re-appears in the form of additional soft hadrons, with a distribution of transverse momenta similar to that of the medium itself: As the Au+Au collisions become more central, the average transverse momenta $\langle p_T\rangle$ of the extra hadrons emitted into the away-side hemisphere are observed to approach the $\langle p_T\rangle$ of the entire collision event.
In non-central collisions, the fireball created in the collision is deformed like an egg. Its orientation can be determined from the elliptic flow of the soft hadrons discussed in Sec. \[sec2\]. In this case, the overall fireball size is smaller than in central Au+Au collisions, and the observed away-side suppression of jet-like angular correlations is less complete. (Indeed, even in central Au+Au collisions, the away-side angular correlations among hard hadrons begin to reappear when the energy of the trigger hadron is increased beyond 10GeV or so; this is apparently too high for the inward-traveling partner to lose all of its initial energy.) But the suppression that is observed is stronger if the trigger is emitted perpendicular to the reaction plane (and its partner thus must travel through the fire-egg along its long direction) than when it moves within the reaction plane (i.e. its partner passes through the fire-egg along its short direction).
Fast partons moving through a QGP collide with its constituents, causing them to lose energy via both elastic collisions and collision-induced gluon radiation. For very energetic partons radiative energy loss is expected to dominate, so first model comparisons with the data included only radiative energy loss. From such calculations it was concluded that the observed suppression of hard particles required densities of scatterers that were consistent with and independently confirmed estimates of the initial energy densities extracted from the successful hydrodynamical models that describe the measured elliptic flow of soft hadrons. On a more quantitative level, radiative energy loss models soon started, however, to develop difficulties. They could not reproduce the observed large difference in away-side jet quenching between the in-plane and out-of-plane emission directions. Decay electrons from weak decays of hadrons containing charm and bottom quarks and pointing back to these hadrons were observed to feature strong elliptic flow (indicating that even these heavy quarks (“boulders in the stream”) are dragged along by the anisotropically expanding QGP liquid) and large energy loss, almost as large as that observed for hadrons containing only light quarks. The inclusion of elastic collisional energy loss and recent refinements in the calculation of radiative energy loss have reduced the disagreement between theory and experiment, but some significant tension remains. This has recently generated lively theoretical activity aiming to compute heavy quark drag and diffusion coefficients for strongly coupled quantum field theories, using gravity duals and the AdS/CFT correspondence (see [@Shuryak:2008eq] for a review and references).
What happens to all the energy lost by fast partons plowing through a QGP? There is some experimental evidence (and it appears to be getting stronger with recent 3-particle correlation measurements) that the soft partons emitted into the away-side hemisphere relative to a hard trigger particle emerge in the shape of a conical structure. This could signal a Mach shock cone, generated by the supersonically moving fast parton as it barrels through the perfect QGP liquid. Interestingly, the perhaps most convincing theoretical approach that actually generates something like the observed structures in the angular and 3-particle correlations on the away-side of the trigger jet again is based on models using gravity duals and the AdS/CFT connection [@Chesler:2007sv]. Clearly this needs more work, but the implications are intriguing.
Outlook {#sec6}
=======
On a qualitative level, the new RHIC paradigm, which states that the QGP is a strongly coupled plasma exhibiting almost perfect liquid behaviour and strong color opacity (even for the heaviest colored probes such as charm and bottom quarks), has solid experimental and theoretical support. The microscopic origins of the strong coupling observed in the collective dynamical behaviour of the QGP remain, however, to be clarified. Theorists are approaching this question from three angles: perturbative QCD based on an expansion in $\alpha_s\ll 1$, lattice QCD with $\alpha_s$ adjusted to reproduce the measured hadron mass spectrum, and strong-coupling methods for the limit $\alpha_s\gg 1$, based on the AdS/CFT correspondence which states that properties of certain strongly coupled field theories can be calculated by solving Einstein’s equations in appropriately curved space-times called “gravity duals”. Quantitatively precise and reliable results from either approach are expected to still require much hard work. An overview over some of the ongoing theoretical activities in this direction can be found in [@Shuryak:2008eq].
However, even without a complete quantitative theoretical understanding of the QGP transport coefficients, the body of experimental heavy-ion collision data is already very rich, and it is expected to further grow at a staggering rate with the completion of the RHIC II upgrade and the turn-on of the LHC. With the continued development of increasingly sophisticated models for the fireball expansion dynamics and its ultimate decoupling into non-interacting hadrons, the time is ripe for a comprehensive attack on the problem of extracting precise values for the QGP transport coefficients from a phenomenological description of the experimental data. This program has already started and produced first preliminary results as reported here; its outcome will yield valuable constraints and guidance for the theorists aiming for a first-principles based theoretical understanding of the QGP and its properties.
[**Acknowledgement:**]{} This work was supported by the U.S. Department of Energy under Contract DE-FG02-01ER41190. I am grateful to my students and postdocs who joined me in this research for so many years and helped develop a coherent picture of the complex dynamics of heavy-ion collisions.
[**References**]{} {#references .unnumbered}
------------------
[9]{}
Müller B and Nagle J L 2006 [*Ann. Rev. Nucl. Part. Sci.*]{} [**56**]{} 93.
Shuryak E 2008 [*Preprint*]{} arXiv:0807.3033 \[hep-ph\].
Hirano T, Heinz U, Kharzeev D, Lacey R and Nara Y 2006 [*Phys. Lett. B*]{} [**636**]{} 299.
Heinz U and Song H 2008 [*J. Phys. G: Nucl. Part. Phys.*]{} [**35**]{} 104126.
Luzum M and Romatschke P 2008 [*Phys. Rev. C*]{} [**78**]{} 034915.
Kovtun P, Son D T and Starinets A O 2005 [*Phys. Rev. Lett.*]{} [**94**]{} 111601.
Rupak G and Schäfer T 2007 [*Phys. Rev. A*]{} [**76**]{} 053607.
Heinz U and Kestin G 2007 [*Eur. Phys. J. ST*]{} [**155**]{} 75.
Chesler P M and Yaffe L G 2008 [*Phys. Rev. D*]{} [**78**]{} 045013.
[^1]: An important aspect of the hydrodynamical simulations that describe the experimental data well is that they require short thermalization times, of order 1fm/$c$. This is true in particular for Glauber initial conditions where one simply doesn’t get enough elliptic flow to describe the data if one doesn’t initiate the hydrodynamic expansion before 1fm/$c$ (where the clock starts at nuclear impact). Short thermalization times and the validity of an ideal fluid picture are, of course, two sides of the same consistent picture. For CGC initial conditions, assuming similarly short thermalization times, the experimental $v_2$ data do not saturate the ideal fluid prediction. In this case one can either start the ideal fluid dynamic expansion later, or endow the fluid with some non-zero viscosity, or both. Again, these are two sides of the same consistent picture, which now invokes non-zero mean free paths for the plasma constituents, leading to incomplete local thermalization.
---
author:
- |
Meelis Kull\
Department of Computer Science\
University of Tartu\
`meelis.kull@ut.ee`\
Miquel Perello-Nieto\
Department of Computer Science\
University of Bristol\
`miquel.perellonieto@bris.ac.uk`\
Markus Kängsepp\
Department of Computer Science\
University of Tartu\
`markus.kangsepp@ut.ee`\
Telmo Silva Filho\
Department of Statistics\
Universidade Federal da Paraíba\
`telmo@de.ufpb.br`\
Hao Song\
Department of Computer Science\
University of Bristol\
`hao.song@bristol.ac.uk`\
Peter Flach\
Department of Computer Science\
University of Bristol and\
The Alan Turing Institute\
`peter.flach@bristol.ac.uk`\
title: |
Beyond temperature scaling:\
Obtaining well-calibrated multi-class probabilities with Dirichlet calibration\
Supplementary material
---
Source code
===========
The instructions and code for the experiments can be found on .
Proofs
======
\[thm:equiv\] The parametric families $\vmuh_{DirGen}(\vq; \valpha,\vpi)$, $\vmuh_{DirLin}(\vq; \MW,\vb)$ and $\vmuh_{Dir}(\vq; \MA,\vc)$ are equal, i.e. they contain exactly the same calibration maps.
We will prove that:
1. every function in $\vmuh_{DirGen}(\vq; \valpha,\vpi)$ belongs to $\vmuh_{DirLin}(\vq; \MW,\vb)$;
2. every function in $\vmuh_{DirLin}(\vq; \MW,\vb)$ belongs to $\vmuh_{Dir}(\vq; \MA,\vc)$;
3. every function in $\vmuh_{Dir}(\vq; \MA,\vc)$ belongs to $\vmuh_{DirGen}(\vq; \valpha,\vpi)$.
#### 1.
Consider a function $\muh(\vq)=\vmuh_{DirGen}(\vq; \valpha,\vpi)$. Let us start with an observation that any vector $\vx=(x_1,\dots,x_k)\in(0,\infty)^k$ with only positive elements can be renormalised to add up to $1$ using the expression $\softmax(\vln(\vx))$, since: $$\begin{aligned}
\softmax(\vln(\vx))=\vexp(\vln(\vx))/(\sum_i \exp(\ln(x_i)))=\vx/(\sum_i x_i)\end{aligned}$$ where $\vexp$ is an operator applying exponentiation element-wise. Therefore, $$\begin{aligned}
\muh(\vq)=\softmax(\vln(\pi_1 f_1(\vq),\dots,\pi_k f_k(\vq)))
% =\softmax(\vln\pi+\vln(f_1(\vq),\dots,f_k(\vq)))\end{aligned}$$ where $f_i(\vq)$ is the probability density function of the distribution $Dir(\valpha^{(i)})$ where $\valpha^{(i)}$ is the $i$-th row of matrix $\valpha$. Hence, $f_i(\vq)=\frac{1}{B(\valpha^{(i)})}\prod_{j=1}^k q_j^{\alpha_{ij}-1}$, where $B(\cdot)$ denotes the multivariate beta function. Let us define a matrix $\MW$ and vector $\vb$ as follows: $$\begin{aligned}
w_{ij}=\alpha_{ij}-1,\qquad b_i=\ln(\pi_i)-\ln(B(\valpha^{(i)}))\end{aligned}$$ with $w_{ij}$ and $\alpha_{ij}$ denoting elements of matrices $\MW$ and $\valpha$, respectively, and $b_i,\pi_i$ denoting elements of vectors $\vb$ and $\vpi$. Now we can write $$\begin{aligned}
\ln(\pi_i f_i(\vq))
&=\ln(\pi_i)-\ln(B(\valpha^{(i)}))+\ln\prod_{j=1}^k q_j^{\alpha_{ij}-1} \\
&=\ln(\pi_i)-\ln(B(\valpha^{(i)}))+\sum_{j=1}^k (\alpha_{ij}-1)\ln(q_j) \\
&=b_i+\sum_{j=1}^k w_{ij}\ln(q_j)\end{aligned}$$ and substituting this back into $\muh(\vq)$ we get: $$\begin{aligned}
\muh(\vq)&=\softmax(\vln(\pi_1 f_1(\vq),\dots,\pi_k f_k(\vq))) \\
&=\softmax(\vb+\MW\vln(\vq))=\muh_{DirLin}(\vq; \MW,\vb)\end{aligned}$$
#### 2.
Consider a function $\muh(\vq)=\vmuh_{DirLin}(\vq; \MW,\vb)$. Let us define a matrix $\MA$ and vector $\vc$ as follows: $$\begin{aligned}
a_{ij}=w_{ij}-\min_{i}w_{ij},\qquad \vc=\softmax(\MW\,\vln\,\vu+\vb)\end{aligned}$$ with $a_{ij}$ and $w_{ij}$ denoting elements of matrices $\MA$ and $\MW$, respectively, and $\vu=(1/k,\dots,1/k)$ is a column vector of length $k$. Note that $\MA\,\vx=\MW\,\vx+const_1$ and $\vln\,\softmax(\vx)=\vx+const_2$ for any $x$ where $const_1$ and $const_2$ are constant vectors (all elements are equal), but the constant depends on $\vx$. Taking into account that $\softmax(\vv+const)=\softmax(\vv)$ for any vector $\vv$ and constant vector $const$, we obtain: $$\begin{aligned}
\muh_{Dir}(\vq; \MA,\vc)
&=\softmax(\MA\,\vln\,\frac{\vq}{1/k}+\vln\,\vc)
=\softmax(\MW\,\vln\,\frac{\vq}{1/k}+const_1+\vln\,\vc) \\
&=\softmax(\MW\,\vln\,\vq-\MW\,\vln\,\vu+const_1+\vln\,\softmax(\MW\,\vln\,\vu+\vb)) \\
&=\softmax(\MW\,\vln\,\vq-\MW\,\vln\,\vu+const_1+\MW\,\vln\,\vu+\vb+const_2) \\
&=\softmax(\MW\,\vln\,\vq+\vb+const_1+const_2)
=\softmax(\MW\,\vln\,\vq+\vb)=\muh_{DirLin}(\vq; \MW,\vb)\\
&=\muh(\vq)\end{aligned}$$
#### 3.
Consider a function $\muh(\vq)=\vmuh_{Dir}(\vq; \MA,\vc)$. Let us define a matrix $\valpha$, vector $\vb$ and vector $\pi$ as follows: $$\begin{aligned}
\alpha_{ij}=a_{ij}+1,\qquad \vb=\vln\,\vc-\MA\,\vln\,\vu,\qquad \pi_i=\exp(b_i)\cdot B(\valpha^{(i)})\end{aligned}$$ with $\alpha_{ij}$ and $a_{ij}$ denoting elements of matrices $\valpha$ and $\MA$, respectively, and $\vu=(1/k,\dots,1/k)$ is a column vector of length $k$. We can now write: $$\begin{aligned}
\muh(\vq)&=\muh_{Dir}(\vq; \MA,\vc)
=\softmax(\MA\,\vln\,\frac{\vq}{1/k}+\vln\,\vc)
=\softmax(\MA\,\vln\,\vq-\MA\,\vln\,\vu+\vln\,\vc) \\
&=\softmax((\valpha-1)\vln\,\vq+\vb)\end{aligned}$$ Element $i$ in the vector within the softmax is equal to: $$\begin{aligned}
\sum_{j=1}^k (\alpha_{ij}-1)\ln(q_j)+b_i
&= \sum_{j=1}^k (\alpha_{ij}-1)\ln(q_j) +\ln(\pi_i\cdot\frac{1}{B(\valpha^{(i)})}) \\
&= \ln(\pi_i\cdot \frac{1}{B(\valpha^{(i)})} \prod_{j=1}^k q_j^{\alpha_{ij}-1}) \\
&= \ln(\pi_i\cdot f_i(\vq))\end{aligned}$$ where $f_i(\vq)$ is the probability density function of the distribution $Dir(\valpha^{(i)})$, and therefore: $$\begin{aligned}
\muh(\vq)=\softmax((\valpha-1)\vln(\vq)+\vb)=\softmax(\vln(\pi_1 f_1(\vq),\dots,\pi_k f_k(\vq)))=\vmuh_{DirGen}(\vq; \valpha,\vpi)\end{aligned}$$
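The equivalence can also be verified numerically. The sketch below (an illustration written for this text, not part of the paper's released code) draws random parameters, maps them between the three parametrisations exactly as in the proof, and checks that all three calibration maps return the same output; it relies on NumPy and SciPy.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import dirichlet

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
k = 4
alpha = rng.uniform(0.5, 3.0, size=(k, k))   # rows alpha^(i) of the DirGen parameter matrix
pi = rng.dirichlet(np.ones(k))               # mixture weights
q = rng.dirichlet(np.ones(k))                # uncalibrated probability vector

# DirGen: softmax of log(pi_i * f_i(q)), f_i being the Dir(alpha^(i)) density
mu_gen = softmax(np.array([np.log(pi[i]) + dirichlet.logpdf(q, alpha[i]) for i in range(k)]))

# DirLin: W_ij = alpha_ij - 1, b_i = ln(pi_i) - ln(B(alpha^(i)))
log_B = gammaln(alpha).sum(axis=1) - gammaln(alpha.sum(axis=1))
W = alpha - 1.0
b = np.log(pi) - log_B
mu_lin = softmax(W @ np.log(q) + b)

# Dir: A_ij = W_ij - min_i W_ij, c = softmax(W ln u + b), with u = (1/k, ..., 1/k)
u = np.full(k, 1.0 / k)
A = W - W.min(axis=0, keepdims=True)
c = softmax(W @ np.log(u) + b)
mu_dir = softmax(A @ np.log(q / u) + np.log(c))

print(np.allclose(mu_gen, mu_lin), np.allclose(mu_lin, mu_dir))  # expected: True True
```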
The following proposition proves that temperature scaling can be viewed as a general-purpose calibration method, being a special case within the Dirichlet calibration map family.
Let us denote the temperature scaling family by $\muh'_{TempS}(\vz; t)=\softmax(\vz/t)$ where $\vz$ are the logits. Then for any $t$, temperature scaling can be expressed as $$\begin{aligned}
\muh'_{TempS}(\vz; t)=\muh_{DirLin}(\softmax(\vz); \frac{1}{t}\MI, \vzero)\end{aligned}$$ where $\MI$ is the identity matrix and $\vzero$ is the vector of zeros.
Let us first observe that for any $\vx\in\sR^k$ there exists a constant vector $const$ (all elements are equal) such that $\vln\,\softmax(\vx)=\vx+const$. Furthermore, $\softmax(\vv+const)=\softmax(\vv)$ for any vector $\vv$ and any constant vector $const$. Therefore, $$\begin{aligned}
\muh_{DirLin}(\softmax(\vz); \frac{1}{t}\MI, \vzero)
&=\softmax(\frac{1}{t}\,\MI\,\vln\,\softmax(\vz))) \\
&=\softmax(\frac{1}{t}\,\MI\,(\vz+const)) \\
&=\softmax(\frac{1}{t}\,\MI\,\vz+\frac{1}{t}\,\MI\, const) \\
&=\softmax(\vz/t+const') \\
&=\softmax(\vz/t) \\
&=\muh'_{TempS}(\vz; t)\end{aligned}$$ where $const'=\frac{1}{t}\,\MI\, const$ is a constant vector as a product of a diagonal matrix with a constant vector.
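A quick numerical illustration of this proposition (again an illustrative sketch rather than released code): applying temperature scaling to logits gives the same probabilities as applying the DirLin map with $\MW=\frac{1}{t}\MI$ and $\vb=\vzero$ to the softmaxed outputs.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
k, t = 5, 2.5
z = rng.normal(size=k)                                      # logits

temp_scaled = softmax(z / t)                                # mu'_TempS(z; t)
dirlin = softmax((np.eye(k) / t) @ np.log(softmax(z)))      # mu_DirLin(softmax(z); I/t, 0)

print(np.allclose(temp_scaled, dirlin))  # expected: True
```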
Dirichlet calibration
=====================
In this section we show some examples of reliability diagrams and other plots that can help to understand the representational power of Dirichlet calibration compared with other calibration methods.
Reliability diagram examples
----------------------------
We will look at two examples of reliability diagrams for the original classifier and after applying $6$ calibration methods. Figure \[fig:mlp:bs:reldiag\] shows the first example for the 3-class classification dataset *balance-scale* and the MLP classifier. This figure shows the confidence-reliability diagram in the first column and the classwise-reliability diagrams in the other columns. Figure \[fig:nb:reldiag:mlp:bal:uncal\] shows that the posterior probabilities from the MLP have only small gaps between the true class proportions and the predicted means. This visualisation may suggest that the original classifier is already well calibrated. However, when we separate the reliability diagram per class, we notice that the predictions for the first class are underconfident, as indicated by low mean predictions containing high proportions of the true class. On the other hand, classes 2 and 3 are overconfident in the range of posterior probabilities between $0.2$ and $0.5$, while being underconfident at higher values. The discrepancies revealed by analysing the individual reliability diagrams seem to compensate each other in the aggregated diagram.
The following subfigures show how the different calibration methods try to reduce ECE, occasionally increasing the error. As can be seen in Table \[table:mlp:balance:ece\], Dirichlet L2 and One-vs.Rest isotonic regression obtain the lowest ECE while One-vs.Rest frequency binning makes the original calibration worse. Looking at Figure \[fig:nb:reldiag:mlp:bal:temp\] it is possible to see how temperature scaling manages to reduce the overall overconfidence in the higher range of probabilities for classes 2 and 3, but makes the situation worse in the interval $[0.2, 0.6]$. However, it manages to reduce the overall ECE.
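For reference, the sketch below shows one way the confidence-ECE and classwise-ECE values behind such tables can be computed with equal-width bins. The binning scheme and the number of bins are assumptions made for illustration; the released code accompanying the paper should be consulted for the exact evaluation protocol used in the experiments.

```python
import numpy as np

def confidence_ece(probs, y, n_bins=15):
    """Weighted average gap between accuracy and confidence over equal-width confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bin_idx = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            ece += mask.mean() * abs((pred[mask] == y[mask]).mean() - conf[mask].mean())
    return ece

def classwise_ece(probs, y, n_bins=15):
    """Average over classes of the binned gap between predicted probability and observed frequency."""
    k = probs.shape[1]
    total = 0.0
    for j in range(k):
        p_j = probs[:, j]
        bin_idx = np.minimum((p_j * n_bins).astype(int), n_bins - 1)
        for b in range(n_bins):
            mask = bin_idx == b
            if mask.any():
                total += mask.mean() * abs((y[mask] == j).mean() - p_j[mask].mean())
    return total / k

# Toy usage with random predictions and labels
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=500)
y = rng.integers(0, 3, size=500)
print(confidence_ece(probs, y), classwise_ece(probs, y))
```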
[.26]{} ![image](figures/results/mlp_balance-scale_uncalibrated_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_uncalibrated_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_binning_freq_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_binning_freq_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_binning_width_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_binning_width_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_isotonic_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_isotonic_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_dirichlet_fix_diag_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_dirichlet_fix_diag_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_ovr_dir_full_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![image](figures/results/mlp_balance-scale_ovr_dir_full_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.26]{} ![image](figures/results/mlp_balance-scale_dirichlet_full_l2_conf_rel_diagr.pdf "fig:"){width="\linewidth"}
[.71]{} ![Confidence-reliability diagrams in the first column and classwise-reliability diagrams in the remaining columns, for a real experiment with the multilayer perceptron classifier on the balance-scale dataset and a subset of the calibrators. All the test partitions from the 5 times 5-fold-cross-validation have been aggregated to draw every plot.[]{data-label="fig:mlp:bs:reldiag"}](figures/results/mlp_balance-scale_dirichlet_full_l2_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
In the second example we show $3$ calibration methods for a 4-class classification problem (the car dataset) applied to the scores of an Adaboost SAMME classifier. Figure \[fig:nb:reldiag:adas:car\] shows one reliability diagram per class ($C_1$ *acceptable*, $C_2$ *good*, $C_3$ *unacceptable*, and $C_4$ *very good*).
From this Figure we can see that the uncalibrated model is underconfident for classes 1, 2 and 3: its posterior probabilities never exceed $0.7$, while the true class proportions do exceed $0.7$ in that range. After applying some of the calibration models the posterior probabilities reach higher values.
As can be seen in Table \[table:adas:car:ece\], Dirichlet L2 and One-vs.-Rest isotonic regression obtain the lowest ECE, while temperature scaling makes the original calibration worse. Figure \[fig:nb:reldiag:class:adas:car:dirl2\] shows how Dirichlet calibration with L2 regularisation achieved the largest spread of probabilities, also reducing the mean gap between the predictions and the true class proportions. On the other hand, temperature scaling reduced ECE for class 1, but hurt the overall performance for the other classes.
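For context, temperature scaling fits a single scalar that rescales the scores. The following is a minimal sketch assuming integer class labels and access only to predicted probabilities (rescaling log-probabilities is equivalent to rescaling logits, since the softmax is shift-invariant per row); the optimiser bounds and the clipping constant are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def fit_temperature(probs, y):
    """Choose the temperature T that minimises the negative log-likelihood
    of the calibration fold when the log-probabilities are divided by T."""
    log_p = np.log(np.clip(probs, 1e-12, 1.0))
    def nll(T):
        z = log_p / T
        log_q = z - logsumexp(z, axis=1, keepdims=True)
        return -log_q[np.arange(len(y)), y].mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method='bounded').x

def apply_temperature(probs, T):
    """Return the temperature-scaled probability vectors."""
    z = np.log(np.clip(probs, 1e-12, 1.0)) / T
    return np.exp(z - logsumexp(z, axis=1, keepdims=True))
```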
[.9]{} ![Reliability diagrams per class for a real experiment with the classifier Ada boost SAMME on the car dataset and $3$ calibrators. The test partitions from the 5 times 5-fold-cross-validation have been aggregated to draw every plot.[]{data-label="fig:nb:reldiag:adas:car"}](figures/results/adas_car_uncalibrated_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.9]{} ![Reliability diagrams per class for a real experiment with the classifier Ada boost SAMME on the car dataset and $3$ calibrators. The test partitions from the 5 times 5-fold-cross-validation have been aggregated to draw every plot.[]{data-label="fig:nb:reldiag:adas:car"}](figures/results/adas_car_isotonic_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.9]{} ![Reliability diagrams per class for a real experiment with the classifier Ada boost SAMME on the car dataset and $3$ calibrators. The test partitions from the 5 times 5-fold-cross-validation have been aggregated to draw every plot.[]{data-label="fig:nb:reldiag:adas:car"}](figures/results/adas_car_dirichlet_fix_diag_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
[.9]{} ![Reliability diagrams per class for a real experiment with the classifier Ada boost SAMME on the car dataset and $3$ calibrators. The test partitions from the 5 times 5-fold-cross-validation have been aggregated to draw every plot.[]{data-label="fig:nb:reldiag:adas:car"}](figures/results/adas_car_dirichlet_full_l2_rel_diagr_perclass.pdf "fig:"){width="\linewidth"}
A more detailed depiction of the previous reliability diagrams can be seen in Figure \[fig:nb:pos:scores:class\]. In this case, the posterior probabilities are not grouped into bins; instead, a boxplot summarises their full distribution. The first observation is that, for the *good* and *very good* classes, the uncalibrated model tends to predict probability vectors with small variance, i.e. the outputs do not change much among different instances. Among the calibration approaches, temperature scaling still maintains this low level of variance, while both isotonic and Dirichlet L2 produce a higher variance in the outputs. While this observation cannot be justified here without quantitative analysis, another observation clearly shows an advantage of using Dirichlet L2: for the *acceptable* class, only Dirichlet L2 is capable of providing the highest mean probability for the correct class, while the other three methods tend to put higher probability mass on the *unacceptable* class on average.
[.24]{} ![Effect of Dirichlet Calibration on the scores of Ada boost SAMME on the *car* dataset which is composed of $4$ classes (*acceptable*, *good*, *unacceptable*, and *very good*). The whiskers of each box indicate the 5th and 95th percentile, the notch around the median indicates the confidence interval. The [green]{} error bar to the right of each box indicates one standard deviation on each side of the mean. In each subfigure, the first boxplot corresponds to the posterior probabilities for the samples of class 1, divided in 4 boxes representing the posterior probabilities for each class. A good classifier should have the highest posterior probabilities in the box corresponding to the true class. In Figure \[fig:nb:pos:scores:class:adas:car:uncal\] it is possible to see that the first class (*acceptable*) is missclassified as belonging to the third class (*unacceptable*) with high probability values, while Dirichlet Calibration is able to alleviate that problem. Also, for the second and fourth true classes (*good*, and *very good*) the original classifier uses a reduced domain of probabilities (indicative of underconfidence), while Dirichlet calibration is able to spread these probabilities with more meaningful values (as indicated by a reduction of the calibration losses; See Figure \[fig:nb:reldiag:adas:car\]). []{data-label="fig:nb:pos:scores:class"}](figures/results/adas_car_uncalibrated_positive_scores_per_class.pdf "fig:"){width="\linewidth"}
[.24]{} ![Effect of Dirichlet Calibration on the scores of Ada boost SAMME on the *car* dataset which is composed of $4$ classes (*acceptable*, *good*, *unacceptable*, and *very good*). The whiskers of each box indicate the 5th and 95th percentile, the notch around the median indicates the confidence interval. The [green]{} error bar to the right of each box indicates one standard deviation on each side of the mean. In each subfigure, the first boxplot corresponds to the posterior probabilities for the samples of class 1, divided in 4 boxes representing the posterior probabilities for each class. A good classifier should have the highest posterior probabilities in the box corresponding to the true class. In Figure \[fig:nb:pos:scores:class:adas:car:uncal\] it is possible to see that the first class (*acceptable*) is missclassified as belonging to the third class (*unacceptable*) with high probability values, while Dirichlet Calibration is able to alleviate that problem. Also, for the second and fourth true classes (*good*, and *very good*) the original classifier uses a reduced domain of probabilities (indicative of underconfidence), while Dirichlet calibration is able to spread these probabilities with more meaningful values (as indicated by a reduction of the calibration losses; See Figure \[fig:nb:reldiag:adas:car\]). []{data-label="fig:nb:pos:scores:class"}](figures/results/adas_car_isotonic_positive_scores_per_class.pdf "fig:"){width="\linewidth"}
[.24]{} ![Effect of Dirichlet Calibration on the scores of Ada boost SAMME on the *car* dataset which is composed of $4$ classes (*acceptable*, *good*, *unacceptable*, and *very good*). The whiskers of each box indicate the 5th and 95th percentile, the notch around the median indicates the confidence interval. The [green]{} error bar to the right of each box indicates one standard deviation on each side of the mean. In each subfigure, the first boxplot corresponds to the posterior probabilities for the samples of class 1, divided in 4 boxes representing the posterior probabilities for each class. A good classifier should have the highest posterior probabilities in the box corresponding to the true class. In Figure \[fig:nb:pos:scores:class:adas:car:uncal\] it is possible to see that the first class (*acceptable*) is missclassified as belonging to the third class (*unacceptable*) with high probability values, while Dirichlet Calibration is able to alleviate that problem. Also, for the second and fourth true classes (*good*, and *very good*) the original classifier uses a reduced domain of probabilities (indicative of underconfidence), while Dirichlet calibration is able to spread these probabilities with more meaningful values (as indicated by a reduction of the calibration losses; See Figure \[fig:nb:reldiag:adas:car\]). []{data-label="fig:nb:pos:scores:class"}](figures/results/adas_car_dirichlet_fix_diag_positive_scores_per_class.pdf "fig:"){width="\linewidth"}
[.24]{} ![Effect of Dirichlet Calibration on the scores of Ada boost SAMME on the *car* dataset which is composed of $4$ classes (*acceptable*, *good*, *unacceptable*, and *very good*). The whiskers of each box indicate the 5th and 95th percentile, the notch around the median indicates the confidence interval. The [green]{} error bar to the right of each box indicates one standard deviation on each side of the mean. In each subfigure, the first boxplot corresponds to the posterior probabilities for the samples of class 1, divided in 4 boxes representing the posterior probabilities for each class. A good classifier should have the highest posterior probabilities in the box corresponding to the true class. In Figure \[fig:nb:pos:scores:class:adas:car:uncal\] it is possible to see that the first class (*acceptable*) is missclassified as belonging to the third class (*unacceptable*) with high probability values, while Dirichlet Calibration is able to alleviate that problem. Also, for the second and fourth true classes (*good*, and *very good*) the original classifier uses a reduced domain of probabilities (indicative of underconfidence), while Dirichlet calibration is able to spread these probabilities with more meaningful values (as indicated by a reduction of the calibration losses; See Figure \[fig:nb:reldiag:adas:car\]). []{data-label="fig:nb:pos:scores:class"}](figures/results/adas_car_dirichlet_full_l2_positive_scores_per_class.pdf "fig:"){width="\linewidth"}
Experimental setup {#sec:exp}
==================
In this section we provide a detailed description of the experimental setup on a variety of non-neural classifiers and datasets. While our implementation of Dirichlet calibration is based on standard Newton-Raphson with multinomial logistic loss and L2 regularisation, as mentioned at the end of Section 3, existing implementations of logistic regression (e.g. scikit-learn) applied to the log-transformed predicted probabilities can also be used.
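As an illustration of that remark, the following is a minimal sketch of Dirichlet calibration built on scikit-learn's logistic regression over log-probabilities. The clipping constant and the mapping between a regularisation strength $\lambda$ and scikit-learn's `C` (which also depends on how the loss is normalised) are assumptions, not the exact settings of our Newton-Raphson implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class DirichletCalibratorSketch:
    """Multinomial logistic regression fitted on log-probabilities; with the
    lbfgs solver, scikit-learn uses the multinomial (softmax) loss for
    multiclass targets."""

    def __init__(self, reg_lambda=1e-3):
        # scikit-learn's C is an inverse regularisation strength
        self.model = LogisticRegression(C=1.0 / reg_lambda,
                                        solver='lbfgs', max_iter=1000)

    def _features(self, probs):
        return np.log(np.clip(probs, 1e-12, 1.0))

    def fit(self, probs, y):
        self.model.fit(self._features(probs), y)
        return self

    def predict_proba(self, probs):
        return self.model.predict_proba(self._features(probs))
```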
Datasets and performance estimation
-----------------------------------
The full list of datasets, and a brief description of each one including the number of samples, features and classes is presented in Table \[tab:data\].
Figure \[fig:ds:partition\] shows how every dataset was divided in order to obtain an estimated performance for every combination of dataset, classifier and calibrator. Each dataset was divided using 5 times 5-fold-cross-validation to create 25 test partitions. For each of the 25 partitions the corresponding training set was divided further with a 3-fold-cross-validation, for which the larger portions were used to train the classifiers (and validate the calibrators if they had hyperparameters), and the small portion was used to train the calibrators. The 3 calibrators trained in the inner 3 folds were used to predict the corresponding test partition, and their predictions were averaged in order to obtain better estimates of their performance with the 7 different metrics (accuracy, Brier score, log-loss, maximum calibration error, confidence-ECE, classwise-ECE and the p test statistic of the ECE metrics). Finally, the 25 resulting measures were averaged.
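The following is a minimal sketch of this evaluation protocol, assuming `make_classifier` and `make_calibrator` are factory callables returning fresh estimators (the calibrator fitting on probability vectors, as in the sketches above) and `metric` is any of the seven measures; hyperparameter selection inside the inner folds is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, StratifiedKFold

def evaluate(make_classifier, make_calibrator, metric, X, y, seed=0):
    """5x5-fold outer CV; within each outer training fold, a 3-fold split
    trains the classifier on the larger part and the calibrator on the
    held-out part; the 3 calibrated predictions on the outer test fold are
    averaged before computing the metric."""
    outer = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=seed)
    n_classes = len(np.unique(y))
    scores = []
    for train_idx, test_idx in outer.split(X, y):
        X_tr, y_tr = X[train_idx], y[train_idx]
        X_te, y_te = X[test_idx], y[test_idx]
        inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)
        preds = np.zeros((len(test_idx), n_classes))
        for fit_idx, cal_idx in inner.split(X_tr, y_tr):
            clf = make_classifier().fit(X_tr[fit_idx], y_tr[fit_idx])
            cal = make_calibrator().fit(clf.predict_proba(X_tr[cal_idx]),
                                        y_tr[cal_idx])
            preds += cal.predict_proba(clf.predict_proba(X_te)) / 3.0
        scores.append(metric(preds, y_te))
    return float(np.mean(scores))
```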
\[tab:data\]
![image](figures/experiments/datasets_train_test_partitions.pdf){width="0.9\linewidth"} \[fig:ds:partition\]
Full example of statistical analysis {#sec:exp:example}
------------------------------------
The following is a full example of how the final rankings and statistical tests are computed. For this example, we will focus on the metric log-loss, and we will start with the naive Bayes classifier. Table \[table:nbayes:loss\] shows the estimated log-loss by averaging the 5-times 5-fold cross-validation log-losses of the inner 3-fold aggregated predictions. The sub-indices are the ranking of every calibrator for each dataset (ties in the ranking share the averaged rank). The resulting table of sub-indices is used to compute the Friedman test statistic, resulting in a value of $73.8$ and a p-value of $6.71e^{-14}$ indicating statistical difference between the calibration methods. The last row contains the average ranks of the full table, which is shown in the corresponding critical difference diagram in Figure \[fig:cd:nbayes:loss\]. The critical difference uses the Bonferroni-Dunn one-tailed statistical test to compute the minimum ranking distance that is shown in the Figure, indicating that for this particular classifier and metric the Dirichlet calibrator with L2 regularisation is significantly better than the other methods.
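A minimal sketch of these computations is given below, assuming a matrix of per-dataset losses with one column per calibrator; the critical-difference formula is the standard one for these diagrams, $CD = q_\alpha \sqrt{k(k+1)/(6n)}$, with the Bonferroni-Dunn critical value $q_\alpha$ taken from tables.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def ranks_and_friedman(losses):
    """losses: (n_datasets, n_calibrators), lower is better. Rank the
    calibrators within each dataset (ties share the average rank), average
    the ranks, and run the Friedman test across datasets."""
    ranks = np.apply_along_axis(rankdata, 1, losses)  # rank 1 = best
    avg_ranks = ranks.mean(axis=0)
    stat, p_value = friedmanchisquare(*losses.T)      # one sample per method
    return avg_ranks, stat, p_value

def critical_difference(n_methods, n_datasets, q_alpha):
    """Minimum average-rank difference considered significant:
    CD = q_alpha * sqrt(k (k + 1) / (6 n))."""
    return q_alpha * np.sqrt(n_methods * (n_methods + 1) / (6.0 * n_datasets))
```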
[.49]{} ![Critical Difference diagrams for the averaged ranking results of the metric Log-loss.](figures/results/crit_diff_nbayes_loss.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical Difference diagrams for the averaged ranking results of the metric Log-loss.](figures/results/crit_diff_loss_v2 "fig:"){width="\linewidth"}
The same process is applied to each of the $11$ classifiers for every metric. Table \[table:loss\] shows the final average results of all classifiers. Notice that the row corresponding to naive Bayes has the rounded average rankings from Figure \[fig:cd:nbayes:loss\].
Results {#sec:res}
=======
In this Section we present all the final results, including ranking tables for every metric, critical difference diagrams, the best hyperparameters selected for Dirichlet calibration with L2 regularisation, Frequency binning and Width binning; a comparison of how calibrated the $11$ classifiers are, and additional results on deep neural networks.
Final ranking tables for all metrics {#sec:res:rank}
------------------------------------
We present here all the final ranking tables for all metrics (Tables \[table:acc\], \[table:loss\], \[table:brier\], \[table:mce\], \[table:conf-ece\], \[table:cw-ece\], \[table:p-conf-ece\], and \[table:p-cw-ece\]). For each ranking, a lower value indicates a better metric value (e.g. a higher accuracy corresponds to a lower ranking, and a lower log-loss also corresponds to a lower ranking). Additional details on how to interpret the tables can be found in Section \[sec:exp:example\].
Final critical difference diagrams for every metric
---------------------------------------------------
In order to perform a final comparison between calibration methods, we considered every combination of dataset and classifier as a group, $n = \#datasets \times \#classifiers$, and ranked the results of the $k$ calibration methods. With this setting, we performed the Friedman statistical test followed by the one-tailed Bonferroni-Dunn test to obtain critical differences (CDs) for every metric (see Figure \[fig:multi:cd:all\]). The results showed Dirichlet L2 as the best calibration method for the measures accuracy, log-loss and p-cw-ece with statistical significance (see Figures \[fig:multi:cd:acc\], \[fig:multi:cd:logloss\], and \[fig:multi:cd:p-cw-ece\]), and within the group of best calibration methods for the remaining metrics, with statistical significance over the methods outside that group but no significant difference within it. It is worth mentioning that Figure \[fig:multi:cd:logloss\] showed statistically significant differences between Dirichlet L2, OvR Beta, OvR width binning, and the rest of the calibrators grouped together, in that order.
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_acc_v2 "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_brier_v2 "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_loss_v2 "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_mce_v2 "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_conf-ece_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_cw-ece_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_p-conf-ece_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the average of multiclass classifiers.[]{data-label="fig:multi:cd:all"}](figures/results/crit_diff_p-cw-ece_v2.pdf "fig:"){width="\linewidth"}
[.4]{} ![Proportion of times each calibrator passes a calibration p-test with a p-value higher than 0.05.[]{data-label="fig:cal:p:ece"}](figures/results/p_table_calibrators_p-conf-ece.pdf "fig:"){width="\linewidth"}
[.4]{} ![Proportion of times each calibrator passes a calibration p-test with a p-value higher than 0.05.[]{data-label="fig:cal:p:ece"}](figures/results/p_table_calibrators_p-cw-ece.pdf "fig:"){width="\linewidth"}
Best calibrator hyperparameters
-------------------------------
[.30]{} ![Histogram of the selected hyperparameters during the inner 3-fold-cross-validation[]{data-label="fig:hyper"}](figures/results/bars_hyperparameters_all_Dirichlet_L2.pdf "fig:"){width="\linewidth"}
[.30]{} ![Histogram of the selected hyperparameters during the inner 3-fold-cross-validation[]{data-label="fig:hyper"}](figures/results/bars_hyperparameters_all_OvR_Freq_Bin.pdf "fig:"){width="\linewidth"}
[.30]{} ![Histogram of the selected hyperparameters during the inner 3-fold-cross-validation[]{data-label="fig:hyper"}](figures/results/bars_hyperparameters_all_OvR_Width_Bin.pdf "fig:"){width="\linewidth"}
Figure \[fig:hyper\] shows the best hyperparameters for every inner 3-fold-cross-validation. Dirichlet L2 (Figure \[fig:hyper:dir:l2\]) shows a preference for the regularisation hyperparameter $\lambda = 1e^{-3}$ and lower values. Our current minimum regularisation value of $1e^{-7}$ is also selected multiple times, indicating that lower values may be optimal on several occasions. However, this fact did not seem to hurt the overall good results in our experiments. One-vs.-Rest frequency binning tends to prefer $10$ bins with an equal number of samples, while One-vs.-Rest width binning prefers $5$ equal-width bins (see Figures \[fig:hyper:freq:bin\] and \[fig:hyper:width:bin\] respectively).
Comparison of classifiers
-------------------------
In this Section we compare all the classifiers without post-hoc calibration on $17$ of the $21$ datasets; *shuttle*, *yeast*, *mfeat-karhunen* and *libras-movement* were removed from this analysis because at least one classifier was not able to complete the experiment.
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_acc "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_loss "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_brier "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_mce "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_conf-ece "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_cw-ece "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_p-conf-ece "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of uncalibrated classifiers.[]{data-label="fig:uncal:cd:all"}](figures/results/crit_diff_uncal_classifiers_p-cw-ece "fig:"){width="\linewidth"}
Figure \[fig:uncal:cd:all\] shows the Critical Difference diagram for all the $8$ metrics. In particular, the MLP and the SVC with linear kernel are always in the best-ranked group and never in the worst. Similarly, random forest is consistently in the best group, but also in the worst group for $4$ of the measures. SVC with radial basis kernel is in the best group $6$ times, but $3$ times in the worst. On the other hand, naive Bayes and Adaboost SAMME are consistently in the worst group and never in the best one. The rest of the classifiers did not show a clear ranking position.
[.4]{} ![Proportion of times each classifier is already calibrated with different p-tests.[]{data-label="fig:uncal:p:ece"}](figures/results/p_table_classifiers_p-conf-ece.pdf "fig:"){width="\linewidth"}
[.4]{} ![Proportion of times each classifier is already calibrated with different p-tests.[]{data-label="fig:uncal:p:ece"}](figures/results/p_table_classifiers_p-cw-ece.pdf "fig:"){width="\linewidth"}
Figures \[fig:uncal:p:cw:ece\] and \[fig:uncal:p:conf:ece\] show the proportion of times each classifier passed the p-conf-ECE and p-cw-ECE statistical test for all datasets and cross-validation folds.
Deep neural networks
--------------------
In this section, we provide further discussion about results from the deep networks experiments. These are given in the form of critical difference diagrams (Figure \[fig:dnn:cd:all\]) and tables (Tables \[table:res:dnn:loss\]-\[table:res:dnn:pece\_cw\]) both including the following measures: error rate, log-loss, Brier score, maximum calibration error (MCE), confidence-ECE (conf-ECE), classwise-ECE (cw-ECE), as well as significance measures p-conf-ECE and p-cw-ECE.
In addition, Table \[table:res:dnn:ms\_vs\_vecs\] compares MS-ODIR and vector scaling on log-loss. In the table, we also added MS-ODIR-zero, which was obtained from the respective MS-ODIR model by replacing the off-diagonal entries with zeroes. Each experiment is replicated three times with different splits of the datasets in order to compare the stability of the methods. In each replication, the best scoring model is written in bold.
Finally, Figure \[fig:res:rd\_ece:class4\] shows that temperature scaling systematically under-estimates class 4 probabilities on the model [c10\_resnet\_wide32]{} on CIFAR-10.
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_Error_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_Brier_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_Loss_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_MCE_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_ECE_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_ECE_CW_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_pECE_v2.pdf "fig:"){width="\linewidth"}
[.49]{} ![Critical difference of the deep neural networks.[]{data-label="fig:dnn:cd:all"}](figures/cd_diag_dnn/crit_diff_pECE_cw_v2.pdf "fig:"){width="\linewidth"}
![Reliability diagrams of c10\_resnet\_wide32 on CIFAR-10: (a) classwise-reliability for class 4 after temperature scaling; (b) classwise-reliability for class 4 after Dirichlet calibration.[]{data-label="fig:res:rd_ece:class4"}](figures/results/figure_RD_ECE_class4.pdf){width="\linewidth"}
---
author:
- 'P. Blay'
- 'M. Ribó'
- 'I. Negueruela'
- 'J. M. Torrejón'
- 'P. Reig'
- 'A. Camero'
- 'I. F. Mirabel'
- 'V. Reglero'
date: 'Received / Accepted'
title: 'Further evidence for the presence of a neutron star in . [*INTEGRAL*]{} and VLA observations[^1] '
---
Introduction
============
The vast majority of High Mass X-ray Binaries (HMXBs) harbour X-ray pulsars (c.f. Bildsten et al. [@bildsten97]), believed to be young neutron stars with relatively strong magnetic fields ($B\sim10^{12}$ G). An important fraction of them are wind-fed systems, in which the pulsar accretes from the radiative wind of an OB supergiant (perhaps, in some cases, a Wolf-Rayet star).
Among the handful of HMXBs not displaying X-ray pulsations, only three show the typical characteristics of accreting black holes (, and ). In four other HMXBs, pulsations have not been discovered in spite of intensive searches, but there is no strong evidence identifying the accreting object as a black hole. In principle, there is no reason to attribute the lack of pulsations in all these systems to any particular characteristic and different models have indeed been proposed to explain some of them. There have been suggestions that , whose counterpart is the B0Ve star , and , identified with the O6.5V((f)) star , may not be accreting binaries after all, but X-ray sources powered by rotational energy from a young non-accreting neutron star (Maraschi & Treves [@maraschi81]; Martocchia et al. [@martocchia05]), although the presence of relativistic radio jets points towards the accretion scenario (Massi et al. [@massi04]; Paredes et al. [@paredes00]). In the case of , optically identified with the O6.5Iaf+ star , a compact object of unknown nature and mass $M_{\rm X}=2.4\pm0.3$ $M_{\odot}$ accretes material from the wind of the massive supergiant (Clark et al. [@clark02]).
The fourth HMXB not displaying pulsations is , identified with the O9p star (Negueruela & Reig [@negueruela01]; henceforth NR01). The relatively high X-ray luminosity of , $L_{\rm X}\sim 10^{35}$ erg s$^{-1}$ (at an estimated distance of 3 kpc; NR01), combined with its spectral shape, makes the presence of a neutron star or a black hole in the system almost unavoidable. There are reasons to believe that the compact object in this system is a neutron star (NR01; Torrejón et al. [@torrejon04]; Masetti et al. [@masetti04]), but the possibility of a black-hole has not been ruled out completely by previous observations. Analysis of the [*RXTE*]{}/ASM X-ray lightcurve revealed a 9.6 d periodicity, which is likely to be the orbital period of a compact object (Corbet & Peele [@corbet01]; Ribó et al. [@ribo05]). Moreover, the X-ray lightcurve displays short aperiodic variability, with changes in the flux by a factor $\sim$10 over timescales of minutes (Saraswat & Apparao [@saraswat92]; NR01), which are typically seen in wind-fed systems, presumably as a consequence of stochastic variability in the wind.
Interestingly, many high-energy sources not showing pulsations are microquasars (containing either black holes or neutron stars), while pulsating sources do not show significant radio emission (Fender & Hendry [@fender00]). shares many characteristics with the well-known microquasar (Paredes et al. [@paredes00], [@paredes02]). Both systems contain a non-supergiant late O-type star (Clark et al. [@clark01]) and a compact object that does not show pulsations (Ribó et al. [@ribo99]; Reig et al. [@reig03]) orbiting in a relatively close orbit when compared to the majority of HMXBs, and both systems show evidence of wind-fed accretion (NR01; McSwain et al. [@mcswain04]) with X-ray luminosities in the range $10^{34}$–$10^{35}$ erg s$^{-1}$. However, an inspection of the NRAO VLA Sky Survey (NVSS, Condon et al. [@condon98]) reveals no radio emission down to a 3$\sigma$ upper limit of 1 mJy from . The apparent lack of both radio emission and pulsations does not fit within either the typical scenario of a pulsar in an HMXB or the microquasar scenario, and a deeper multi-wavelength approach is necessary.
In this work we present new [*INTEGRAL*]{} and VLA observations of the source during the periods 2002 December–2004 September and 2003 May–June, respectively. The possible presence of a cyclotron line, already suggested by the data from [*RXTE*]{} and [*BeppoSAX*]{} (Torrejón et al. [@torrejon04]; Masetti et al. [@masetti04]), and the absence of radio emission are discussed.
Observations and data analysis
==============================
High-energy observations {#data_he}
------------------------
[*INTEGRAL*]{} is a joint European mission in flight since 2002 October, with three on-board high-energy instruments: the Imager on Board [*INTEGRAL*]{} Spacecraft (IBIS), coupled with the [*INTEGRAL*]{} Soft Gamma-Ray Imager (ISGRI) and the Pixellated Imaging Caesium Iodide Telescope (PICsIT), sensitive to $\gamma$-rays from 15 keV up to 10 MeV; the SPectrometer on [*INTEGRAL*]{} (SPI), optimised for spectroscopy in the $20$ keV–$8$ MeV energy range; and the Joint European X-ray Monitor (JEM-X), consisting of twin X-ray monitors, which provides information at lower energies (3–35 keV). An Optical Monitoring Camera (OMC) gives source fluxes in the $V$ (550 nm) band and complements the 3 high-energy instruments. All 4 instruments are co-aligned, allowing simultaneous observations in a very wide energy range. A detailed description of the mission can be found in Winkler et al. ([@winkler03]).
[*INTEGRAL*]{} observed the region around on several occasions during its first 22 months of Galactic Plane Survey scans (GPSs), i.e., from 2002 December to 2004 September. We present in Table \[tab:rev\_list\] a summary of all the [*INTEGRAL*]{} revolutions during which the source was inside the Field Of View (FOV) of ISGRI. In total, the source was observed by ISGRI for $\sim$337 ks, but it was significantly detected only for 27 ks[^2]. For revolutions 70, 74, 145 and 189 only an upper limit is given because the source had quite a marginal position in the FOV of ISGRI and the detection is not significant enough. The JEM-X FOV is smaller and thus data were only collected during those revolutions when [*INTEGRAL*]{} pointed close enough to the source (marked in Table \[tab:rev\_list\] with the $\dag$ symbol)[^3]. Only a few pointings in 2003 May and June fulfill this requirement. Although SPI has the largest FOV, it cannot acquire enough information with one exposure due to the detector design. To achieve an S/N$\sim$10 for a source like , SPI would need around 300 ks. Nevertheless, using SPIROS in TIMING mode (see Skinner & Connell [@skinner03]) a light curve has been attained in the 20–40 keV energy range. The obtained flux values are in good agreement with the ISGRI data, but with larger uncertainties. Therefore, no data from SPI have been used in this analysis. Data reduction has been performed with the standard Offline Analysis Software (OSA) version 4.0, available from the [*INTEGRAL*]{} Science Data Centre (ISDC)[^4]. A detailed description of the software can be found in Goldwurm et al. ([@goldwurm03]), Diehl et al. ([@diehl03]), Westergaard et al. ([@westergaard03]) and references therein.
[llccccc]{} Rev. & Date & MJD & On-source time & Detected time & Mean count rate & Detection level\
& & & (ks) & (ks) & (count s$^{-1}$) &\
26 & 2002 Dec 30–2003 Jan 1 & 52638.43–52640.07 & 14 & — & — & —\
31 & 2003 Jan 14–16 & 52653.32–52655.90 & 12 & — & — & —\
47 & 2003 Mar 03–05 & 52701.15–52701.27 & 8 & — & — & —\
51 & 2003 Mar 15–17 & 52714.85–52714.96 & 8 & 2.2 & 4.1$\pm$0.4 & 9.0\
54 & 2003 Mar 24–26 & 52722.85–52722.93 & 6 & 2.3 & 4.1$\pm$0.4 & 8.0\
55 & 2003 Mar 27–29 & 52727.64–52727.67 & 4 & — & — & —\
59 & 2003 Apr 08–10 & 52737.03–52737.10 & 4 & — & — & —\
62 & 2003 Apr 17–19 & 52746.01–52746.04 & 10 & 7.3 & 3.9$\pm$0.4 & 8.6\
67$\dag$ & 2003 May 01–04 & 52761.26–52762.45 & 8 & 6.5 & 5.9$\pm$0.5 & 12.6\
70 & 2003 May 10–13 & 52769.93–52770.10 & 12 & — & $<7.1$ & 4.1\
74 & 2003 May 22–25 & 52781.92–52782.12 & 14 & — & $<4.7$ & 2.7\
79 & 2003 Jun 06–09 & 52796.88–52797.07 & 14 & — & — & —\
82 & 2003 Jun 15–18 & 52805.94–52806.15 & 14 & 2.1 & 3.2$\pm$0.3 & 9.5\
87$\dag$ & 2003 Jun 30–Jul 03 & 52820.92–52821.08 & 12 & 6.5 & 5.1$\pm$0.4 & 11.2\
92 & 2003 Jul 15–18 & 52835.84–52836.00 & 12 & — & — & —\
142 & 2003 Dec 12–14 & 52985.44–52985.60 & 12 & — & — & —\
145 & 2003 Dec 21–23 & 52994.41–52994.62 & 12 & — & $<7.6$ & 2.9\
153 & 2004 Jan 14–16 & 53019.25–53019.45 & 14 & — & — & —\
162 & 2004 Feb 10–12 & 53045.49–53045.69 & 14 & — & — & —\
177 & 2004 Mar 26–29 & 53090.55–53091.48 & 16 & — & — & —\
181 & 2004 Apr 07–10 & 53102.88–53103.05 & 15 & — & — & —\
185 & 2004 Apr 19–22 & 53114.56–53114.70 & 13 & — & — & —\
189 & 2004 Apr 30–May 03 & 53126.47–53126.63 & 15 & — & $<4.4$ & 4.8\
193 & 2004 May 12–15 & 53138.39–53138.42 & 4 & — & — & —\
202 & 2004 Jun 08–11 & 53165.48–53165.67 & 15 & — & — & —\
210 & 2004 Jul 02–05 & 53189.33–53189.51 & 15 & — & — & —\
229 & 2004 Aug 29–Sep 01 & 53246.96–53247.13 & 15 & — & — & —\
233 & 2004 Sep 09–12 & 53258.01–53258.80 & 8 & — & — & —\
234 & 2004 Sep 12–15 & 53260.99–53261.92 & 17 & — & — & —\
Archived [*RXTE*]{}/PCA lightcurves of four long observations made between 2001 October 12 and 20 have also been used. The Proportional Counter Array, PCA, consists of five co-aligned Xenon proportional counter units with a total effective area of $\sim$6000 cm$^{2}$ and a nominal energy range from 2 keV to over 60 keV (Jahoda et al. [@jahoda96]). In order to produce lightcurves only the top Xenon layer in standard2 mode was used. The durations of these observations range from 17.7 to 29.8 ks and the complete integration time spans $\sim$100 ks. A more detailed description of these observations is given in Torrejón et al. ([@torrejon04]).
Radio observations
------------------
We observed with the NRAO[^5] Very Large Array (VLA) at 8.4 GHz (3.6 cm wavelength) on two different epochs: 2003 May 12 from 7:05 to 8:00 and from 11:40 to 12:52 UT (average MJD 52771.4, during [*INTEGRAL*]{} revolution 70) with the VLA in its D configuration, and 2003 May 20 from 15:27 to 17:20 UT (MJD 52779.7, during [*INTEGRAL*]{} revolution 73) with the VLA during the reconfiguration from D to A. The observations were conducted devoting 10 min scans on , preceded and followed by 2 min scans of the VLA phase calibrator . The primary flux density calibrator used was (). The data were reduced using standard procedures within the NRAO [aips]{} software package.
Results
=======
High Energies
-------------
### Timing
Analysis of the X-ray lightcurves clearly shows that the source is variable on all timescales. However, except for the 9.6 d modulation observed in [*RXTE*]{}/ASM data (see Corbet & Peele [@corbet01]; Ribó et al. [@ribo05]) and believed to be the orbital period, no other periodic variability has been detected so far. Unfortunately, the [*INTEGRAL*]{} coverage of the source is not sufficient to test for the presence of the orbital periodicity. Therefore, orbital period analysis is beyond the scope of this paper.
Pulse period analysis gave negative results for both our ISGRI and JEM-X datasets. This was expected, as previous searches on similar timescales had also failed (see NR01, Corbet & Peele [@corbet01], Torrejón et al. [@torrejon04] and Masetti et al. [@masetti04]).
ISGRI data from consecutive pointings were joined together when possible and rebinned to 50 s to search for possible longer periods. Nothing was found up to periods of $\sim$1 h. The 20–40 keV lightcurve and power spectrum for a time-span of 6950 seconds during revolutions 67 and 87 can be seen in Fig. \[fig:lc\]. The difference between the two lightcurves and between the corresponding power spectra is apparent. A quasi-periodic feature at $\sim$0.002 Hz ($\sim$500 s) can be seen in data from revolution 87, but it is not present at other epochs. The timing behaviour of the source seems to be different in every pointing.
Little attention has been paid so far to intermediate periods (of the order of hours), perhaps because intermediate periods would be difficult to detect in the [*RXTE*]{}/ASM data, especially when points are filtered and rebinned as one-day averages to keep statistical significance. We therefore searched the [*RXTE*]{}/PCA lightcurves described in Sect. \[data\_he\] for intermediate-period pulsations to test the possible presence of a slowly rotating NS. Unfortunately, the gaps due to the satellite low-Earth orbit are rather large in comparison with the periods searched, which somewhat hampers the search. We used epoch folding and Lomb-Scargle periodogram techniques, with negative results. We show in Fig. \[fig:powspec\] the power spectrum analysis of the whole time span. No significant period is detected, particularly in the interval \[0.95–5.5\]$\times 10^{-4}$ Hz, which corresponds, approximately, to periods from 3 hours to 30 minutes.
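As an illustration of the kind of search performed, the sketch below computes a Lomb-Scargle periodogram over the quoted frequency band and folds the light curve on a trial period; the time and count-rate arrays, the frequency-grid density and the number of phase bins are placeholders rather than the actual analysis settings.

```python
import numpy as np
from astropy.timeseries import LombScargle

def period_search(time_s, rate, f_min=0.95e-4, f_max=5.5e-4, n_freq=5000):
    """Lomb-Scargle power over the ~30 min to ~3 h band searched in the text."""
    freq = np.linspace(f_min, f_max, n_freq)
    power = LombScargle(time_s, rate).power(freq)
    return freq, power

def epoch_fold(time_s, rate, period_s, n_bins=16):
    """Fold the light curve on a trial period and average the rate per phase bin."""
    phase = (time_s % period_s) / period_s
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, edges) - 1
    return np.array([rate[idx == b].mean() for b in range(n_bins)])
```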
Searching inside individual observations, we found some periodicities, although with low significance. In the [*RXTE*]{}/PCA observation 60071-01-03 we find a possible period of 6900 s, while in observation 60071-01-04 we find 8620 s. Folding the entire lightcurve on either of these periods results in no pulsed signal. We conclude therefore that the analysis of the entire lightcurve does not deliver any significant period in the range explored. This result leaves only periods in the range from a few hours to 1 d still to be explored. In order to do this, a suitably long observation with a high Earth orbit satellite, like [*INTEGRAL*]{}, would be required.
We show in Fig. \[fig:bands\] the long-term lightcurves of in different energy ranges, from the 2–12 keV of the [*RXTE*]{}/ASM data up to 80 keV for the ISGRI data, spanning 120 d. As can be seen in the 20–40 keV lightcurve, an increase in brightness occurred during revolution 67 (MJD 52761.36). The source brightness increased threefold over a timespan of the order of half an hour (see also the top left panel in Fig. \[fig:lc\]).
### Spectral Analysis
From the whole timespan during which ISGRI collected data from , the source was inside both the Fully Coded Field Of View (FCFOV) of ISGRI and the JEM-X FOV only during one pointing in revolution 67 and 3 pointings in revolution 87. Therefore, the available spectrum from revolution 67 and the mean spectrum from revolution 87 were used for the spectral analysis. Systematic errors of 10% for ISGRI and 5% for JEM-X were added to our data sets in order to perform a more realistic spectral analysis[^6]. The software package used was [xspec]{} 11.2 (Arnaud [@arnaud96]).
With the aim of comparing with previously published data, the comptonisation model of Sunyaev & Titarchuk ([@sunyaev80]), improved by Titarchuk ([@titarchuk94]) including relativistic effects, implemented in [xspec]{} as [compTT]{}, and a powerlaw model, modified to include photon absorption and a high energy cut-off, were chosen to fit the data. For the comptonisation model, the emitting region temperatures derived were 10$\pm$3 and 13$\pm$8 keV for the data of revolutions 67 and 87, respectively. The fits were acceptable, with corresponding $\chi^2_{\rm Red}$ of 1.3 for 173 degrees of freedom (DOF) in the first observation, and $\chi^2_{\rm Red}$ of 1.2 for 176 DOF in the second observation. The powerlaw parameters of both observations are listed in Table \[table:comparison\].
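For reference, the phenomenological cut-off power law used here and in Table \[table:comparison\] can be written down and fitted outside [xspec]{} as well. The sketch below, using a simple least-squares fit, omits photoelectric absorption and treats the energy grid, fluxes, errors and starting values as placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def cutoff_powerlaw(E, K, gamma, E_cut, E_fold):
    """Power law with a high-energy cut-off: K * E**(-gamma) below E_cut,
    multiplied by exp(-(E - E_cut) / E_fold) above the cut-off energy."""
    flux = K * E ** (-gamma)
    return np.where(E > E_cut, flux * np.exp(-(E - E_cut) / E_fold), flux)

# energies (keV), fluxes and errors stand in for the unfolded JEM-X + ISGRI
# spectrum; p0 gives rough starting values for K, Gamma, E_cut and E_fold
# popt, pcov = curve_fit(cutoff_powerlaw, energies, fluxes, sigma=errors,
#                        p0=[1.0, 1.7, 10.0, 25.0])
```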
[@l@[ ]{}c@[ ]{}c@[ ]{}c@[ ]{}c@[ ]{}l@r@[ ]{}l@c@]{} Ref. (Mission, year) & $\Gamma$ & $E_{\rm cut}$ & $E_{\rm fold}$ & $N_{\rm H} \times10^{22}$ & $\chi^{2}_{\rm Red}$ & Flux $\times$10$^{-10}$ & Energy range & $E_{\rm cycl}$\
& & (keV) & (keV) & (atom cm$^{-2}$) & (DOF) & (erg s$^{-1}$ cm$^{-2}$) & (keV) & (keV)\
Negueruela & Reig [@negueruela01] ([*RXTE*]{}, 1997) & 1.7$\pm$0.3 & 7.4$\pm$0.2 & 17.5$\pm$0.8 & 4.7$\pm$0.2 & 0.9(56) & 4.8 & 2.5–30 & not reported\
Corbet & Peele [@corbet01] ([*RXTE*]{}, 1997-1) & 1.71$\pm$0.03 & 7.3$\pm$0.1 & 17.3$\pm$0.6 & 4.6$\pm$0.2 & 0.82$^a$ & 3.1 & 2–10 & not reported\
([*RXTE*]{}, 1997-2) & 1.12$\pm$0.12 & 5.3$\pm$0.2 & 10.5$\pm$1.2 & 2.7$\pm$0.7 & 0.75$^a$ & 1.1 & 2–10 & not reported\
Torrejón et al. [@torrejon04] ([*RXTE*]{}, 1997) & 1.6$\pm$0.1 & 7.6$\pm$0.4 & 16.3$\pm$1.2 & 4.5$\pm$0.4 & 0.71(52) & 2.7 & 2–10 & not reported\
([*RXTE*]{}, 2001) & 1.6$\pm$0.1 & 4.3$\pm$0.3 & 20$\pm$2 & 4.6$\pm$0.1 & 1.27(49) & 1.3 & 2–10 & 29$^{b}$\
([*BeppoSAX*]{}, 1998) & 1.0$\pm$0.2 & 7.8$\pm$0.5 & 11$\pm$3 & 1.1$\pm$0.3 & 1.32(113) & 0.4 & 2–10 & 35$\pm$5$^{b}$\
Masetti et al. [@masetti04] ([*BeppoSAX*]{}, 1998) & 0.95$^{+0.11}_{-0.14}$ & 4.3$^{+0.6}_{-0.5}$ & 10.6$^{+2.7}_{-2.0}$ & 0.88$^{+0.21}_{-0.19}$ & 1.1(219) & 0.4 & 2–10 & 35$\pm$5$^{c}$\
This paper ([*INTEGRAL*]{}, 2003, Rev. 67) & 1.8$\pm$0.7 & 13$\pm$5 & 22$\pm$6 & 1.0(fixed) & 1.2(154) & 15.9 & 4–150 & 32$\pm$5\
This paper ([*INTEGRAL*]{}, 2003, Rev. 87) & 1.7$^{+0.3}_{-0.4}$ & 11$\pm$5 & 29$^{+8}_{-7}$ & 1.0(fixed) & 1.0(153) & 8.3 & 4–150 & 32$\pm$3\
$^{a}$ No information about the DOF is reported in the reference. $^{b}$ No significance reported. $^{c}$ At 2$\sigma$ confidence level.
Both models yield a 4–150 keV flux of $\sim$16$\times$10$^{-10}$ erg s$^{-1}$ cm$^{-2}$ for revolution 67 and $\sim$8$\times$10$^{-10}$ erg s$^{-1}$ cm$^{-2}$ for revolution 87. Assuming a distance to the source of 3 kpc (NR01), its luminosity amounts to $\sim$1.7$\times$10$^{36}$ erg s$^{-1}$ and $\sim$0.9$\times$10$^{36}$ erg s$^{-1}$, respectively. Around $\sim$50% of the total luminosity lies in the 4–12 keV energy band, that is $\sim$8.5$\times$10$^{35}$ erg s$^{-1}$ for revolution 67 and $\sim$4.5$\times$10$^{35}$ erg s$^{-1}$ for revolution 87. We notice that during these observations the source appears brighter than in any previous observation. The [*RXTE*]{}/ASM lightcurve confirms that the flux was high in the 2–12 keV band as well.
We show in Fig. \[fig:spe\_r67\_r87\] the [*INTEGRAL*]{} spectra of for revolutions 67 and 87. Both spectra suggest the presence of an absorption feature around $\sim$30 keV, as already noticed by Torrejón et al. ([@torrejon04]) and Masetti et al. ([@masetti04]) in [*RXTE*]{} and [*BeppoSAX*]{} data (see Table \[table:comparison\]). An absorption feature modelled with the [cyclabs]{} model (in [xspec]{} notation) was added to the powerlaw model. In the revolution 67 spectrum, the absorption feature was fitted at an energy of 32$\pm$5 keV for a fixed line width of 3 keV. The same feature is apparent in the spectrum from revolution 87, where it was fitted at 32$\pm$3 keV[^7]. Except for the normalization factors, the parameters fitted to the datasets of both revolutions are mutually compatible within the errors (see Table \[table:comparison\]).
There are well known calibration problems in the ISGRI Response Matrix Function (RMF) that may cast some doubt on the reality of the absorption feature reported here. In order to investigate whether this feature is an instrumental effect, we normalised the 20–60 keV spectra of and to their respective continua modeled by a powerlaw, and then divided the normalised spectrum by that of the . We have chosen a observation as close in time as possible to our data and with similar off-axis angles, to ensure that the RMF and off-axis effects are as similar as possible. We show in Fig. \[fig:ratio\_to\_crab\] the observed spectra (top panel), their ratio to the powerlaw model (middle panel), and the ratio between the former ratios (bottom panel). The absorption feature around 32 keV is still seen. The quality of the data does not allow us to state that the detection is statistically significant, but the likely presence of a feature at this position had already been reported in the analysis of two other independent datasets obtained by two different satellites (see Table \[table:comparison\]). As it has been seen by three different instruments at different times, the existence of this absorption feature is strongly suggested. Such features in X-ray spectra are generally attributed to Cyclotron Resonance Scattering Features (CRSFs) (see, e.g., Coburn et al. [@coburn02] and references therein).
Motivated by the possible presence of this CRSF, we have summed up images from those [*INTEGRAL*]{} revolutions with significant ISGRI detections. The effective exposure time of this mosaic amounts to $\sim$27 ks, and we show its extracted spectrum in Fig. \[fig:mosa\_spe\]. We note that spectral shape changes with luminosity have been reported in Saraswat & Apparao ([@saraswat92]) and NR01. Therefore, by summing up data taken at different epochs we might be losing spectral shape information. However, our goal is not to study the shape of the continuum, but to achieve an improved signal-to-noise ratio at the CRSF position, which is suggested by both the revolution 67 and revolution 87 spectra to be at $\sim$32 keV. The best fit to the continuum of the new spectrum was a comptonisation model of soft photons by matter undergoing relativistic bulk-motion, i.e. [bmc]{} in [xspec]{} notation (Shrader & Titarchuk [@shrader99]), which provided a $\chi^{2}_{\rm Red}$ of 1.7 for 6 degrees of freedom (see top panel in Fig. \[fig:mosa\_spe\]). We added a CRSF absorption feature, using the [xspec]{} [cyclabs]{} model, at 32$\pm$2 keV to the [bmc]{} model, and obtained a slightly improved fit with $\chi^{2}_{\rm Red}$ of 1.3 for 5 degrees of freedom. Finally, we also fitted the data by adding a Lorentzian profile in absorption to the [bmc]{} model. In this case the $\chi^{2}_{\rm Red}$ lowered to 1.1 for 5 degrees of freedom (see bottom panel in Fig. \[fig:mosa\_spe\]). The centre of the line was located at 31.5$\pm$0.5 keV and its FWHM was found to be 0.015$\pm$0.005 keV. The normalization of the line, 1.5$^{+0.7}_{-0.8}\times$10$^{-3}$ count s$^{-1}$ keV$^{-1}$ at a 68% confidence level (1$\sigma$), yields a significance of the line of $\sim$2$\sigma$[^8]. We note that the significance of this detection might be somewhat influenced by changes in the continuum during the different observations. On the other hand, although the significance also depends on the model chosen to fit the continuum, any model that properly fits the obtained continuum will reveal the presence of a $\sim$2$\sigma$ absorption around 32 keV, as can be seen from the data of Fig. \[fig:mosa\_spe\]. Thus we can conclude that the presence of a CRSF is strongly suggested by the data. This result, when combined with the previous claims based on [*BeppoSAX*]{} and [*RXTE*]{} data (Torrejón et al. [@torrejon04]; Masetti et al. [@masetti04]), gives evidence for the presence of this CRSF in .
Radio
-----
No radio emission at 8.4 GHz was detected, with a 3$\sigma$ upper limit of 0.042 mJy on 2003 May 12 (MJD 52771.4) and a 3$\sigma$ upper limit of 0.066 mJy on 2003 May 20 (MJD 52779.7). We concatenated all the data and obtained a final 3$\sigma$ upper limit of 0.039 mJy. The resulting image is shown in Fig. \[fig:vla\].
Discussion
==========
Conflicts in the neutron star scenario
--------------------------------------
The aperiodic variability of the X-ray emission from favours the idea that the X-ray source is powered by wind-fed accretion onto a compact object. The X-ray luminosity of the source combined with its X-ray spectral shape likely excludes the possibility of a white dwarf (see, e.g., de Martino et al. [@demartino04] for accreting white dwarf X-ray spectra). The compact object must be, therefore, a neutron star or black hole.
There are two main difficulties for accepting as a typical wind-accreting neutron star. The first one is the lack of pulsations, as most other wind-fed systems are X-ray pulsars. In principle, this might result from a geometrical effect: if the angle between the spin axis and the magnetic axis of the neutron star is close to zero or the whole system has a very low inclination angle, all the high-energy radiation seen could be coming from a permanently observed single pole of the neutron star. The system inclination is unlikely to be very small, as the projected $v\sin i$ for the optical companion is not particularly small (NR01), unless there is a very strong misalignment between the rotation axis of the optical star and the orbit. However, if the angles between the spin and magnetic axes of neutron stars are drawn from a random distribution, there is a non-negligible chance that for some systems they will be aligned. Similar scenarios have been proposed to explain the absence of pulsations from (White et al. [@white83]) and also from the low-mass X-ray binary (Masetti et al. [@masetti02]), though in the latter case there is no conclusive evidence that this system is sufficiently young to show pulsations.
The second, stronger argument is the expected X-ray luminosity. We can consider a canonical neutron star with 10 km radius and 1.4 $M_\odot$ accreting from the fast wind of a low-luminosity O9III–V star in a 9.6 d orbit. Following the Bondi-Hoyle approximation, the accretion luminosity obtained, which is an upper limit to the X-ray luminosity, is $\sim$1$\times$10$^{34}$ erg s$^{-1}$ (see Reig et al. [@reig03] for details about the method). In contrast, the observed X-ray luminosity of is in the range $\sim$10$^{35}$–10$^{36}$ erg s$^{-1}$, therefore comparable to those of HMXBs with OB supergiant donors (see Negueruela [@negueruela04] and references therein), which are believed to have mass-loss rates more than one order of magnitude higher than a O9III–V star (Leitherer [@leitherer88]; Howarth & Prinja [@howarth89]). In addition, we note that our estimate for the semimajor axis of , of 55–60 $R_\odot$, is comparable to the highest values of semimajor axes in supergiant systems (see Kaper et al. [@kaper04]). Therefore, a close orbit cannot be invoked to solve the problem of the high X-ray luminosity.
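A back-of-the-envelope version of this Bondi-Hoyle estimate is sketched below using astropy units; the wind mass-loss rate and terminal velocity are purely illustrative values for a low-luminosity O9III–V star (the $\sim$1$\times$10$^{34}$ erg s$^{-1}$ quoted above comes from the full method of Reig et al. [@reig03]), and the orbital velocity is neglected against the wind velocity.

```python
import astropy.units as u
from astropy.constants import G

def bondi_hoyle_lx(M_ns=1.4 * u.Msun, R_ns=10 * u.km, a=57.5 * u.Rsun,
                   mdot_wind=1e-7 * u.Msun / u.yr, v_wind=1000 * u.km / u.s):
    """Order-of-magnitude accretion luminosity: accretion radius
    r_acc = 2GM/v^2, captured fraction (r_acc / 2a)^2 of a spherically
    symmetric wind, and L_X = G M Mdot_acc / R_ns."""
    r_acc = 2 * G * M_ns / v_wind**2
    mdot_acc = mdot_wind * (r_acc / (2 * a)) ** 2
    return (G * M_ns * mdot_acc / R_ns).to(u.erg / u.s)

# bondi_hoyle_lx()  # of order 10^34 erg/s for these illustrative inputs
```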
Excluding the black hole scenario
---------------------------------
If a black hole were present in the system, the photon index and luminosities of our [*INTEGRAL*]{} observations would indicate that the source is in a low/hard state (see McClintock & Remillard [@mcclintock04] and references therein). Our radio observations took place on 2003 May 12 and 20, i.e., during [*INTEGRAL*]{} revolutions 70 and 73, right between the [*INTEGRAL*]{} observations of revolutions 67 (2003 May 01–04) and 87 (2003 Jun 30–Jul 03).
Gallo et al. ([@gallo03]) found an empirical correlation between the soft X-ray flux (in the range 2–11 keV) and the centimetre radio emission (with observed flat spectrum in the range 4.9–15 GHz) for black hole binary systems in the low/hard state, of the form: $S_{\rm radio}=(223\pm156)\times (S_{\rm X})^{+0.7}$, where $S_{\rm radio}$ is the radio flux density scaled to 1 kpc, $S_{\rm X}$ is the X-ray flux in Crab units scaled to 1 kpc, and the uncertainty in the multiplying factor is the non-linear 1$\sigma$ error of their fit. Therefore, by using a measured X-ray flux we can compute the expected radio emission of a source in case it is a black hole.
We obtained the flux from in the 2–11 keV band from our JEM-X data, obtaining 7.2 and 4.0$\times10^{-10}$ erg s$^{-1}$ cm$^{-2}$ for the 2003 May and June observations, respectively. This flux was translated to units by measuring the flux from the in the 2–11 keV band, using JEM-X data from an [*INTEGRAL*]{} observation close in time to our pointings. The flux was found to be $1.8\times10^{-8}$ erg s$^{-1}$ cm$^{-2}$, leading to fluxes of 40 and 22 mCrab for during revolutions 67 and 87, respectively. From this, and using $N_{\rm H}= 1.0\times10^{22}$ atom cm$^{-2}$ (average of the values obtained by Torrejón et al. [@torrejon04] and Masetti et al. [@masetti04] from [*BeppoSAX*]{} data) we computed the unabsorbed corrected flux following equation (1) of Gallo et al. ([@gallo03]), and then the resulting flux in Crab units scaled to 1 kpc distance (assuming a distance of 3 kpc to ). The relation discussed above then predicts a radio flux density, scaled back to 3 kpc, of 12.6$\pm$8.8 mJy at the time of revolution 67 and of 8.3$\pm$5.8 mJy for revolution 87 (where the errors come directly from the 1$\sigma$ uncertainties given in Gallo et al. [@gallo03] for the parameters of their fit). Thus, for revolution 67 the expected radio emission would be in the range 3.8–21.4 mJy, and for revolution 87 in the range 2.5–14.2 mJy.
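The arithmetic of this prediction is simple enough to sketch; the function below applies the central value of the Gallo et al. relation without the small absorption correction described above, so the numbers it returns are close to, but not exactly, the quoted 12.6 and 8.3 mJy.

```python
def expected_radio_mjy(fx_crab, d_kpc=3.0, norm=223.0, slope=0.7):
    """Low/hard-state prediction: scale the X-ray flux (Crab units) to 1 kpc,
    apply S_radio = norm * S_X**slope (mJy at 1 kpc), and scale the radio
    flux density back to the source distance."""
    fx_at_1kpc = fx_crab * d_kpc ** 2
    return norm * fx_at_1kpc ** slope / d_kpc ** 2

# expected_radio_mjy(0.040)  # revolution 67, ~12 mJy
# expected_radio_mjy(0.022)  # revolution 87, ~8 mJy
```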
We note that the lower expected radio flux density of 2.5 mJy is already more than 60 times greater than the 0.039 mJy 3$\sigma$ upper limit found with our VLA observations. Obviously, it can be argued that our observations were not simultaneous. During revolution 70 (which was in coincidence with the first radio observations) the flux found from ISGRI data is not significant enough and, unfortunately, the source was outside the FOV of JEM-X, which could have provided an X-ray flux suitable for this analysis. Nevertheless, we point out that the [*RXTE*]{}/ASM count rate during our first radio observation is very similar to that measured during revolution 87, so it is reasonable to compare the obtained 2.5 mJy limit with our measured 0.042 mJy 3$\sigma$ upper limit on that day, giving again a difference of a factor $\sim$60. It could also be argued that the source could have experienced a transition to the high/soft state, that would naturally prevent the detection of radio emission. However, in such a case the [*RXTE*]{}/ASM count rates should have increased considerably, while during both radio observations the count rates were similar or lower than during revolutions 67 and 87, when the photon indexes were typical of low/hard states. In summary, if the correlation between X-ray emission and radio emission reflects indeed a general property of black hole systems, we conclude that there is not a black hole in .
Moreover, Fender & Hendry ([@fender00]) show that all Galactic persistent black holes have detectable radio emission. Since the source is persistent and does not show any detectable radio emission, it cannot host a black hole. Systems containing magnetised neutron stars ($B\gtrsim10^{11}$ G), on the other hand, do not show detectable radio emission.
The cyclotron feature
---------------------
The indication of a cyclotron feature centred at 32 keV strongly suggests the presence of a magnetic neutron star, in good agreement with the lack of radio emission. [*INTEGRAL*]{} is the third mission reporting the likely detection of this absorption feature (see Table \[table:comparison\]) and, even if none of the detections can be considered statistically significant, the fact that it appears in three independent datasets cannot be ignored.
If the line is indeed a CRSF we can compute the value of the magnetic field in the scattering region by means of the equation $[B/10^{12}~{\rm G}]=[E_{\rm cycl}/11.6~{\rm keV}]\,(1+z)$, where $z$ is the gravitational redshift at which we see the region. Considering that the line is produced at the surface of a canonical neutron star of 1.4 $M_\odot$ with a radius of 10 km, the gravitational redshift amounts to $z$=0.3 (see, e.g., Kreykenbohm et al. [@kreykenbohm04]), and from the position of the line centre at 32 keV we obtain a magnetic field of $3.6\times10^{12}$ G. This value, in agreement with those found by Torrejón et al. ([@torrejon04]) and Masetti et al. ([@masetti04]), is typical of magnetic neutron stars, and well within the range of $1.3$–$4.8\times10^{12}$ G obtained by Coburn et al. ([@coburn02]) for a sample of ten X-ray pulsars displaying CRSFs (see their table 7).
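The corresponding arithmetic is reproduced below only as a convenience check.

```python
# B[10^12 G] = (E_cycl / 11.6 keV) * (1 + z), with z = 0.3 for a canonical neutron star
E_cycl_keV, z = 32.0, 0.3
print(f"B = {(E_cycl_keV / 11.6) * (1.0 + z):.1f} x 10^12 G")   # -> 3.6 x 10^12 G
```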
One is led to the conclusion that this is the first known system in which an accreting magnetic neutron star does not appear as an X-ray pulsar. In principle, the possibility of very slow pulsations cannot be discarded. The wind-accreting X-ray source , with a B1 supergiant donor (Reig et al. [@reig96]), shows pulsations with a period of $\sim$2.8 h (Finley et al. [@finley94]), and its orbital period of $\sim$12 d is similar to that of our source. Our timing analysis rules out significant pulsations in the range 0.5–3.0 h. A modulation at a period of several hours is still possible, as the existing datasets do not strongly constrain this period range. However, it seems more logical to conclude that geometrical effects are responsible for the lack of pulsations.
Conclusions
===========
We present the first [*INTEGRAL*]{} GPS results on the source, together with contemporaneous VLA observations. A broad high-energy spectrum (4–150 keV), joining JEM-X and ISGRI data, has been extracted and fitted with spectral models similar to those used in the analysis of previously published data obtained with other satellites. The evidence for the presence of a cyclotron line is strengthened, as [*INTEGRAL*]{} becomes the third high-energy mission to observe a possible feature at 32$\pm$3 keV. If the feature is indeed a cyclotron line, it indicates a magnetic field strength of $3.6\times10^{12}$ G, typical of magnetised neutron stars.
Our VLA radio observations fail to detect the source down to a very low flux level, indicating that any possible radio emission is at least 60 times weaker than what would be expected from a black hole system in the low/hard state. This lack of radio detection is again compatible with the presence of a magnetised neutron star. The source appears to be the first known system containing an accreting neutron star that does not show up as an X-ray pulsar, most likely due to a simple geometrical effect.
Longer high-energy exposures, such as an [*INTEGRAL*]{} long pointing, are needed to improve the S/N ratio to a level that will allow the confirmation of the presence of the cyclotron feature and the study of possible changes of its profile and energy with time and luminosity. Such an observation would also allow a search for very long (several hours) or very weak pulsations.
We are grateful to the VLA Scheduling Committee, which allowed us to conduct the observations as an [*ad hoc*]{} proposal. We thank Silvia Martínez Núñez for very useful discussions about JEM-X data and Elena Gallo for useful clarifications on the X-ray/radio correlation. We acknowledge useful comments and clarifications from Nicolas Produit and [*INTEGRAL*]{} Science Data Center members. We acknowledge an anonymous referee for detailed and useful comments that helped to improve the paper. This research is supported by the Spanish Ministerio de Educación y Ciencia (former Ministerio de Ciencia y Tecnología) through grants AYA2001-3092, ESP-2002-04124-C03-02, ESP-2002-04124-C03-03 and AYA2004-07171-C02-01, partially funded by the European Regional Development Fund (ERDF/FEDER). P.B. acknowledges support by the Spanish Ministerio de Educación y Ciencia through grant ESP-2002-04124-C03-02. M.R. acknowledges support by a Marie Curie Fellowship of the European Community programme Improving Human Potential under contract number HPMF-CT-2002-02053. I.N. is a researcher of the programme [*Ramón y Cajal*]{}, funded by the Spanish Ministerio de Educación y Ciencia and the University of Alicante, with partial support from the Generalitat Valenciana and the European Regional Development Fund (ERDF/FEDER). This research has made use of the NASA’s Astrophysics Data System Abstract Service, and of the SIMBAD database, operated at CDS, Strasbourg, France.
Arnaud, K. A. 1996, in Astronomical Data Analysis Software and Systems V, ASP Conf. Ser., 101, 17
Bildsten, L., Chakrabarty, D., Chiu, J., et al. 1997, ApJS, 113, 367
Clark, J. S., Reig, P., Goodwin, S. P., et al. 2001, A&A, 376, 476
Clark, J. S., Goodwin, S. P., Crowther, P. A., et al. 2002, A&A, 392, 909
Coburn, W., Heindl, W. A., Rothschild, R. E., et al. 2002, ApJ, 580, 394
Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, AJ, 115, 1693
Corbet, R. H. D., & Peele, A. G. 2001, ApJ, 562, 936
Diehl, R., Baby, N., Beckmann, V., et al. 2003, A&A, 411, L117
de Martino, D., Matt, G., Belloni, T., Haberl, F., & Mukai, K. 2004, A&A, 419, 1009
Fender, R. P., & Hendry, M. A. 2000, MNRAS, 317, 1
Finley, J. P., Taylor, M., & Belloni, T. 1994, ApJ, 429, 356
Gallo, E., Fender, R. P., & Pooley, G. G. 2003, MNRAS, 344, 60
Goldwurm, A., David, P., Foschini, L., et al. 2003, A&A, 411, L223
Howarth, I. D., & Prinja, R. K. 1989, ApJS, 69, 527
Jahoda, K., Swank, J. H., Giles, A. B., et al. 1996, in EUV, X-ray, and Gamma-Ray Instrumentation for Astronomy VII, ed. O. H. Siegmund, & M. A. Gummin, SPIE, 2808, 59
Kaper, L., van der Meer, A., & Tijani, A. H. 2004, in Proc. of IAU Colloquium 191, Revista Mexicana de Astronomía y Astrofísica (Serie de Conferencias), Vol. 21, p. 128
Kreykenbohm, I., Wilms, J., Coburn, W., et al. 2004, A&A, 427, 975
Leitherer, C. 1988, ApJ, 326, 356
Maraschi, L., & Treves, A. 1981, MNRAS, 194, 1P
Martocchia, A., Motch, C., & Negueruela, I. 2005, A&A, 430, 245
Masetti, N., Dal Fiume, D., Cusumano, G., et al. 2002, A&A, 382, 104
Masetti, N., Dal Fiume, D., Amati, L, et al. 2004, A&A, 423, 311
Massi, M., Ribó, M., Paredes, J. M., et al. 2004, A&A, 414, L1
McClintock, J. E., & Remillard, R. A. 2004, in Compact Stellar X-Ray Sources, ed. W. H. G. Lewin & M. van der Klis (Cambridge University Press) in press \[[arXiv:astro-ph/0306213]{}\]
McSwain, M. V., Gies, D. R., Huang, W., et al. 2004, ApJ, 600, 927
Negueruela, I., & Reig, P. 2001, A&A, 371, 1056 (NR01)
Negueruela, I. 2004, in proceedings of ’The Many Scales of the Universe-JENAM 2004 Astrophysics Reviews, Kluwer Academic Publishers, eds. J. C. del Toro Iniesta et al., \[[arXiv:astro-ph/0411759]{}\]
Paredes, J. M., Martí, J., Ribó, M., & Massi, M. 2000, Science, 288, 2340
Paredes, J. M., Ribó, M., Ros, E., Martí, J., & Massi, M. 2002, A&A, 393, L99
Protassov, R., van Dyk, D. A., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2002, ApJ, 571, 545
Reig, P., Chakrabarty, D., Coe, M. J., et al. 1996, A&A, 311, 879
Reig, P., Ribó, M., Paredes, J. M., & Martí, J. 2003, A&A, 405, 285
Ribó, M., Reig, P., Martí, J., & Paredes, J. M. 1999, A&A, 347, 518
Ribó, M., Negueruela, I., Torrejón, J. M., Blay, P., & Reig, P. 2005, A&A, submitted
Saraswat, P., & Apparao, K. M. V. 1992, ApJ, 401, 678
Shrader, C. R. & Titarchuk, L. 1999, ApJ, 521, L121
Skinner, G., & Connell, P. 2003, A&A, 411, L123
Sunyaev, R. A., & Titarchuk, L. G. 1980, A&A, 86, 121
Titarchuk, L. 1994, ApJ, 434, 570
Torrejón, J. M., Kreykenbohm, I., Orr, A., Titarchuk, L., & Negueruela, I. 2004, A&A, 423, 301
Westergaard, N. J., Kretschmar, P., Oxborrow, C. A., et al. 2003, A&A, 411, L257
White, N. E., Swank, J. H., & Holt, S. S. 1983, ApJ, 270, 711
Winkler, C., Courvoisier, T. J.-L., Di Cocco, G., et al. 2003, A&A, 411, L1
[^1]: Based on observations with [*INTEGRAL*]{}, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with the participation of Russia and the USA.
[^2]: A detection is considered for ISGRI when the detection level, which is given by the software package and is not directly related to the number of $\sigma$ above background, lies above a value of 8. For the typical exposure times of GPS pointings ($\sim2$ ks), the sensitivity limit of ISGRI lies around 2.5 count s$^{-1}$ at 20–40 keV energy range.
[^3]: A detection is considered for JEM-X when the detection level, which is given by the software package and is not directly related to the number of $\sigma$ above background, lies above a value 20.
[^4]: [http://isdc.unige.ch/index.cgi?Soft+download]{}
[^5]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
[^6]: Private communication from the teams of the instruments.
[^7]: An F-test was applied to the spectral fits of revolution 87, which has a better signal-to-noise ratio than that of revolution 67. The improvement of the $\chi^2$ by the inclusion of a cyclabs component has a 12% probability of occurring by chance. One should take into account the limitations of this test when applied to lines (Protassov et al. [@protassov02]).
[^8]: The significance of the CRSF detection could not be derived from the individual analysis of revolutions 67 and 87, due to the poor improvement of the statistics obtained when adding the CRSF component in both cases. We can only state the significance of the detection after the considerable improvement in the signal-to-noise ratio achieved by the mosaic.
---
author:
- 'D. Hutsemékers [^1]'
- 'J. Manfroid'
- 'E. Jehin'
- 'J.-M. Zucconi'
- 'C. Arpigny'
title: 'The [$^{16}$OH/$^{18}$OH]{} and [OD/OH]{} isotope ratios in comet C/2002 T7 (LINEAR) [^2]'
---
Introduction {#sect:intro}
============
The determination of the abundance ratios of the stable isotopes of the light elements in different objects of the Solar System provides important clues to their origin and history. This is especially true for comets, which carry the most valuable information regarding the material in the primitive solar nebula.
The [$^{16}$O/$^{18}$O]{} isotopic ratio has been measured from space missions in a few comets. In-situ measurements with the neutral and ion mass spectrometers onboard the Giotto spacecraft gave [$^{16}$O/$^{18}$O]{} = 495$\pm$37 for H$_2$O in comet 1P/Halley (Eberhardt et al. [@Eberhardt]). A deep integration of the spectrum of the bright comet 153P/2002 C1 (Ikeya-Zhang) with the sub-millimeter satellite Odin led to the detection of the H$_2^{18}$O line at 548 GHz (Lecacheux et al. [@Lecacheux]). Subsequent observations resulted in the determination of [$^{16}$O/$^{18}$O]{} = 530$\pm$60, 530$\pm$60, 550$\pm$75 and 508$\pm$33 in the Oort-Cloud comets Ikeya-Zhang, C/2001 Q4, C/2002 T7 and C/2004 Q2 respectively (Biver et al. [@Biver]). Within the error bars, these measurements are consistent with the terrestrial value ([$^{16}$O/$^{18}$O]{} [(SMOW[^3])]{} = 499), although marginally higher (Biver et al. [@Biver]). More recently, laboratory analyses of the silicate and oxide mineral grains from the Jupiter family comet 81P/Wild 2 returned by the Stardust space mission provided [$^{16}$O/$^{18}$O]{} ratios also in excellent agreement with the terrestrial value. Only one refractory grain appeared marginally depleted in $^{18}$O ([$^{16}$O/$^{18}$O]{} = 576$\pm$78) as observed in refractory inclusions in meteorites (McKeegan et al. [@McKeegan]).
The D/H ratio has been measured in four comets. In-situ measurements provided D/H = 3.16$\pm$0.34 10$^{-4}$ for H$_2$O in 1P/Halley (Eberhardt et al. [@Eberhardt], Balsiger et al. [@Balsiger]), a factor of two higher than the terrestrial value (D/H [(SMOW)]{} = 1.556 10$^{-4}$). The advent of powerful sub-millimeter telescopes, namely the Caltech Submillimeter Observatory and the James Clerk Maxwell Telescope located in Hawaii, allowed the determination of the D/H ratio for two exceptionally bright comets. In comet C/1996 B2 (Hyakutake), D/H was found equal to 2.9$\pm$1.0 10$^{-4}$ in H$_2$O (Bockelée-Morvan et al. [@Bockelee]), while, in comet C/1995 O1 (Hale-Bopp), the ratios D/H = 3.3$\pm$0.8 10$^{-4}$ in H$_2$O and D/H = 2.3$\pm$0.4 10$^{-3}$ in HCN were measured (Meier et al. [@Meier1; @Meier2]), confirming the high D/H value in comets. Both Hyakutake and Hale-Bopp are Oort-Cloud comets. Finally, bulk fragments of 81P/Wild 2 grains returned by Stardust indicated moderate D/H enhancements with respect to the terrestrial value. Although D/H in 81P/Wild 2 cannot be ascribed to water, the measured values overlap the range of water D/H ratios determined in the other comets (McKeegan et al. [@McKeegan]).
Among a series of spectra obtained with UVES at the VLT to measure the [$^{14}$N/$^{15}$N]{} and [$^{12}$C/$^{13}$C]{} isotope ratios in various comets from the 3880$\,$Å CN ultraviolet band (e.g. Arpigny et al. [@Arpigny], Hutsemékers et al. [@Hutsemekers], Jehin et al. [@Jehin2], Manfroid et al. [@Manfroid2]), we found that the spectrum of [C/2002 T7]{} appeared bright enough to detect the $^{18}$OH lines in the $A\,^{2}\Sigma^{+}
- X\,^{2}\Pi_{i}$ bands at 3100 Å allowing –for the first time– the determination of the [$^{16}$O/$^{18}$O]{} ratio from [*ground-based*]{} observations. We also realized that the signal-to-noise ratio of our data was sufficient to allow a reasonable estimate of the [OD/OH]{} ratio from the same bands.
The possibility of determining the [$^{16}$O/$^{18}$O]{} ratio from the OH ultraviolet bands has been emphasized by Kim ([@Kim]). Measurements of the OD/OH ratio were already attempted by A’Hearn et al. ([@AHearn]) using high resolution spectra from the International Ultraviolet Explorer and resulting in the upper limit D/H $< 4 \, 10^{-4}$ for comet C/1989 C1 (Austin). These observations now become feasible from the ground thanks to the high ultraviolet throughput of spectrographs like UVES at the VLT.
Observations and data analysis
==============================
Observations of comet [C/2002 T7]{} were carried out with UVES mounted on the 8.2m UT2 telescope of the European Southern Observatory VLT. Spectra in the wavelength range 3040$\,$Å–10420$\,$Å were secured in service mode during the period May 6, 2004 to June 12, 2004. The UVES settings 346+580 and 437+860 were used with dichroic \#1 and \#2 respectively. In the following, only the brightest ultraviolet spectra obtained on May 6, May 26 and May 28 are considered. The 0.44 $\times$ 10.0 arcsec slit provided a resolving power $R \simeq 80000$. The slit was oriented along the tail, centered on the nucleus on May 26, and off-set from the nucleus for the May 6 and May 28 observations. The observing circumstances are summarized in Table \[tab:obs\].
  Date (2004)   $r$ (AU)   $\dot{r}$ (km/s)   $\Delta$ (AU)   Offset (10$^{3}$ km)   $t$ (s)   Airmass
  ------------- ---------- ------------------ --------------- ---------------------- --------- ---------
  May 6         0.68       15.8               0.61            1.3                    1080      2.2-1.9
  May 26        0.94       25.6               0.41            0.0                    2677      1.3-1.8
  May 26        0.94       25.6               0.41            0.0                    1800      2.1-2.7
  May 28        0.97       25.9               0.48            10.0                   3600      1.3-1.7
[$r$ and $\dot{r}$ are the comet heliocentric distance and radial velocity; $\Delta$ is the geocentric distance; $t$ is the exposure time; Airmass is given at the beginning and at the end of the exposure]{}
The spectra were reduced using the UVES pipeline (Ballester et al. [@Ballester]), modified to accurately merge the orders taking into account the two-dimensional nature of the spectra. The flat-fields were obtained with the deuterium lamp which is more powerful in the ultraviolet.
The data analysis and the isotopic ratio measurements were performed using the method designed to estimate the carbon and nitrogen isotopic ratios from the CN ultraviolet spectrum (Arpigny et al. [@Arpigny], Jehin et al. [@Jehin] and Manfroid et al. [@Manfroid]). Basically, we compute synthetic fluorescence spectra of the $^{16}$OH, $^{18}$OH and $^{16}$OD for the $A\,^{2}\Sigma^{+} - X\,^{2}\Pi_{i}$ (0,0) and (1,1) ultraviolet bands for each observing circumstance. Isotope ratios are then estimated by fitting the observed OH spectra with a linear combination of the synthetic spectra of the two species of interest.
The OH model
------------
We have developed a fluorescence model for OH similar to the one described by Schleicher and A’Hearn ([@Schleicher]). As lines of the OH(2-2) bands are clearly visible in our spectra we have included vibrational states up to $v=2$ in the A$^2\Sigma^+$ and X$^2\Pi_i$ electronic states. For each vibrational state rotational levels up to $J=11/2$ were included, leading to more than 900 electronic and vibration-rotation transitions. The system was then solved as described in Zucconi and Festou ([@Zucconi]).
Accurate OH wavelengths were computed using the spectroscopic constants of Colin et al. ([@Colin]) and Stark et al. ([@Stark]). OD wavelengths were computed using the spectroscopic constants of Abrams et al. ([@Abrams]) and Stark et al. ([@Stark]). $^{18}$OH wavelengths were derived from the $^{16}$OH ones using the standard isotopic shift formula; they are consistent with the measured values of Cheung et al. ([@Cheung]).
Electronic transition probabilities for OH and OD are given by Luque and Crosley ([@Luque1; @Luque2]). We used the dipole moments of OH and OD measured by Peterson et al. ([@Peterson]) to compute the rotational transition probabilities and the vibrational lifetimes computed by Mies ([@Mies]). Because of the very small difference in the structure of $^{18}$OH and $^{16}$OH the transition probabilities for $^{18}$OH and $^{16}$OH are the same.
The OH fluorescence spectrum is strongly affected by the solar Fraunhofer lines, especially in the 0-0 band, so a carefully calibrated solar atlas is required. We have used the Kurucz ([@Kurucz]) atlas above 2990 Å and the A’Hearn et al. ([@AHearn1]) atlas below.
The role of collisions in the OH emission, in particular those with charged particles inducing transitions in the $\Lambda$ doublet ground rotational state, was first pointed out by Despois et al. ([@Despois]) in the context of the 18 cm radio emission and then also considered in the UV emission by Schleicher ([@SchleicherPHD], Schleicher and A’Hearn [@Schleicher]). Modeling the effect of collisions may be done by adding the collision probability transition rate between any two levels, $i$ and $j$: $$C_{i,j} = \sum_c{n_c({\bf r})\,{\rm v}_c({\bf r})\,\sigma_c(i,j,{\rm v}_c)}$$ where the sum extends over all colliders. $n_c$ is the local density of the particles inducing the transition, ${\rm v}_c$ is the relative velocity of the particles and $\sigma_c$ is the collision cross section. It also depends on the energy of the collision i.e. of ${\rm
v}_c$. The reciprocal transition rates are obtained through detailed balance: $$C_{j,i} = C_{i,j}\frac{g_i}{g_j}\exp(E_{ij}/kT)$$ in which $g_i$ is the statistical weight and $E_{ij}$ is the energy separation between the states. In order to reduce the number of parameters required to model the collisions we have adopted a simplified expression of the form $C_{i,j} = q_\Lambda$ for the transition in the $\Lambda$ doublet ground state. In order to better fit the OH spectra we have also found it necessary to take into account rotational excitation through a similar expression $C_{i,j} = q_{rot}$ with $q_{rot}$ different from 0 only for dipole transitions, i.e. when $\Delta J < 2$, which appeared to correctly fit the data. Furthermore, since OH and OD have similar dipole moments, we assumed that collisional cross-sections are identical for both molecules.
The model assumes that the $^{16}$OH lines are optically thin. This is verified by the fact that it correctly reproduces both the faint and strong OH emission lines.
[$^{16}$OH/$^{18}$OH]{}
-----------------------
Two $^{18}$OH lines at 3086.272 Å and 3091.046 Å are clearly detected in the (0,0) band. However, these lines are strongly blended with the $\sim$ 500 times brighter $^{16}$OH emission lines and are therefore not useful for an accurate flux estimate. In fact, the (1,1) band at 3121 Å, while fainter, is better suited for the determination of [$^{16}$OH/$^{18}$OH]{} since (i) the wavelength separation between $^{18}$OH and $^{16}$OH is larger ($\simeq$ 0.3 Å instead of 0.1 Å), and (ii) the sensitivity of UVES rapidly increases towards longer wavelengths while the atmospheric extinction decreases, resulting in a better signal-to-noise ratio.
Fig. \[fig:fig1\] illustrates a part of the observed OH (1,1) band together with the synthetic spectrum from the model. Two $^{18}$OH lines are clearly identified.
To actually evaluate [$^{16}$OH/$^{18}$OH]{} we first select the 3 brightest and best-separated $^{18}$OH lines at $\lambda$ = 3134.315$\,$Å, 3137.459$\,$Å and 3142.203$\,$Å. These lines are then Doppler-shifted and co-added with proper weights to produce an average profile which is compared to the $^{16}$OH profile similarly treated (cf. Jehin et al. [@Jehin] for more details on the method). We verified that the $^{16}$OH faint wings and nearby prompt emission lines (analysed in detail in a forthcoming paper) do not contaminate the $^{18}$OH lines nor the measurement of the isotopic ratios. The ratio [$^{16}$OH/$^{18}$OH]{} is then derived through an iterative procedure which is repeated for each spectrum independently. For the spectra of May 6, 26 and 28 we respectively derive [$^{16}$OH/$^{18}$OH]{} = 410$\pm$60, 510$\pm$130 and 380$\pm$290. The uncertainties are estimated from the co-added spectra by considering the rms noise in spectral regions adjacent to the $^{18}$OH lines, and by evaluating errors in the positioning of the underlying pseudo-continuum (i.e. the dust continuum plus the faint wings of the strong lines). The weighted average of all measurements gives [$^{16}$OH/$^{18}$OH]{} = 425 $\pm$ 55.
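For illustration, the shift-and-stack step can be sketched as follows (Python); the arrays `wave`, `flux`, `lines` and `weights` are hypothetical inputs, and the pseudo-continuum subtraction and the iterative fit against the synthetic spectra are omitted.

```python
import numpy as np

def coadd_profile(wave, flux, lines, weights, v_grid_kms):
    """Shift the selected lines to a common velocity grid and co-add them with weights."""
    c = 299792.458  # speed of light in km/s
    stack = np.zeros_like(v_grid_kms)
    for lam0, w in zip(lines, weights):
        v = c * (wave - lam0) / lam0                    # wavelengths -> velocities around each line
        stack += w * np.interp(v_grid_kms, v, flux)     # assumes `wave` is increasing
    return stack / np.sum(weights)

# The 16OH/18OH ratio then follows from the ratio of the integrated co-added
# 16OH and 18OH profiles, refined iteratively after continuum subtraction.
```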
Since OH is essentially produced from the dissociation of H$_2$O, [$^{16}$OH/$^{18}$OH]{} represents the [$^{16}$O/$^{18}$O]{} ratio in cometary water, with the reasonable assumption that photodissociation cross-sections are identical for H$_2$$^{18}$O and H$_2$$^{16}$O.
[OD/OH]{}
---------
The detection of OD lines is much more challenging since one may expect the OD lines to be a few thousand times fainter than the OH lines. Fortunately, the wavelength separation between OD and OH ($\gtrsim$ 10 Å) is much larger than between $^{18}$OH and $^{16}$OH such that both the (0,0) and (1,1) bands can be used with no OD/OH blending (apart from chance coincidences). Since no individual OD lines could be detected, we consider the 30 brightest OD lines (as predicted by the model) for co-addition. After removing 3 of them, blended with other emission lines, an average profile is built with careful Doppler-shifting and weighting as done for $^{18}$OH. Only our best spectra obtained on May 6 and May 26 are considered, noting that the (0,0) band –which dominates the co-addition– is best exposed on May 26 while the (1,1) band is best exposed on May 6, due to the difference in airmass. The resulting OD line profiles are illustrated in Fig. \[fig:fig3\] and \[fig:fig4\] and compared to a synthetic spectrum computed with [OD/OH]{} = 4 10$^{-4}$. OD is detected as a faint emission feature which is present [*at both epochs*]{}. From the measurement of the line intensities, we derive [OD/OH]{} = 3.3$\pm$1.1 10$^{-4}$ and 4.1$\pm$2.0 10$^{-4}$ for the spectra obtained on May 6 and 26 respectively. The weighted average is [OD/OH]{} = 3.5$\pm$1.0 10$^{-4}$. The difference in the lifetime of OD and OH (van Dishoeck and Dalgarno [@Vandishoeck]) does not significantly affect our results since the part of the coma sampled by the UVES slit is two orders of magnitude smaller than the typical OH scale-length. The uncertainties on [OD/OH]{} were estimated as for [$^{16}$OH/$^{18}$OH]{}. Possible errors on the isotopic ratios related to uncertainties on the collision coefficients were estimated via simulations and found to be negligible. Even in the hypothetical case that collisions differently affect OD and OH, errors are much smaller than the other uncertainties, as expected since the contribution of collisions is small with respect to the contribution due to pure fluorescence.
To estimate the cometary D/H ratio in water, HDO/H$_2$O must be evaluated. While the cross-section for photodissociation of HDO is similar to that of H$_2$O, the production of OD+H is favoured over OH+D by a factor around 2.5 (Zhang and Imre [@Zhang], Engel and Schinke [@Engel]). Assuming that the total branching ratio for HDO $\rightarrow$ OD + H plus HDO $\rightarrow$ OH + D is equal to that of H$_2$O $\rightarrow$ OH + H, we find HDO/H$_2$O $\simeq$ 1.4 OD/OH. With D/H = 0.5 HDO/H$_2$O, we finally derive D/H = 2.5$\pm$0.7 10$^{-4}$ in cometary water. The factor (OD+H)/(OH+D) = 2.5 adopted in computing the branching ratios for the photodissociation of HDO is an average value over the spectral region where the cross-sections peak. In fact (OD+H)/(OH+D) depends on the wavelength and roughly ranges between 2 and 3 over the spectral regions where absorption is significant (Engel and Schinke [@Engel], Zhang et al. [@Zhang2], Yi et al. [@Yi]). Fortunately, even if we adopt the extreme ratios (OD+H)/(OH+D) = 2 or (OD+H)/(OH+D) = 3 instead of 2.5, the value of the D/H isotopic ratio is not changed by more than 6%.
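The arithmetic of this last step is summarized below; the only inputs are the measured [OD/OH]{}, the 2.5:1 branching preference and the two hydrogen atoms of H$_2$O.

```python
# HDO/H2O ≈ (3.5/2.5) * OD/OH = 1.4 * OD/OH, and D/H = 0.5 * HDO/H2O
od_oh = 3.5e-4
d_h = 0.5 * (1.4 * od_oh)
print(f"D/H = {d_h:.2e}")   # -> 2.45e-04, i.e. ~2.5e-4 as quoted above
```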
Discussion
==========
We have measured the oxygen isotopic ratio [$^{16}$O/$^{18}$O]{} = 425 $\pm$ 55 from the OH $A\,^{2}\Sigma^{+} - X\,^{2}\Pi_{i}$ ultraviolet bands in comet [C/2002 T7]{}. Although marginally smaller, our value does agree within the uncertainties with [$^{16}$O/$^{18}$O]{} = 550 $\pm$ 75 estimated from observations by the Odin satellite (Biver et al. [@Biver]), with the [$^{16}$O/$^{18}$O]{} ratios determined in other comets, and with the terrestrial value (Sect. \[sect:intro\]).
To explain the so-called “oxygen anomaly” i.e. the fact that oxygen isotope variations in meteorites cannot be explained by mass-dependent fractionation, models of the pre-solar nebula based on CO self-shielding were proposed, predicting enrichments, with respect to the [SMOW]{} value, of $^{18}$O in cometary water up to [$^{16}$O/$^{18}$O]{}$\sim$ 415 (Yurimoto & Kuramoto [@Yurimoto]). Recently, Sakamoto et al. ([@Sakamoto]) found evidence for such an enrichment in a primitive carbonaceous chondrite, supporting self-shielding models. The value of [$^{16}$O/$^{18}$O]{} we found in [C/2002 T7]{} is also marginally smaller than the terrestrial value and compatible with these predictions. On the other hand, the measurement of [$^{16}$O/$^{18}$O]{} = 440 $\pm$ 6 in the solar photosphere (Ayres et al. [@Ayres]; cf. Wiens et al. [@Wiens] for a review of other, less accurate, measurements) indicates that solar ratios may deviate from the terrestrial ratios by much larger factors than anticipated, requiring some revision of the models. More observations are then critically needed to get an accurate value of [$^{16}$O/$^{18}$O]{} in comets, assuming that cometary water is pristine enough and can be characterized by a small set of representative values. Namely, if self-shielding is important in the formation of the solar system, it is not excluded that significant variations can be observed between comets formed at different locations in the solar system, like Oort cloud and Jupiter-family comets.
We also detected OD and estimated D/H = 2.5 $\pm$ 0.7 10$^{-4}$ in water. Our measurement is compatible with other values of D/H in cometary water and marginally higher than the terrestrial value (Sect. \[sect:intro\]). Our observations were not optimized for the measurement of OD/OH (nor for [$^{16}$OH/$^{18}$OH]{}) and one of our best spectra was obtained at airmass $\sim$ 2 with less than 20 min of exposure time for a comet of heliocentric magnitude m$_r \simeq$ 5 (for comparison, comet Hale-Bopp reached m$_r \simeq$ $-$1). All these observing circumstances can be improved, including observations at negative heliocentric velocities to increase the OD/OH fluorescence efficiency ratio (cf. figure 1 of A’Hearn et al. [@AHearn]). This opens the possibility of routinely measuring both the [$^{16}$O/$^{18}$O]{} and D/H ratios from the ground, together with the [$^{12}$C/$^{13}$C]{} and [$^{14}$N/$^{15}$N]{} ratios, for a statistically significant sample of comets of different types (e.g. Oort-cloud, Halley-type, and hopefully Jupiter-family comets although the latter are usually fainter). The measurement of D/H is especially important since it allows us to constrain the contribution of comets to the terrestrial water, the high D abundance implying that no more than about 10 to 30% of Earth’s water can be attributed to comets (e.g. Eberhardt et al. [@Eberhardt], Dauphas et al. [@Dauphas], Morbidelli et al. [@Morbidelli]). However, only a full census of D/H in comets could answer this question. In particular, if Jupiter-family comets, thought to have formed in farther and colder places in the Solar System, are characterized by an even higher D/H, closer to the ratio measured in the interstellar medium water, then the fraction of cometary H$_2$O brought onto the Earth could be even smaller.
We thank the referee, Dominique Bockelée-Morvan, for comments which helped to significantly improve the manuscript. We are also grateful to Paul Feldman and Hal Weaver for useful discussions.
Abrams, M.C., Davis, S.P., Rao, M.L.P., Engleman Jr, R. 1994, J. Mol. Spec. 165, 57
A’Hearn, M.F., Ohlmacher, J.T., Schleicher, D.G. 1983, Technical Report TR AP83-044 (College Park: University of Maryland)
A’Hearn, M.F., Schleicher, D.G., West, R.A. 1985, , 297, 826
Arpigny, C., Jehin, E., Manfroid, J., et al. 2003, Science, 301, 1522
Ayres, T.R., Plymate, C., Keller, C.U. 2006, , 165, 618
Ballester, P., Modigliani, A., Boitquin, O., et al. 2000, The Messenger, 101, 31
Balsiger, H., Altwegg, K., Geiss, J. 1995, J. Geophys. Res. 100, 5827
Biver, N., Bockelée-Morvan, D., Crovisier, J., et al. 2007, Planetary Space Sci. 55, 1058
Bockelée-Morvan, D., Gautier, D., Lis, D.C., et al. 1998, Icarus, 133, 147
Cheung, A.S.-C., Chan, C.M.-T., Sze, N.S.-K. 1995, J. Mol. Spec. 174, 205
Colin, R., Coheur, P.-F., Kiseleva, M., Vandaele, A.C., Bernath, P.F. 2002, J. Mol. Spec. 214, 225
Dauphas, N., Robert, F., Marty, B. 2000, Icarus 148, 508
Despois, D., Crovisier, J., Kazès, I. 1981, , 99, 320
Eberhardt, P., Reber, M., Krankowsky, D., Hodges, R.R. 1995, , 302, 301
Engel, V., Schinke, R. 1988, J. Chem. Phys., 88, 6831
Hutsemékers, D., Manfroid, J., Jehin, E., et al. 2005, , 440, L21
Jehin, E., Manfroid, J., Cochran, A.L., et al. 2004, , 613, L161
Jehin, E., Manfroid, J., Hutsemékers, D., et al. 2006, , 641, L145
Kim, S.J. 2000, J. Astron. Space Sc., 17, 147
Kurucz, R.L. 2005, Mem. S.A.It. Suppl. 8, 189
Lecacheux, A., Biver, N., Crovisier, J., et al. 2003, , 402, L55
Luque, J., Crosley, D.R. 1998, J. Chem. Phys., 109, 439
Luque, J., Crosley, D.R. 1999, LIFBASE: Database and Spectral Simulation Program (Version 2.0), SRI International Report MP 99-009
Manfroid, J., Jehin, E., Hutsemékers, D., et al. 2005, , 432, L5
Manfroid, J., et al. 2008, in preparation
McKeegan, K.D., Aléon, J., Bradley, J. 2006, Science, 314, 1724
Meier, R., Owen, T.C., Matthews, H.E., et al. 1998, Science, 279, 842
Meier, R., Owen, T.C., Jewitt, D.C., et al. 1998, Science, 279, 1707
Mies, F.H. 1974, J. Mol. Spec., 53, 150
Morbidelli, A., Chambers, J., Lunine, J.I., et al. 2000, Meteoritics & Planetary Science, 35, 1309
Peterson, K.L., Fraser, G.T., Klemperer, W. 1984, Can. J. Phys. 62, 1502
Sakamoto, N., Seto, Y., Itoh, S., et al. 2007, Science, 317, 231
Schleicher, D.G. 1983, Ph.D. dissertation, University of Maryland
Schleicher, D.G., A’Hearn, M.F. 1988, , 331, 1058
Stark, G., Brault, J.W., Abrams, M.C. 1994, J. Opt. Soc. Am. 11, 3
Van Dishoeck, E.F., Dalgarno, A. 1984, Icarus, 59, 305
Wiens, R.C., Bochsler, P., Burnett, D.S., Wimmer-Schweingruber, R.F. 2004, Earth Planetary Sci., 226, 549
Yi, W., Park, J., Lee, J. 2007, Chem. Phys. Letters, 439, 46
Yurimoto, H., Kuramoto, K. 2004, Science, 305, 1763
Zhang, J., Imre, D.G. 1988, Chem. Phys. Letters, 149, 233
Zhang, J., Imre, D.G., Frederik, J.H. 1989, J. Phys. Chem., 93, 1840
Zucconi, J.-M., Festou, M.C. 1985, , 150, 180
[^1]: DH is Senior Research Associate FNRS; JM is Research Director FNRS; and EJ is Research Associate FNRS
[^2]: Based on observations collected at the European Southern Observatory, Paranal, Chile (ESO Programme 073.C-0525).
[^3]: Standard Mean Ocean Water
---
abstract: 'A novel framework for meeting transcription using asynchronous microphones is proposed in this paper. It consists of audio synchronization, speaker diarization, utterance-wise speech enhancement using guided source separation, automatic speech recognition, and duplication reduction. Doing speaker diarization before speech enhancement enables the system to deal with overlapped speech without considering sampling frequency mismatch between microphones. Evaluation on our real meeting datasets showed that our framework achieved a character error rate (CER) of by using 11 distributed microphones, while a monaural microphone placed at the center of the table had a CER of . We also showed that our framework achieved a CER of , which is only 2.1 percentage points higher than the CER in headset microphone-based transcription.'
address: ' Hitachi, Ltd. '
bibliography:
- 'mybib.bib'
title: |
Utterance-Wise Meeting Transcription System\
Using Asynchronous Distributed Microphones
---
**Index Terms**: meeting transcription, speech recognition, speaker diarization, asynchronous distributed microphones
Introduction
============
Meeting transcription is one practical use case of automatic speech recognition (ASR). The difficulties are i) that input audio signals suffer from reverberation and background noise because each utterance is recorded by distant microphones, and ii) that they also suffer from speech overlap because each participant may speak at any time. To transcribe speech in such wild conditions, a powerful speech enhancement module is necessary. Most meeting transcription systems are therefore based on a microphone array [@stolcke2007sri; @hain2011transcribing; @ito2017probabilistic; @yoshioka2018recognizing], sometimes one with an omnidirectional camera [@hori2011low; @yoshioka2019advances] for face tracking. This means that special equipment has to be introduced for the system. If the microphone arrays can be replaced by more general devices, such as participants’ smartphones or tablets, the usability of the system will be drastically improved. When such devices are distributed to transcribe a meeting, the problem is that they are asynchronous, and speech separation methods for synchronized signals cannot be simply applied.
Recently, some methods of meeting transcription using asynchronous distributed microphones have been proposed. One is the session-wise approach proposed by Araki et al. [@araki2017meeting; @araki2018meeting]. They first synchronized multichannel observations by solving sampling frequency mismatch, then applied session-wise speech enhancement using the minimum variance distortionless response (MVDR) beamformer, then fed the enhanced signals into an ASR module to obtain the final transcription results. They showed that speech enhancement using asynchronous distributed microphones improved the ASR performance [@araki2017meeting; @araki2018meeting]. The MVDR beamformer is a frequency-wise algorithm, however, so the well-known frequency-domain permutation problem has to be solved. The common approach for multi-speaker cases is to prepare initial spatial correlation matrices from audio data with a fixed number of speakers and their positions [@higuchi2017online]. Therefore, when the number of speakers in the inference audio is different from, and especially larger than, that in the training set, we cannot provide initial spatial correlation matrices. If we cannot obtain such spatial correlation matrices beforehand, we have to solve the permutation problem as a post-processing step [@sawada2004robust; @sawada2010underdetermined], but there are few reports that these methods perform well on real noisy and reverberant data.
Another is the block-wise approach proposed by Yoshioka et al. [@yoshioka2019meeting]. They synchronized input audio streams in a block-online manner and then applied block-wise speech separation. The separated audio signals are fed into the ASR module, which is followed by speaker diarization. The benefit of this approach is that the effect of sampling frequency mismatch can be ignored within a block when the block is short enough because the scale of sampling frequency mismatch is about (parts per million) at most [@miyabe2015blind; @araki2019estimation]. However, their speech separation uses speech-vs-noise criteria and thus cannot deal with multiple speakers speaking simultaneously.
This paper investigates the utterance-wise approach, which is different from the session-wise or block-wise approaches described above. We first roughly synchronized audio signals recorded by distributed microphones and then applied speaker diarization. Speaker diarization is based on the clustering of features extracted from short segments, but we use features extracted from all the signals recorded by each microphone so that it can deal with overlapped speech. Then we applied guided source separation [@boeddeker2018front], which performed well for ASR in a dinner party scenario [@kanda2019guided; @zorila2019investigation]. This separation is conducted for each extracted utterance, which is short enough not to suffer from sampling frequency mismatch between microphones. We applied ASR for each enhanced utterance, and finally, we conducted duplication reduction for the ASR results to reduce the effect of errors in diarization or separation. Our approach can deal with speaker overlap without any method to correct sampling frequency mismatch in the synchronization phase or to solve the permutation problem in the speech enhancement phase. To evaluate our framework, we recorded eight sessions of real meetings using 11 distributed smartphones, each of which was equipped with a monaural microphone. The experimental results showed that our framework improved performance by using multiple microphones. We also showed that our framework could achieve performance comparable to that of headset microphone-based transcription if the oracle diarization results were known.
Method
======
![image](figs/overview.pdf){width="\linewidth"}
We assume that a meeting is recorded by $M$ asynchronous distributed microphones and transcription is based on the known number of speakers $K$ in an offline manner. An overview of our method is shown in . Given $M$ audio signals, we first synchronize them by maximizing their correlation. The correction of sampling frequency mismatch between signals is not conducted in the synchronization part. With the synchronized signals, we conduct clustering-based diarization to obtain utterances for each speaker. After that, we perform speech enhancement for each utterance by using the diarization results as guides to avoid the permutation problem. The enhanced utterances are fed into the ASR module to obtain ASR results. Finally, to reduce errors caused by diarization or separation, we apply duplication reduction for the ASR results. In this section, we explain the details of each module of the system.
Blind synchronization
---------------------
In this part, we conduct a correlation-based synchronization to correct start or end point differences of input signals. This rough synchronization can be performed even in the presence of sampling frequency mismatch. Assume that the observation of the $m$-th microphone ($m\in \{1,\dots,M\}$) is defined as $\hat{\mathbf{x}}_m\coloneqq\left[\hat{x}_{m,n}\right]_{n=1}^{N_m}$. We select an anchor $m_a$ from the $M$ microphones and calculate the shift $\delta_m$ between signals of the anchor $m_a$ and each microphone $m\in\{1,\dots,M\}$ as follows: $$\begin{gathered}
\delta_m=\begin{cases}
\operatorname*{arg\,max}_{\delta\in\mathbb{Z}}\sum_\nu x_{m_a,\nu}x_{m,\nu+\delta}&(m\neq m_a)\\
0 & (m=m_a),
\end{cases}\\
x_{m,\nu}=\begin{cases}
\hat{x}_{m,\nu}&\left(\nu\in\{1,\dots,N_m\}\right)\\
0&\left(\mathrm{otherwise}\right).\\
\end{cases}\end{gathered}$$ Synchronized signals $\mathbf{x}_m$ $(m=1,\dots,M)$ are defined in the time interval recorded by all the microphones as follows: $$\begin{aligned}
\mathbf{x}_m&=\left[\hat{x}_{m,n}\right]_{n=n_\text{begin}+\delta_m}^{n_\text{end}+\delta_m},\\
n_\text{begin}&=\max_{m'\in\left\{1,\dots,M\right\}}\left(1-\delta_{m'}\right),\\
n_\text{end}&=\min_{m'\in\left\{1,\dots,M\right\}}\left(N_{m'}-\delta_{m'}\right).\end{aligned}$$ In this study we assume that all the utterances to be transcribed are within the time interval of $\mathbf{x}_m$.
Speaker diarization
-------------------
In this paper, we conduct speaker diarization by clustering feature vectors. One drawback of the conventional clustering-based diarization using a monaural recording is that it cannot deal with speaker overlap because each timeslot is assigned to one speaker. On the other hand, in our scenario, each meeting has been recorded by distributed microphones. Therefore, even when two speakers speak simultaneously, it is expected that one microphone captures one speaker’s utterance at a sufficient signal-to-noise ratio (SNR) while another microphone captures the other speaker’s utterance at a sufficient SNR. In this study, we extract features from all the signals from all the microphones and cluster all the extracted features together to deal with speaker overlap.
We first split the synchronized observations $\{\mathbf{x}_m\}_m$ into short segments $\{\mathbf{x}_{m,t}\}_{m,t}$ with of window size and of window shift, where $t=1,\dots,T$ denotes the timeslot index. We apply power-based speech activity detection for each segment; as a result, each segment is classified as either speech or non-speech. From each speech segment, we extract features to be used for clustering. In this study, we concatenate two kinds of features: speaker characteristics based features and power ratio based features.
For features to represent speaker characteristics, we use x-vectors [@snyder2018xvectors], which are used in state-of-the-art diarization systems [@sell2018diarization; @diez2019bayesian]. We extract x-vectors from the audio of each microphone so that we can obtain different speaker characteristics from the same timeslot; thus we can deal with speaker overlap. Before we use the vectors for clustering, we subtract a mean vector within a session from each x-vector and normalize it to have unit norm. As a result, we obtain microphone- and timeslot-wise $D$-dimensional features $\mathbf{c}_{m,t}\in\mathbb{R}^D$.
Although x-vectors from distributed microphones are potentially beneficial for diarizing overlapped speech, a problem arises in that an utterance from a single speaker could be attributed to multiple speakers because x-vectors are affected by the speaker-microphone distance and by noisy environments. Thus, we introduce power-based timeslot-wise features $\mathbf{p}_t\coloneqq{\left[p_{1,t},\dots,p_{M,t}\right]^\mathsf{T}}$, where $p_{m,t}$ is the average power at $\mathbf{x}_{m,t}$. This speaker diarization part operates at the session level, so we avoid using phase-based features like GCC-PHAT [@knapp1976generalized] because they suffer from the sampling frequency mismatch.
Final $(D+M)$-dimensional features to be clustered are $$\begin{aligned}
\mathbf{v}_{m,t}=\left[
\begin{array}{c}
\mathbf{c}_{m,t} \\
\lambda\mathbf{p}_{t}/{\left\lVert\mathbf{p}_t\right\rVert}
\end{array}\right],
\label{eq:feature}\end{aligned}$$ where $\lambda$ is the scaling factor to balance the effect of $\mathbf{c}_{m,t}$ and $\mathbf{p}_{t}$. We apply agglomerative hierarchical clustering for the features to divide the speech segments into $K$ clusters. As a result, each feature from a speech segment belongs to one of the clusters $\mathcal{C}_1,\dots,\mathcal{C}_K$, where $\mathcal{C}_k$ corresponds to the speech cluster of $k$-th speaker. We also define the additional noise cluster $\mathcal{C}_{K+1}\coloneqq\{\mathbf{v}_{m,t}\}_{m,t}$. The diarization results including noise $Y=\{y_t^{(k)}\}\in\left\{0,1\right\}^{(K+1)\times T}$ are calculated as $$\begin{aligned}
y_t^{(k)}=\begin{cases}
1&\left(\exists m\in\{1,\dots,M\},~\mathbf{v}_{m,t}\in \mathcal{C}_k\right)\\
0&\left(\mathrm{otherwise}\right).
\end{cases}\end{aligned}$$
In the diarization results, utterances are sometimes divided into short fragments due to backchannels, noise, etc. In this study, we treat silence of or less between speech fragments from the same speaker as speech by applying two iterations of binary closing along the time axis.
Here each timeslot in the diarization results corresponds to , which is inconsistent with the signals used in speech enhancement in the next section. Thus, we upsample the diarization results so that each timeslot corresponds to . Hereafter, $Y=\{y_t^{(k)}\}$ denotes the upsampled diarization results.
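The clustering and post-processing steps of this subsection can be sketched as follows (Python). The x-vector extractor and the power-based speech activity detector are assumed to be given, the explicit noise cluster is omitted, and `xvec` (shape $(M, T, D)$) and `power` (shape $(M, T)$) are hypothetical inputs.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from scipy.ndimage import binary_closing

def diarize(xvec, power, speech_mask, n_speakers, lam=1.0):
    """Cluster per-microphone features and return a (n_speakers, T) activity matrix."""
    M, T, _ = xvec.shape
    p_norm = power / (np.linalg.norm(power, axis=0, keepdims=True) + 1e-10)   # p_t / ||p_t||
    feats, slots = [], []
    for m in range(M):
        for t in np.flatnonzero(speech_mask):
            feats.append(np.concatenate([xvec[m, t], lam * p_norm[:, t]]))
            slots.append(t)
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(np.array(feats))
    y = np.zeros((n_speakers, T), dtype=bool)
    for lab, t in zip(labels, slots):
        y[lab, t] = True                         # speaker `lab` active if any microphone voted for it
    # two iterations of binary closing: short gaps within a speaker's utterance become speech
    return np.stack([binary_closing(row, iterations=2) for row in y])
```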
Speech enhancement
------------------
In this study, we conducted speech enhancement for each utterance by using guided source separation (GSS) [@boeddeker2018front]. While the original GSS utilized oracle speech activities, we instead use estimated diarization results described in the previous section.
We first apply Weighted Prediction Error [@nakatani2010speech] to the input multichannel signals in a short-time Fourier transform (STFT) domain for dereverberation. The frame length and the frame shift for the STFT were set to and , respectively. After that, speech separation by GSS [@boeddeker2018front] using a complex Angular Central Gaussian Mixture Model (cACGMM) [@ito2016complex] is applied. Given $M$-channel observations in the STFT domain $\mathbf{X}_{t,f}\in\mathbb{C}^M$, the probability density function of the cACGMM for the signals is defined as $$\begin{aligned}
p\left(\hat{\mathbf{X}}_{t,f};\{\alpha_f^{(k)},B_{f}^{(k)}\}_k\right)&=\sum_k \alpha_f^{(k)}\mathcal{A}\left(\hat{\mathbf{X}}_{t,f};B_{f}^{(k)}\right),\end{aligned}$$
where $\hat{\mathbf{X}}_{t,f}=\mathbf{X}_{t,f}/{\left\lVert\mathbf{X}_{t,f}\right\rVert}$ and $\alpha_f^{(k)}$ is the mixture weight for the $k$-th source of the frequency bin $f$. $\mathcal{A}(\hat{\mathbf{X}};B)$ is a complex Angular Central Gaussian distribution [@kent1997data] parameterized by $B\in\mathbb{C}^{M\times M}$. The cACGMM is optimized by the EM algorithm. At the E-step we calculate posteriors $\gamma_{t,f}^{(k)}$ for each speaker at each time-frequency bin as follows: $$\begin{aligned}
\gamma_{t,f}^{(k)}\leftarrow\frac{\alpha_f^{(k)}y_t^{(k)}\frac{1}{\det\left(B_f^{(k)}\right)}\frac{1}{\left[{\hat{\mathbf{X}}^\mathsf{H}}_{t,f}\left(B_f^{(k)}\right)^{-1}\hat{\mathbf{X}}_{t,f}\right]^M}}{\sum_{k'}\alpha_f^{(k')}y_t^{(k')}\frac{1}{\det\left(B_f^{(k')}\right)}\frac{1}{\left[{\hat{\mathbf{X}}^\mathsf{H}}_{t,f}\left(B_f^{(k')}\right)^{-1}\hat{\mathbf{X}}_{t,f}\right]^M}}.
\label{eq:e_step}\end{aligned}$$ Here the diarization result $y_t^{(k)}$ works as a guide at this E-step to force the posterior probability to be zero when the speaker $k$ does not speak at time $t$. At the M-step the parameters $\alpha_f^{(k)}$ and $B_f^{(k)}$ are updated as follows: $$\begin{aligned}
\alpha_f^{(k)}\leftarrow\frac{1}{T}\sum_t\gamma_{t,f}^{(k)},\quad
B_f^{(k)}\leftarrow M\frac{\sum_t\gamma_{t,f}^{(k)}\frac{\hat{\mathbf{X}}_{t,f}{\hat{\mathbf{X}}^\mathsf{H}}_{t,f}}{{\hat{\mathbf{X}}^\mathsf{H}}_{t,f}\left(B_f^{(k)}\right)^{-1}\hat{\mathbf{X}}_{t,f}}}{\sum_t\gamma_{t,f}^{(k)}},\end{aligned}$$ where ${(\cdot)^\mathsf{H}}$ denotes Hermitian transpose.
R_{f}^\text{speech}&=\frac{1}{T}\sum_t \gamma_{t,f}^{(k_\text{target})}\mathbf{X}_{t,f}{\mathbf{X}^\mathsf{H}}_{t,f}\in\mathbb{C}^{M\times M},\\
R_{f}^\text{noise}&=\frac{1}{T}\sum_t \left(1-\gamma_{t,f}^{(k_\text{target})}\right)\mathbf{X}_{t,f}{\mathbf{X}^\mathsf{H}}_{t,f}\in\mathbb{C}^{M\times M}.\end{aligned}$$ Here we assume that the target speaker is $k_\text{target}\in\{1,\dots,K\}$. The MVDR beamformer $\mathbf{w}_f$ is calculated using the spatial covariance matrices as $$\begin{aligned}
\mathbf{w}_f&=\frac{{R_f^\text{noise}}^{-1}R_f^\text{speech}\mathbf{r}}{{\mathrm{tr}\left\{{R_f^\text{noise}}^{-1}R_f^\text{speech}\right\}}},\label{eq:beamformer}\end{aligned}$$
where $\mathbf{r}$ is a one-hot vector that corresponds to the reference microphone. Finally, a Blind Analytic Normalization (BAN) postfilter [@warsitz2007blind] is applied to $\mathbf{w}_f$ to obtain the final beamformer, which is used for speech enhancement. The enhanced utterance in the STFT domain is calculated as $$\begin{aligned}
z_{t,f}={\mathbf{w}_f^\mathsf{H}}\mathbf{X}_{t,f}.\end{aligned}$$
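A condensed numerical sketch of this guided EM and beamforming stage is given below (Python/NumPy). Dereverberation, the context segments and the BAN postfilter are omitted, and the guide enters through a large negative log-prior rather than an exact zero, so this illustrates the update equations rather than reproducing the full GSS implementation.

```python
import numpy as np

def gss_mvdr(X, activity, target, n_iter=10, eps=1e-10):
    """X: (F, T, M) STFT of one utterance; activity: (K, T) 0/1 guides (last row = noise)."""
    F, T, M = X.shape
    K = activity.shape[0]
    Xh = X / (np.linalg.norm(X, axis=-1, keepdims=True) + eps)          # unit-norm observations
    B = np.tile(np.eye(M, dtype=complex), (K, F, 1, 1))
    alpha = np.full((K, F), 1.0 / K)
    for _ in range(n_iter):
        # E-step: posteriors, suppressed where the diarization guide is inactive
        quad = np.einsum('ftm,kfmn,ftn->kft', Xh.conj(), np.linalg.inv(B), Xh).real + eps
        logp = (np.log(alpha[:, :, None] + eps)
                - np.log(np.linalg.det(B).real[:, :, None] + eps)
                - M * np.log(quad)
                + np.log(activity[:, None, :] + eps))
        gamma = np.exp(logp - logp.max(axis=0))
        gamma /= gamma.sum(axis=0) + eps
        # M-step
        alpha = gamma.mean(axis=-1)
        num = np.einsum('kft,ftm,ftn->kfmn', gamma / quad, Xh, Xh.conj())
        B = M * num / (gamma.sum(axis=-1)[:, :, None, None] + eps)
    # spatial covariances and MVDR beamformer for the target speaker
    R_s = np.einsum('ft,ftm,ftn->fmn', gamma[target], X, X.conj()) / T
    R_n = np.einsum('ft,ftm,ftn->fmn', 1.0 - gamma[target], X, X.conj()) / T
    ref = np.zeros(M); ref[0] = 1.0                                     # reference microphone
    Rn_inv = np.linalg.inv(R_n)
    w = np.einsum('fmn,fnl,l->fm', Rn_inv, R_s, ref)
    w /= np.trace(Rn_inv @ R_s, axis1=-2, axis2=-1)[:, None] + eps
    return np.einsum('fm,ftm->ft', w.conj(), X)                         # enhanced STFT
```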
Speech recognition
------------------
For each enhanced utterance, we apply ASR consisting of a CNN-TDNN-LSTM acoustic model (AM) [@kanda2018lattice] followed by 4-gram-based and recurrent neural network-based language models (LMs) [@kanda2017investigation]. The AM takes 40-dimensional log-scaled Mel-filterbank features and 40-dimensional Mel-frequency cepstral coefficients as input audio features. 100-dimensional i-vectors are also fed into the AM for online speaker and environment adaptation [@saon2013speaker]. It was trained on 1700 hours of a Japanese speech corpus using the lattice-free maximum mutual information criterion [@povey2016purely]. The LMs were trained on the transcriptions of the corpus used for AM training and on the Wikipedia corpus.
Duplication reduction
---------------------
The diarization and speech enhancement are not perfect, so the same transcription is sometimes included in multiple estimated utterances. Therefore, we apply duplication reduction for the ASR results. Widely used ensemble techniques such as ROVER [@fiscus1997post] and confusion network combination [@evermann2000posterior] are designed for different ASR results obtained from the same utterance; thus, they cannot be used in this situation, where the utterances to be merged have different start and end points. To overcome this issue, we propose a combination technique for utterances that span different time intervals. We first find which pairs of utterances should be merged. Given the set of $U$ ASR results $\mathcal{W}=\{(\mathbf{w}_u, k_u, t_u^\mathrm{s}, t_u^\mathrm{e})\}_{u=1}^U$, where $\mathbf{w}_u$, $k_u$, $t_u^\mathrm{s}$, and $t_u^\mathrm{e}$ denote the sequence of words, speaker, start time, and end time of the $u$-th result, respectively, we calculate an adjacency matrix $A=\{a_{i,j}\}_{i,j}\in\{0,1\}^{U\times U}$ as follows: $$\begin{aligned}
a_{i,j}&=\begin{cases}
1&(\max{(t_i^{\mathrm{s}},t_j^{\mathrm{s}})}<\min{(t_i^{\mathrm{e}}, t_j^{\mathrm{e}})}~\land\\
&\quad s\left(\mathbf{w}_i, \mathbf{w}_j\right)>\tau~\land~k_i\neq k_j)\\
0&(\mathrm{otherwise}),
\end{cases}
\label{eq:adjacency_matrix}\end{aligned}$$ where $\tau\in[0,1]$ is the threshold value. Here $s\left(\mathbf{w}_i,\mathbf{w}_j\right)$ is the similarity between $\mathbf{w}_i$ and $\mathbf{w}_j$ defined as follows: $$\begin{aligned}
s\left(\mathbf{w}_i,\mathbf{w}_j\right)\coloneqq\frac{\max\left({\left\lvert\mathbf{w}_i\right\rvert},{\left\lvert\mathbf{w}_j\right\rvert}\right)-d\left(\mathbf{w}_i,\mathbf{w}_j\right)}{\min\left({\left\lvert\mathbf{w}_i\right\rvert},{\left\lvert\mathbf{w}_j\right\rvert}\right)},\end{aligned}$$ where $d(\mathbf{w}_i, \mathbf{w}_j)$ is the Levenshtein distance between $\mathbf{w}_i$ and $\mathbf{w}_j$, and ${\left\lvert\mathbf{w}\right\rvert}$ denotes the number of words in $\mathbf{w}$. With this adjacency matrix, all the elements in $\mathcal{W}$ can be clustered into $C$ clusters. We denote the clustering result as $\mathcal{C}=\{c_u\}_{u=1}^{U}\in\{1,\dots,C\}^U$, which fulfill $c_i=c_j$ if a path between $i$-th and $j$-th elements exists in $A$ and $c_i\neq c_j$ otherwise. Assuming that $\mathcal{W}_{k,c}\subseteq\mathcal{W}$ is the set of ASR results which belong to the cluster $c$ and are uttered by speaker $k$, we obtain the representative speaker $k^c$ of the cluster $c$ by $$\begin{aligned}
k^c&=\operatorname*{arg\,max}_{k\in\{1,\dots,K\}}f(\mathcal{W}_{k,c}),\end{aligned}$$ where $f(\cdot)$ is the selection function. In this study, we select the speaker with the longest utterance(s), , $f(\mathcal{W}_{k,c})=\sum_{(\mathbf{w},k,t^\mathrm{s},t^\mathrm{e})\in\mathcal{W}_{k,c}}{{\left\lvert\mathbf{w}\right\rvert}}$. The set of de-duplicated ASR results $\mathcal{W}'$ can be obtained as follows: $$\begin{aligned}
\mathcal{W}'&=\bigcup_{c\in\{1,\dots,C\}}\mathcal{W}_{k^c,c}.\end{aligned}$$
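The whole procedure can be sketched as follows (Python). The threshold $\tau=0.5$ is a hypothetical choice, the word sequences are treated as token lists, and any edit-distance routine can replace the simple Levenshtein implementation used here.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def levenshtein(a, b):
    """Edit distance between two token sequences (simple dynamic-programming version)."""
    d = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, wb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))
    return d[-1]

def similarity(wi, wj):
    return (max(len(wi), len(wj)) - levenshtein(wi, wj)) / min(len(wi), len(wj))

def reduce_duplicates(results, tau=0.5):
    """results: list of (words, speaker, t_start, t_end); returns the de-duplicated list."""
    U = len(results)
    A = np.zeros((U, U), dtype=bool)
    for i in range(U):
        wi, ki, si, ei = results[i]
        for j in range(i + 1, U):
            wj, kj, sj, ej = results[j]
            if max(si, sj) < min(ei, ej) and ki != kj and similarity(wi, wj) > tau:
                A[i, j] = A[j, i] = True            # overlapping in time, similar text, different speakers
    _, comp = connected_components(csr_matrix(A), directed=False)
    kept = []
    for c in np.unique(comp):
        members = [u for u in range(U) if comp[u] == c]
        totals = {}                                 # total word count per speaker in this cluster
        for u in members:
            totals[results[u][1]] = totals.get(results[u][1], 0) + len(results[u][0])
        best = max(totals, key=totals.get)          # representative speaker of the cluster
        kept += [results[u] for u in members if results[u][1] == best]
    return kept
```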
Experiments
===========
Data
----
![Recording environment. - denote smartphones, each of which is equipped with a monaural microphone.[]{data-label="fig:recording_environment"}](figs/recording_environment.pdf){width="0.8\linewidth"}
To evaluate the performance of our method, we collected eight sessions of real meeting data. The recording environment is shown in . Each session had at most eight participants and was recorded by 11 smartphones distributed on the table. Each smartphone was equipped with a monaural microphone to record meetings at / 16 bit. Each participant wore a headset microphone, and the groundtruth transcriptions were based on the headset recordings. The statistics of collected data are shown in . The recordings correspond to about two hours of meetings with an average overlap ratio of .
Results
-------
We investigated various combinations of asynchronous distributed microphones: 2 microphones (& in ), 3 microphones (&&), 6 microphones (-), and 11 microphones (-). For comparison, we also evaluated the performance of one monaural microphone () and of headset microphones that the participants wore during each session.
The character error rates (CERs) obtained using various microphone combinations in each session are shown in . In these experiments, the weighting parameter $\lambda$ in was set to $1.0$. By using multiple microphones, we were able to reduce the CERs, especially when a large number of microphones was used. Note that in the two-, three-, and six-microphone settings, using more microphones did not always result in better CERs. This is because the sets of microphones in these settings are disjoint and the CERs depended strongly on the positions of the microphones and speakers. On the other hand, we observed the best CERs in almost every session by using all 11 microphones. This result indicated that adding microphones has almost no negative effect on CERs. In , we also showed CERs with 11 microphones in the case when the oracle diarization was used for GSS. It achieved a CER of , which is only 2.1 percentage points worse than the CER of obtained using headset microphones. It can be said that our method can potentially achieve nearly headset-level CERs when it is used with a more powerful diarization method [@fujita2019end2; @medennikov2020stc; @horiguchi2020endtoend].
In we show the average CERs over sessions with various weighting parameters $\lambda$ in . Combining speaker-characteristics-based and power-ratio-based features improved transcription performance, especially when the number of microphones was small and the power ratios thus carried less information about the speakers’ directions.
Finally, we conducted ablation studies by removing, respectively, the binary closing in diarization, the speech enhancement (using the recordings of the reference microphone instead), and the duplication reduction. Here we used 11 microphones with $\lambda=1.0$. The results are shown in . We found degradations of 1.9, 9.1, and 3.2 percentage points from the baseline by removing binary closing, speech enhancement, and duplication reduction, respectively. From these results, we concluded that all three components contributed to the improvement of the CER.
The oracle diarization was used for speech enhancement.
Conclusions
===========
In this paper, we proposed a meeting transcription system based on utterance-wise processing using asynchronous distributed microphones. It consists of the following modules: blind synchronization, speaker diarization, speech enhancement, speech recognition, and duplication reduction. Evaluation on the real meeting data showed the effectiveness of our framework and its components, and also showed that it could perform comparably to the headset microphone-based transcription if the oracle diarization was given. The future perspective of this research is to operate this framework in an online manner.
---
abstract: 'The radiation of twisted photons by undulators filled with a homogeneous dielectric dispersive medium is considered. The general formulas for the average number of radiated twisted photons are obtained. The radiation of undulators in the dipole regime and the radiation of the helical and planar wigglers are studied in detail. It is shown that the selection rules for radiation of twisted photons established for undulators in a vacuum also hold for undulators filled with a dielectric medium. In the case of a medium with plasma permittivity the lower undulator harmonics do not form. This fact can be used for generation of twisted photons with nonzero orbital angular momentum at the lowest admissible harmonic. The use of the effect of inverse radiation polarization for generation of twisted photons with larger orbital angular momentum is described. The influence of the anomalous Doppler effect on the projection of the total angular momentum of radiated twisted photons is investigated. The parameters of the undulator and the charged particles are found such that the produced radiation, in particular, the Vavilov-Cherenkov radiation, is a pure source of twisted photons with definite nonzero orbital angular momentum. The developed theory is used to describe the radiation of twisted photons by beams of electrons and protons in the undulators filled with helium. We also consider the radiation of X-ray twisted photons by electrons in the undulator filled with xenon. The parameters are chosen so as to be achievable at the present experimental facilities.'
author:
- |
O.V. Bogdanov${}^{1),2)}$[^1], P.O. Kazinski${}^{1)}$[^2], and G.Yu. Lazarenko${}^{1),2)}$[^3]\
[${}^{1)}$ Physics Faculty, Tomsk State University, Tomsk 634050, Russia]{}\
[${}^{2)}$ Division for Mathematics and Computer Sciences]{},\
[Tomsk Polytechnic University, Tomsk 634050, Russia]{}
title: |
[**Generation of twisted photons by\
undulators filled with dispersive medium**]{}
---
Introduction
============
The undulator radiation represents a unique source of photons that combines a high degree of coherence and intensity with a large flexibility of its parameters and availability at the acceleration facilities. Nowadays the free-electron lasers (FELs) employing the undulator radiation are the standard tool for generation of an intense flux of coherent photons from THz up to X-ray spectral ranges [@NovoFEL; @XFEL; @HemStuXiZh14; @Hemsing7516; @Rubic17; @PRRibic19; @Gover19rmp]. Usually, the chamber in which the charged particles move in the undulator, or the whole undulator itself, is kept in an ultra-high vacuum. Nevertheless, there are theoretical and experimental works where the undulators and FELs filled with a dielectric medium are studied [@GinzbThPhAstr; @BazZhev77; @GevKorkh; @BarysFran; @SahKot05; @SahKotGrig; @KotSah12; @KonstKonst; @Appolonov; @GrichSad; @Reid93; @Pantell90; @YarFried; @Reid89prl; @ReidPant; @Pantell86; @Fisher88; @Pantell89; @Feinstein89prl; @ArutOgan94]. The presence of a medium in the undulator degrades the properties of the electron beam evolving in it but, on the other hand, one may adjust the parameters of the beam and the medium in such a way that the undulator will produce the photons with larger energies and narrower spectral bands than the vacuum undulator for a given energy of electrons. The degradation of the electron beam can be overcome. This is possible even in FELs where the coherence properties of radiation strongly depend on the beam configuration [@Reid93; @Pantell90; @Reid89prl; @ReidPant; @YarFried; @Fisher88; @Pantell89; @Feinstein89prl]. The use of proton beams makes this problem virtually negligible though, of course, the generation of undulator radiation becomes a much harder task.
The undulator radiation is known to be a source of twisted photons [@SasMcNu; @AfanMikh; @BordKN; @HemMar12; @BHKMSS; @HKDXMHR; @RibGauNin14; @KatohPRL; @KatohSRexp; @BKL2; @ABKT; @EpJaZo; @BKb; @BKL4; @EppGusel19; @BKL6; @BKLb; @Rubic17; @HemStuXiZh14], i.e., the source of the quanta of the electromagnetic field with the definite energy, the momentum projection onto the direction of propagation, the projection of the total angular momentum onto this axis, and the helicity [@GottfYan; @JaurHac; @BiaBirBiaBir; @JenSerprl; @JenSerepj; @PadgOAM25; @Roadmap16; @KnyzSerb; @TorTorTw; @AndBabAML; @TorTorCar; @MolTerTorTor; @MolTerTorTorPRL]. In the present paper, we investigate the influence of the homogeneous dispersive dielectric medium loaded in the undulator on the properties of radiation of twisted photons. In [@BKL5] the general theory was developed for the radiation of twisted photons by charged particles moving in an inhomogeneous dispersive medium. We apply this theory to describe the radiation of twisted photons by undulators filled with a medium. As a result, some general properties and selection rules for the radiation produced by these undulators are established. In particular, we find that the selection rules for radiation of twisted photons in vacuum undulators are also fulfilled for radiation of twisted photons in undulators filled with a homogeneous dispersive medium.
It turns out that the use of these undulators offers certain advantages in generation of twisted photons in comparison with the vacuum undulators in addition to the merits mentioned above. Namely, the lower harmonics of the radiation generated in undulators filled with a dielectric medium having a plasma permittivity do not form for sufficiently large plasma frequencies. This allows one to generate the photons possessing a nonzero orbital angular momentum at the lowest radiation harmonic. In the case of vacuum undulators, this is possible only by the use of helically microbunched beams of particles [@HKDXMHR; @HemStuXiZh14]. Furthermore, suitably adjusting the energy of charged particles and the parameters of the medium, one can twist the Vavilov-Cherenkov (VC) radiation produced in the helical undulator. The photons of this radiation possessing zero projection of the total angular momentum become almost completely circularly polarized. Then the projection of their orbital angular momentum is $-s$, where $s$ is the helicity of radiated photons. The use of planar wigglers filled with a dielectric medium allows one to obtain the VC radiation with nonzero projection of the total angular momentum. However, the probability of radiation of twisted photons obeys the reflection symmetry [@BKL3; @BKL6; @BKL2] in this case.
Another interesting effect that we investigate in the paper is the inverse polarization of the helical undulator radiation in the paraxial regime. This effect manifests itself as domination of the radiation polarization that is inverse to the chirality of helical trajectory of a charged particle in the undulator. This effect also exists in the vacuum helical undulators (see, e.g., [@Bord.1; @BKL4]) but, for certain parameters, it becomes more pronounced in the helical undulators filled with a medium. For a given positive harmonic of undulator radiation, the twisted photons with parameters belonging to the domain of inverse polarization possess the modulus of the projection of the orbital angular momentum by $2\hbar$ more than in the domain of the usual radiation polarization. The negative harmonics of undulator radiation appear when the generation of VC radiation becomes possible. These harmonics correspond to the anomalous Doppler effect [@GrichSad; @GinzbThPhAstr; @BazZhev77; @BarysFran; @Nezlin; @KuzRukh08; @ShiNat18]. We study the influence of the anomalous Doppler effect on the properties of twisted photons produced in undulators. The twisted photons radiated at these harmonics in the helical undulator carry the projection of the total angular momentum $m={\varsigma}n$, where ${\varsigma}=\pm1$ is the chirality of the particle trajectory. In other words, the selection rule for radiation of twisted photons in helical undulators [@EppGusel19; @BKL4; @BKL2; @KatohSRexp; @KatohPRL; @BHKMSS; @SasMcNu] remains intact in the presence of a homogeneous dielectric medium but the sign of the harmonic number $n$ may take both values.
As examples, we consider the radiation of electrons and protons in undulators and wigglers filled with helium. We also describe the radiation of X-ray twisted photons with the projection of the orbital angular momentum $l=2$ produced by electrons in the undulator filled with xenon. In the latter case, the radiation is created near the photoabsorption $M$-edge of xenon. The same mechanism for generation of X-ray VC radiation from the beam of protons traversing the target made of amorphous carbon was experimentally verified in [@BazylevXrayVC]. The X-ray VC radiation from other materials was also observed [@BazylZhev; @KvdWLV]. The use of X-ray VC radiation for diagnostics of particle beams can be found in [@XrayVC2; @XrayVC1]. Notice that the angular momentum of radiated twisted photons in all the considered examples can be shifted to larger values by employing the addition rule valid for the radiation produced by helically microbunched beams of charged particles [@HemMar12; @HKDXMHR; @TrHemsing; @ExpHemsing; @BKLb]. The intensity of radiation can be increased with the help of the coherent radiation created by a periodic train of bunches, the frequency of one of the coherent harmonics must coincide with the energy of twisted photons radiated by one charged particle [@Ginzburg; @KuzRukh08; @Gover19rmp; @PRRibic19; @Rubic17; @Hemsing7516; @HemStuXiZh14; @KKST; @SPTS]. Currently, the highest number of a distinguishable coherent harmonic of the electron bunch train is of order $100$ with the corresponding energy of photons $474$ eV [@PRRibic19].
The paper is organized as follows. In Sec. \[Gener\_Form\_Sec\], the general formulas for the average number of twisted photons produced by charged particles traversing a dielectric plate are presented. We find the transformation law of the mode functions of twisted photons and establish the selection rules for their radiation. The formula for the average number of twisted photons created by Gaussian and helically microbunched beams of identical charged particles is also given. In Sec. \[Undul\_Sec\], we derive the average number of twisted photons radiated in the undulator filled with a homogeneous dispersive medium. We start with the estimates of multiple scattering of the radiating charged particles and find the restrictions on the parameters of the particle beam and the medium when this scattering can be neglected in describing the properties of radiation. Then, in Sec. \[Dip\_Appr\_Sec\], we obtain the general formula for the average number of twisted photons radiated by the undulator in the dipole regime. The properties of the energy spectrum of radiated photons are also discussed. Section \[Hel\_Wig\_Sec\] is devoted to the helical wiggler filled with a medium. We derive the formula for the average number of twisted photons produced by it and analyze the polarization properties of this radiation paying a special attention to the influence of the radiation polarization on the orbital angular momentum of radiated twisted photons. In Sec. \[Plan\_Wigg\_Sec\], we completely describe the radiation of a planar wiggler filled with a medium in terms of the twisted photons. In Conclusion we summarize the results. Throughout the paper we use the system of units such that $\hbar=c=1$ and $e^2=4\pi{\alpha}$, where ${\alpha}\approx1/137$ is the fine structure constant.
General formulas {#Gener_Form_Sec}
================
In the paper [@BKL5], the formalism was developed that allows one to describe the radiation of twisted photons by charged particles moving in an inhomogeneous dispersive medium. In particular, the formula was obtained for the probability to detect a twisted photon created by the charged particle passing through the dielectric plate of width $L$ possessing the permittivity ${\varepsilon}(k_0)>0$ (see Sec. V.A of [@BKL5]). For the reader's convenience, we present here some general formulas from that paper. Notice that such a dielectric plate can be a gas or liquid confined in a cuvette of a proper form.
Let the twisted photon recorded by the detector possess the helicity $s$, the projection of the total angular momentum $m$ to the axis $3$ (the axis $z$), the projection of the momentum $k_3$, and the modulus of the perpendicular component of the momentum $k_\perp$. The energy of such a photon is $$k_0=\sqrt{k_\perp^2+k_3^2},$$ and its state in the Coulomb gauge in the vacuum is characterized by the wave function [@GottfYan; @JaurHac; @BiaBirBiaBir; @JenSerprl; @JenSerepj; @KnyzSerb; @BKL2] $$\label{tw_phot_vac}
\begin{gathered}
\psi_3(m,k_3,k_\perp;{\mathbf{x}})=j_m(k_\perp x_+,k_\perp x_-) e^{ik_3x_3},\qquad \psi_\pm(s,m,k_3,k_\perp;{\mathbf{x}})=\frac{in_\perp}{s\pm n_3}\psi_3(m\pm1,k_3,k_\perp;{\mathbf{x}}),\\
\boldsymbol{\psi}(s,m,k_3,k_\perp;{\mathbf{x}})= \frac12\big[\psi_-(s,m,k_3,k_\perp;{\mathbf{x}}){\mathbf{e}}_+ +\psi_+(s,m,k_3,k_\perp;{\mathbf{x}}){\mathbf{e}}_-
\big]+\psi_3(m,k_3,k_\perp;{\mathbf{x}}){\mathbf{e}}_3,
\end{gathered}$$ where $n_\perp:=k_\perp/k_0$, $n_3:=k_3/k_0$, and ${\mathbf{e}}_\pm={\mathbf{e}}_1\pm i{\mathbf{e}}_2$. The basis unit vectors $\{{\mathbf{e}}_1,{\mathbf{e}}_2,{\mathbf{e}}_3\}$ constitute a right-handed triple and the unit vector ${\mathbf{e}}_3$ is directed along the $z$ axis.
The presence of the dielectric plate changes the vacuum mode functions of twisted photons. Let the dielectric plate be situated at $z\in[-L,0]$ and the detector of twisted photons be located in the vacuum in the region $z>0$. We assume that the typical size of the plate along the $x$ and $y$ axes is large and neglect the influence of the edge effects on the properties of radiation [@Pafomov]. As a rule, this is justified for the radiation of relativistic particles as long as their radiation is concentrated in a narrow cone. In the presence of the dielectric plate, the mode functions of the twisted photons have the form $$\label{vac_mode_f}
\begin{alignedat}{2}
a&\boldsymbol{\psi}(s,m,k_3,k_\perp)&\quad&\text{for $z>0$},\\
a&\big[b_+\boldsymbol{\psi}'(1,m,k'_3,k_\perp) +b_-\boldsymbol{\psi}'(-1,m,k'_3,k_\perp) +(k'_3\leftrightarrow-k'_3)\big]&\quad&\text{for $z\in[-L,0]$},\\
a&\big[a_+\boldsymbol{\psi}(1,m,k_3,k_\perp) +d_+\boldsymbol{\psi}(1,m,-k_3,k_\perp) +a_-\boldsymbol{\psi}(-1,m,k_3,k_\perp) +d_-\boldsymbol{\psi}(-1,m,-k_3,k_\perp)\big]&\quad&\text{for $z<-L$},
\end{alignedat}$$ where $$\psi'_3(m,k'_3,k_\perp)=\psi_3(m,k'_3,k_\perp),\qquad\psi'_\pm(s',m,k'_3,k_\perp)=\frac{in_\perp}{s'{\varepsilon}^{1/2}(k_0)\pm n'_3}\psi'_3(m\pm1,k'_3,k_\perp),$$ and $$\label{mass_shell_plate2}
n'_3:=k_3'/k_0=\sqrt{{\varepsilon}(k_0)-n_\perp^2}.$$ Also $$\label{Fresnel_coeff}
\begin{split}
b_\pm&=\frac{{\varepsilon}^{1/2}\pm s}{4{\varepsilon}n'_3}(\pm sn'_3+{\varepsilon}^{1/2}n_3),\\
a_{\pm}&=\frac{2 (1\pm s){\varepsilon}n_3n'_3\cos(k'_3L)-i({\varepsilon}^{2}n_{3}^{2}+n'^2_{3}\pm s{\varepsilon}(n_{3}^{2}+n'^2_{3}))\sin(k'_3L)}{4{\varepsilon}n_3n'_3}e^{ik_3L}, \\
d_{\pm}&=-i\frac{{\varepsilon}^{2}n_{3}^{2}-n'^2_{3}\pm s{\varepsilon}(n_{3}^{2}-n'^2_{3})}{4{\varepsilon}n_3n'_3}\sin(k'_3L) e^{-ik_3L},
\end{split}$$ where $s$ is the helicity of the mode function . The coefficients obey the unitarity relation $$1+|d_+|^2+|d_-|^2=|a_+|^2+|a_-|^2,$$ for real-valued $k_3$. The constant $a$ is found from the normalization of the mode functions: $$|a|^{-2}=|a_+|^2+|a_-|^2=\Big|1+\frac18\Big[({\varepsilon}^2+1)\Big(\frac{n^2_3}{n'^2_3} +\frac{n'^2_3}{{\varepsilon}^2 n^2_3}\Big)-4\Big]\sin^2(k'_3 L)\Big|.$$ The above formulas can be generalized to the case of the medium with absorption [@BKL5].
Let us denote as $\boldsymbol{\Phi}(s,m,k_\perp,k_3;{\mathbf{x}})$ the mode function . Then the probability to record the twisted photon created by the particles with charges $e_l$ is given by $$\begin{gathered}
\label{prob_plate}
dP(s,m,k_\perp,k_3)=|a|^2\bigg|\sum_{l}e_l\int_{-\infty}^\infty d\tau e^{-ik_0 x^0_l(\tau_l)}
\Big\{\dot{x}_{3l}(\tau_l)\Phi_3(s,m,k_\perp,k_3;{\mathbf{x}}_l(\tau_l))+\\
+\frac12\big[\dot{x}_{+l}(\tau_l)\Phi_-(s,m,k_\perp,k_3;{\mathbf{x}}_l(\tau_l)) +\dot{x}_{-l}(\tau_l)\Phi_+(s,m,k_\perp,k_3;{\mathbf{x}}_l(\tau_l))\big] \Big\} \bigg|^2 n_\perp^3\frac{dk_3 dk_\perp}{16\pi^2},\end{aligned}$$ where $x_l^\mu(\tau)$ are the world lines of particles. Strictly speaking, the quantity is the probability to detect a twisted photon only in the first Born approximation with respect to the classical current of the charged particle. If one neglects the quantum recoil and replaces the current operator by a c-number quantity, then the equations of quantum electrodynamics are exactly solvable. In that case, the probability to record a twisted photon is expressed through , whereas the expression is equal to the average number of radiated twisted photons (see for details [@BKL2; @BKL4; @BKL5; @ippccb]).
Notice some general properties of the expression . Let $A(s,m,k_\perp,k_3;j]$ be the amplitude of radiation of a twisted photon by the current $j_i$ entering into . Then on rotating the current $j_i(x)$ around the detector axis by the angle of ${\varphi}$, $j_i\rightarrow j_i^{\varphi}$, the amplitude transforms as $$A(s,m,k_\perp,k_3;j^{\varphi}]=e^{im{\varphi}} A(s,m,k_\perp,k_3;j],$$ i.e., it has the same transformation law as in a vacuum. Consequently, the selection rules established in Sec. 2 of [@BKL3] also hold for the radiation of twisted photons by charged particles in the presence of a dielectric plate. Furthermore, the radiation produced by the current of particles moving parallel to some plane containing the detector axis, i.e., the current being such that $\arg j_+=const$, obeys the reflection symmetry $$\label{refl_symm}
dP(s,m,k_\perp,k_3)=dP(-s,-m,k_\perp,k_3).$$ The proof of this relation is the same as it was given in [@BKL2] for the radiation from a charged particle moving along a planar trajectory in a vacuum. Selecting suitably the axes $x$ and $y$, one may put $j_+(x)=j_-(x)$. Then, performing the rotation around the $z$ axis by the angle of $\pi$, we obtain that $j^\pi_+(x)=j^\pi_-(x)$ and $$j_3\rightarrow j^\pi_3,\qquad j_\pm\rightarrow -j^\pi_\pm,\qquad\Phi_3(s,m)\rightarrow\Phi_3(-s,-m),\qquad \Phi_\pm(s,m)\rightarrow-\Phi_\mp(-s,-m).$$ Whence $$A(s,m,k_\perp,k_3;j_\pi]=A(-s,-m,k_\perp,k_3;j]=e^{im\pi}A(s,m,k_\perp,k_3;j].$$ Taking the modulus squared of the both parts of the last equality, we arrive at the relation .
Further, we shall need the formula for the probability of radiation of twisted photons by the beam of identical charged particles with small dispersion of the initial momenta (see for more detail [@BKb; @BKLb]). This formula is deduced from the general formula . The radiation produced by a helically microbunched beam of particles with the helix pitch ${\delta}$ and the chirality $\chi_b=\pm1$ is concentrated at the harmonics $$\label{coherent_harm}
k_0=2\pi\chi_b n_c{\beta}_3/{\delta},\qquad \chi_b n_c>0,\;n_c\in \mathbb{Z},$$ where ${\beta}_3$ is the velocity of particles along the axis $3$. At these harmonics, the average number of radiated twisted photons becomes $$\label{dP_bunch}
dP_\rho(s,m,k_\perp,k_3)=N\sum_{j=-\infty}^\infty f_{m-j}dP_1(s,j,k_\perp,k_3) +N(N-1)|\bar{{\varphi}}_{n_c}|^2dP_1(s,m-n_c,k_\perp,k_3),$$ where $N$ is the number of particles, $dP_1(s,m,k_\perp,k_3)$ is the average number of twisted photons created by one charged particle moving along the center of the beam, $f_{m}$ and $\bar{{\varphi}}_{n}$ are the incoherent and coherent interference factors, respectively. The explicit expressions for these factors are presented in [@BKb; @BKLb]. If $k_3{\delta}\gg1$ and $k_3{\sigma}_3\gg1$, where ${\sigma}_3$ is the longitudinal dimension of the beam, then the coherent contribution is strongly suppressed and formula is valid for any energy of the radiated photon. Notice that the usual Gaussian beam of particles is obtained from the helically microbunched one when ${\delta}\gg{\sigma}_3$.
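For orientation, the coherent harmonic energies follow directly from the relation above. A quick numeric check with $\hbar c$ restored, using the beam parameters of the example in Fig. \[VC\_pol\_beam\_plots\] below (the check itself is only illustrative):

```python
import math

HBARC_EV_NM = 197.327                      # hbar*c in eV*nm

def coherent_energy_eV(n_c, delta_nm, beta3=1.0):
    """k_0 = 2*pi*chi_b*n_c*beta_3/delta with chi_b*n_c > 0, returned in eV
    for a helix pitch delta given in nm."""
    return 2.0 * math.pi * n_c * beta3 * HBARC_EV_NM / delta_nm

# about 6.9 eV, cf. the helically microbunched beam example below
print(coherent_energy_eV(n_c=2, delta_nm=360.0))
```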
Undulator {#Undul_Sec}
=========
Consider the radiation of twisted photons by one charged particle moving in the undulator filled with the dielectric medium with permittivity ${\varepsilon}(k_0)$. The trajectory of the particle has the form (see, e.g., [@Bord.1]) $$\label{traj_undul}
x^i(t)=r^i(t)+{\beta}^i t,\qquad {\beta}^i=(0,0,{\beta}_3),\qquad t\in[-TN_u,0],\qquad L=TN_u,$$ where $t$ is the laboratory time, ${\beta}_3\in[0,1)$, $r^i(t)$ is a periodic function of $t$ with the period $T=:2\pi/\omega$ and zero average over this period, $N_u\gg1$ is the number of undulator sections, the length of one section is ${\lambda}_0=2\pi{\beta}_3/\omega$. For $t<-TN_u$ or $t>0$, the particle moves along the undulator axis with the velocity $${\beta}_\parallel:=\sqrt{1-1/{\gamma}^2}.$$ The parts of the particle trajectory are joined continuously at the instants $t=-TN_u$ and $t=0$. The undulator strength parameter $K$ is determined by the relation $${\beta}_3^2=1-\frac{1+K^2}{{\gamma}^2}.$$ It can also be defined as $$K^2={\gamma}^2{\langle}{\beta}_\perp^2{\rangle},$$ where the angular brackets denote the average over the trajectory period. The magnetic field strength in the undulator is $$eH=\frac{\omega m_p}{z{\beta}_3} \sqrt{2}K,$$ where $m_p$ is the mass of a charged particle and $z$ is its charge in the units of the elementary charge. In the relativistic case, ${\gamma}\gg1$, the following estimates for the parameters of particle’s trajectory hold $$\label{rel_appr}
r^2_{1,2}\sim\frac{K^2}{\omega^2{\gamma}^2},\qquad |r_3|\lesssim\frac{K^2}{2\pi\omega{\gamma}^2},\qquad \frac{K}{{\gamma}}\ll1,\qquad{\beta}_3\approx1-\frac{1+K^2}{2{\gamma}^2}.$$ We also assume throughout this paper that the axis of the detector of twisted photons coincides with the undulator axis. Then the periodic part of the particle trajectory is written as $$\textbf{r}=\frac12(r_+{\mathbf{e}}_- +r_-{\mathbf{e}}_+)+r^3{\mathbf{e}}_3,$$ where $r_\pm=r^1\pm ir^2$.
We will be interested in the contribution to radiation of twisted photons produced by the charged particle on the part of the trajectory $t\in[-TN_u,0]$, i.e., we will discard the contribution of transition radiation. Such an approximation is valid for sufficiently large $N_u$. Besides, we will assume that the multiple scattering of the charged particle on the particles of the medium does not considerably affect the properties of radiation produced by one particle. This is justified (see, e.g., [@BazylevXrayVC; @BazylZhev]) when the average square of the multiple scattering angle [@BazylZhev; @BazylevXrayVC; @Migdal57; @TMikaelian; @PDG10], $$q\approx\Big(\frac{zE_s}{m_p{\gamma}{\beta}_\parallel^2}\Big)^2\frac{L}{L_{rad}}=\frac{4\pi z^2}{{\alpha}{\gamma}^2{\beta}_\parallel^4}\frac{m_e^2L}{m_p^2L_{rad}},$$ satisfies $$q\lesssim n_\perp^2,\qquad q\lesssim {\langle}{\beta}_\perp^2{\rangle}=K^2/{\gamma}^2.$$ Here $m_e$ is the electron mass and $L_{rad}$ is the radiation length. The formula for the radiation length in a medium consisting of a single type of nuclei with the charge $Z$ reads as [@PDG10] $$L_{rad}^{-1}=\frac{M_p}{716.4}n_m Z(Z+1)\ln\frac{287}{Z^{1/2}},$$ where $M_p$ is the proton mass in grams and $n_m$ is the number of medium particles per cm${}^{3}$. To run such an undulator in the FEL regime, more stringent conditions on the parameters of the particle beam and the dielectric medium must be imposed [@Reid93; @Pantell90; @Reid89prl; @ReidPant; @YarFried; @Fisher88; @Pantell89; @Feinstein89prl; @Appolonov].
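The restrictions imposed by multiple scattering are easy to check numerically for a concrete gas and beam. A Python sketch of the estimate, evaluated for the helium-filled undulator of Fig. \[plsm\_perm\_plots\] below (${\gamma}=500$, $K=1/5$, $N_u=40$, ${\lambda}_0=1$ cm, helium at $1/4$ atm and $0$ ${}^o$C); the concrete numbers serve only as an illustration:

```python
import math

ALPHA = 1 / 137.036
M_P_GRAM = 1.6726e-24              # proton mass in grams
K_B = 1.380649e-23                 # Boltzmann constant, J/K

def radiation_length_cm(n_m_cm3, Z):
    """L_rad from the single-element formula of the text; result in cm."""
    inv = (M_P_GRAM / 716.4) * n_m_cm3 * Z * (Z + 1) * math.log(287.0 / math.sqrt(Z))
    return 1.0 / inv

def mean_square_angle(z, gamma, K, L_cm, L_rad_cm, mass_ratio=1.0):
    """q = 4*pi*z^2/(alpha*gamma^2*beta^4) * (m_e/m_particle)^2 * L/L_rad;
    mass_ratio = m_e/m_particle (1 for electrons)."""
    beta2 = 1.0 - (1.0 + K**2) / gamma**2
    return 4 * math.pi * z**2 / (ALPHA * gamma**2 * beta2**2) * mass_ratio**2 * L_cm / L_rad_cm

n_m = 0.25 * 101325 / (K_B * 273.15) * 1e-6        # helium at 1/4 atm, 0 C, per cm^3
L_rad = radiation_length_cm(n_m, Z=2)
gamma, K, L = 500.0, 0.2, 40.0                     # L = N_u * lambda_0 in cm
q = mean_square_angle(z=1, gamma=gamma, K=K, L_cm=L, L_rad_cm=L_rad)
print(q, K**2 / (gamma**2 * q))                    # K^2/(q*gamma^2) is about 1.2
```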
Dipole approximation {#Dip_Appr_Sec}
--------------------
Let us find, at first, the average number of twisted photons radiated by a charged particle in the undulator in the dipole regime when $$\label{dipole_appr}
|k'_3r_3|\ll1,\qquad k_\perp|r_+|\ll1,\qquad K\ll1.$$ The evaluation of the radiation amplitude entering into is reduced to the evaluation of the integrals $$\label{I_integrals}
\begin{split}
I_3&=\int_{-TN_u}^0 dt\dot{x}_3e^{-ik_0t+ik_3'x_3(t)}j_m\big(k_\perp x_+(t),k_\perp x_-(t)\big),\\
I_\pm&=\frac{in_\perp}{s'{\varepsilon}^{1/2}\mp n_3'}\int_{-TN_u}^0 dt\dot{x}_\pm e^{-ik_0t+ik_3'x_3(t)}j_{m\mp1}\big(k_\perp x_+(t),k_\perp x_-(t)\big).
\end{split}$$ In the dipole approximation , we have $$\label{j_m_expans}
\begin{split}
j_m&={\delta}_{m0}\Big(1-\frac14k_\perp^2|r_+|^2\Big) +\frac12{\delta}_{m1}k_\perp r_+ -\frac12{\delta}_{m,-1}k_\perp r_-+\cdots,\\
j_{m+1}&={\delta}_{m,-1} +\frac12{\delta}_{m0}k_\perp r_+ -\frac12{\delta}_{m,-2}k_\perp r_-+\cdots,\\
j_{m-1}&={\delta}_{m1} +\frac12{\delta}_{m2}k_\perp r_+ -\frac12{\delta}_{m0}k_\perp r_-+\cdots,\\
\end{split}$$ where the properties of the functions $j_m$ have been used (see (A3), (A4) of [@BKL2]). For brevity, we do not write out the arguments of the functions $j_m$.
![[The average number of twisted photons produced by electrons moving in the helical undulator with the chirality ${\varsigma}=1$. The Lorentz factor is ${\gamma}=500$, $E=256$ MeV. The undulator is filled with helium under the pressure $1/4$ atm and the temperature $0$ ${}^o$C. The gas concentration is calculated as $n_m=p/(k_BT)$. In accordance with formula , the plasma frequency $\omega_p\approx0.14$ eV. The number of undulator periods $N_u=40$ and the period ${\lambda}_0=1$ cm. The undulator strength parameter $K=1/5$, which corresponds to the magnetic field strength $H=3.03$ kG in the undulator. The projection of the total angular momentum of radiated photons per photon is $\ell=3$ and the energy of photons at the third harmonic is $k_0=112$ eV. The ratios $n_\perp^2/q\approx7.3$ and $K^2/(q{\gamma}^2)\approx1.2$, which means that the multiple scattering can be neglected. The dependence $k_0(n_\perp)$ for the different harmonics is presented on the left panel in Fig. \[znk\_plots\]. At the given energy of photons, the harmonics with $n<3$ are not formed (see the plot for $n=m=1$). On the left panel: The radiation from one electron is described. The radiation of twisted photons with $s=-1$ is strongly suppressed. On the right panel: The radiation from the beam of electrons is considered. The beam is supposed to have a Gaussian profile with the longitudinal dimension ${\sigma}_3=150$ $\mu$m (duration $0.5$ ps) and the transverse size ${\sigma}_\perp=1.25$ $\mu$m. The coherent contribution to radiation of twisted photons is strongly suppressed. We see that the radiation of photons with projection of the orbital angular momentum $l=m-s=2$ dominates.]{}[]{data-label="plsm_perm_plots"}](vrm_dip_less_1.pdf "fig:"){width="0.49\linewidth"} ![](vrm_dip_less_N.pdf "fig:"){width="0.49\linewidth"}
It is useful to represent the periodic part of the trajectory as a Fourier series $$\label{traj_Four}
\mathbf{r}=\sum_{n=-\infty}^\infty \mathbf{r}_n e^{i\omega n t},\qquad \mathbf{r}_0=0.$$ Substituting the expansions , into and neglecting $k_3'r_3$ in the exponent, we obtain $$\begin{split}
I_3&={\delta}_{m0}{\varphi}_0\Big(\beta_3 -\frac14 k_\perp^2\sum_{n=-\infty}^\infty|r_{n+}|^2\Big) +\frac12 {\delta}_{m1}\sum_{n=-\infty}^\infty k_\perp r_{n+}{\varphi}_n -\frac12 {\delta}_{m,-1}\sum_{n=-\infty}^\infty k_\perp r_{n-}{\varphi}_n,\\
I_\pm&=\frac{n_\perp}{s'{\varepsilon}^{1/2}\mp n'_3}\Big[{\delta}_{m0}{\varphi}_0\sum_{n=-\infty}^\infty \frac{k_\perp\omega n}{2}|r_{n+}|^2 -{\delta}_{m,\pm1}\sum_{n=-\infty}^\infty \omega nr_{n\pm}{\varphi}_n \Big],
\end{split}$$ where the terms giving a negligible contribution to the radiation probability are discarded (see the estimates , ) and $$\label{vfn}
{\varphi}_n:=2\pi e^{iT N_u[k_0(1-n_3'{\beta}_3)-n\omega]/2}{\delta}_{N_u}\big(k_0(1-n_3'{\beta}_3)-n\omega\big),\qquad {\delta}_{N_u}(x):=\frac{\sin(T N_u x/2)}{\pi x}.$$ Then the contribution to the radiation amplitude of a twisted photon that comes from the wave function $\psi'(s',m,k'_3,k_\perp)$ is written as $$\begin{gathered}
I_3+\frac12(I_++I_-)={\delta}_{m0}{\varphi}_0\Big[\beta_3 -\frac14\sum_{n=-\infty}^\infty(k_\perp^2-2s'{\varepsilon}^{1/2}k_0\omega n)|r_{n+}|^2 \Big]+\\
+\frac12{\delta}_{m1}\sum_{n=-\infty}^\infty\Big[1 -\frac{\omega n}{k_0(s'{\varepsilon}^{1/2}-n'_3)}\Big] k_\perp r_{n+}{\varphi}_n -\frac12{\delta}_{m,-1}\sum_{n=-\infty}^\infty \Big[1 +\frac{\omega n}{k_0(s'{\varepsilon}^{1/2}+n'_3)}\Big] k_\perp r_{n-}{\varphi}_n.\end{gathered}$$ As a result, the average number of twisted photons produced by one relativistic charged particle in the undulator in the dipole regime becomes $$\label{dP_dip}
dP(s,m,k_\perp,k_3)=|zea|^2\big({\delta}_{m0}|B_0|^2 +{\delta}_{m1}|B_+|^2 +{\delta}_{m,-1}|B_{-}|^2\big) n_\perp^3\frac{dk_3dk_\perp}{64\pi^2},$$ where $$\begin{split}
B_0&={\varphi}_0\Big[\Big(\frac1{{\varepsilon}}+\frac{n_3}{n_3'}\Big)\Big(\beta_3 -\frac14\sum_{n=-\infty}^\infty k_\perp^2|r_{n+}|^2\Big) +\frac{s}{2} \Big(1+\frac{n_3}{n'_3}\Big) \sum_{n=-\infty}^\infty k_0\omega n|r_{n+}|^2 \Big],\\
B_\pm&=\sum_{n=-\infty}^\infty \frac{k_\perp r_{n\pm}}{2}\Big\{\frac{1}{{\varepsilon}}+\frac{n_3}{n'_3} -\frac{\omega n}{k_\perp n_\perp}\Big[ n_3 +\frac{n_3'}{{\varepsilon}} \pm s\Big(1+\frac{n_3}{n'_3}\Big)\Big] \Big\}{\varphi}_n+(k'_3\leftrightarrow-k'_3).
\end{split}$$ For $N_u\gtrsim10$, the functions $|{\varphi}_n({\sigma}k_3')|$, ${\sigma}=\pm1$, possess sharp maxima at $$\label{harmonics}
k_0=\frac{n\omega}{1-{\sigma}n'_3{\beta}_3}\;\Leftrightarrow\;n_\perp=\sqrt{{\varepsilon}(k_0)-(1-n\omega/k_0)^2/{\beta}_3^2},$$ where $n_\perp\in[0,1]$. These relations determine the undulator radiation spectrum. The peculiarities of this spectrum will be discussed below in Sec. \[spectrum\_sect\]. The plots of the average number of twisted photons produced in the helical undulator filled with helium are presented in Fig. \[plsm\_perm\_plots\].
Neglecting the terms that are small for $N_u\gtrsim10$, we have $$\label{Bpm}
|B_\pm|^2=\sideset{}{'}\sum_{n=-\infty}^\infty \frac{k_\perp^2 |r_{n\pm}|^2}{8}\Big|\frac{1}{{\varepsilon}}+\frac{n_3}{n'_3} -\frac{\omega n}{k_\perp n_\perp}\Big[ n_3 +\frac{n_3'}{{\varepsilon}} \pm s\Big(1+\frac{n_3}{n'_3}\Big)\Big] \Big|^2|{\varphi}_n|^2+(k'_3\leftrightarrow-k'_3),$$ where the prime at the sum sign reminds that only those $n$ are kept that satisfy . It is supposed that there are unforbidden harmonics from the intervals contributing considerably to for a given $k_0$. As we see, in the dipole regime, the most part of undulator radiation consists of the twisted photons with the projection of the total angular momentum $m=\{-1,0,1\}$. Recall that, in the dipole regime, the undulator in a vacuum radiates mainly the twisted photons with $m=\pm1$ [@BKL2]. The contribution with $m=0$ in the case of the undulator filled with a medium corresponds to the VC radiation. Also note that in the ultraparaxial regime when $$\label{ultraparaxial}
n_\perp^2\ll1,\qquad n_\perp^2\ll{\varepsilon},\qquad n_\perp^2\ll{\varepsilon}^{1/2}\omega n/k_0,$$ the expression can be simplified to $$|B_\pm|^2 \approx {\delta}_{s,\pm1}\big|1+{\varepsilon}^{-1/2}\big|^2 \sideset{}{'}\sum_{n=-\infty}^\infty \frac{\omega^2 n^2 |r_{n\pm}|^2}{2n_\perp^2} |{\varphi}_n|^2+(k'_3\leftrightarrow-k'_3),$$ which is valid in the leading order. Thus we see from that, in this approximation, the main contribution to the radiation of twisted photons with $m=\pm1$ comes from the twisted photons with the projection of the orbital angular momentum $l=m-s=0$. This value of the orbital angular momentum can be shifted by an integer number by employing the helically microbunched beams of charged particles as a radiator in such an undulator [@HKDXMHR; @BKLb].
### Spectrum {#spectrum_sect}
Now we discuss some typical characteristics of the radiation spectrum . For a fixed $k_0$, we have $$\label{n_interv}
n\in
\left\{
\begin{array}{ll}
\frac{k_0}{\omega}[1-{\beta}_3{\varepsilon}^{1/2},1-{\beta}_3\chi^{1/2}]\cup \frac{k_0}{\omega}[1+{\beta}_3\chi^{1/2},1+{\beta}_3{\varepsilon}^{1/2}], & \hbox{$\chi\geqslant0$;} \\[0.3em]
\frac{k_0}{\omega}[1-{\beta}_3{\varepsilon}^{1/2},1+{\beta}_3{\varepsilon}^{1/2}], & \hbox{$\chi\in(-1,0)$.}
\end{array}
\right.$$ where $\chi:={\varepsilon}-1$. The boundaries $k_0(1\mp{\beta}_3\chi^{1/2})/\omega$ of the intervals of admissible harmonic numbers correspond to the condition of the total internal reflection, $n_\perp=1$. The harmonics with $n>0$ describe the usual undulator radiation modified by the presence of the medium. In this case, ${\sigma}=\pm1$. The branch with ${\sigma}=-1$ describes the contribution to radiation from the reflected wave and $k_0<n\omega$ in this case, whereas the branch with ${\sigma}=1$ comes from the direct wave and $k_0>n\omega$. The harmonics with $n\leqslant0$ are realized only for ${\beta}_3{\varepsilon}^{1/2}\geqslant1$ and ${\sigma}=1$. The harmonic $n=0$ corresponds to the VC radiation and reproduces the condition for the Cherenkov cone with $n_\perp\equiv\sin\theta$ in this case. The harmonics with $n<0$ describe the radiation with the anomalous Doppler effect [@GrichSad; @GinzbThPhAstr; @BazZhev77; @BarysFran; @Nezlin; @KuzRukh08; @ShiNat18]. The function $n_\perp(n,k_0)$ is an increasing function of $n$ for $k_0>\omega n$, i.e., for ${\sigma}=1$, and a decreasing function of $n$ for $k_0<\omega n$, i.e., for ${\sigma}=-1$. Notice also that for $\chi\leqslant0$ and $$\label{harm_forbid}
k_0(1-{\beta}_3{\varepsilon}^{1/2})>\omega n_0,$$ the harmonics with $n\in[1,n_0]$ are not formed. If the inequality is fulfilled for any $k_0$ within the transparency window of the medium, then the lower harmonics are completely absent.
![[On the left panel: The dependence $k_0(n_\perp)$ for the harmonics $n=\overline{3,6}$ of radiation of twisted photons in the undulator with the parameters described in Fig. \[plsm\_perm\_plots\]. The harmonics with $n=\{1,2\}$ are forbidden. The thin horizontal line corresponds to $k_0=112$ eV. On the right panel: The function $|z(n_k)|$ for the different values of $\bar{\chi}$. The undulator strength parameter $K=1/2$. The inclined straight line is $z={\varsigma}\bar{k}_0 n_k/m=n_k/2$. The straight vertical line is $n_k=\sqrt{\bar{\chi}-\bar{\chi}_c}$. The straight horizontal lines: the dotted line is $z=1$; the dashed lines are $z=c_{2,1}\approx1.53$, $z=b_{2,1}\approx2.57$; the thin solid line is $z=c_{1,1}\approx1.84$. The unique intersection point of the line $z={\varsigma}\bar{k}_0 n_k/m$ with the curve $z(n_k)$ gives $n_k$ and, thereby, $\bar{k}_0$ for the radiated twisted photons. Then it is not difficult to see which polarization dominates for these parameters.]{}[]{data-label="znk_plots"}](spectr_dip_less.pdf "fig:"){width="0.48\linewidth"} ![](znk2.pdf "fig:"){width="0.48\linewidth"}
Consider an important particular case of the plasma permittivity in more detail. In this case, $$\label{plasm_perm}
{\varepsilon}(k_0)=1-\omega^2_p/k_0^2,$$ where $\omega_p$ is the plasma frequency. For the material consisting of a single type of nuclei, the following approximate formula holds [@LandLifshECM]: $$\label{plasm_freq}
\omega_p^2=4\pi{\alpha}Z n_m/m_e.$$ As long as $\chi<0$ in this case, then $n\geqslant1$ and $$k_0=\frac{n\omega\pm|{\beta}_3|\sqrt{n^2\omega^2n_3^2-(1-{\beta}_3^2 n_3^2)\omega_p^2}}{1-{\beta}_3^2 n_3^2}.$$ The energy of photons radiated at the harmonic $n$ belongs to the interval $k_0\in[k_0^-,k_0^+]$, where $$k_0^\pm=\frac{n\omega \pm|{\beta}_3|\sqrt{n^2\omega^2-(1-{\beta}_3^2)\omega_p^2}}{1-{\beta}_3^2}.$$ Notice that $k_0^-\geqslant\omega_p$ and $k_0^-=\omega_p$ only for $\omega n=\omega_p$. The maximum value of $n_\perp=n_\perp^c$ at a given harmonic is $$(n_\perp^c)^2=\frac{n^2\omega^2+\omega_p^2({\beta}_3^2-1)}{n^2\omega^2+\omega_p^2{\beta}_3^2}.$$ It corresponds to the energy $$k_0^c=\frac{n^2\omega^2+\omega_p^2{\beta}_3^2}{n\omega}.$$ Besides, if $$\label{prohib_harm}
n_0<(1-{\beta}_3^2)^{1/2}\omega_p/\omega,$$ then the harmonics with $n\in[1,n_0]$ are not formed for any $n_\perp$. This fact can be employed for generation of twisted photons with large projection of the total angular momentum $m$ at the lowest admissible harmonic (see Figs. \[plsm\_perm\_plots\] and \[znk\_plots\]). The plots of the dependence $k_0(n_\perp)$ for several lower harmonics are presented in Fig. \[znk\_plots\].
Helical wiggler {#Hel_Wig_Sec}
---------------
Now we turn to the radiation of the undulator in the wiggler regime when the conditions are violated. As in the case of an undulator in a vacuum, a simple analytic expression for the average number of twisted photons radiated by a charged particle moving along a trajectory of a general form in the wiggler filled with a medium cannot be derived. Therefore, we consider two particular cases: the ideal helical wiggler and the planar wiggler.
![[On the left panel: The dependence $k_0(n_\perp)$ for the harmonics $n=\overline{1,6}$ of radiation of twisted photons in the undulator filled with helium under the pressure $1$ atm and the temperature $0$ ${}^o$C. The experimental data for permittivity are taken from [@RefrIndex]. On the right panel: The average number of twisted photons produced by one electron moving in such a helical wiggler with the chirality ${\varsigma}=1$. The Lorentz factor is ${\gamma}=167$, $E=85.3$ MeV. The number of wiggler periods $N_u=25$ and the period ${\lambda}_0=1$ cm. The undulator strength parameter $K=1$, which corresponds to the magnetic field strength $H=15.1$ kG in the wiggler. The energy of photons at the zeroth harmonic (the VC radiation) is taken as $k_0=6.9$ eV. The ratios $n_\perp^2/q\approx1.0$ and $K^2/(q{\gamma}^2)\approx12$, which means that the multiple scattering can be neglected. We see that, at the zeroth harmonic, the radiation of photons with $s=1$ is suppressed and so the radiation of photons with projection of the orbital angular momentum $l=-1$ dominates.]{}[]{data-label="VC_pol_plots"}](spectr_VC1.pdf "fig:"){width="0.47\linewidth"} ![](VC1.pdf "fig:"){width="0.48\linewidth"}
We start with the ideal helical wiggler. In this case, the trajectory of the charged particle takes the form with $$r_\pm=re^{\pm i{\varsigma}\omega t},\qquad r_3=0,\qquad K={\gamma}\omega r,$$ where $\omega>0$ and ${\varsigma}=\pm1$. Then the integrals can easily be evaluated $$I_3={\beta}_3{\varphi}_{{\varsigma}m}J_m(k_\perp r),\qquad I_\pm=\mp{\varphi}_{{\varsigma}m}\frac{{\varsigma}\omega rn_\perp}{{\varepsilon}^{1/2}\mp n'_3}J_{m\mp1}(k_\perp r).$$ As a result, neglecting the transition radiation, we arrive at $$dP(s,m,k_\perp,k_3)=\bigg|zea{\varphi}_{{\varsigma}m}\Big[\Big(\frac{1}{{\varepsilon}} +\frac{n_3}{n'_3}\Big)\Big({\beta}_3 -\frac{{\varsigma}m\omega n_3'}{n_\perp k_\perp}\Big)J_m -\Big(1 +\frac{n_3}{n'_3}\Big)\frac{s{\varsigma}K}{n_\perp{\gamma}}J'_m\Big] +(k_3'\leftrightarrow-k_3')\bigg|^2 n_\perp^3\frac{dk_\perp dk_3}{64\pi^2},$$ where, for brevity, we omit the arguments of the Bessel functions. For $N_u\gtrsim10$, taking into account , we have approximately $$\label{dP_hel_wig}
dP(s,m,k_\perp,k_3)=|zea{\varphi}_{{\varsigma}m}|^2\Big|\Big(\frac{1}{{\varepsilon}} +\frac{n_3}{n'_3}\Big)\frac{{\varepsilon}{\beta}_3-n_3'}{n_\perp^2}J_m -\Big(1 +\frac{n_3}{n'_3}\Big)\frac{s{\varsigma}K}{n_\perp{\gamma}}J'_m\Big|^2n_\perp^3\frac{dk_\perp dk_3}{64\pi^2} +(k_3'\leftrightarrow-k_3'),$$ in the leading order in $N_u$. The radiation spectrum has the form with $n={\varsigma}m$, i.e., the selection rule is the same as for the ideal helical wiggler in a vacuum. In the relativistic case, the contribution of the reflected wave to is strongly suppressed. The plots of the average number of twisted photons produced in the wiggler filled with helium are given in Figs. \[VC\_pol\_plots\] and \[VC\_pol\_beam\_plots\].
The case when the paraxial approximation holds, $$|\chi|\ll1,\qquad n_\perp\ll1,$$ is of particular interest. In this case, the projection of the orbital angular momentum of the radiated twisted photon can be introduced as $l:=m-s$. This approximation implies $$\label{spectrum}
\bar{k}_0=\frac{2{\varsigma}m}{K^{-2}+1+n_k^2-\bar{\chi}},\qquad n_k^2=\frac{2{\varsigma}m}{\bar{k}_0}+\bar{\chi}-1-K^{-2},\qquad k_\perp r=\bar{k}_0 n_k,$$ where $n_k:=n_\perp{\gamma}/K$, $\bar{k}_0:=k_0K^2/(\omega{\gamma}^2)$, and $\bar{\chi}:=\chi{\gamma}^2/K^2$.
![[The same as on the left panel in Fig. \[VC\_pol\_plots\] but for beams of electrons. On the left panel: The radiation from a Gaussian beam of electrons is considered. We see that, at the zeroth harmonic, the radiation of photons with projection of the orbital angular momentum $l=-1$ dominates. On the right panel: The radiation from a helically microbunched beam of electrons is described. The chirality of the beam $\chi_b=-1$, the longitudinal dimension ${\sigma}_3=150$ $\mu$m (duration $0.5$ ps), and the transverse size ${\sigma}_\perp=16$ $\mu$m. The coherent harmonic number is $n_c=2$ and the helix pitch ${\delta}=0.36$ $\mu$m is chosen such that the coherent radiation is concentrated at $k_0=6.9$ eV. The coherent contribution to radiation dominates and the contribution of higher harmonics is suppressed at the given energy of photons (compare with the plot on the left panel). The fulfillment of the addition rule is clearly seen. The radiation of twisted photons with projection of the orbital angular momentum $l=-3$ prevails.]{}[]{data-label="VC_pol_beam_plots"}](VC_gN.pdf "fig:"){width="0.48\linewidth"} ![](VC_hel_N.pdf "fig:"){width="0.48\linewidth"}
### Polarization and angular momentum of radiation
The degree of polarization of radiation is specified by the ratio $$A(s):=\frac{dP(s)}{dP(1)+dP(-1)}.$$ In the paraxial approximation and for $m\neq0$, $$\label{polariz}
A(s)=\frac{\big[J'_m({\varsigma}mz) -s{\varsigma}(n_k-1/z)J_m({\varsigma}mz) \big]^2}{2\big[J'^2_m({\varsigma}mz) +(n_k-1/z)^2J^2_m({\varsigma}mz)\big]},\qquad z:=\frac{2n_k}{n_k^2+\bar{\chi}_c-\bar{\chi}}=\frac{{\varsigma}\bar{k}_0}{m}n_k,$$ where $\bar{\chi}_c:=1+K^{-2}$. For $m=0$, i.e., for the VC radiation, we have $$A(s)=\frac{\big[J_1(\bar{k}_0n_k) +s{\varsigma}n_kJ_0(\bar{k}_0n_k) \big]^2}{2\big[J^2_1(\bar{k}_0n_k) +n_k^2J^2_0(\bar{k}_0n_k)\big]}.$$ All the radiated photons possess the helicity $s$ provided that $$\label{circ_pol}
J'_m({\varsigma}mz)=-s{\varsigma}(n_k-1/z)J_m({\varsigma}mz),\qquad J_1(\bar{k}_0n_k)=s{\varsigma}n_kJ_0(\bar{k}_0n_k).$$ The linear polarization appears when $A(1)=A(-1)=1/2$, i.e., $$\label{linear_pol}
(n_k-1/z)J_m({\varsigma}mz)J'_m({\varsigma}mz)=0,\qquad J_0(\bar{k}_0n_k)J_1(\bar{k}_0n_k)=0.$$ The second equalities in and correspond to the case $m=0$. For $m\neq0$, the equations and should be solved with account for the radiation spectrum written in the form of the last equality in . For $m=0$, the condition of the existence of VC radiation, $\bar{\chi}\geqslant\bar{\chi}_c$, should be satisfied.
If one regards the degree of polarization as a function of $n_k$, then the regions where the radiation of photons with a definite helicity dominates are separated by the roots of the equation and the regions with the different signs of the helicity are interlaced. Expanding for $m\neq0$ as a series in $n_k$, it is not difficult to see that for $n_k\rightarrow0$, i.e., in the ultraparaxial regime, the radiation with $s\operatorname{sgn}(m)=1$ prevails. Furthermore, the positivity of the photon energy implies that, for $\bar{\chi}<\bar{\chi}_c$, the photons are radiated with ${\varsigma}\operatorname{sgn}m=1$.
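Before turning to the graphical analysis, the degree of polarization can be evaluated directly with standard Bessel-function routines. A Python sketch of $A(s)$ for $m\neq0$, with $z$ expressed through $n_k$ by the spectrum relation; the parameters are illustrative, and $A(1)+A(-1)=1$ by construction:

```python
import numpy as np
from scipy.special import jv, jvp

def A(s, m, n_k, K, chi_bar, varsigma=+1):
    """Degree of polarization A(s) for the harmonic m != 0 in the paraxial regime."""
    chi_c = 1.0 + 1.0 / K**2
    z = 2.0 * n_k / (n_k**2 + chi_c - chi_bar)       # spectrum relation
    arg = varsigma * m * z
    J, Jp = jv(m, arg), jvp(m, arg)
    a = n_k - 1.0 / z
    return (Jp - s * varsigma * a * J)**2 / (2.0 * (Jp**2 + a**2 * J**2))

# first harmonic of a right-handed helical wiggler, K = 1/2, chi_bar = 0
for n_k in np.linspace(0.2, 3.0, 8):
    print(round(n_k, 2),
          round(A(+1, 1, n_k, K=0.5, chi_bar=0.0), 3),
          round(A(-1, 1, n_k, K=0.5, chi_bar=0.0), 3))
# for n_k < n_k^0 = sqrt(1 + 1/K^2) ~ 2.24 the s*varsigma = +1 component dominates;
# above it the polarization is inverted
```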
For $m\neq0$, the graphical solution of the first equation in is given in Fig. \[znk\_plots\]. We describe it below. Let $j_{m,p}$ and $j'_{m,p}$, $p=\overline{1,\infty}$, be the positive roots of $J_m(x)$ and $J'_m(x)$, respectively, and $$b_{m,p}:=j_{m,p}/|m|,\qquad c_{m,p}:=j'_{m,p}/|m|.$$ The properties of zeros of the Bessel functions (see, e.g., [@NIST]) imply that $b_{m,p}>1$, $c_{m,p}>1$ and $b_{m,p}\rightarrow1$, $c_{m,p}\rightarrow1$ for $|m|\rightarrow\infty$. For $\bar{\chi}<\bar{\chi}_c$, the function $z(n_k)$ has a unique maximum at the point $$\label{nk0}
n_k=n_k^0=\sqrt{\bar{\chi}_c-\bar{\chi}},\qquad z(n_k^0)=1/n_k^0,$$ and the inflection point at $n_k=\sqrt{3}n_k^0$. At the maximum, $z(n^0_k)\geqslant1$ when $\bar{\chi}\in[K^{-2},\bar{\chi}_c)$. The plots of $z(n_k)$ for $\bar{\chi}\geqslant\chi_c$ are given in Fig. \[znk\_plots\]. As is seen from Fig. \[znk\_plots\], the properties of polarization of radiation for $m\neq0$ are as follows.
![[On the left panel: The dependence $k_0(n_\perp)$ for the harmonics $n=\overline{0,6}$ of radiation of twisted photons in the undulator filled with helium under the pressure $4$ atm and the temperature $0$ ${}^o$C. The experimental data for permittivity are taken from [@RefrIndex]. On the right panel: The dependence $k_0(n_\perp)$ for the harmonics $n=\overline{0,4}$ of radiation of twisted photons in the undulator filled with xenon under the pressure $1/2000$ atm and the temperature $0$ ${}^o$C. The data for permittivity near the $M$-edge of xenon are taken from [@BazylZhev; @BazylevXrayVC]. For simplicity, the peak is approximated by a Gaussian with the center energy $k_0=680$ eV and the dispersion $15$ eV. This Gaussian is added to the plasma permittivity and the maximum of the obtained permittivity is fitted to the data [@BazylZhev; @BazylevXrayVC]. One should bear in mind that in [@BazylZhev; @BazylevXrayVC] the data are given for the pressure $0.2$ atm. The thin horizontal line is $k_0=680$ eV. This energy is slightly below the formation threshold of VC radiation.]{}[]{data-label="spectra_plots"}](spectr_pro60.pdf "fig:"){width="0.473\linewidth"} ![](spectr_Xe.pdf "fig:"){width="0.48\linewidth"}
For $\bar{\chi}\leqslant K^{-2}$ there is a single solution to the first equation in at $n_k=n^0_k$. Therefore, for $n_k<n_k^0$, the radiation with $s{\varsigma}=1$ dominates, whereas, for $n_k>n_k^0$, the main contribution comes from the radiation with $s{\varsigma}=-1$. We will refer to the parameter space, where $s{\varsigma}=-1$ for the main contribution to radiation, as the domain with inverted radiation polarization. In the absence of the medium, the existence of this domain can easily be explained without any calculations. To this end, one needs to find the helicity of created radiation in the reference frame where the charge is at rest on average (the synchrotron frame). In this frame, the radiation with $s{\varsigma}=1$ dominates in the half-space $z>0$, whereas the radiation with $s{\varsigma}=-1$ prevails in the half-space $z<0$. Then, by using the Lorentz transformation, one passes to the laboratory frame and takes into account that the photon helicity is Lorentz-invariant (see for more detail [@Bord.1; @BKL4]). The value $n_k=n_k^0$ with $\bar{\chi}=0$ corresponds to the angle at which the orbit plane of a charge in the synchrotron frame is seen in the laboratory frame. Formula gives the value of this angle with account for the nonvanishing electric susceptibility, $\bar{\chi}\neq0$. For $\bar{\chi}=K^{-2}$, we have $n_k^0=1$ (see Fig. \[znk\_plots\]).
If $\bar{\chi}\in[\bar{\chi}_c, K^{-2})$, then $n_k\geqslant n_k^{(+)}$ corresponds to the domain with inverted radiation polarization for any harmonic, and $n_k\leqslant n_k^{(-)}$ is the region with the usual radiation polarization (the radiation with $s{\varsigma}=1$ dominates) for any number of the radiation harmonic. Here $$n^{(\pm)}_k:=1\pm\sqrt{1+\bar{\chi}-\bar{\chi}_c},$$ which have been found from the condition $z(n_k^{(\pm)})=1$. For $n_k\in(n_k^{(-)},n_k^{(+)})$, the regions with the different signs of $s{\varsigma}$ are interlaced and separated by the roots of the first equation in , viz., by the solutions of the equations $$z=b_{m,p}, \qquad z=c_{m,p}, \qquad n_k=n_k^0,$$ where $p=\overline{1,\infty}$. The energy of radiated photons is found from the intersection point of the plots $z=z(n_k)$ and $z=\bar{k}_0n_k/|m|$. Notice that the dipole approximation is not applicable in the domain $|z|\gtrsim1$.
For $\bar{\chi}>\bar{\chi}_c$, there appears the region, $n_k<|n_k^0|$, where the radiation with the anomalous Doppler effect is observed. In this region, ${\varsigma}\operatorname{sgn}m=-1$. The domain with inverted radiation polarization for any radiation harmonic is specified by the inequalities $n_k\leqslant |n_k^{(-)}|$ or $n_k\geqslant n_k^{(+)}$. For $n_k\in(|n_k^{(-)}|,n_k^{(+)})$, the regions with the different signs of $s{\varsigma}$ are interlaced and separated by the roots of the equations $$|z|=b_{m,p}, \qquad |z|=c_{m,p},$$ where $p=\overline{1,\infty}$. The energy of a radiated photon is found from the last equality in .
When $m=0$ and $\bar{\chi}\geqslant\bar{\chi}_c$, i.e., the VC radiation is considered, it is convenient to introduce the variable $y:=\bar{k}_0n_k$, where $n_k=|n_k^0|=\sqrt{\bar{\chi}-\bar{\chi}_c}$, for the analysis of the degree of radiation polarization. If $y<j_{0,1}$ then the radiation with $s{\varsigma}=1$ dominates. As $y$ increases, the intervals where the radiation with $s{\varsigma}=1$ or $s{\varsigma}=-1$ dominates are interlaced as $$s{\varsigma}=
\left\{
\begin{array}{ll}
1, & \hbox{$y\in(j_{1,p},j_{0,p+1})$;} \\
-1, & \hbox{$y\in(j_{0,p},j_{1,p})$,}
\end{array}
\right.\quad p=\overline{1,\infty}.$$ The VC radiation is completely circularly polarized and consists of photons with the helicity $s$ provided that $$\bar{k}_0=s{\varsigma}\frac{yJ_0(y)}{J_1(y)}.$$ In this case, the orbital angular momentum of radiated photons becomes $$l=m-s=-s.$$ In particular, in the dipole regime, $y\ll1$, we have $$\label{VC_twist}
\bar{k}_0=2,\qquad s{\varsigma}=1,\qquad 2|n_k^0|\ll1.$$ Notice that for this photon energy the radiation of harmonics with $n={\varsigma}m\geqslant1$ is not described by the formulas of the dipole approximation, because in this case $$\bar{k}_0n_k=2\sqrt{n+\bar{\chi}-\bar{\chi}_c}>2.$$ Therefore, the radiation at these harmonics is not suppressed and its intensity can exceed the intensity of the VC radiation (see Figs. \[VC\_pol\_plots\] and \[VC\_pol\_beam\_plots\]).
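The condition for completely circularly polarized VC radiation can be solved for $y$ numerically. A minimal sketch with a standard root finder; the target value of $\bar{k}_0$ is illustrative:

```python
from scipy.optimize import brentq
from scipy.special import jv

def circular_vc_y(k0_bar):
    """Solve k0_bar = y*J_0(y)/J_1(y) on the branch y in (0, j_{1,1});
    on this branch the ratio decreases from 2, so k0_bar < 2 is required."""
    f = lambda y: y * jv(0, y) / jv(1, y) - k0_bar
    return brentq(f, 1e-6, 3.8316)       # j_{1,1} ~ 3.8317 is the first zero of J_1

print(circular_vc_y(1.5))   # y at which the VC photons are fully circular with s*varsigma = +1
# in the dipole limit y -> 0 the ratio y*J_0/J_1 tends to 2, reproducing k0_bar = 2
```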
![[On the left panel: The average number of twisted photons produced by one proton moving in the helical undulator with the chirality ${\varsigma}=1$. The Lorentz factor is ${\gamma}=59$, $E=55.4$ GeV. The number of undulator periods $N_u=40$ and the period ${\lambda}_0=1$ cm. The undulator strength parameter is $K=8.7\times10^{-3}$, which corresponds to the magnetic field strength $H=241.5$ kG in it. The energy spectrum of photons is given on the left panel in Fig. \[spectra\_plots\]. The ratios $n_\perp^2/q\approx1.1\times 10^6$ and $K^2/(q{\gamma}^2)\approx462$, which means that the multiple scattering can be neglected. The radiation at the first harmonic with $s=-1$ dominates and so the radiation of photons with projection of the orbital angular momentum $l=2$ prevails. On the right panel: The same as on the left panel but for the Gaussian beam of protons. The coherent contribution to radiation is strongly suppressed.]{}[]{data-label="protons_plots"}](pr601.pdf "fig:"){width="0.48\linewidth"} ![](pr60N.pdf "fig:"){width="0.473\linewidth"}
Now we consider the orbital angular momentum of twisted photons radiated at the harmonics with $n\neq0$. At these harmonics, the projection of the orbital angular momentum of photons is $$l=m-s={\varsigma}(n-s{\varsigma}).$$ When $n\geqslant1$ and $s{\varsigma}=1$, this relation reproduces the known selection rule for the orbital angular momentum of twisted photons radiated by a helical undulator [@HemMar12; @KatohSRexp; @KatohPRL; @BHKMSS; @SasMcNu; @RibGauNin14]. However, at the same harmonic but in the domain with inverted radiation polarization, the absolute value of the orbital angular momentum of a radiated photon is larger by $2\hbar$ than in the region with the usual polarization. The radiation in the parameter space where the anomalous Doppler effect is realized, $n\leqslant-1$, has the same property, but for $s{\varsigma}=1$. In particular, by suitably selecting the energy of the observed photons, the parameters of the radiating charged particle, and the medium, one can ensure that the first harmonic of the undulator radiation consists of the twisted photons with $l=2{\varsigma}$.
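The bookkeeping $l=m-s={\varsigma}(n-s{\varsigma})$ behind this $2\hbar$ jump can be made explicit with a trivial helper (a sketch added here for illustration only):

```python
def l_oam(n: int, varsigma: int, s: int) -> int:
    """Orbital angular momentum projection l = m - s with m = varsigma * n."""
    return varsigma * n - s

# First harmonic, chirality +1: usual polarization (s = +1) gives l = 0,
# the inverted-polarization domain (s = -1) gives l = 2.
for s in (+1, -1):
    print(f"n=1, varsigma=+1, s={s:+d}  ->  l = {l_oam(1, +1, s)}")
```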
As for the undulator radiation in a vacuum, this effect was discussed in [@BKL4]. As is seen from the analysis given above, in a vacuum, $\bar{\chi}=0$, and for the photon energy $$\label{inv_polar}
\bar{k}_0<(n_k^0)^{-2}=(1+K^{-2})^{-1}\;\Leftrightarrow\;k_0<\frac{\omega{\gamma}^2}{1+K^2},\qquad n^2_\perp{\gamma}^2=\frac{2\omega{\gamma}^2}{k_0}-1-K^2,$$ the photons at the first harmonic possess the projection of the orbital angular momentum $l=2{\varsigma}$. The presence of the medium with $\chi>0$ allows one to decrease the cone opening down to $n^0_k=1$ and, as follows from and , to increase the energy of photons possessing $l=2{\varsigma}$ at the first harmonic.
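For orientation, the vacuum bound above is easy to evaluate. The sketch below is purely illustrative: the relation is the one written above, while the numerical parameters (undulator period and Lorentz factors) are assumptions borrowed from the figure captions rather than inputs of the derivation.

```python
def k0_max_inverted_eV(lambda0_cm: float, gamma: float, K: float) -> float:
    """Largest photon energy (eV) with l = 2*varsigma at the first harmonic
    in vacuum: k_0 < omega*gamma^2/(1+K^2), with hbar*omega = hc/lambda_0."""
    hbar_omega_eV = 1.2398e-4 / lambda0_cm   # hc/lambda_0 for lambda_0 in cm
    return hbar_omega_eV * gamma ** 2 / (1.0 + K ** 2)

# Assumed sample parameters (taken from the figure captions for illustration):
print(k0_max_inverted_eV(1.0, 2.84e4, 0.1))     # electron beam example
print(k0_max_inverted_eV(1.0, 59.0, 8.7e-3))    # proton beam example
```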
![[On the left panel: The average number of twisted photons produced by one electron moving in the helical undulator with the chirality ${\varsigma}=1$. The Lorentz factor is ${\gamma}=2.84\times 10^4$, $E=14.5$ GeV. The number of undulator periods is $N_u=10$ and the period is ${\lambda}_0=1$ cm. The undulator strength parameter is $K=1/10$, which corresponds to the magnetic field strength $H=1.5$ kG in it. The energy spectrum of photons is given on the right panel in Fig. \[spectra\_plots\]. The ratios are $n_\perp^2/q\approx 6.9\times 10^6$ and $K^2/(q{\gamma}^2)\approx1.7$, which means that the multiple scattering can be neglected. The photons at the first harmonic $k_0=680$ eV possess $s=-1$ and so they have the projection of the orbital angular momentum $l=2$. The large peak with $m=0$ near the origin is the contribution of transition radiation. On the right panel: The same as on the left panel but for the Gaussian beam of electrons. The coherent contribution to radiation is strongly suppressed.]{}[]{data-label="Xe_Xray_plots"}](Xe_1.pdf "fig:"){width="0.48\linewidth"} ![[Right panel: the same as the left panel but for the Gaussian beam of electrons.]{}](Xe_N.pdf "fig:"){width="0.48\linewidth"}
#### Dipole regime.
As a particular case of the above general relations, we consider the polarization of radiation with $m\neq0$ in the dipole regime when $|mz|\ll1$. Then formula implies that the undulator radiation is completely circularly polarized and consists of the photons with the helicity $s$ provided that $$n_k=
\left\{
\begin{array}{ll}
\frac{|m|z}{2(|m|+1)}, & \hbox{$s\operatorname{sgn}(m)=1$, $s{\varsigma}=1$;} \\[0.3em]
\frac2{z}, & \hbox{$s\operatorname{sgn}(m)=-1$, $s{\varsigma}=-1$.}
\end{array}
\right.$$ Whence, in the first case, $$\bar{k}_0=2(|m|+1),\qquad n_k^2=\frac{|m|}{|m|+1}+\bar{\chi}-\bar{\chi}_c,\qquad \bar{k}_0n_k\ll1.$$ As was established above in the general case, the radiation with $s\operatorname{sgn}(m)=1$ dominates for $n_k<|n_k^{(-)}|$. In that case, the orbital angular momentum of the photons radiated at the first harmonic is $l=0$. In the second case, $$\bar{k}_0=2|m|/n_k^2,\qquad\bar{\chi}=\bar{\chi}_c,\qquad 2|m|/n_k\ll1.$$ For these parameters, the radiation completely consists of the twisted photons with helicity $s=-{\varsigma}$ and projection of the orbital angular momentum $l=2{\varsigma}$ (see Figs. \[spectra\_plots\], \[protons\_plots\], and \[Xe\_Xray\_plots\]). This value of the orbital angular momentum can be shifted by an integer by employing the coherent radiation of the helically microbunched beams of particles in undulators [@BKLb; @HKDXMHR], the center of such a beam moving along the trajectory .
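As a purely numerical illustration of the two branches above (a sketch, with the dimensionless inputs chosen arbitrarily rather than taken from any experiment), one can evaluate the reduced photon energy and cone opening at which the dipole radiation at harmonic $m$ is completely circularly polarized:

```python
def branch1(m: int, chi_bar: float, chi_bar_c: float):
    """s*sgn(m) = +1, s*varsigma = +1: k0_bar = 2(|m|+1),
    n_k^2 = |m|/(|m|+1) + chi_bar - chi_bar_c."""
    am = abs(m)
    k0_bar = 2.0 * (am + 1)
    n_k = (am / (am + 1) + chi_bar - chi_bar_c) ** 0.5
    return k0_bar, n_k

def branch2(m: int, n_k: float):
    """s*sgn(m) = -1, s*varsigma = -1 (requires chi_bar = chi_bar_c):
    k0_bar = 2|m|/n_k^2."""
    return 2.0 * abs(m) / n_k ** 2

k0_bar, n_k = branch1(1, 0.30, 0.7975)   # inputs picked so that k0_bar*n_k is small
print(k0_bar, n_k, k0_bar * n_k)         # dipole condition k0_bar*n_k << 1
print(branch2(1, 50.0))                  # n_k chosen large so that 2|m|/n_k << 1
```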
Planar wiggler {#Plan_Wigg_Sec}
--------------
The trajectory of a charged particle propagating in the planar undulator has the form with (see for details, e.g., [@Bord.1]) $$r_{\pm}=\pm\sqrt{2}i\frac{\beta_{3} K}{\omega \gamma}\sin(\omega t), \qquad r_{3}=-\frac{\beta_{3} K^{2}}{4\omega \gamma^{2}}\sin(2\omega t).$$ Let us find the average number of twisted photons radiated by the charged particle taking into account that $K/\gamma\ll1$. Then the velocity components of the charge moving in the medium become $$\dot{x}_{\pm}\approx \pm \sqrt{2}i\frac{K}{\gamma}\cos(\omega t), \qquad \dot{x}_{3}\approx1 .$$ The integrals are evaluated in the same way as for the planar wiggler in a vacuum studied in Sec. 5.B.2 of [@BKL2] $$I_{3}=\sum_{n=-\infty}^{\infty} {\varphi}_{n}f_{n,m}, \qquad I_{\pm}=\mp\frac{s'{\varepsilon}^{1/2}\pm n'_{3}}{n_{k}}\sum_{n=-\infty}^{\infty} {\varphi}_{n} f_{n,m}^{\pm},$$ As before, the functions ${\varphi}_{n}$ are defined by formula and, for brevity, the following notation has been introduced $$\label{f}
\begin{split}
f_{n,m}&:=\pi (1+ (-1)^{n+m})\sum_{k=-\infty}^{\infty} J_{k}\Big(\frac{\beta_{3} k'_{3} K^{2} }{4 \omega \gamma^{2}}\Big) J_{(n-m)/2+k}\Big(\frac{\beta_{3} K k_{\bot}}{\sqrt{2} \omega \gamma}\Big) J_{(n+m)/2+k}\Big(\frac{\beta_{3} K k_{\bot}}{\sqrt{2} \omega \gamma}\Big), \\
f_{n,m}^{\pm}&:= \frac{f_{n+1,m\mp1} +f_{n-1,m\mp1}}{\sqrt{2}}.
\end{split}$$ Neglecting the contribution of the transition radiation, the average number of twisted photons radiated by one particle takes the form $$\begin{gathered}
\label{planar}
dP(s,m,k_\perp,k_3)=\bigg|zea\sum_{n=-\infty}^{\infty}{\varphi}_{n}\Big[\Big(\frac{1}{{\varepsilon}}+\frac{n_{3}}{n'_{3}}\Big)\Big(f_{n,m}-\frac{n'_{3}}{2 n_{k}}\big(f_{n,m}^{+}+f_{n,m}^{-}\big)\Big)+\frac{s}{2 n_{k}}\Big(1+\frac{n_{3}}{n'_{3}}\Big)\big(f_{n,m}^{-}-f_{n,m}^{+}\big)\Big]\\
+(k'_{3}\leftrightarrow - k'_{3})\bigg|^2 n_\perp^3\frac{dk_\perp dk_3}{64\pi^2}.\end{gathered}$$ The radiation spectrum is given by formula and the analysis of its peculiarities is presented in Sec. \[spectrum\_sect\]. Taking into account that $$f^+_{n,m}=f^-_{n,-m},$$ it is easy to see that obeys the reflection symmetry .
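Since $f_{n,m}$ and $f_{n,m}^{\pm}$ are rapidly converging Bessel sums, the two properties used below, namely the vanishing of $f_{n,m}$ for odd $n+m$ and the symmetry $f^+_{n,m}=f^-_{n,-m}$, are easy to verify numerically. The following sketch is only an illustration (the argument values `a` and `b` stand for the combinations $\beta_{3}k'_{3}K^{2}/(4\omega\gamma^{2})$ and $\beta_{3}Kk_{\bot}/(\sqrt{2}\omega\gamma)$ and are chosen arbitrarily):

```python
import math
from scipy.special import jv

def f_nm(n: int, m: int, a: float, b: float, kmax: int = 40) -> float:
    """Truncated Bessel sum for f_{n,m}; vanishes identically for odd n+m."""
    if (n + m) % 2:
        return 0.0
    s = sum(jv(k, a) * jv((n - m) // 2 + k, b) * jv((n + m) // 2 + k, b)
            for k in range(-kmax, kmax + 1))
    return 2.0 * math.pi * s     # pi*(1 + (-1)^(n+m)) = 2*pi for even n+m

def f_nm_pm(sign: int, n: int, m: int, a: float, b: float) -> float:
    """f^{+}_{n,m} for sign = +1, f^{-}_{n,m} for sign = -1."""
    return (f_nm(n + 1, m - sign, a, b) + f_nm(n - 1, m - sign, a, b)) / math.sqrt(2)

a, b = 0.02, 0.3                                   # arbitrary illustrative values
print(f_nm(2, 1, a, b))                            # 0: selection rule, n+m odd
print(f_nm(2, 2, a, b), f_nm(2, 0, a, b))          # nonzero even-harmonic values
print(abs(f_nm_pm(+1, 2, 2, a, b) - f_nm_pm(-1, 2, -2, a, b)))   # ~0: f^+_{n,m} = f^-_{n,-m}
```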
![[The average number of twisted photons produced by electrons in the planar wiggler. The parameters of the wiggler and the energy of electrons are the same as in Fig. \[VC\_pol\_plots\]. On the left panel: The average number of twisted photons produced by one electron moving in such a planar wiggler. The energy of photons at the zeroth harmonic (the VC radiation) is taken as $k_0=6.9$ eV. The ratios are $n_\perp^2/q\approx1.0$ and $K^2/(q{\gamma}^2)\approx12$, which means that the multiple scattering can be neglected. We see that the selection rule that $m+n$ is an even number is satisfied. The reflection symmetry also holds. At the zeroth harmonic, the radiation of photons with $s=-1$ is suppressed. Therefore the radiation of photons with the projection of the orbital angular momentum $l=1$ dominates, and there is a small admixture of photons with $l=3$. On the right panel: The same as on the left panel but for the Gaussian beam.]{}[]{data-label="Planar_wig_plots"}](VC_pl_1.pdf "fig:"){width="0.48\linewidth"} ![[Right panel: the same as the left panel but for the Gaussian beam.]{}](VC_pl_N.pdf "fig:"){width="0.48\linewidth"}
If the thickness of the dielectric medium $L$ is large, the contribution of the reflected wave is suppressed in comparison with the contribution of the direct wave. Moreover, the contribution of the terms describing the interference between different harmonics is small in comparison with the values of at the peaks of the harmonics. Having neglected these small contributions, we deduce the expression for the average number of twisted photons radiated at the $n$-th harmonic $$\label{planarn}
dP(s,m,k_\perp,k_3)=|zea{\varphi}_{n}|^{2}\bigg|\Big(\frac{1}{{\varepsilon}}+\frac{n_{3}}{n'_{3}}\Big)\Big(f_{n,m}-\frac{n'_{3}}{2 n_{k}}\big(f_{n,m}^{+}+f_{n,m}^{-}\big)\Big)+\frac{s}{2 n_{k}}\Big(1+\frac{n_{3}}{n'_{3}}\Big)\big(f_{n,m}^{-}-f_{n,m}^{+}\big)\bigg|^2 n_\perp^3\frac{dk_\perp dk_3}{64\pi^2}.$$ As follows from the explicit expressions for the functions $f_{n,m}, f_{n,m}^{\pm}$ entering into and , the number $n+m$ is an even one for radiated twisted photons. This selection rule was obtained in [@BKL2] for the radiation of twisted photons by a planar undulator in a vacuum and it was shown in [@BKL6] that this selection rule is preserved when the quantum recoil is taken into account. In particular, this selection rule implies that the VC radiation in the planar undulator corresponding to $n=0$ consists of the twisted photons with an even projection of the total angular momentum $m$ and not just with $m=0$. The plots of the average number of twisted photons radiated by electrons in the planar wiggler filled with helium are presented in Fig. \[Planar\_wig\_plots\].
Conclusion
==========
Let us sum up the results. We described the properties of radiation of twisted photons produced by undulators filled with a homogeneous dielectric medium. Both the dipole and wiggler regimes were considered. We started with the general formula for the probability to detect a twisted photon radiated by a charged particle passing through a dielectric plate derived in [@BKL5]. We proved that the selection rules established in [@BKL3] for radiation of twisted photons by charged particles in a vacuum also hold in the presence of a dielectric plate. In particular, we proved the reflection symmetry property for the radiation probability of twisted photons by planar currents.
Then the formulas for the average number of twisted photons radiated by an undulator filled with a dielectric medium were deduced. We studied in detail the undulator radiation in the dipole regime and the ideal helical and planar wigglers. We analyzed the spectrum of energies of radiated photons paying special attention to the case of a plasma permittivity. In particular, we showed that, for sufficiently large plasma frequencies , the lower harmonics of undulator radiation do not form (see Figs. \[plsm\_perm\_plots\] and \[znk\_plots\]).
We also investigated the spectrum of twisted photon radiation with respect to the projection of the total angular momentum $m$ and the orbital angular momentum $l$. We showed that, in the general dipole case, the undulator radiation mainly consists of the twisted photons with $m=\{-1,0,1\}$ provided the lower harmonics are not prohibited by the energy spectrum at a given energy. Recall that the undulator dipole radiation in a vacuum mainly consists of the twisted photons with $m=\pm1$ [@BKL2]. In the case of the dipole radiation of an undulator filled with a medium, the radiation with $m=0$ corresponds to the VC radiation. In the ultraparaxial approximation , we found that most of the twisted photons of the dipole undulator radiation with $m=\pm1$ possess the orbital angular momentum $l=0$.
In considering a helical wiggler filled with a dielectric medium, we found that the selection rule $m={\varsigma}n$, where $n$ is the harmonic number and ${\varsigma}=\pm1$ is the chirality of the helical trajectory, is satisfied. This selection rule is the same as in the vacuum case, but the harmonic number can be negative for the medium with the electric susceptibility $\chi>0$ due to the anomalous Doppler effect. The case $n=0$ corresponds to the VC radiation. The peculiar polarization properties of the radiation created by the helical wiggler filled with a medium allow one to produce radiation with a well-defined orbital angular momentum $l$ in the paraxial regime (see Figs. \[VC\_pol\_plots\], \[VC\_pol\_beam\_plots\], \[protons\_plots\], and \[Xe\_Xray\_plots\]). We described these polarization properties thoroughly and found the parameter space where the radiation with a given $l$ dominates.
For example, for any given harmonic, we found the domains with inverted polarization where the radiation with $s{\varsigma}=-1$ prevails, $s$ being the helicity of radiated twisted photons. Such domains exist already for the vacuum undulator radiation (see, e.g., [@Bord.1; @BKL4]). The presence of a dielectric medium allows one to increase the radiation yield and the energy of photons in these domains. In the paraxial regime, $l={\varsigma}(n-s{\varsigma})$, and so the absolute value of the orbital angular momentum at a given harmonic, $n\geqslant1$, is larger by $2\hbar$ in the domain with inverted polarization than in the region where the usual polarization, $s{\varsigma}=1$, prevails. As for the harmonics $n\leqslant-1$, where the anomalous Doppler effect appears, the situation is reversed. For a given harmonic, the orbital angular momentum is larger by $2\hbar$ in the domain with $s{\varsigma}=1$ than in the domain with inverted polarization.
Besides, we found the parameter space where the VC radiation is almost completely circularly polarized. Since $m=0$ for the VC radiation produced in the helical undulator, the photons of the VC radiation possess a definite nonzero orbital angular momentum in this case, provided the paraxial approximation is valid. For example, the VC radiation is constituted by the twisted photons with orbital angular momentum $l=-{\varsigma}$ at the photon energy $k_0\approx2\omega{\gamma}^2/K^2$ near the threshold of the VC radiation (see Figs. \[VC\_pol\_plots\] and \[VC\_pol\_beam\_plots\]).
The spectrum over $m$ of the twisted photons produced in the planar wiggler was also studied. We proved that the selection rule, $m+n$ is an even number, is fulfilled for this radiation. It has the same form as for the planar wiggler radiation in a vacuum [@BKL2; @BKL6]. It was also found that the VC radiation produced by charged particles in such a wiggler consists of twisted photons with even projections of the total angular momentum and not just with $m=0$ (see Fig. \[Planar\_wig\_plots\]). Of course, the reflection symmetry of the probability of radiation of twisted photons holds in this case.
All the above mentioned properties are valid for the undulator radiation produced by one charged particle. We investigated how these properties change when the radiation of a beam of particles is considered (see Figs. \[VC\_pol\_beam\_plots\], \[protons\_plots\], \[Xe\_Xray\_plots\], and \[Planar\_wig\_plots\]). As expected, the properties of radiation of twisted photons by a structureless beam of charged particles are the same as for the radiation generated by one particle when $k_\perp{\sigma}_\perp\lesssim1$, where ${\sigma}_\perp$ is the transverse size of the beam [@BKb; @BKLb]. The use of periodically microbunched beams of particles allows one to amplify the intensity of radiation by means of coherent effects. If, in addition, such a microbunched beam is helical, the total angular momentum of radiated twisted photons can also be increased (see Fig. \[VC\_pol\_beam\_plots\]). Nowadays, techniques have been developed that allow one to produce such coherent radiation up to the X-ray spectral range [@Gover19rmp; @PRRibic19; @Rubic17; @Hemsing7516; @HemStuXiZh14].
As examples, we described the radiation of twisted photons produced by electron and proton beams in the undulators filled with helium in the ultraviolet and X-ray spectral ranges. Moreover, we considered the production of X-ray twisted photons with $l=2$ by the electron beam propagating in the undulator filled with xenon. This radiation is created near the photoabsorption $M$-edge of xenon.
Thus we see that the presence of a dispersive medium inside the undulator offers additional possibilities for the generation of hard twisted photons with desired properties. We did not study in this paper the effect of a periodic modulation of the dielectric medium on the properties of radiation of twisted photons [@Appolonov; @TMikaelian; @Ginzburg; @BazylZhev; @BKL5]. This will be the subject of our future research. Besides, as was shown in [@FedSmir; @BazylZhev], the VC radiation can be generated in the gamma-ray spectral range near the lines of Mössbauer transitions. The usual VC radiation is an equiprobable mixture of twisted photons with $l=\pm1$ in the paraxial regime (see, e.g., [@Kaminer; @OAMeVCh; @BKL7; @BKL5]). The theory developed in the present paper can be employed to twist these gamma rays and to put them into an eigenstate of the orbital angular momentum operator.
#### Acknowledgments.
The reported study was funded by RFBR, project number 20-32-70023.
[999]{}
G. N. Kulipanov *et al*., Novosibirsk free electron laser-facility description and recent experiments, IEEE Trans. Terahertz Science Technology **5**, 798 (2015).
European XFEL, https://www.xfel.eu.
E. Hemsing, G. Stupakov, D. Xiang, A. Zholents, Beam by design: Laser manipulation of electrons in modern accelerators, Rev. Mod. Phys. **86**, 897 (2014).
E. Hemsing *et al*., Echo-enabled harmonics up to the 75th order from precisely tailored electron beams, Nature Phot. **10**, 512 (2016).
P. R. Ribič *et al*., Extreme-ultraviolet vortices from a free-electron laser, Phys. Rev. X **7**, 031036 (2017).
P. R. Ribič *et al*., Coherent soft X-ray pulses from an echo-enabled harmonic generation free-electron laser, Nature Phot. **13**, 555 (2019).
A. Gover *et al*., Superradiant and stimulated-superradiant emission of bunched electron beams, Rev. Mod. Phys. **91**, 035003 (2019).
V. L. Ginzburg, *Theoretical Physics and Astrophysics* (Pergamon, London, 1979).
V. A. Bazylev, N. K. Zhevago, Electromagnetic radiation of particles channeled in a crystal, Zh. Eksp. Teor. Fiz. **73**, 1697 (1977) \[Sov. Phys. JETP **46**, 891 (1977)\].
L. A. Gevorgyan, N. A. Korkhmazyan, Hard undulator radiation in a dispersive medium in the dipole approximation, Phys. Lett. A **74**, 453 (1979).
V. G. Baryshevsky, I. M. Frank, Light emission by an oscillator moving through a refractive plate, Yad. Fiz. **36**, 1442 (1982) \[in Russian\].
R. H. Pantell *et al*., Benefits and costs of the gas-loaded, free electron laser, Nucl. Instrum. Methods A **250**, 312 (1986).
J. Feinstein *et al*., Experimental results on a gas-loaded free-electron laser, Phys. Rev. Lett. **60**, 18 (1988).
A. S. Fisher *et al*., Observations of gain and pressure tuning in a gas-loaded FEL, Nucl. Instrum. Methods A **272**, 89 (1988).
M. B. Reid *et al*., Experimental elimination of plasma effects in a gas-loaded, free-electron laser, Phys. Rev. Lett. **62**, 249 (1989).
M. B. Reid, R. H. Pantell, An ultraviolet gas-loaded free-electron laser, IEEE J. Quantum Electron. **25**, 34 (1989).
R. H. Pantell *et al*., Effects of introducing a gas into the free-electron laser, J. Opt. Soc. Am. B **6**, 1008 (1989).
R. H. Pantell, M. Özcan, Gas-loaded free-electron lasers, Phys. Fluids B: Plasma Physics **2**, 1311 (1990).
S. Yariv, L. Friedland, Electron beam transport in gas-loaded free-electron lasers, Phys. Fluids B: Plasma Physics **2**, 3114 (1990).
M. B. Reid, Reduction of plasma electron density in a gas ionized by an electron beam: Use of a gaseous dielectric, J. Appl. Phys. **73**, 4212 (1993).
V. M. Arutyunyan, S. G. Oganesyan, The stimulated Cherenkov effect, Phys. Usp. **37**, 1005 (1994).
V. V. Apollonov *et al*., Gas-plasma and superlattice free-electron lasers exploiting a medium with periodically modulated refractive index, Laser and Particle Beams **16**, 267 (1998).
V. M. Grichine, S. S. Sadilov, Radiation energy loss of an accelerated charge in an absorbing medium, Phys. Lett. B **559**, 26 (2003).
A. A. Saharian, A. S. Kotanjyan, Synchrotron radiation from a charge moving along a helical orbit inside a dielectric cylinder, J. Phys. A **38**, 4275 (2005).
A. A. Saharian, A. S. Kotanjyan, M. L. Grigoryan, Electromagnetic field generated by a charge moving along a helical orbit inside a dielectric cylinder, J. Phys. A: Math. Theor. **40**, 1405 (2007).
A. S. Kotanjyan, A. A. Saharian, Synchrotron radiation inside a dielectric waveguide, J. Phys.: Conf. Ser. **357**, 012009 (2012).
I. A. Konstantinovich, A. V. Konstantinovich, Radiation spectrum of system of electrons moving in spiral in medium, Proc. SPIE **11369**, 113690C (2020).
S. Sasaki, I. McNulty, Proposal for generating brilliant X-ray beams carrying orbital angular momentum, Phys. Rev. Lett. **100**, 124801 (2008).
A. Afanasev, A. Mikhailichenko, On generation of photons carrying orbital angular momentum in the helical undulator, arXiv:1109.1603.
V. A. Bordovitsyn, O. A. Konstantinova, E. A. Nemchenko, Angular momentum of synchrotron radiation, Russ. Phys. J. **55**, 44 (2012).
E. Hemsing, A. Marinelli, Echo-enabled X-ray vortex generation, Phys. Rev. Lett. **109**, 224801 (2012).
J. Bahrdt *et al*., First observation of photons carrying orbital angular momentum in undulator radiation, Phys. Rev. Lett. **111**, 034801 (2013).
E. Hemsing *et al*., Coherent optical vortices from relativistic electron beams, Nature Phys. **9**, 549 (2013).
P. R. Ribič, D. Gauthier, G. De Ninno, Generation of coherent extreme-ultraviolet radiation carrying orbital angular momentum, Phys. Rev. Lett. **112**, 203602 (2014).
M. Katoh *et al*., Angular momentum of twisted radiation from an electron in spiral motion, Phys. Rev. Lett. **118**, 094801 (2017).
M. Katoh *et al*., Helical phase structure of radiation from an electron in circular motion, Sci. Rep. **7**, 6130 (2017).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Probability of radiation of twisted photons by classical currents, Phys. Rev. A **97**, 033837 (2018).
S. V. Abdrashitov, O. V. Bogdanov, P. O. Kazinski, T. A. Tukhfatullin, Orbital angular momentum of channeling radiation from relativistic electrons in thin Si crystal, Phys. Lett. A **382**, 3141 (2018).
V. Epp, J. Janz, M. Zotova, Angular momentum of radiation at axial channeling, Nucl. Instrum. Methods B **436**, 78 (2018).
O. V. Bogdanov, P. O. Kazinski, Probability of radiation of twisted photons by axially symmetric bunches of particles, Eur. Phys. J. Plus **134**, 586 (2019).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Semiclassical probability of radiation of twisted photons in the ultrarelativistic limit, Phys. Rev. D **99**, 116016 (2019).
V. Epp, U. Guselnikova, Angular momentum of radiation from a charge in circular and spiral motion, Phys. Lett. A **383**, 2668 (2019).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Planar wiggler as a tool for generating hard twisted photons, JINST **15**, C04008 (2020).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Probability of radiation of twisted photons by cold relativistic particle bunches, Annals Phys. **415**, 168116 (2020).
G. Molina-Terriza, J. P. Torres, L. Torner, Management of the angular momentum of light: Preparation of photons in multidimensional vector states of angular momentum, Phys. Rev. Lett. **88**, 013601 (2002).
K. Gottfried, T.-M. Yan, *Quantum Mechanics: Fundamentals* (Springer, New York, 2003).
J. P. Torres, L. Torner, S. Carrasco, Digital spiral imaging, Opt. Express **13**, 873 (2005).
R. Jáuregui, S. Hacyan, Quantum-mechanical properties of Bessel beams, Phys. Rev. A **71**, 033411 (2005).
G. Molina-Terriza, J. P. Torres, L. Torner, Twisted photons, Nature Phys. **3**, 305 (2007).
I. Bialynicki-Birula, Z. Bialynicka-Birula, Beams of electromagnetic radiation carrying angular momentum: The Riemann-Silberstein vector and the classical-quantum correspondence, Opt. Commun. **264**, 342 (2006).
U. D. Jentschura, V. G. Serbo, Generation of high-energy photons with large orbital angular momentum by Compton backscattering, Phys. Rev. Lett. **106**, 013001 (2011).
U. D. Jentschura, V. G. Serbo, Compton upconversion of twisted photons: Backscattering of particles with non-planar wave functions, Eur. Phys. J. C **71**, 1571 (2011).
J. P. Torres, L. Torner (Eds.), *Twisted Photons* (Wiley-VCH, Weinheim, 2011).
D. L. Andrews, M. Babiker (Eds.), *The Angular Momentum of Light* (Cambridge University Press, New York, 2013).
M. J. Padgett, Orbital angular momentum 25 years on, Optics Express **25**, 11267 (2017).
H. Rubinsztein-Dunlop *et al*., Roadmap on structured light, J. Opt. **19**, 013001 (2017).
B. A. Knyazev, V. G. Serbo, Beams of photons with nonzero projections of orbital angular momenta: New results, Phys. Usp. **61**, 449 (2018).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Probability of radiation of twisted photons in the isotropic dispersive medium, Phys. Rev. A **100**, 043836 (2019).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Probability of radiation of twisted photons in the infrared domain, Annals Phys. **406**, 114 (2019).
V. G. Bagrov, G. S. Bisnovatyi-Kogan, V. A. Bordovitsyn, A. V. Borisov, O. F. Dorofeev, V. Ya. Epp, V. S. Gushchina, V. C. Zhukovskii, *Synchrotron Radiation Theory and its Development* (World Scientific, Singapore, 1999).
M. V. Nezlin, Negative-energy waves and the anomalous Doppler effect, Sov. Phys. Usp. **19**, 946 (1976).
M. V. Kuzelev, A. A. Rukhadze, Spontaneous and stimulated emission induced by an electron, electron bunch, and electron beam in a plasma, Phys. Usp. **51**, 989 (2008).
X. Shi *et al*., Superlight inverse Doppler effect, Nature Phys. **14**, 1001 (2018).
V. A. Bazylev *et al*., X-ray Čerenkov radiation. Theory and experiment, Zh. Eksp. Teor. Fiz. **81**, 1664 (1981) \[Sov. Phys. JETP **54**, 884 (1982)\].
V. A. Bazylev, N. K. Zhevago, *Radiation from Fast Particles in a Medium and External Fields* (Nauka, Moscow, 1987) \[in Russian\].
W. Knulst, M. J. van der Wiel, O. J. Luiten, J. Verhoeven, High-brightness, narrowband, compact soft x-ray Cherenkov sources in the water window, Appl. Phys. Lett. **83**, 4050 (2003).
A. S. Konkov, A. S. Gogolev, A. P. Potylitsyn, X-ray Cherenkov radiation as a source for relativistic charged particle beam diagnostics, in Proceedings of IBIC-2013 (Oxford, UK, 2013), p. 910.
M. Shevelev, A. Konkov, A. Aryshev, Soft-x-ray Cherenkov radiation generated by a charged particle moving near a finite-size screen, Phys. Rev. A **92**, 053851 (2015).
E. Hemsing, J. B. Rosenzweig, Coherent transition radiation from a helically microbunched electron beam, J. Appl. Phys. **105**, 093101 (2009).
E. Hemsing *et al*., Experimental observation of helical microbunching of a relativistic electron beam, Appl. Phys. Lett. **100**, 091110 (2012).
V. L. Ginzburg, V. N. Tsytovich, *Transition Radiation and Transition Scattering* (Hilger, Bristol, 1990).
S. E. Korbly, A. S. Kesar, J. R. Sirigiri, R. J. Temkin, Observation of frequency-locked coherent terahertz Smith-Purcell radiation, Phys. Rev. Lett. **94**, 054803 (2005).
D. Y. Sergeeva, A. P. Potylitsyn, A. A. Tishchenko, M. N. Strikhanov, Smith-Purcell radiation from periodic beams, Opt. Express **25**, 26310 (2017).
V. E. Pafomov, Radiation of a charged particle in the presence of interfaces, Proc. P. N. Lebedev Phys. Inst. **44**, 28 (1971) \[in Russian\].
P. O. Kazinski, Inclusive probability of particle creation on classical backgrounds, arXiv:2001.06234.
A. B. Migdal, Bremsstrahlung and pair production at high energies in condensed media, J. Exptl. Theoret. Phys. (U.S.S.R.) **32**, 633 (1957) \[Sov. Phys. JETP **5**, 527 (1957)\].
M. L. Ter-Mikaelian, *High-Energy Electromagnetic Processes in Condensed Media* (Wiley Interscience, New York, 1972).
K. Nakamura *et al*. (Particle Data Group), Review of particle physics, J. Phys. G **37**, 075021 (2010).
L. D. Landau, E. M. Lifshitz, *Electrodynamics of Continuous Media* (Pergamon, Oxford, 1984).
M. N. Polyanskiy, http://refractiveindex.info.
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark (Eds.), *NIST Handbook of Mathematical Functions* (Cambridge University Press, New York, NY, 2010).
V. V. Fedorov, A. I. Smirnov, On the possibility of Cerenkov emission of $\gamma$ quanta by electrons, Pis’ma Zh. Eksp. Teor. Fiz. **23**, 34 (1976) \[JETP Lett. **23**, 29 (1976)\].
I. Kaminer *et al*., Quantum Cerenkov radiation: Spectral cutoffs and the role of spin and orbital angular momentum, Phys. Rev. X **6**, 011006 (2016).
I. P. Ivanov, V. G. Serbo, V. A. Zaytsev, Quantum calculation of the Vavilov-Cherenkov radiation by twisted electrons, Phys. Rev. A **93**, 053825 (2016).
O. V. Bogdanov, P. O. Kazinski, G. Yu. Lazarenko, Proposal for experimental observation of the twisted photons in transition and Vavilov-Cherenkov radiations, JINST (to be published); arXiv:2001.05229.
[^1]: E-mail: `bov@tpu.ru`
[^2]: E-mail: `kpo@phys.tsu.ru`
[^3]: E-mail: `laz@phys.tsu.ru`
---
abstract: 'There are different categorizations of the definition of a [*ring*]{} such as [*Ann-category*]{} \[6\], [*ring category*]{} \[2\],... The main result of this paper is to prove that every axiom of the definition of a [*ring category*]{}, without the axiom $x_{0}=y_{0},$ can be deduced from the axiomatics of an [*Ann-category*]{}.'
author:
- |
Nguyen Tien Quang, Nguyen Thu Thuy and Che Thi Kim Phung\
[*Hanoi National University of Education*]{}
title: 'THE RELATION BETWEEN ANN-CATEGORIES AND RING CATEGORIES'
---
Introduction
============
Categories with monoidal structures $\oplus, \otimes$ (also called [*categories with distributivity constraints*]{}) were presented by Laplaza \[3\]. M. Kapranov and V. Voevodsky \[2\] omitted the requirements of the axiomatics of Laplaza which are related to the commutativity constraints of the operation $\otimes$ and introduced the name [*ring categories*]{} for these categories.
From another point of view, monoidal categories can be “smoothed” to become [*categories with group structure*]{} once the notion of invertible objects is added (see Laplaza \[4\], Saavedra Rivano \[9\]). Now, if the background category is a [*groupoid*]{} (i.e., each morphism is an isomorphism), then we have a [*group-like monoidal category*]{} (see A. Frölich and C. T. C. Wall \[1\]) or a [*Gr-category*]{} (see H. X. Sinh \[11\]). These categories can be classified by $H^{3}(\Pi, A)$. Each Gr-category $\mathcal G$ is determined by three invariants: the group $\Pi$ of classes of congruence objects, the $\Pi$-module $A$ of automorphisms of the unit $1$, and an element $\overline{h}\in H^{3}(\Pi, A),$ where $h$ is induced by the associativity constraint of $\mathcal G.$
In 1987, in \[6\], N. T. Quang presented the definition of an [*Ann-category*]{}, as a categorization of the definition of rings, when a symmetric Gr-category (also called Pic-category) is equipped with a monoidal structure $\otimes$. In \[8\], \[7\], Ann-categories and [*regular*]{} Ann-categories, developed from the ring extension problem, have been classified by, respectively, Mac Lane ring cohomology \[5\] and Shukla algebraic cohomology \[10\].
The aim of this paper is to show clearly the relation between the definition of an [*Ann-category*]{} and a [*ring category*]{}.
For convenience, let us recall the definitions. Moreover, let us denote $AB$ or $A.B$ instead of $A{\otimes }B.$
Fundamental definitions
=======================
[**The axiomatics of an Ann-category**]{}\
An Ann-category consists of:\
i) A groupoid ${\mathcal{A}}$ together with two bifunctors ${\oplus},{\otimes }:{\mathcal{A}}\times{\mathcal{A}}\longrightarrow {\mathcal{A}}.$\
ii) A fixed object $0\in {\mathcal{A}}$ together with naturality constraints $a^+,c,g,d$ such that $({\mathcal{A}},{\oplus},a^+,c,(0,g,d))$ is a Pic-category.\
iii) A fixed object $1\in{\mathcal{A}}$ together with naturality constraints $a,l,r$ such that $({\mathcal{A}},{\otimes },a,(1,l,r))$ is a monoidal $A$-category.\
iv) Natural isomorphisms ${\frak{L}},{\frak{R}}$ $${\frak{L}}_{A,X,Y}:A{\otimes }(X{\oplus}Y)\longrightarrow (A{\otimes }X){\oplus}(A{\otimes }Y)$$ $${\frak{R}}_{X,Y,A}:(X{\oplus}Y){\otimes }A\longrightarrow(X{\otimes }A){\oplus}(Y{\otimes }A)$$ such that the following conditions are satisfied:\
(Ann-1) For each $A\in {\mathcal{A}},$ the pairs $(L^A,\breve{L^A}),(R^A,\breve{R^A})$ determined by relations:\
$$\begin{aligned}
&L^A & = &A{\otimes }- \;\;\;\; &R^A&=&-{\otimes }A\\
&\breve{L^A}_{X,Y}& = &{\frak{L}}_{A, X, Y}\;\;\;\; &\breve{R^A}_{X, Y}&=&{\frak{R}}_{X, Y, A}
\end{aligned}$$ are ${\oplus}$-functors which are compatible with $a^+$ and $c.$\
(Ann-2) For all $ A,B,X,Y\in {\mathcal{A}},$ the following diagrams:
$$\begin{diagram}
\node{(AB)(X{\oplus}Y)}\arrow{s,l}{\breve{L}^{AB}} \node{ A(B(X{\oplus}Y))}\arrow{w,t}{\quad a_{A, B, X{\oplus}Y}\quad}\arrow{e,t}{\quad
id_A{\otimes }\breve{L}^B\quad}\node{ A(BX{\oplus}BY)}\arrow{s,r}{\breve{L}^A}\\
\node{(AB)X{\oplus}(AB)Y}\node[2]{A(BX){\oplus}A(BY)}\arrow[2]{w,t}
{\quad\quad a_{A, B, X}{\oplus}a_{A, B, Y} \quad\quad}
\end{diagram}\tag{1.1}$$
$$\begin{diagram}
\node{(X{\oplus}Y)(BA)}\arrow{s,l}{\breve{R}^{BA}}\arrow{e,t}{\quad
a_{ X{\oplus}Y, B, A}\quad}\node{((X{\oplus}Y)B)A}\arrow{e,t}{\quad
\breve{R}^B{\otimes }id_A\quad}\node{ (XB{\oplus}YB)A}\arrow{s,r}{\breve{R}^A}\\
\node{X(BA){\oplus}Y(BA)}\arrow[2]{e,t} {\quad\quad a_{X, B, A}{\oplus}a_{Y, B, A} \quad\quad}\node[2]{(XB)A{\oplus}(YB)A}
\end{diagram}\tag{1.1'}$$ $$\begin{diagram}
\node{(A(X{\oplus}Y))B}\arrow{s,l}{\breve{L}^{A}{\otimes }id_B} \node{
A((X{\oplus}Y)B)}\arrow{w,t}{\quad a_{A, X{\oplus}Y,
B}\quad}\arrow{e,t}{\quad id_A{\otimes }\breve{R}^B\quad}\node{
A(XB{\oplus}YB)}\arrow{s,r}{\breve{L}^A}\\
\node{(AX{\oplus}AY)B}\arrow{e,t}{\quad \breve{R}^B
\quad}\node{(AX)B{\oplus}(AY)B}\node{A(XB){\oplus}A(YB)}\arrow{w,t}
{\quad a{\oplus}a\quad}
\end{diagram}\tag{1.2}$$ $$\begin{diagram}
\node{(A{\oplus}B)X{\oplus}(A{\oplus}B)Y}\arrow{s,r}{\breve{R}^{X}{\oplus}\breve{R}^Y} \node{(A{\oplus}B)(X{\oplus}Y)}\arrow{w,t}{\breve{L}^{A{\oplus}B}}\arrow{e,t}{
\breve{R}^{X{\oplus}Y}}\node{A(X{\oplus}Y){\oplus}B(X{\oplus}Y)}\arrow{s,l}{\breve{L}^A{\oplus}\breve{L}^B}\\
\node{(AX{\oplus}BX){\oplus}(AY{\oplus}BY)}\arrow[2]{e,t} {\quad\quad v
\quad\quad}\node[2]{(AX{\oplus}AY){\oplus}(BX{\oplus}BY)}
\end{diagram}\tag{1.3}$$
commute, where $v=v_{U,V,Z,T}:(U{\oplus}V){\oplus}(Z{\oplus}T)\longrightarrow(U{\oplus}Z){\oplus}(V{\oplus}T)$ is the unique morphism built from $a^+,c,id$ in the monoidal symmetric category $({\mathcal{A}},{\oplus}).$\
(Ann-3) For the unity object $1\in {\mathcal{A}}$ of the operation ${\oplus},$ the following diagrams:
$$\begin{diagram}
\node{1(X\oplus Y)} \arrow[2]{e,t}{\breve{L}^1}
\arrow{se,b}{l_{X\oplus Y}}
\node[2]{1X\oplus 1Y} \arrow{sw,b}{l_X\oplus l_Y} \\
\node[2]{X\oplus Y}
\end{diagram}\tag{1.4}$$
$$\begin{diagram}
\node{(X\oplus Y)1} \arrow[2]{e,t}{\breve{R}^1}
\arrow{se,b}{r_{X\oplus Y}}
\node[2]{X1\oplus Y1} \arrow{sw,b}{r_X\oplus r_Y} \\
\node[2]{X\oplus Y}
\end{diagram}\tag{1.4'}$$
commute.
[**Remark.**]{} The commutative diagrams (1.1), (1.1’) and (1.2), respectively, mean that:\
$$\begin{aligned} (a_{A, B, -})\;:\;& L^A.L^B &\longrightarrow \;& L^{AB}\\
(a_{-,A,B})\;:\;&R^{AB}&\longrightarrow \;&R^A.R^B\\
(a_{A, - ,B})\;:\;&L^A.R^B&\longrightarrow \;&R^B.L^A
\end{aligned}$$ are ${\oplus}$-morphisms.\
The diagram (1.3) shows that the family $(\breve{L}^Z_{X,Y})_Z=({\frak{L}}_{-,X,Y})$ is an ${\oplus}$-morphism between the ${\oplus}$-functors $Z\mapsto Z(X{\oplus}Y)$ and $Z\mapsto ZX{\oplus}ZY$, and the family $(\breve{R}^C_{A,B})_C=({\frak{R}}_{A, B,-})$ is an ${\oplus}$-morphism between the functors $C\mapsto (A{\oplus}B)C$ and $C\mapsto AC{\oplus}BC.$\
The diagram (1.4) (resp. (1.4’)) shows that $l$ (resp. $r$) is an ${\oplus}$-functor from $L^1$ (resp. $R^1$) to the unitivity functor of the ${\oplus}$-category ${\mathcal{A}}$.
[**The axiomatics of a ring category**]{}
A [*ring category*]{} is a category ${\mathcal{R}}$ equipped with two monoidal structures ${\oplus},{\otimes }$ (which include corresponding associativity morphisms $a^{{\oplus}}_{A,B,C},a^{{\otimes }}_{A,B,C}$ and unit objects denoted 0, 1) together with natural isomorphisms $$u_{A,B}:A{\oplus}B\to B{\oplus}A,\qquad\qquad v_{A,B,C}:A{\otimes }(B{\oplus}C)\to (A{\otimes }B){\oplus}(A{\otimes }C)$$ $$w_{A,B,C}:(A{\oplus}B){\otimes }C\to (A{\otimes }C){\oplus}(B{\otimes }C),$$ $$x_A:A{\otimes }0\to 0,\qquad y_A:0{\otimes }A\to 0.$$ These isomorphisms are required to satisfy the following conditions; a concrete set-theoretic illustration is sketched right after the list of axioms.
$K1 (\bullet{\oplus}\bullet)$ The isomorphisms $u_{A,B}$ define on ${\mathcal{R}}$ a structure of a symmetric monoidal category, i.e., they form a braiding and $u_{A,B}u_{B,A}=1.$
$K2 (\bullet{\otimes }(\bullet{\oplus}\bullet))$ For any objects $A,B,C$ the diagram [$$\begin{diagram}
\node{A{\otimes }(B{\oplus}C)}\arrow{s,l}{A{\otimes }u_{B,C}} \arrow{e,t}{v_{A,B,C}}\node{(A{\otimes }B){\oplus}(A{\otimes }C)}\arrow{s,r}{u_{A{\otimes }B,A{\otimes }C}} \\
\node{A{\otimes }(C{\oplus}B)}\arrow{e,t}{v_{A,C,B}}\node{(A{\otimes }C){\oplus}(A{\otimes }B)}
\end{diagram}$$]{} is commutative.
$K3 ((\bullet{\oplus}\bullet){\otimes }\bullet)$ For any objects $A,B,C$ the diagram [$$\begin{diagram}
\node{(A{\oplus}B){\otimes }C}\arrow{s,l}{u_{A,B}{\otimes }C} \arrow{e,t}{w_{A,B,C}}\node{(A{\otimes }C){\oplus}(B{\otimes }C)}\arrow{s,r}{u_{A{\otimes }C,B{\otimes }C}} \\
\node{(B{\oplus}A){\otimes }C}\arrow{e,t}{w_{B,A,C}}\node{(B{\otimes }C){\oplus}(A{\otimes }C)}
\end{diagram}$$]{} is commutative.
$K4 ((\bullet{\oplus}\bullet{\oplus}\bullet){\otimes }\bullet)$ For any objects $A,B,C,D$ the diagram [$$\begin{diagram}
\node{(A{\oplus}(B{\oplus}C))D}\arrow{s,l}{a^{{\oplus}}_{A,B,C}{\otimes }D} \arrow{e,t}{w_{A,B{\oplus}C,D}}\node{AD{\oplus}((B{\oplus}C)D)}\arrow{e,t}{AD{\oplus}w_{B,C,D}}\node{AD{\oplus}(BD{\oplus}CD)}\arrow{s,r}{a^{{\oplus}}_{AD,BD,CD}} \\
\node{((A{\oplus}B){\oplus}C)D}\arrow{e,t}{w_{A{\oplus}B,C,D}}\node{(A{\oplus}B)D{\oplus}CD}\arrow{e,t}{w_{A,B,D}{\oplus}CD}\node{(AD{\oplus}BD){\oplus}CD}
\end{diagram}$$]{} is commutative.
$K5 (\bullet{\otimes }(\bullet{\oplus}\bullet{\oplus}\bullet))$ For any objects $A,B,C,D$ the diagram [$$\begin{diagram}
\node{A(B{\oplus}(C{\oplus}D))}\arrow{s,l}{A{\otimes }a^{{\oplus}}_{B,C,D}} \arrow{e,t}{v_{A,B,C{\oplus}D}}\node{AB{\oplus}A(C{\oplus}D)}\arrow{e,t}{AB{\oplus}v_{A,C,D}}\node{AB{\oplus}(AC{\oplus}AD)}\arrow{s,r}{a^{{\oplus}}_{AB,AC,AD}} \\
\node{A((B{\oplus}C){\oplus}D)}\arrow{e,t}{v_{A,B{\oplus}C,D}}\node{A(B{\oplus}C){\oplus}AD}\arrow{e,t}{v_{A,B,C}{\oplus}AD}\node{(AB{\oplus}AC){\oplus}AD}
\end{diagram}$$]{} is commutative.
$K6 (\bullet{\otimes }\bullet{\otimes }(\bullet{\oplus}\bullet))$ For any objects $A,B,C,D$ the diagram [$$\begin{diagram}
\node{A(B(C{\oplus}D))}\arrow{s,l}{a^{{\otimes }}_{A,B,C{\oplus}D}} \arrow{e,t}{A{\otimes }v_{B,C,D}}\node{A(BC{\oplus}BD)}\arrow{e,t}{v_{A,BC,BD}}\node{A(BC){\oplus}A(BD)}\arrow{s,r}{a^{{\otimes }}_{A,B,C}{\oplus}a^{{\otimes }}_{A,B,D}} \\
\node{(AB)(C{\oplus}D)} \arrow[2]{e,t}{v_{AB,C,D}}\node[2]{(AB)C{\oplus}(AB)D}
\end{diagram}$$]{} is commutative.
$K7 ((\bullet{\oplus}\bullet){\otimes }\bullet{\otimes }\bullet)$ Similar to the above.
$K8 (\bullet{\otimes }(\bullet{\oplus}\bullet){\otimes }\bullet)$ Similar to the above.
$K9 ((\bullet{\oplus}\bullet){\otimes }(\bullet{\oplus}\bullet))$ For any objects $A,B,C,D$ the diagram
(connecting $(A{\oplus}B)(C{\oplus}D)$ with $((AC{\oplus}BC){\oplus}AD){\oplus}BD$ by the composite through $(A{\oplus}B)C{\oplus}(A{\oplus}B)D$ and $(AC{\oplus}BC){\oplus}(AD{\oplus}BD)$ on one side, and by the composite through $A(C{\oplus}D){\oplus}B(C{\oplus}D)$, $(AC{\oplus}AD){\oplus}(BC{\oplus}BD)$, $((AC{\oplus}AD){\oplus}BC){\oplus}BD$, $(AC{\oplus}(AD{\oplus}BC)){\oplus}BD$ and $(AC{\oplus}(BC{\oplus}AD)){\oplus}BD$ on the other)

is commutative (the notation for the arrows has been omitted; they are obvious).
$K10 (0{\otimes }0)$ The maps $x_0,y_0:0{\otimes }0\to 0$ coincide.
$K11 (0{\otimes }(\bullet{\oplus}\bullet))$ For any objects $A,B$ the diagram [$$\begin{diagram}
\node{0{\otimes }(A{\oplus}B)}\arrow{s,l}{y_{A{\oplus}B}} \arrow{e,t}{v_{0,A,B}}\node{(0{\otimes }A){\oplus}(0{\otimes }B)}\arrow{s,r}{y_A{\oplus}y_B}\\
\node{0}\node{0{\oplus}0}\arrow{w,t}{l^{{\oplus}}_0=r^{{\oplus}}_0}
\end{diagram}$$]{} is commutative.
$K12 ((\bullet{\oplus}\bullet){\otimes }0)$ Similar to the above.
$K13 (0{\otimes }1)$ The maps $y_1,r^{{\otimes }}_0:0{\otimes }1\to 0$ coincide.
$K14 (1{\otimes }0)$ Similar to the above.
$K15 (0{\otimes }\bullet{\otimes }\bullet)$ For any objects $A,B$ the diagram [$$\begin{diagram}
\node{0{\otimes }(A{\otimes }B)}\arrow{s,l}{y_{A{\otimes }B}} \arrow{e,t}{a^{{\otimes }}_{0,A,B}}\node{(0{\otimes }A){\otimes }B}\arrow{s,r}{y_A{\otimes }B}\\
\node{0}\node{0{\otimes }B}\arrow{w,t}{y_B}
\end{diagram}$$]{} is commutative.
$K16 (\bullet{\otimes }0{\otimes }\bullet),(\bullet{\otimes }\bullet{\otimes }0)$ For any objects $A,B$ the diagrams [$$\begin{diagram}
\node{A{\otimes }(0{\otimes }B)}\arrow{s,l}{A{\otimes }y_{B}} \arrow[2]{e,t}{a^{{\otimes }}_{A,0,B}}\node[2]{(A{\otimes }0){\otimes }B}\arrow{s,r}{x_A{\otimes }B}\\
\node{A{\otimes }0}\arrow{e,t}{x_A}\node{0}
\node{0{\otimes }B}\arrow{w,t}{y_B}
\end{diagram}$$]{} [$$\begin{diagram}
\node{A{\otimes }(B{\otimes }0)}\arrow{s,l}{A{\otimes }x_B}\arrow{e,t}{a^{{\otimes }}_{A,B,0}}\node{(A{\otimes }B){\otimes }0}\arrow{s,r}{x_{A{\otimes }B}}\\
\node{A{\otimes }0}\arrow{e,t}{x_A}\node{0}
\end{diagram}$$]{} are commutative.
$K17 (\bullet(0{\oplus}\bullet))$ For any objects $A,B$ the diagram [$$\begin{diagram}
\node{A{\otimes }(0{\oplus}B)}\arrow{s,l}{A{\otimes }l^{{\oplus}}_B} \arrow{e,t}{v_{A,0,B}}\node{(A{\otimes }0){\oplus}(A{\otimes }B)}\arrow{s,r}{x_A{\oplus}(A{\otimes }B)}\\
\node{A{\otimes }B}\node{0{\oplus}(A{\otimes }B)}\arrow{w,t}{l^{{\oplus}}_{A{\otimes }B}}
\end{diagram}$$]{} is commutative.
$K18 ((0{\oplus}\bullet){\otimes }\bullet),(\bullet{\otimes }(\bullet{\oplus}0)),((\bullet{\oplus}0){\otimes }\bullet)$ Similar to the above.
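To keep a concrete model in mind while reading K1–K18, one may take objects to be finite sets, ${\oplus}$ the disjoint union and ${\otimes}$ the cartesian product, with $u,v,w$ the evident bijections. The sketch below is only such an illustration (it is not part of either axiomatics) and checks the axiom K2 pointwise on sample sets:

```python
# Objects: finite sets; A 'oplus' B: tagged disjoint union; A 'otimes' B: product.
def oplus(A, B):
    return [('L', a) for a in A] + [('R', b) for b in B]

def otimes(A, B):
    return [(a, b) for a in A for b in B]

def u(A, B):
    """u_{A,B}: A + B -> B + A."""
    return {**{('L', a): ('R', a) for a in A}, **{('R', b): ('L', b) for b in B}}

def v(A, B, C):
    """v_{A,B,C}: A x (B + C) -> (A x B) + (A x C)."""
    return {(a, (t, x)): (t, (a, x)) for a in A for (t, x) in oplus(B, C)}

def compose(f, g):                     # x |-> f(g(x))
    return {x: f[y] for x, y in g.items()}

A, B, C = [0, 1], ['b'], ['c1', 'c2']
# K2: u_{A*B, A*C} after v_{A,B,C}  equals  v_{A,C,B} after (A tensor u_{B,C})
lhs = compose(u(otimes(A, B), otimes(A, C)), v(A, B, C))
A_tensor_u = {(a, p): (a, q) for a in A for p, q in u(B, C).items()}
rhs = compose(v(A, C, B), A_tensor_u)
print(lhs == rhs)                      # True
```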
The relation between an Ann-category and a ring category
========================================================
In this section, we will prove that the axiomatics of a ring category, without K10, can be deduced from the one of an Ann-category. First, we can see that the functor morphisms $a^{\oplus}, a^{\otimes}, u, l^{{\oplus}}, r^{{\oplus}}, v, w$ in Definition 2 are, respectively, the functor morphisms $a_{+}, a, c, g, d, {\frak{L}}, {\frak{R}}$ in Definition 1. The isomorphisms $x_A, y_A$ coincide with the isomorphisms $\widehat{L}^A, \widehat{R}^A$ referred to in Proposition 1.
We now prove that the diagrams which are required to commute in a ring category also commute in an Ann-category.
K1 obviously follows from (ii) in the definition of an Ann-category.
The commutative diagrams $K2, K3, K4, K5$ are indeed the compatibility of functor isomorphisms $(L^A, \Breve L^A), (R^A, \Breve R^A)$ with the constraints $a_{+}, c$ (the axiom Ann-1).
The diagrams $K6-K9,$ respectively, are indeed the ones in (Ann-2). Particularly, K9 is indeed the decomposition of (1.3) where the morphism $v$ is replaced by its definition diagram: [$$\begin{diagram}
\node{(P{\oplus}Q){\oplus}(R{\oplus}S)}\arrow{s,l}{v} \arrow{e,t}{a_{+}}\node{((P{\oplus}Q){\oplus}R){\oplus}S}\node{(P{\oplus}(Q{\oplus}R)){\oplus}S}\arrow{w,t}{a_{+}{\oplus}S}\arrow{s,r}{(P{\oplus}c){\oplus}S}\\
\node{(P{\oplus}R){\oplus}(Q{\oplus}S)}\arrow{e,t}{a_{+}}\node{((P{\oplus}R){\oplus}Q){\oplus}S}\node{(P{\oplus}(R{\oplus}Q)){\oplus}S.}\arrow{w,t}{a_{+}{\oplus}S}
\end{diagram}$$]{}
[**The proof for K17, K18**]{}
Let $P,$ $P^{'}$ be Gr-categories, $(a_{+}, (0, g, d)), (a^{'}_{+}, (0^{'}, g^{'}, d^{'}))$ be respective constraints, and $(F, \Breve F):P\rightarrow P^{'}$ be an $\oplus$-functor which is compatible with $(a_{+}, a^{'}_{+}).$ Then $(F, \Breve F)$ is compatible with the unitivity constraints $(0, g, d), (0^{'}, g^{'}, d^{'}).$
First, the isomorphism $\widehat{F}:F0\to 0'$ is determined by the composition $$\begin{diagram}
\node{u=F0{\oplus}F0}\node{F(0{\oplus}0)}
\arrow {w,t}{\widetilde{F}}
\arrow{e,t}{F(g)}
\node{F0}\node{0'{\oplus}F0.}\arrow{w,t}{g'}
\end{diagram}$$ Since $F0$ is a regular object, there exists uniquely the isomorphism $\widehat{F}:F0\to 0'$ such that $\widehat{F}{\oplus}id_{F0}=u.$ Then, we may prove that $\widehat{F}$ satisfies the diagrams in the definition of the compatibility of the ${\oplus}$-functor $F$ with the unitivity constraints.
In an Ann-category ${\mathcal{A}},$ there exist unique isomorphisms $$\hat L^A: A{\otimes }0 \longrightarrow 0, \qquad \hat R^A: 0{\otimes }A \longrightarrow 0$$ such that the following diagrams [$$\begin{diagram}
\node{AX}\node{A(0{\oplus}X)}\arrow{w,t}{L^A(g)}\arrow{s,r}{\breve L^A\qquad(2.1)}
\node{AX}\node{A(X{\oplus}0)}\arrow{w,t}{L^A(d)}\arrow{s,r}{\breve L^A\qquad(2.1')}\\
\node{0{\oplus}AX}\arrow{n,l}{g}\node{A0{\oplus}AX}\arrow{w,t}{\hat L^A{\oplus}id}
\node{AX{\oplus}0}\arrow{n,l}{d}\node{AX{\oplus}A0}\arrow{w,t}{id{\oplus}\hat L^A}
\end{diagram}$$]{} [$$\begin{diagram}
\node{AX}\node{(0{\oplus}X)A}\arrow{w,t}{R^A(g)}\arrow{s,r}{\breve R^A\qquad(2.2)}
\node{AX}\node{(X{\oplus}0)A}\arrow{w,t}{R^A(d)}\arrow{s,r}{\breve R^A\qquad(2.2')}\\
\node{0{\oplus}AX}\arrow{n,l}{g}\node{0A{\oplus}XA}\arrow{w,t}{\hat R^A{\oplus}id}
\node{AX{\oplus}0}\arrow{n,l}{d}\node{XA{\oplus}0A}\arrow{w,t}{id{\oplus}\hat R^A}
\end{diagram}$$]{} commute, i.e., $L^A$ and $R^A$ are U-functors respect to the operation ${\oplus}$.
Since $(L^A, \breve L^A)$ is an ${\oplus}$-functor which is compatible with the associativity constraint $a^{{\oplus}}$ of the Picard category $({\mathcal{A}},{\oplus}),$ it is also compatible with the unitivity constraint $(0,g,d)$ thanks to Lemma 1. That means there exists a unique isomorphism $\hat L^A$ satisfying the diagrams $(2.1)$ and $(2.1')$. The proof for $\hat R^A$ is similar. The commutative diagrams in Proposition 1 are indeed K17 and K18.
[**The proof for K15, K16**]{}
Let $(F,\breve F), (G,\breve G)$ be ${\oplus}$-functors between ${\oplus}$-categories ${\mathcal{C}}, {\mathcal{C}}'$ which are compatible with the constraints $(0, g, d), (0', g', d')$ and $\widetilde F: F(0)\longrightarrow 0', \widetilde G: G(0)\longrightarrow 0'$ are respective isomorphisms. If $\alpha: F \longrightarrow G$ is an ${\oplus}$-morphism such that $\alpha_0$ is an isomorphism, then the diagram [$$\begin{diagram}
\node{F0}\arrow[2]{r,t}{\alpha_0}\arrow{se,b}{\hat F}\node[2]{G0}\arrow{sw,b}{\hat G}\\
\node[2]{0'}
\end{diagram}$$]{} commutes.
Let us consider the diagram
(the diagram has the bottom row $F0$, $F(0{\oplus}0)$, $G(0{\oplus}0)$, $G0$ and the top row $0'{\oplus}F0$, $F0{\oplus}F0$, $G0{\oplus}G0$, $0'{\oplus}G0$; its arrows are labelled $u_0$, $id{\oplus}u_0$, $F(g)$, $u_{0{\oplus}0}$, $G(g)$, $\breve{F}{\oplus}id$, $u_0{\oplus}u_0$, $\breve{G}{\oplus}id$, $\widetilde{F}$, $\widetilde{G}$ and $g'$ on both outer sides; its subregions are labelled (I)-(V))
In this diagram, (II) and (IV) commute thanks to the compatibility of ${\oplus}$-functors $(F,\breve F), (G,\breve G)$ with the unitivity constraints; (III) commutes since $u$ is a ${\oplus}$-morphism; (V) commutes thanks to the naturality of $g'.$ Therefore, (I) commutes, i.e., $$\breve{G}\circ u_0{\oplus}u_0=\breve{F}{\oplus}u_0.$$ Since $F0$ is a regular object, $\breve{G}\circ u_0=\breve{F}.$
For any objects $X, Y\in \text{ob}{\mathcal{A}}$ the diagrams $$\begin{aligned}
{\scriptsize\begin{diagram}
\node{X{\otimes }(Y{\otimes }0)}\arrow{e,t}{id{\otimes }\widehat{L}^Y}\arrow{s,l}{a}\node{X{\otimes }0}
\arrow{s,r}{\widehat L^X \qquad(2.3)}
\node{0{\otimes }(X{\otimes }Y)}\arrow{e,t}{\widehat R^{XY}}\arrow{s,l}{a}\node{0}\\
\node{(X{\otimes }Y){\otimes }0}\arrow{e,t}{\widehat L^{XY}}\node{0}
\node{(0{\otimes }X){\otimes }Y}\arrow{e,t}{\widehat R^X{\otimes }id}\node{0{\otimes }Y}\arrow{n,r}{\widehat R^Y \qquad(2.3')}
\end{diagram}\nonumber}\end{aligned}$$ [$$\begin{diagram}
\node{X{\otimes }(0{\otimes }Y)}\arrow[2]{e,t}{a}\arrow{s,l}{id{\otimes }\hat R^Y}\node[2]{(X{\otimes }0){\otimes }Y}
\arrow{s,r}{\widehat L^X{\otimes }id\qquad(2.4)}\\
\node{X{\otimes }0}\arrow{e,t}{\widehat L^X}\node{0}
\node{0{\otimes }Y}\arrow{w,t}{\widehat R^Y}
\end{diagram}$$]{} commute.
To prove the first diagram commutative, let us consider the diagram [$$\begin{diagram}
\node{X{\otimes }(Y{\otimes }0)}\arrow{e,t}{id{\otimes }\hat L^Y}\arrow{s,l}{a}\arrow{se,t}{\widehat{L^X\circ L^Y}}
\node{X{\otimes }0}\arrow{s,r}{\hat L^X}\\
\node{(X{\otimes }Y){\otimes }0}\arrow{e,t}{\hat L^{XY}}\node{0}
\end{diagram}$$]{} According to the axiom (1.1), $(a_{X, Y, Z})_Z$ is an ${\oplus}$-morphism from the functor $L = L^X\circ L^Y$ to the functor $G = L^{XY}$. Therefore, from Lemma 2, (II) commutes. (I) commutes thanks to the determination of $\hat L$ for the composition $L = L^X\circ L^Y$. So the outside commutes.
The second diagram is proved similarly, thanks to the axiom (1.1’). To prove that the diagram (2.4) commutes, let us consider the diagram [$$\begin{diagram}
\node{X{\otimes }(0{\otimes }Y)}\arrow[2]{e,t}{a}\arrow{s,l}{id{\otimes }\widehat{R^Y}}\arrow{se,t}{\hat H}\node[2]{(X{\otimes }0){\otimes }Y}\arrow{s,r}{\widehat{L^X}{\otimes }id}\arrow{sw,t}{\hat K}\\
\node{X{\otimes }0}\arrow{e,b}{\widehat{L^X}}\node{0}\node{0{\otimes }Y}\arrow{w,b}{\widehat{R^Y}}
\end{diagram}$$]{} where $H = L^X\circ R^Y$ and $K = R^Y\circ L^X$. Then (II) and (III) commute thanks to the determination of the isomorphisms $\hat H$ and $\hat K$. From the axiom (1.2), $(a_{X,Y,Z})_Z$ is an ${\oplus}$-morphism from the functor $H$ to the functor $K$. So from Lemma 2, (I) commutes. Therefore, the outside commutes. The diagrams in Proposition 2 are indeed K15, K16.
[**Proof for K11**]{}
In an Ann-category, the diagram [$$\begin{diagram}
\node{0{\oplus}0}\arrow{e,t}{g_0=d_0}\node{0}\\
\node{(0{\otimes }X){\oplus}(0{\otimes }Y)}\arrow{n,l}{\widehat{R}^X{\oplus}\widehat{R}^Y}
\node{0{\otimes }(X{\oplus}Y)}\arrow{n,r}{\widehat{R}^{X{\oplus}Y}\qquad(2.5)}\arrow{w,t}{\breve{L}^0}
\end{diagram}$$]{} commutes.
Let us consider the diagram
(diagram (2.6): the left column, from top to bottom, is $(A{\oplus}0)(B{\oplus}C)$, $(A{\oplus}0)B{\oplus}(A{\oplus}0)C$, $(AB{\oplus}0B){\oplus}(AC{\oplus}0C)$, $(AB{\oplus}AC){\oplus}(0B{\oplus}0C)$, $A(B{\oplus}C){\oplus}0(B{\oplus}C)$; the right column, from top to bottom, is $A(B{\oplus}C)$, $AB{\oplus}AC$, $(AB{\oplus}0){\oplus}(AC{\oplus}0)$, $(AB{\oplus}AC){\oplus}(0{\oplus}0)$, $A(B{\oplus}C){\oplus}0$; the vertical arrows are labelled $\breve{L}^{A{\oplus}0}$, $\breve{R}^B{\oplus}\breve{R}^C$, $v$, $\breve{L}^A{\oplus}\breve{L}^0$ on the left and $\breve{L}^A$, $d_{AB}{\oplus}d_{AC}$, $v$, $\breve{L}^A{\oplus}d_0^{-1}$ on the right; the horizontal arrows are labelled $d_A{\otimes}id$, $(d_A{\otimes}id){\oplus}(d_A{\otimes}id)$, $(id{\oplus}\widehat{R}^B){\oplus}(id{\oplus}\widehat{R}^C)$, $(id{\oplus}id){\oplus}(\widehat{R}^B{\oplus}\widehat{R}^C)$, $f'_A{\oplus}id$; the outer vertical arrows are $\breve{R}^{B{\oplus}C}$ on the left and $d$ on the right; the subregions are labelled (I)-(VI).)
In this diagram, (V) commutes thanks to the axiom I(1.3); (I) commutes thanks to the functorial property of ${\frak{L}};$ the outside and (II) commute thanks to the compatibility of the functors $R^{B{\oplus}C},R^B,R^C$ with the unitivity constraint $(0,g,d);$ (III) commutes thanks to the functorial property of $v;$ (VI) commutes thanks to the coherence for the ACU-functor $(L^A,\breve{L}^A).$ So (IV) commutes. Note that $A(B{\oplus}C)$ is a regular object with respect to the operation ${\oplus},$ so the diagram (2.5) commutes. We have K11.
Similarly, we have K12.
[**Proof for K13, K14**]{}
In an Ann-category, we have $$\widehat{L}^1=l_0,\widehat{R}^1=r_0.$$
We will prove the first equation; the second one is proved similarly. Let us consider the diagram (2.7). In this diagram, the outside commutes thanks to the compatibility of the ${\oplus}$-functor $(L^1,\breve{L}^1)$ with the unitivity constraint $(0,g,d)$ with respect to the operation ${\oplus};$ (I) commutes thanks to the functorial property of the isomorphism $l;$ (II) commutes thanks to the functorial property of $g;$ (III) obviously commutes; (IV) commutes thanks to the axiom I(1.4). So (V) commutes, i.e., $$\widehat{L}^1{\oplus}id_{1.0}=l_0{\oplus}id_{1.0}.$$ Since 1.0 is a regular object with respect to the operation ${\oplus},$ $\widehat{L}^1=l_0.$
(diagram (2.7): the outer square has the vertices $1.0$, $1.(0{\oplus}0)$, $(1.0){\oplus}(1.0)$ and $0{\oplus}(1.0)$ with the outer arrows $L^1(g_0)=id{\otimes}g_0$, $\breve{L}^1$, $\widehat{L}^1{\oplus}id$ and $g_{1.0}$; the inner vertices are $0$ and three copies of $0{\oplus}0$, joined by the arrows $l_0$, $l_{0{\oplus}0}$, $g_0$ (twice), $id$ (twice), $id{\oplus}l_0$ and $l_0{\oplus}l_0$; the subregions are labelled (I)-(V).)
We have K14.
Similarly, we have K13.
An Ann-category ${\mathcal{A}}$ is [*strong*]{} if $\widehat{L}^0=\widehat{R}^0.$
All the above results can be stated as follows
Each strong Ann-category is a ring category.
In our opinion, in the axiomatics of a [*ring category,*]{} the compatibility of the distributivity constraint with the unitivity constraint $(1, l, r)$ with respect to the operation $\otimes$ is necessary, i.e., the diagrams of (Ann-3) should be added.
Moreover, if the symmetric monoidal structure of the operation $\oplus$ is replaced with the symmetric categorical groupoid structure, then each ring category is an Ann-category.
An open question: can the equation $\widehat{L}^0=\widehat{R}^0$ be shown to be independent of the axioms of an Ann-category?
[**Comment on “Quantum Phase Slips and Transport in Ultrathin Superconducting Wires”**]{}
In a recent Letter [@ZGVZ], Zaikin, Golubev, van Otterlo, and Zimanyi (ZGVZ) criticized the phenomenological time-dependent Ginzburg-Landau (TDGL) model which I used to study the quantum phase slip (QPS) for superconducting wires [@duan]. They claimed that they developed a “microscopic” model, made a [*qualitative*]{} improvement on my overestimate of the tunneling barrier due to the electromagnetic (EM) field, and claimed agreement with the experiments by Giordano [@nick].
In this comment, I want to point out that, i), ZGVZ’s result on EM barrier is expected in [@duan]; ii), their work is also phenomenological; iii), their renormalization scheme is fundamentally flawed; iv), they underestimated the barrier for ultrathin wires; v), their comparison with experiments is incorrect. Details are given below.
i), In [@duan] I emphasized results on relatively thick wires and concluded that the observations on Giordano’s wires with thickness $\sqrt{\sigma}=410-505\AA$ must be due to weak links. Both the kinetic inductance and the Mooij-Schön mode have been included [@duan; @scot]. The ultrathin-wire limit of Eq. (8) in [@duan] gives an EM barrier, $$\begin{aligned}
\frac{S_{EM}}{\hbar} \cong \frac{\pi}{2}
\sqrt{2\ln(q_{upper}/q_{lower})} \frac{\sqrt{\sigma}}
{\lambda_L} \frac{\hbar c}{e^2},
\label{ultrathin}\end{aligned}$$ where the ratio of cutoffs $q_{upper}/q_{lower}\ge 10$. [*Eq. (\[ultrathin\]) is the main result in Ref. [@ZGVZ]*]{}, except for some underestimate (cf. below).
ii), QPS is a far-from-equilibrium process occurring on a time scale of the order of the inverse gap of the superconductors. ZGVZ’s model contains nothing more than a saddle point plus Gaussian fluctuations and does not go beyond TDGL. Despite minor differences from my work, Ref. [@ZGVZ] does not present any new physics. I also want to point out that their electric screening for the superconducting charge is described by the condensate fraction $n_s$, which vanishes at $T_c$. This is incorrect since Debye screening does not discriminate between superconducting and normal charge.
iii), ZGVZ’s renormalization scheme is wrong since EM field [*qualitatively*]{} changes the Kosterlitz renormalization flow [@scot; @fisher; @zhang]. Therefore their conclusions on the “metal-superconductor” transition and wire resistance were not based on a solid foundation.
iv), ZGVZ underestimated the EM barrier for ultrathin wires. For Giordano’s wire of $\sigma=\pi(80\AA)^2$, if we [*assume*]{} it is homogeneous with an effective London penetration depth $\lambda_L=1000\AA$, the EM barrier [*alone*]{} reduces the tunneling probability to $e^{-60}$. Taking into account the substrate dielectric constant [@scot] $\epsilon=10$ for $Ge$, then the EM suppression is $e^{-330}$. [*In order to observe QPS in homogeneous wires, the wire radius must be smaller than $10\AA$*]{}.
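As a rough numerical cross-check of point iv), Eq. (\[ultrathin\]) can be evaluated for the quoted wire parameters. The sketch below assumes a cutoff ratio $q_{upper}/q_{lower}=10$, an effective $\lambda_L=1000\AA$ and $\hbar c/e^2\simeq 137$; the substrate dielectric enhancement mentioned above is not modelled.

```python
# Sketch: numerical evaluation of the ultrathin-wire EM barrier.
# Assumed inputs: sigma = pi*(80 A)^2, lambda_L = 1000 A,
# q_upper/q_lower = 10, hbar*c/e^2 = 1/alpha ~ 137 (dimensionless).
import numpy as np

sigma = np.pi * 80.0**2          # wire cross-section in Angstrom^2
lambda_L = 1000.0                # London penetration depth in Angstrom
cutoff_ratio = 10.0
hbarc_over_e2 = 137.0

S_over_hbar = (np.pi / 2.0) * np.sqrt(2.0 * np.log(cutoff_ratio)) \
              * np.sqrt(sigma) / lambda_L * hbarc_over_e2
print(S_over_hbar)   # ~65, i.e. a bare suppression factor ~ exp(-65)
```

The result, of order $e^{-65}$, is consistent with the bare $e^{-60}$ suppression quoted above.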
v), It needs to be emphasized that the results by Sharifi [*et al.*]{} [@dynes] are NOT similar to those of [@nick], but qualitatively different. The wires in [@dynes; @neg] are known to be homogeneous. In wires ($[40\AA]^2$) thinner than those in [@nick], no quantum phase slip was observed. The wire resistance still obeys thermally activated behavior, albeit larger than predicted by classical models [@neg]. As stated by ZGVZ, the theory is for a homogeneous wire. Hence they should compare with the experiments in [@dynes] instead of [@nick]. A random Josephson-junction model [@scot] is suitable for granular wires [@nick; @peng].
TDGL was used to calculate the EM barrier for the quantum decay of supercurrent [@duan] and the EM contribution to vortex mass [@mass]. There is a deep physical reason [@mass] for the second order time derivative term in TDGL: superfluids are compressible due to number-phase conjugation. [*Hence the qualitative physics in [@duan; @mass] is much more general than the formalism of the TDGL model*]{}.
This work was supported by NSF (DMR 91-13631).
Ji-Min Duan
Department of Physics,
University of California-San Diego,
La Jolla, California 92093-0319
A. D. Zaikin, D. S. Golubev, A. van Otterlo, and G. T. Zimanyi, Phys. Rev. Lett. [**78**]{}, 1552 (1997).
J.-M. Duan, Phys. Rev. Lett. [**74**]{}, 5128 (1995).
N. Giordano, Phys. Rev. Lett. [**61**]{}, 2137 (1988); Physica B [**203**]{}, 460 (1994).
S. R. Renn and J.-M. Duan, Phys. Rev. Lett. [**76**]{}, 3400 (1996).
M. P. A. Fisher and G. Grinstein, Phys. Rev. Lett.[**60**]{}, 208 (1988).
S. C. Zhang, Phys. Rev. Lett. [**59**]{}, 2111 (1987).
F. Sharifi, A. V. Herzog, and R. C. Dynes, Phys. Rev. Lett. [**71**]{}, 428 (1993).
P. Xiong, A. V. Herzog, and R. C. Dynes, Phys. Rev. Lett. [**78**]{}, 927 (1997).
A. V. Herzog, P. Xiong, F. Sharifi, and R. C. Dynes, Phys. Rev. Lett. [**76**]{}, 668 (1996).
J.-M. Duan, Phys. Rev. B [**48**]{}, 333 (1993); [*ibid*]{}, [**49**]{}, 12381 (1994). J.-M. Duan, and A. J. Leggett, Phys. Rev. Lett. [**68**]{}, 1216 (1992).
---
abstract: 'We investigate the thermopower due to the orbital Kondo effect in a single quantum dot system by means of the noncrossing approximation. It is elucidated how the asymmetry of tunneling resonance due to the orbital Kondo effect affects the thermopower under gate-voltage and magnetic-field control.'
address:
- 'Department of Applied Physics, Osaka University, Suita, Osaka 565-0871, Japan'
- 'Department of Physics, Kyoto University, Kyoto 606-8502, Japan'
author:
- 'R. Sakano'
- 'T. Kita'
- 'N. Kawakami'
title: Thermopower of Kondo Effect in Single Quantum Dot Systems with Orbital at Finite Temperatures
---
Keywords: quantum dot, Kondo effect, transport. PACS: 73.23.-b, 73.63.Kv, 71.27.+a, 75.30.Mb
Introduction
============
The Kondo effect due to magnetic impurity scattering in metals is a well-known and widely studied phenomenon [@book:hewson]. The effect has recently received much renewed attention since it was found that the Kondo effect significantly influences the conductance in quantum dot (QD) systems [@pap:D.GG]. The many tunable parameters in QD systems make it possible to systematically investigate electron correlations. In particular, the high symmetry in the shape of QDs gives rise to orbital degrees of freedom, which has stimulated extensive studies on the conductance due to the orbital Kondo effect [@pap:sasaki_st; @pap:st_Eto; @pap:Sasaki2; @pap:pjh; @pap:Choi; @pap:sakano]. The thermopower we study in this paper is another important transport quantity, which gives information on the density of states complementary to the conductance measurement: the thermopower sensitively probes the asymmetric nature of the tunneling resonance around the Fermi level. So far, a few theoretical studies have been done on the thermopower in QD systems [@pap:Beenakker; @pap:boese; @pap:Turek; @pap:tskim; @pap:Matveev; @pap:BDong; @pap:Krawiec; @pap:Donabidowicz]. A recent observation of the thermopower due to the spin Kondo effect in a lateral QD system [@pap:Scheibner] naturally motivates us to explore this transport quantity theoretically in more detail. Here, we discuss how the asymmetry of the tunneling resonance due to the orbital Kondo effect affects the thermopower under gate-voltage and magnetic-field control. By employing the noncrossing approximation (NCA) for the Anderson model with finite Coulomb repulsion, we investigate the Kondo effect of the QD in several electron-charge regions.
Model and Calculation
=====================
Let us consider a single QD system with $N$-degenerate orbitals in equilibrium, as shown in Fig. \[fug:sch\].
![ Energy-level scheme of a single QD system with three orbitals coupled to two leads. []{data-label="fug:sch"}](pic/sch.eps){width="0.5\linewidth"}
The energy levels of the QD are assumed to be $$\begin{aligned}
&&\varepsilon_{\sigma l} = \varepsilon_d + l \Delta_{orb}, \\
&&l=-(N_{orb}-1)/2,-(N_{orb}-3)/2, \cdots, (N_{orb}-1)/2 \nonumber\end{aligned}$$ where $\varepsilon_d$ denotes the center of the energy levels, $\sigma$ ($l$) represents the spin (orbital) index, and $N_{orb}$ is the degree of the orbital degeneracy. The energy-level splitting between the orbitals $\Delta_{orb}$ is induced in the presence of a magnetic field $B$; $\Delta_{orb} \propto B$. In addition, the Zeeman splitting is assumed to be much smaller than the orbital splitting, so that we can ignore the Zeeman effect. In practice, this type of orbital splitting has been experimentally realized as Fock-Darwin states in vertical QD systems or as clockwise and counterclockwise states in carbon nanotube QD systems. Our QD system is described by the multiorbital Anderson impurity model, $$\begin{aligned}
{\cal H} &=& {\cal H}_l + {\cal H}_d + {\cal H}_{t} \label{eq:hamiltonian} \\
%%%%%%%%
{\cal H}_l &=& \sum_{k \sigma l} \varepsilon_{k \sigma l} c^{\dagger}_{k \sigma l} c_{k \sigma l}, \\
%%%%%%%%%%
{\cal H}_d &=& \sum_{\sigma l} \varepsilon_{\sigma l} d^{\dagger}_{\sigma l} d_{\sigma l}
+ U \sum_{\sigma l \neq \sigma' l'} n_{\sigma l} n_{\sigma' l'} \nonumber \\
&& \qquad -J \sum_{l \neq l'} \mbox{\boldmath$S$}_{dl} \cdot \mbox{\boldmath$S$}_{dl'} , \\
%%%%%%%%%
{\cal H}_{t} &=& V \sum_{k \sigma l} \left( c^{\dagger}_{k \sigma l} d_{\sigma l} + \mbox{H. c.} \right),\end{aligned}$$ where $U$ is the Coulomb repulsion and $J(>0)$ represents the Hund coupling in the QD.
The non-equilibrium Green’s function technique allows us to study general transport properties, which gives the expression for the T-linear thermopower as [@pap:BDong], $$\begin{aligned}
S=-(1/eT) ({\cal L}_{12}/{\cal L}_{11}),\end{aligned}$$ with the linear response coefficients, $$\begin{aligned}
{\cal L}_{11} &=& \frac{\pi T}{h} \Gamma \sum_{\sigma l} \int d\varepsilon \, \rho_{\sigma l}(\varepsilon) \left( - \frac{\partial f(\varepsilon)}{\partial \varepsilon} \right), \\
{\cal L}_{12} &=& \frac{\pi T}{h} \Gamma \sum_{\sigma l} \int d\varepsilon \, \varepsilon \rho_{\sigma l} (\varepsilon) \left( - \frac{\partial f(\varepsilon)}{\partial \varepsilon} \right),\end{aligned}$$ where $\rho_{\sigma l}(\varepsilon)$ is the density of states for the electrons with spin $\sigma$ and orbital $l$ in the QD and $f(\varepsilon)$ is the Fermi distribution function. In order to obtain the thermopower it is necessary to evaluate $\rho_{\sigma l}(\varepsilon)$.
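To illustrate how these expressions tie the sign of $S$ to the asymmetry of the density of states, the following minimal sketch evaluates ${\cal L}_{11}$, ${\cal L}_{12}$ and $S$ for an assumed Lorentzian resonance of width $T_K$ displaced by $\delta$ from the Fermi level; the common prefactor $\pi T\Gamma/h$ and the spin-orbital sum cancel in the ratio, and units with $e=k_B=1$ are used.

```python
# Minimal sketch of S = -(1/eT) L12/L11 for an assumed Lorentzian resonance.
import numpy as np

def thermopower(T, delta, T_K):
    """T, delta, T_K in the same (arbitrary) energy units; e = k_B = 1."""
    eps = np.linspace(-50 * T, 50 * T, 20001)
    rho = (T_K / np.pi) / ((eps - delta) ** 2 + T_K ** 2)   # assumed lineshape
    df  = 1.0 / (4 * T * np.cosh(eps / (2 * T)) ** 2)        # -df/deps
    L11 = np.trapz(rho * df, eps)
    L12 = np.trapz(eps * rho * df, eps)
    return -(1.0 / T) * L12 / L11

print(thermopower(T=0.04, delta=+0.2, T_K=0.3))   # resonance above E_F -> S < 0
print(thermopower(T=0.04, delta=-0.2, T_K=0.3))   # resonance below E_F -> S > 0
print(thermopower(T=0.04, delta=0.0,  T_K=0.3))   # symmetric resonance -> S ~ 0
```

A resonance above the Fermi level gives a negative thermopower, a resonance below gives a positive one, and a symmetric resonance pinned at the Fermi level gives $S\simeq 0$, in line with the discussion below.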
We exploit the NCA method to treat the Hamiltonian (\[eq:hamiltonian\]) [@pap:Bickers; @pap:Pruschke]. The NCA is a self-consistent perturbation theory, which resums a specific infinite series of terms in the expansion in the hybridization $V$. This method is known to give physically sensible results at temperatures around or higher than the Kondo temperature. The basic NCA equations can be obtained as coupled equations for the self-energies $\Sigma_m(z)$ of the resolvents $R_m(z)=1/[z-\varepsilon_m - \Sigma_m(z)]$, $$\begin{aligned}
\Sigma_m(z) &=& \frac{\Gamma}{\pi} \sum_{m'} \sum_{\sigma l} \left[ \left( M^{\sigma l}_{m' m} \right)^2 + \left( M^{\sigma l}_{m m'} \right)^2 \right] \nonumber \\
&& \qquad \times \int d\varepsilon R_{m'}(z+\varepsilon)f(\varepsilon),\end{aligned}$$ where the index $m$ specifies the eigenstates of ${\cal H}_d$ and the mixing width is $\Gamma=\pi \rho_c V^2$, with $\rho_c$ the density of states of the lead electrons. The coefficients $M_{mm'}^{\sigma l}$ are the expansion coefficients of the fermion operator $d_{\sigma l}^{\dagger}=\sum_{mm'} M_{mm'}^{\sigma l} | m \rangle \langle m' |$. We compute the density of states by this method to investigate the thermopower.
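The structure of this self-consistency can be illustrated with a schematic toy version. The sketch below iterates the two coupled resolvent equations of the infinite-$U$, single-orbital limit (empty and singly occupied dot states only); this simplification is assumed purely for brevity and is not the finite-$U$ multiorbital solver used in this work, and the physical dot spectral function, which requires a further convolution of the pseudo-particle spectral densities, is not computed.

```python
# Schematic NCA fixed-point iteration (toy: infinite-U, single orbital).
import numpy as np

N_deg = 4          # assumed spin x orbital degeneracy
Gamma = 1.0        # hybridisation width (energy unit)
eps_d = -2.0       # dot level
T     = 0.1        # temperature
D     = 10.0       # half-bandwidth of the leads
eta   = 1e-2       # small imaginary part

w   = np.linspace(-3 * D, 3 * D, 2001)        # frequency grid for resolvents
eps = np.linspace(-D, D, 801)                 # band energies
f   = 1.0 / (np.exp(eps / T) + 1.0)           # Fermi function

Sigma_b = np.zeros_like(w, dtype=complex)     # empty-state self-energy
Sigma_f = np.zeros_like(w, dtype=complex)     # singly-occupied self-energy

def resolvent(e0, Sigma):
    return 1.0 / (w - e0 - Sigma + 1j * eta)

def shifted(R, shift_sign):
    # interpolate R(w + shift_sign*e) for every band energy e
    return np.array([np.interp(w + shift_sign * e, w, R.real)
                     + 1j * np.interp(w + shift_sign * e, w, R.imag)
                     for e in eps])

for it in range(60):                           # plain fixed-point iteration
    R_b = resolvent(0.0,   Sigma_b)
    R_f = resolvent(eps_d, Sigma_f)
    new_b = (N_deg * Gamma / np.pi) * np.trapz(f[:, None] * shifted(R_f, +1), eps, axis=0)
    new_f = (Gamma / np.pi) * np.trapz((1 - f)[:, None] * shifted(R_b, -1), eps, axis=0)
    Sigma_b = 0.5 * Sigma_b + 0.5 * new_b      # linear mixing for stability
    Sigma_f = 0.5 * Sigma_f + 0.5 * new_f

print("Im Sigma_b at w=0:", np.interp(0.0, w, Sigma_b.imag))
```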
Results
=======
Gate voltage control
--------------------
The thermopower for two orbitals is shown in Fig. \[fig:double\_vS\] as a function of the energy level $\varepsilon_d$ (gate-voltage control).
![ The thermopower for the two orbital QD system with finite Coulomb repulsion $U=8\Gamma$ as a function of the energy level of the QD. (a) The temperature dependence for $J=0$. The inset shows the conductance as a function of the dot level at $k_BT=0.20\Gamma$ (Coulomb resonance peaks). (b) The Hund-coupling dependence for $k_BT=0.04\Gamma$. []{data-label="fig:double_vS"}](pic/double_v.eps){width="6cm"}
There are four Coulomb peaks around $-\varepsilon_d/U \sim 0,1,2,3$ at high temperatures (see the inset of Fig. \[fig:double\_vS\](a)). As the temperature decreases, the thermopower in the region of $-1<\varepsilon_d/U < 0 (-3<\varepsilon_d/U < -2)$ with $n_d \sim 1 (3)$ is dominated by the [*SU*]{}(4) Kondo effect. The thermopower has negative values in the region $-1<\varepsilon_d/U < 0$, implying that the effective tunneling resonance, such as the Kondo resonance, is located above the Fermi level. At low enough temperatures, the [*SU*]{}(4) Kondo effect is enhanced with decrease of energy level down to $\varepsilon_d/U=-1/2$, which results in the enhancement of the thermopower. However, if the temperature of the system is larger than the [*SU*]{}(4) Kondo temperature, the Kondo effect is suppressed and the thermopower has a minimum in the regime $-1/2 <\varepsilon_d/U <0$. As the energy level further decreases, the [*SU*]{}(4) Kondo effect and the resulting thermopower are both suppressed. Note that the Hund coupling hardly affects the thermopower because of $n_d \sim 1$ in this regime, as shown in Fig. \[fig:double\_vS\] (b). Since the region of $-3<\varepsilon_d/U < -2$ can be related to $-1<\varepsilon_d/U < 0$ via an electron-hole transformation, we can directly apply the above discussions on the [*SU*]{}(4) Kondo effect to the former region by changing the sign of the thermopower.
Let us now turn to the region of $-2<\varepsilon_d/U<-1$, where $n_d \sim 2$. At $J=0$, the Kondo effect due to six-fold degenerate states occurs. Although the resulting Kondo effect is strongly enhanced around $\varepsilon_d/U=-3/2$ in this case, the thermopower is almost zero because the Kondo resonance is located just at the Fermi level. Therefore, when the dot level is changed, the position of the Kondo resonance is shifted across the Fermi level, which causes the sign change of the thermopower. Around $\varepsilon_d/U=-3/2$, even small perturbations could easily change the sign of the thermopower at low temperatures. Note that these properties are quite similar to those for the ordinary spin Kondo effect shown in Fig. \[fig:single\_vS\], because the filling is near half in both cases.
![The thermopower due to the ordinary spin Kondo effect as a function of the dot level. We set $U=6\Gamma$.[]{data-label="fig:single_vS"}](pic/single_vS.eps){width="5.5cm"}
For large Hund couplings $J$, the triplet Kondo effect is realized and the resulting Kondo temperature is very small, so that the thermopower shown in Fig. \[fig:double\_vS\](b) is dramatically suppressed.
Magnetic field control
----------------------
Let us now analyze the effects of orbital-splitting caused by magnetic fields. The computed thermopower for $\varepsilon_d/U=-1/2$ is shown in Fig. \[fig:double\_kS\] as a function of the orbital splitting $\Delta_{orb}$.
![ The thermopower for the two orbital QD system, in case of $\varepsilon_d=-U/2$, as a function of orbital splitting $\Delta_{orb}$. We set $U=8\Gamma$. []{data-label="fig:double_kS"}](pic/double_kS.eps){width="6cm"}
It is seen that magnetic fields dramatically suppress the thermopower, which is caused by the following mechanism. In the presence of magnetic fields, the Kondo effect changes from the [*SU*]{}(4) orbital type to the [*SU*]{}(2) spin type because the orbital degeneracy is lifted. As a consequence, the resonance peak approaches the Fermi level and the effective Kondo temperature is reduced, so that the thermopower at finite temperatures is reduced in the presence of magnetic fields.
Note that, in our model, magnetic fields change the lowest energy level $\varepsilon_{\sigma -\frac{1}{2}}$ from $-U/2$ to $-(U+\Delta_{orb})/2$. Accordingly, the peak position of the renormalized resonance shifts downward across the Fermi level (down to a little below the Fermi level). Thus, the large negative thermopower changes to a small positive one as the magnetic field increases at low temperatures. In strong fields, the effective Kondo resonance is located around the Fermi level with symmetric shape, so that even small perturbations could give rise to a large value of thermopower with either negative or positive sign.
Finally a brief comment is in order for other choices of the parameters. The thermopower for $\varepsilon_d/U=-5/2$ shows similar magnetic-field dependence to the $\varepsilon_d/U=-1/2$ case except that its sign is changed. For $\varepsilon_d/U=-3/2$, the thermopower is almost zero and independent of magnetic fields, because the Kondo resonance is pinned at the Fermi level and gradually disappears with increase of magnetic fields.
Summary
=======
We have studied the thermopower for the two-orbital QD system under gate-voltage and magnetic-field control. In particular, making use of the NCA method for the Anderson model with finite Coulomb repulsion, we have systematically investigated the low-temperature properties for several electron-charge regions. It has been elucidated how the asymmetric nature of the resonance due to the orbital Kondo effect controls the magnitude and the sign of the thermopower at low temperatures.
For $\varepsilon_d/U\sim-1/2$ ($\varepsilon_d/U\sim -5/2$), where $n_d \sim1 (3)$, the [*SU*]{}(4) Kondo effect is dominant and the corresponding thermopower is enhanced. These two regions are related to each other via an electron-hole transformation, which gives rise to an opposite sign of the thermopower. In addition, magnetic fields change the Kondo effect to an [*SU*]{}(2) type, resulting in two major effects: the effective resonance position approaches the Fermi level and the Kondo temperature is decreased. Therefore, the reduction of the thermopower occurs in the presence of magnetic fields.
For $\varepsilon_d/U\sim-3/2$, where $n_d \sim 2$, the Kondo effect due to six-fold degenerate states occurs for $J=0$. However, the thermopower is strongly reduced because the resonance peak is located near the Fermi level. When the Hund coupling is large, the triplet Kondo effect is dominant. The resulting small Kondo temperature suppresses the thermopower around $\varepsilon_d/U \sim -3/2$ at finite temperatures. In this region, magnetic fields do not affect the asymmetry of the resonance peak and the resulting thermopower remains almost zero because the filling is fixed.
Acknowledgement {#acknowledgement .unnumbered}
===============
We thank S. Tarucha, A. C. Hewson, A. Oguri and S. Amaha for valuable discussions. RS was supported by the Japan Society for the Promotion of Science.
[00]{} A. C. Hewson, [*The Kondo Problem to Heavy Fermions*]{} (Cambridge University Press, Cambridge, 1997).
D. Goldhaber-Gordon, [*et al.*]{}, Nature, [**391**]{} (1998) 156.
S. Sasaki, [*et al.*]{}, Nature, [**405**]{} (2000) 764.
M. Eto, [*et al.*]{}, Phys. Rev. Lett. [**85**]{} (2000) 1306.
S. Sasaki, [*et al.*]{}, Phys. Rev. Lett. **93** (2004) 17205.
P. Jarillo-Herrero, [*et al.*]{}, Nature, [**434**]{} (2005) 484.
M.-S. Choi, [*et al.*]{}, Phys. Rev. Lett. **95** (2005) 067204.
R. Sakano, [*et al.*]{}, Phys. Rev. B [**73**]{} (2006) 155332.
C. W. J. Beenakker, Phys. Rev. B [**46**]{} (1992) 9667.
D. Boese, [*et al.*]{}, Euro. Phys. Lett. [**56**]{} (2001) 576.
M. Turek, [*et al.*]{}, Phys. Rev. B [**65**]{} (2002) 115332.
T.-S. Kim, [*et al.*]{}, Phys. Rev. Lett. [**88**]{} (2002) 136601.
K. A. Matveev, [*et al.*]{}, Phys. Rev. B [**66**]{} (2002) 45301.
B. Dong, [*et al.*]{}, J. Phys. C [**14**]{} (2002) 11747.
M. Krawiec, [*et al.*]{}, Phys. Rev. B [**73**]{} (2006) 75307.
A. Donabidowicz, [*et al.*]{}, preprint, cond-mat/0701217, (2007).
R. Scheibner, [*et al.*]{}, Phys. Rev. Lett. [**95**]{} (2005) 176602.
N. E. Bickers, Rev. Mod. Phys. **59**, (1987) 845.
Th. Pruschke, [*et al.*]{}, Z. Phys. [**74**]{} (1989) 439.
W. Izumida, [*et al.*]{}, Phys. Rev. Lett. [**87**]{} (2001) 216803.
---
abstract: 'Dalitz-plot analyses of $B\rightarrow K\pi\pi$ decays provide direct access to decay amplitudes, and thereby weak and strong phases can be disentangled by resolving the interference patterns in phase space between intermediate resonant states. A phenomenological isospin analysis of $B\rightarrow K^*(\rightarrow K\pi)\pi$ decay amplitudes is presented exploiting available amplitude analyses performed at the BaBar, Belle and LHCb experiments. A first application consists in constraining the CKM parameters thanks to an external hadronic input. A method, proposed some time ago by two different groups and relying on a bound on the electroweak penguin contribution, is shown to lack the desired robustness and accuracy, and we propose a more alluring alternative using a bound on the annihilation contribution. A second application consists in extracting information on hadronic amplitudes assuming the values of the CKM parameters from a global fit to quark flavour data. The current data yields several solutions, which do not fully support the hierarchy of hadronic amplitudes usually expected from theoretical arguments (colour suppression, suppression of electroweak penguins), as illustrated from computations within QCD factorisation. Some prospects concerning the impact of future measurements at LHCb and Belle II are also presented. Results are obtained with the [[CKMfitter]{}]{} analysis package, featuring the frequentist statistical approach and using the Rfit scheme to handle theoretical uncertainties.'
author:
- |
J. Charles$^{\,a}$, S. Descotes-Genon$^{\,b}$, J. Ocariz$^{\,c,d}$, A. Pérez Pérez$^{\,e}$\
for the [[CKMfitter]{}]{} Group
title: 'Disentangling weak and strong interactions in $B\to K^*(\to K\pi)\pi$ Dalitz-plot analyses'
---
Introduction {#sec:Introduction}
============
Non-leptonic $B$ decays have been extensively studied at the $B$-factories BaBar and Belle [@Bevan:2014iga], as well as at the LHCb experiment [@Bediaga:2012py]. Within the Standard Model (SM) some of these modes provide valuable information on the Cabibbo-Kobayashi-Maskawa (CKM) matrix and the structure of $CP$ violation [@Cabibbo:1963yz; @Kobayashi:1973fv], entangled with hadronic amplitudes describing processes either at the tree level or the loop level (the so-called penguin contributions). Depending on the transition considered, one may or may not get rid of hadronic contributions which are notoriously difficult to assess. For instance, in $b\rightarrow c\bar{c}s$ processes, the CKM phase in the dominant tree amplitude is the same as that of the Cabibbo-suppressed penguin one, so the only relevant weak phase is the $B_d$-mixing phase $2\beta$ (up to a very high accuracy) and it can be extracted from a $CP$ asymmetry out of which QCD contributions drop to a very high accuracy. For charmless $B$ decays, the two leading amplitudes often carry different CKM and strong phases, and thus the extraction of CKM couplings can be more challenging. In some cases, for instance the determination of $\alpha$ from $B\to\pi\pi$ [@Olivier], one can use flavour symmetries such as isospin in order to extract all hadronic contributions from experimental measurements, while constraining CKM parameters. This has provided many useful constraints for the global analysis of the CKM matrix within the Standard Model and the accurate determination of its parameters [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite; @Koppenburg:2017mad], as well as inputs for some models of New Physics [@Deschamps:2009rh; @Lenz:2010gu; @Lenz:2012az; @Charles:2013aka].
The constraints obtained from some of the non-leptonic two-body $B$ decays can be contrasted with the unclear situation of the theoretical computations for these processes. Several methods (QCD factorisation [@Beneke:1999br; @Beneke:2000ry; @Beneke:2003zv; @Beneke:2006hg], perturbative QCD approach [@Li:2001ay; @Li:2002mi; @Li:2003yj; @Ali:2007ff; @Li:2014rwa; @Wang:2016rlo], Soft-Collinear Effective Theory [@Bauer:2002aj; @Beneke:2003pa; @Bauer:2004tj; @Bauer:2005kd; @Becher:2014oda]) were devised more than a decade ago to compute hadronic contributions for non-leptonic decays. However, some of their aspects remain debated at the conceptual level [@DescotesGenon:2001hm; @Ciuchini:2001gv; @Beneke:2004bn; @Manohar:2006nz; @Li:2009wba; @Feng:2009rp; @Beneke:2009az; @Becher:2011dz; @Beneke:2015wfa], and they struggle to reproduce some data on $B$ decays into two mesons, especially $\pi^0\pi^0$, $\rho^0\rho^0$, $K\pi$, $\phi K^*$, $\rho K^*$ [@Beneke:2015wfa]. Considering the progress performed meanwhile in the determination of the CKM matrix, it is clear that by now, most of these non-leptonic modes provide more a test of our understanding of hadronic process rather than competitive constraints on the values of the CKM parameters, even though it can be interesting to consider them from one point of view or the other.
Our analysis is focused on the study of $B\rightarrow K^*(\rightarrow K\pi)\pi$ decay amplitudes, with the help of isospin symmetry. Among the various $b\rightarrow u\bar{u}s$ processes, the choice of the $B\rightarrow K^*\pi$ system is motivated by the fact that an amplitude (Dalitz-plot) analysis of the three-body final state $K\pi\pi$ provides access to several interference phases among different intermediate $K^*\pi$ states. The information provided by these physical observables highlights the potential of the $B\rightarrow K^*\pi$ system $(VP)$ compared with $B\rightarrow K\pi$ $(PP)$ where only branching ratios and $CP$ asymmetries are accessible. Similarly, the $B\rightarrow K^*\pi$ system leads to the final $K\pi\pi$ state with a richer pattern of interferences and thus a larger set of observables than other pseudoscalar-vector states, like, say, $B\to K\rho$ (indeed, $K\pi\pi$ exhibits $K^*$ resonances from either of the two combinations of $K\pi$ pairs, whereas the $\rho$ meson comes from the only $\pi\pi$ pair available). In addition, the study of these modes provides experimental information on the dynamics of pseudoscalar-vector modes, which is less known and more challenging from the theoretical point of view. Finally, this system has been studied extensively at the BaBar [@Aubert:2008bj; @Aubert:2009me; @BABAR:2011ae; @Lees:2015uun] and Belle [@Garmash:2006bj; @Dalseno:2008wwa] experiments, and a large set of observables is readily available.
Let us mention that other approaches, going beyond isospin symmetry, have been proposed to study this system. For instance, one can use $SU(3)$ symmetry and $SU(3)$-related channels in addition to the ones that we consider in this paper [@Bhattacharya:2013boa; @Bhattacharya:2015uua]. Another proposal is the construction of the fully SU(3)-symmetric amplitude [@Bhattacharya:2014eca] to which the spin-one intermediate resonances that we consider here do not contribute.
The rest of this article is organised in the following way. In Sec. \[sec:Dalitz\], we discuss the observables provided by the analysis of the $K\pi\pi$ Dalitz plot analysis. In Sec. \[sec:Isospin\], we recall how isospin symmetry is used to reduce the set of hadronic amplitudes and their connection with diagram topologies. In Sec. \[sec:CKM\], we discuss two methods to exploit these decays in order to extract information on the CKM matrix, making some assumptions about the size of specific contributions (either electroweak penguins or annihilation). In Sec. \[sec:Hadronic\], we take the opposite point of view. Taking into account our current knowledge of the CKM matrix from global analysis, we set constraints on the hadronic amplitudes used to describe these decays, and we make a brief comparison with theoretical estimates based on QCD factorisation. In Sec. \[sec:prospect\], we perform a brief prospective study, determining how the improved measurements expected from LHCb and Belle II may modify the determination of the hadronic amplitudes before concluding. In the Appendices, we discuss various technical aspects concerning the inputs and the fits presented in the paper.
Dalitz-plot amplitudes {#sec:Dalitz}
======================
Charmless hadronic $B$ decays are a particularly rich source of experimental information [@Bevan:2014iga; @Bediaga:2012py]. For $B$ decays into three light mesons (pions and kaons), the kinematics of the three-body final state can be completely determined experimentally, thus allowing for a complete characterisation of the Dalitz-plot (DP) phase space. In addition to quasi-two-body event-counting observables, the interference phases between pairs of resonances can also be accessed, and $CP$-odd (weak) phases can be disentangled from $CP$-even (strong) ones. Let us however stress that the extraction of the experimental information relies heavily on the so-called isobar approximation, widely used in experimental analyses because of its simplicity, and in spite of its known shortcomings [@Amato:2016xjv].
The $B\rightarrow K\pi\pi$ system is particularly interesting, as the decay amplitudes from intermediate $B\rightarrow PV$ resonances ($K^\star(892)$ and $\rho(770)$) receive sizable contributions from both tree-level and loop diagrams, and interfere directly in the common phase-space regions (namely the “corners” of the DP). The presence of additional resonant intermediate states further constrains the interference patterns and helps to resolve potential phase ambiguities. In the case of $B^0\rightarrow K^+\pi^-\pi^0$ and $B^+\rightarrow K^0_S\pi^+\pi^0$, two different $K^\star(892)$ states contribute to the decay amplitude, and their interference phases can be directly measured. For $B^0\rightarrow K^0_S\pi^+\pi^-$, the time-dependent evolution of the decay amplitudes for $B^0$ and $\overline{B^0}$ provides (indirect) access to the relative phase between the $B^0\rightarrow K^{\star+}\pi^-$ and $\overline{B^0}\rightarrow K^{\star-}\pi^+$ amplitudes.
In the isobar approximation [@Amato:2016xjv], the total decay amplitude for a given mode is a sum of intermediate resonant contributions, each of which is a complex function of the phase space: ${\cal A}(DP)= \sum_i A_iF_i(DP)$, where the sum runs over all the intermediate resonances providing sizable contributions, the $F_i$ functions are the “lineshapes” of each resonance, and the isobar parameters $A_i$ are complex coefficients indicating the strength of each intermediate amplitude. The corresponding relation is $\overline{{\cal A}}(DP)=\sum_i \overline{A_i}~\overline{F_i}(DP)$ for $CP$-conjugate amplitudes.
Any convention-independent function of isobar parameters is a physical observable. For instance, for a given resonance “$i$”, its direct $CP$ asymmetry $A_{CP}$ is expressed as $$A_{CP}^i = \frac{|\overline{A_i}|^2-|A_i|^2}{|\overline{A_i}|^2+|A_i|^2} ,$$ and its partial fit fraction $FF^i$ is $$FF^i = \frac{(|A_i|^2+|\overline{A_i}|^2) \int_{DP} |F_i(DP)|^2 d(DP)}
{\sum_{jk} (A_j A^*_k + \overline{A_j}~\overline{A^*_k}) \int_{DP} F_j(DP) F^*_k(DP) d(DP)}.$$ To obtain the partial branching fraction ${\cal B}^i$, the fit fraction has to be multiplied by the total branching fraction of the final state (e.g., $B^0\to K^0_S\pi^+\pi^-$), $${\cal B}^i = {\it FF}^i \times {\cal B}_{incl} .$$ A phase difference $\varphi_{ij}$ between two resonances “$i$” and “$j$” contributing to the same total decay amplitude (i.e., between resonances in the same DP) is $$\label{eq:phasediff1}
\varphi^{ij} = \arg(A_i/A_j) ,\qquad \overline{\varphi}_{ij} = \arg\left(\overline{A_i}/\overline{A_j}\right)\,,$$ and a phase difference between the two $CP$-conjugate amplitudes for resonance “$i$” is $$\label{eq:phasediff2}
\Delta\varphi^{i} = \arg\left(\frac{q}{p}\frac{\overline{A_i}}{A_i}\right)\,,$$ where $q/p$ is the $B^0-\overline{B^0}$ oscillation parameter.
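As an illustration of Eqs. (\[eq:phasediff1\])-(\[eq:phasediff2\]), the short sketch below computes a direct $CP$ asymmetry, a phase difference within one Dalitz plot and a mixing-induced phase difference from assumed toy isobar coefficients; the numerical values and the mixing phase are placeholders, not measurements, and fit fractions are omitted since they require the integrals of the lineshapes over the DP.

```python
# Toy computation of A_CP, phi_ij and Delta phi from assumed isobar coefficients.
import numpy as np

beta = np.deg2rad(22.0)                       # assumed mixing phase
q_over_p = np.exp(-2j * beta)

A    = {"Kst+pi-": 1.00 + 0.30j, "rho0 Ks": 0.55 - 0.20j}   # toy isobar coefficients
Abar = {"Kst-pi+": 0.80 + 0.45j, "rho0 Ks": 0.50 - 0.35j}

def a_cp(a, abar):
    return (abs(abar) ** 2 - abs(a) ** 2) / (abs(abar) ** 2 + abs(a) ** 2)

print("A_CP(K* pi)       =", a_cp(A["Kst+pi-"], Abar["Kst-pi+"]))
print("phi(K* pi, rho Ks) =", np.degrees(np.angle(A["Kst+pi-"] / A["rho0 Ks"])))
print("Delta phi(K* pi)   =", np.degrees(np.angle(q_over_p * Abar["Kst-pi+"] / A["Kst+pi-"])))
```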
For $B\rightarrow K^\star\pi$ modes, there are in total 13 physical observables. These can be classified as four branching fractions, four direct $CP$ asymmetries and five phase differences:
- The $CP$-averaged ${\cal B}^{+-}=BR(B^0\rightarrow K^{\star+}\pi^{-})$ branching fraction and its corresponding $CP$ asymmetry $A_{CP}^{+-}$. These observables can be measured independently in the $B^0\rightarrow K^0_S\pi^+\pi^-$ and $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz planes.
- The $CP$-averaged ${\cal B}^{00}=BR(B^0\rightarrow K^{\star 0}\pi^{0})$ branching fraction and its corresponding $CP$ asymmetry $A_{CP}^{00}$. These observables can be accessed both in the $B^0\rightarrow K^+\pi^-\pi^0$ and $B^0\rightarrow K^0_S\pi^0\pi^0$ Dalitz planes.
- The $CP$-averaged ${\cal B}^{+0}=BR(B^+\rightarrow K^{\star+}\pi^{0})$ branching fraction and its corresponding $CP$ asymmetry $A_{CP}^{+0}$. These observables can be measured both in the $B^+\rightarrow K^0_S\pi^+\pi^0$ and $B^+\rightarrow K^+\pi^0\pi^0$ Dalitz planes.
- The $CP$-averaged ${\cal B}^{0+}=BR(B^+\rightarrow K^{\star 0}\pi^{+})$ branching fraction and its corresponding $CP$ asymmetry $A_{CP}^{0+}$. They can be measured both in the $B^+\rightarrow K^+\pi^+\pi^-$ and $B^+\rightarrow K^0_S\pi^0\pi^+$ Dalitz planes.
- The phase difference $\varphi^{00,+-}$ between $B^0\rightarrow K^{\star+}\pi^{-}$ and $B^0\rightarrow K^{\star 0}\pi^{0}$, and its corresponding $CP$ conjugate $\overline\varphi^{{00},-+}$. They can be measured in the $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz plane and in its $CP$ conjugate DP $\overline{B^0}\rightarrow K^-\pi^+\pi^0$, respectively.
- The phase difference $\varphi^{+0,0+}$ between $B^+\rightarrow K^{\star+}\pi^{0}$ and $B^+\rightarrow K^{\star 0}\pi^{+}$, and its corresponding $CP$ conjugate $\overline\varphi^{{-0},0-}$. They can be measured in the $B^+\rightarrow K^0_S\pi^+\pi^0$ Dalitz plane and in its $CP$ conjugate DP $B^-\rightarrow K^0_S\pi^-\pi^0$, respectively.
- The phase difference $\Delta\varphi^{+-}$ between $B^0\rightarrow K^{\star+}\pi^-$ and its $CP$ conjugate $\overline{B^0}\rightarrow K^{\star -}\pi^+$. This phase difference can only be measured in a time-dependent analysis of the $K^0_S\pi^+\pi^-$ DP. As $K^{\star +}\pi^-$ is accessible only from $B^0$ decays and $K^{\star -}\pi^+$ only from $\overline{B^0}$ decays, the $B^0\rightarrow K^{\star+}\pi^-$ and $\overline{B^0}\rightarrow K^{\star -}\pi^+$ amplitudes do not interfere directly (they contribute to different DPs). But they do interfere with intermediate resonant amplitudes that are accessible to both $B^0$ and $\overline{B^0}$, like $\rho^0(770)K^0_S$ or $f_0(980)K^0_S$, and thus the time-dependent oscillation is sensitive to the combined phases from mixing and decay amplitudes.
Real-valued physical observables {#sec:Dalitz_new_Observbles}
--------------------------------
The set of physical observables described in the previous paragraph (branching fractions, $CP$ asymmetries and phase differences) has the advantage of providing straightforward physical interpretations. From a technical point of view though, the phase differences suffer from the drawback of their definition with a $2\pi$ periodicity. This feature becomes an issue when the experimental uncertainties on the phases are large and the correlations between observables are significant, since there is no straightforward way to properly implement their covariance into a fit algorithm. Moreover the uncertainties on the phases are related to the moduli of the corresponding amplitudes, leading to problems when the latter are not known precisely and can reach values compatible with zero. As a solution to this issue, a set of real-valued Cartesian physical observables is defined, in which the $CP$ asymmetries and phase differences are expressed in terms of the real and imaginary parts of ratios of isobar amplitudes scaled by the ratios of the corresponding branching fractions and $CP$ asymmetries. The new observables are functions of branching fractions, $CP$ asymmetries and phase differences, and are thus physical observables. The new set of observables, similar to the $U$ and $I$ observables defined in $B\to \rho\pi$ [@Olivier], are expressed as the real and imaginary parts of ratios of amplitudes as follows,
$$\begin{aligned}
{\mathcal Re}\left(A_i/A_j\right) = \sqrt{\frac{{\cal B}^i}{{\cal B}^j}\frac{A_{CP}^i - 1}{A_{CP}^j - 1}} \cos(\varphi_{ij})\,, \\
{\mathcal Im}\left(A_i/A_j\right) = \sqrt{\frac{{\cal B}^i}{{\cal B}^j}\frac{A_{CP}^i - 1}{A_{CP}^j - 1}} \sin(\varphi_{ij})\,, \\
{\mathcal Re}\left(\overline{A}_i/\overline{A}_j\right) = \sqrt{\frac{{\cal B}^i}{{\cal B}^j}\frac{A_{CP}^i + 1}{A_{CP}^j + 1}} \cos(\overline{\varphi}_{ij})\,, \\
{\mathcal Im}\left(\overline{A}_i/\overline{A}_j\right) = \sqrt{\frac{{\cal B}^i}{{\cal B}^j}\frac{A_{CP}^i + 1}{A_{CP}^j + 1}} \sin(\overline{\varphi}_{ij})\,.\end{aligned}$$
We see that some observables are not defined in the case $A_{CP}^j=\pm 1$, as could be expected from the following argument. Let us suppose that $A_{CP}^j=+1$ for the $j$-th resonance, i.e., we have the amplitude $A_j=0$: the quantities ${\mathcal Re}(A_i/A_j)$ and ${\mathcal Im}(A_i/A_j)$ are not defined, but neither is the phase difference between $A_i$ and $A_j$. Therefore, in both parametrisations (real and imaginary part of ratios, or branching ratios, $CP$ asymmetries and phase differences), the singular case $A_{CP}^j=\pm 1$ leads to some undefined observables. Let us add that this case does not occur in practice for our analysis.
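In practice, the translation between the two parametrisations is a one-line computation; a hedged helper implementing the equations above (for both unbarred and barred ratios) could look as follows, with purely illustrative numbers:

```python
# Helper: (branching ratio, A_CP, phase difference) -> Cartesian observables.
import numpy as np

def cartesian_ratio(B_i, B_j, Acp_i, Acp_j, phi_ij, conjugate=False):
    s = +1.0 if conjugate else -1.0            # (A_CP + 1) for barred amplitudes
    mod = np.sqrt((B_i / B_j) * (Acp_i + s) / (Acp_j + s))
    return mod * np.cos(phi_ij), mod * np.sin(phi_ij)

# illustrative numbers only (not measurements)
print(cartesian_ratio(3.3e-6, 8.0e-6, -0.3, -0.2, np.deg2rad(20.0)))
print(cartesian_ratio(3.3e-6, 8.0e-6, -0.3, -0.2, np.deg2rad(20.0), conjugate=True))
```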
For each $B\rightarrow K\pi\pi$ mode considered in this paper, the real and imaginary parts of amplitude ratios used as inputs are the following:
$$\begin{aligned}
&& B^0\rightarrow K^0_S\pi^+\pi^- : \label{eq:KsPiPi_inputs} \\ && \nonumber \qquad
{\mathcal B}(K^{*+}\pi^-)
\ ; \ \\&&\nonumber \qquad
{\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]
\ ; \
{\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]\ , \
\\
&& B^0\rightarrow K^+\pi^-\pi^0 : \label{eq:KPiPi0_inputs} \\ && \nonumber \qquad
\left\{ \begin{array}{l}\displaystyle {\mathcal B}(K^{*0}\pi^0)
\ ; \ \qquad
\left| \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right| \ ; \ \\
\displaystyle
{\mathcal Re}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]
\ ; \
{\mathcal Im}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]
\ ; \ \\
\displaystyle
{\mathcal Re}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]
\ ; \
{\mathcal Im}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]\ , \
\end{array}\right.
\\
&& B^+\rightarrow K^+\pi^-\pi^+ : \label{eq:KPiPi_inputs}\\&& \nonumber\qquad
{\mathcal B}(K^{*0}\pi^+)
\ ; \
\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|\ , \\
&& B^+\rightarrow K^0_S\pi^+\pi^0 : \label{eq:K0PiPi0_inputs}\\&& \nonumber\qquad
\left\{ \begin{array}{l}\displaystyle
{\mathcal B}(K^{*+}\pi^0)
\ ; \
\left| \frac{\overline{A}(K^{*-}\pi^0)}{A(K^{*+}\pi^0)} \right| \ ; \ \\
\displaystyle
{\mathcal Re}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]
\ ; \
{\mathcal Im}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]
\ ; \ \\ \displaystyle
{\mathcal Re}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]
\ ; \
{\mathcal Im}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]\ .
\end{array}\right.\end{aligned}$$
This choice of inputs is motivated by the fact that amplitude analyses are sensitive to ratios of isobar amplitudes. The sensitivity to phase differences leads to a sensitivity to the real and imaginary parts of these ratios. It has to be said that the set of inputs listed previously is just one of the possible sets of independent observables that can be extracted from this set of amplitude analyses. In order to combine BaBar and Belle results, it is straightforward to express the experimental results in the above format, and then combine them as is done for independent measurements. Furthermore, experimental information from other analyses which are not amplitude and/or time-dependent, i.e., which are only sensitive to ${\mathcal B}$ and $A_{CP}$, can also be added in a straightforward fashion.
In order to properly use the experimental information in the above format it will be necessary to use the full covariance matrix, both statistical and systematic, of the isobar amplitudes. This will allow us to properly propagate the uncertainties as well as the correlations of the experimental inputs to the ones exploited in the phenomenological fit.
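A minimal sketch of such a propagation is shown below: assuming a toy covariance matrix for the real and imaginary parts of two isobar coefficients, the uncertainty on a derived quantity (here ${\mathcal Re}(A_1/A_2)$) is obtained by linear propagation with a numerical Jacobian; the numbers are placeholders and not experimental inputs.

```python
# Sketch: linear error propagation from isobar parameters to a derived observable.
import numpy as np

def re_ratio(x):
    a1, a2 = x[0] + 1j * x[1], x[2] + 1j * x[3]
    return (a1 / a2).real

x0  = np.array([1.00, 0.30, 0.55, -0.20])          # toy central isobar values
cov = np.diag([0.05, 0.06, 0.04, 0.05]) ** 2        # toy covariance matrix

h = 1e-6
jac = np.array([(re_ratio(x0 + h * np.eye(4)[k]) - re_ratio(x0 - h * np.eye(4)[k]))
                / (2 * h) for k in range(4)])
var = jac @ cov @ jac
print(re_ratio(x0), np.sqrt(var))
```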
Isospin analysis of $B\rightarrow K^*\pi$ decays {#sec:Isospin}
================================================
The isospin formalism used in this work is described in detail in Ref. [@PerezPerez:2008gna]. Only the main ingredients are summarised below.
Without any loss of generality, exploiting the unitarity of the CKM matrix, the $B^0\rightarrow K^{*+}\pi^-$ decay amplitude $A^{+-}$ can be parametrised as $$\label{eq:BtoKstarPlusPiMinusAmplitude}
A^{+-} = V_{ub}^*V_{us} T^{+-} + V_{tb}^*V_{ts} P^{+-} ,$$ with similar expressions for the $CP$-conjugate amplitude $\bar{A}^{-+}$ (the CKM factors appearing as complex conjugates), and for the remaining three amplitudes $A^{ij}=A(B^{i+j}\rightarrow K^{*i}\pi^j)$, corresponding to the $(i,j)=(0,+)$, $(+,0)$, $(00)$ modes. The tree and penguin contributions are now defined through their CKM factors rather than their diagrammatic structure: they can include contributions from additional $c$-quark penguin diagrams due to the re-expression of $V_{cb}^*V_{cs}$ in Eq. (\[eq:BtoKstarPlusPiMinusAmplitude\]). In the following, $T^{ij}$ and $P^{ij}$ will be called hadronic amplitudes.
Note that the relative CKM matrix elements in Eq. (\[eq:BtoKstarPlusPiMinusAmplitude\]) significantly enhance the penguin contributions with respect to the tree ones, providing an improved sensitivity to the former. The isospin invariance imposes a quadrilateral relation among these four decay amplitudes, derived in Ref. [@Nir:1991cu] for $B\to K\pi$, but equivalently applicable in the $K^*\pi$ case: $$\label{eq:isospinRelations}
A^{0+} + \sqrt{2} A^{+0} = A^{+-} + \sqrt{2} A^{00},$$ and a similar expression for the $CP$-conjugate amplitudes. These can be used to rewrite the decay amplitudes in the “canonical” parametrisation, $$\begin{aligned}
\label{eq:canonicalParametrisation}
\begin{array}{cclclc}
A^{+-} & = & V_{us}V_{ub}^*T^{+-} & + & V_{ts}V_{tb}^*P^{+-} & ,
\\
A^{0+} & = & V_{us}V_{ub}^*N^{0+} & + & V_{ts}V_{tb}^*(-P^{+-}+P_{\rm EW}^{\rm C}) & ,
\\
\sqrt{2}A^{+0} & = & V_{us}V_{ub}^*T^{+0} & + & V_{ts}V_{tb}^*P^{+0} & ,
\\
\sqrt{2}A^{00} & = & V_{us}V_{ub}^*T^{00}_{\rm C} & + & V_{ts}V_{tb}^*(-P^{+-}+P_{\rm EW}) & ,
\end{array}\end{aligned}$$ with $$\begin{aligned}
T^{+0}&=&T^{+-}+T_{\rm C}^{00}-N^{0+}\,,\\
P^{+0}&=&P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C}\,.\end{aligned}$$ This parametrisation is frequently used in the literature with various slightly different conventions, and is expected to hold up to a very high accuracy (see Refs. [@Gronau:2005pq; @Botella:2006zi] for isospin-breaking contributions to $B\to\pi\pi$ decays). The notation is chosen to illustrate the main diagram topologies contributing to the decay amplitude under consideration. $N^{0+}$ makes reference to the fact that the contribution to $B^+\rightarrow K^{*0}\pi^+$ with a $V_{us}V_{ub}^*$ term corresponds to an annihilation/exchange topology; $T^{00}_{\rm C}$ denotes the colour-suppressed $B^0\rightarrow K^{*0}\pi^0$ tree amplitude; the EW subscript in the $P_{\rm EW}$ and $P_{\rm EW}^{\rm C}$ terms refers to the $\Delta I=1$ electroweak penguin contributions to the decay amplitudes. We can also introduce the $\Delta I=3/2$ combination $T_{3/2}=T^{+-}+T^{00}_{\rm C}$.
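As a consistency check of this parametrisation, one can verify symbolically that the amplitudes of Eq. (\[eq:canonicalParametrisation\]), with $T^{+0}$ and $P^{+0}$ defined as above, satisfy the quadrilateral relation of Eq. (\[eq:isospinRelations\]) identically; a short sketch (with simplified symbol names) reads:

```python
# Symbolic check of the isospin quadrilateral for the canonical parametrisation.
import sympy as sp

lu, lt = sp.symbols('lu lt')                                # V_us V_ub^*, V_ts V_tb^*
Tpm, T00C, N0p, Ppm, Pew, PewC = sp.symbols('Tpm T00C N0p Ppm Pew PewC')

Apm       = lu * Tpm + lt * Ppm
A0p       = lu * N0p + lt * (-Ppm + PewC)
sqrt2_Ap0 = lu * (Tpm + T00C - N0p) + lt * (Ppm + Pew - PewC)
sqrt2_A00 = lu * T00C + lt * (-Ppm + Pew)

print(sp.simplify((A0p + sqrt2_Ap0) - (Apm + sqrt2_A00)))   # -> 0
```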
One naively expects that colour-suppressed contributions will indeed be suppressed compared to their colour-allowed partner, and that electroweak penguins and annihilation contributions will be much smaller than tree and QCD penguins. These expectations can be expressed quantitatively using theoretical approaches like QCD factorisation [@Beneke:1999br; @Beneke:2000ry; @Beneke:2003zv; @Beneke:2006hg]. Some of these assumptions have been challenged by the experimental data gathered, in particular the mechanism of colour suppression in $B\to\pi\pi$ and the smallness of the annihilation part for $B\to K\pi$ [@Beneke:2006mk; @Bell:2009fm; @Bell:2015koa; @Beneke:2015wfa; @Li:2014rwa; @Olivier].
The complete set of $B\rightarrow K^*\pi$ decay amplitudes, constrained by the isospin relations described in Eq. (\[eq:isospinRelations\]) are fully described by 13 parameters, which can be classified as 11 hadronic and 2 CKM parameters following Eq. (\[eq:canonicalParametrisation\]). A unique feature of the $B\rightarrow K^*\pi$ system is that this number of unknowns matches the total number of physical observables discussed in Sec. \[sec:Dalitz\]. One could thus expect that all parameters (hadronic and CKM) could be fixed from the data. However, it turns out that the weak and strong phases can be redefined in such a way as to absorb in the CKM parameters any constraints on the hadronic ones. This property, known as [*reparametrisation invariance*]{}, is derived in detail in Refs. [@Botella:2005ks; @PerezPerez:2008gna] and we recall its essential aspects here. The decay amplitude of a $B$ meson into a final state can be written as: $$\begin{aligned}
\label{eq:Af1}
A_f&=&m_1 e^{i\phi_1}e^{i\delta_1}+m_2 e^{i\phi_2}e^{i\delta_2} \ , \\
\bar{A}_{\bar{f}}&=&m_1 e^{-i\phi_1}e^{i\delta_1}+m_2 e^{-i\phi_2}e^{i\delta_2} \ , \label{eq:Abarf1}\end{aligned}$$ where $\phi_i$ are $CP$-odd (weak) phases, $\delta_i$ are $CP$-even (strong) phases, and $m$ are real magnitudes. Any additional term $M_3e^{i\phi_3}e^{i\delta_3}$ can be expressed as a linear combination of $e^{i\phi_1}$ and $e^{i\phi_2}$ (with the appropriate properties under $CP$ violation), leading to the fact that the decay amplitudes can be written in terms of any other pair of weak phases $\{\varphi_1, \varphi_2\}$ as long as $\varphi_1\neq \varphi_2$ (mod $\pi$): $$\begin{aligned}
\label{eq:Af2}
A_f&=&M_1 e^{i\varphi_1}e^{i\Delta_1}+M_2 e^{i\varphi_2}e^{i\Delta_2} \ , \\
\bar{A}_{\bar{f}}&=&M_1 e^{-i\varphi_1}e^{i\Delta_1}+M_2 e^{-i\varphi_2}e^{i\Delta_2} \ , \label{eq:Abarf2}\end{aligned}$$ with $$\begin{aligned}
M_1e^{i\Delta_1} &=&[m_1e^{i\delta _1}\sin(\phi_1-\varphi_2)+m_2e^{i\delta_2}\sin(\phi_2-\varphi_2)]
\nonumber\\ &&\qquad\qquad/\sin(\varphi_1-\varphi_2) \ , \label{eq:m1}\\
M_2e^{i\Delta_2} &=&[m_1e^{i\delta _1}\sin(\phi_1-\varphi_1)+m_2e^{i\delta_2}\sin(\phi_2-\varphi_1)]
\nonumber\\ &&\qquad\qquad/\sin(\varphi_2-\varphi_1)\ . \label{eq:m2}\end{aligned}$$
This change of weak basis does not have any physical implications, hence the name of reparametrisation invariance. We can now take two different sets of weak phases $\{\phi_1, \phi_2\}$ and $\{\varphi_1, \varphi_2\}$ with $\phi_1=\varphi_1$ but $\phi_2\neq\varphi_2$. If an algorithm existed to extract $\phi_2$ as a function of physical observables related to these decay amplitudes, the similarity of Eqs. (\[eq:Af1\])-(\[eq:Abarf1\]) and Eqs. (\[eq:Af2\])-(\[eq:Abarf2\]) indicates that $\varphi_2$ would be extracted exactly using the same function with the same measurements as input, leading to $\varphi_2=\phi_2$, in contradiction with the original statement that we are free to express the physical observables using an arbitrary choice for the weak basis.
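This invariance is easy to check numerically. The sketch below builds $A_f$ and $\bar{A}_{\bar{f}}$ from arbitrary toy values of $(m_i,\delta_i,\phi_i)$, recomputes the magnitudes and strong phases from Eqs. (\[eq:m1\])-(\[eq:m2\]) for a different choice of the second weak phase, and verifies that the two amplitudes are unchanged:

```python
# Numerical check of reparametrisation invariance for two-amplitude decays.
import numpy as np

phi1, phi2 = 0.40, 1.30       # original weak phases (radians, toy values)
vphi2      = 0.70             # new second weak phase; varphi_1 = phi1 is kept
m1, d1     = 1.00, 0.30       # magnitudes and strong phases (toy values)
m2, d2     = 0.25, -1.10

c1, c2 = m1 * np.exp(1j * d1), m2 * np.exp(1j * d2)

A    = c1 * np.exp(1j * phi1) + c2 * np.exp(1j * phi2)
Abar = c1 * np.exp(-1j * phi1) + c2 * np.exp(-1j * phi2)

# Eqs. (m1)-(m2) with varphi_1 = phi1 and varphi_2 = vphi2
M1 = (c1 * np.sin(phi1 - vphi2) + c2 * np.sin(phi2 - vphi2)) / np.sin(phi1 - vphi2)
M2 = (c1 * np.sin(phi1 - phi1)  + c2 * np.sin(phi2 - phi1))  / np.sin(vphi2 - phi1)

Anew    = M1 * np.exp(1j * phi1) + M2 * np.exp(1j * vphi2)
Abarnew = M1 * np.exp(-1j * phi1) + M2 * np.exp(-1j * vphi2)
print(np.allclose([A, Abar], [Anew, Abarnew]))   # -> True
```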
We have thus to abandon the idea of an algorithm allowing one to extract both CKM and hadronic parameters from a set of physical observables. The weak phases in the parameterisation of the decay amplitudes cannot be extracted without an additional hadronic hypothesis. This discussion holds if the two weak phases used to describe the decay amplitudes are different (modulo $\pi$). The argument does not apply when only one weak phase can be used to describe the decay amplitude: setting one of the amplitudes to zero, say $m_2=0$, breaks reparametrisation invariance, as can be seen easily in Eqs. (\[eq:m1\])-(\[eq:m2\]). In such cases, weak phases can be extracted from experiment, e.g., the extraction of $\alpha$ from $B\to \pi\pi$, the extraction of $\beta$ from $J/\psi K_S$ or $\gamma$ from $B\to DK$. In each case, an amplitude is assumed to vanish, either approximately (extraction of $\alpha$ and $\beta$) or exactly (extraction of $\gamma$) [@Olivier; @Bevan:2014iga; @Bediaga:2012py].
In view of this limitation, two main strategies can be considered for the system studied here: either implementing additional constraints on some hadronic parameters in order to extract the CKM phases using the $B \rightarrow K^*\pi$ observables, or fixing the CKM parameters to their known values from a global fit and using the $B \rightarrow K^*\pi$ observables to extract information on the hadronic contributions to the decay amplitudes. Both approaches are described below.
Constraints on CKM phases {#sec:CKM}
=========================
We illustrate the first strategy using two specific examples. The first example is similar in spirit to the Gronau-London method for extracting the CKM angle $\alpha$ [@Gronau:1990ka], which relies on neglecting the contributions of electroweak penguins to the $B\rightarrow\pi\pi$ decay amplitudes. The second example assumes that upper bounds on annihilation/exchange contributions can be estimated from external information.
The CPS/GPSZ method: setting a bound on electroweak penguins {#subsec:CPS}
------------------------------------------------------------
In $B\rightarrow\pi\pi$ decays, the electroweak penguin contribution can be related to the tree amplitude in a model-independent way using Fierz transformations of the relevant current-current operators in the effective Hamiltonian for $B\to \pi\pi$ decays [@Buras:1998rb; @Neubert:1998pt; @Neubert:1998jq; @Charles:2004jd]. One can predict the ratio $R=P_{\rm EW}/T_{3/2}\simeq -3/2 (C_9+C_{10})/(C_1+C_2)=(1.35\pm 0.12) \%$ only in terms of short-distance Wilson Coefficients, since long-distance hadronic matrix elements drop from the ratio (neglecting the operators $O_7$ and $O_8$ due to their small Wilson coefficients compared to $O_9$ and $O_{10}$). This leads to the prediction that there is no strong phase difference between $P_{\rm EW}$ and $T_{3/2}$ so that electroweak penguins do not generate a charge asymmetry in $B^+\to \pi^+\pi^0$ if this picture holds: this prediction is in agreement with the present experimental average of the corresponding asymmetry. Moreover, this assumption is crucial to ensure the usefulness of the Gronau-London method to extract the CKM angle $\alpha$ from an isospin analysis of $B\rightarrow\pi\pi$ decay amplitudes [@Charles:2004jd; @Olivier]: setting the electroweak penguin to zero in the Gronau-London breaks the reparametrisation invariance described in Sec. \[sec:Isospin\] and opens the possibility of extracting weak phases.
One may want to follow a similar approach and use some knowledge or assumptions on the electroweak penguin in the case of $B\to K\pi$ or $B\to K^*\pi$ in order to constrain the CKM factors. This approach is sometimes referred to as the CPS/GPSZ method [@Ciuchini:2006kv; @Gronau:2006qn]. Indeed, as shown in Eq. (\[eq:canonicalParametrisation\]), the penguins in $A^{00}$ and $A^{+-}$ differ only by the $P_{\rm EW}$ term. By neglecting its contribution to $A^{00}$, these two decay amplitudes can be combined so that their (now identical) penguin terms can be eliminated, $$\begin{aligned}
A^0 = A^{+-} + \sqrt{2}A^{00} = V_{us}V_{ub}^*(T^{+-}+T^{00}_{\rm C}),\end{aligned}$$ and then, together with its $CP$-conjugate amplitude $\bar{A}^0$, a convention-independent amplitude ratio $R^0$ can be defined as $$\begin{aligned}
\label{eq:Eq4}
R^0 = \frac{q}{p}\frac{\bar{A}^0}{A^0} = e^{-2i\beta}e^{-2i\gamma} = e^{2i\alpha}.\end{aligned}$$ The $A^0$ amplitude can be extracted using the decay chains $B^0\rightarrow K^{*+}(\rightarrow K^+\pi^0)\pi^-$ and $B^0\rightarrow K^{*0}(\rightarrow K^+\pi^-)\pi^0$ contributing to the same $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz plot, so that both the partial decay rates and their interference phase can be measured in an amplitude analysis. Similarly, $\bar{A}^0$ can be extracted from the $CP$-conjugate $\bar{B}^0\rightarrow K^-\pi^+\pi^0$ DP using the same procedure. Then, the phase difference between $A^{+-}$ and $\bar{A}^{-+}$ can be extracted from the $B^0\rightarrow K^0_{\rm S}\pi^+\pi^-$ DP, considering the $B^0\rightarrow K^{*+}(\rightarrow K^0\pi^+)\pi^-$ decay chain, and its $CP$-conjugate $\bar{B}^0\rightarrow K^{*-}(\rightarrow \bar{K}^0\pi^-)\pi^+$, which do interfere through mixing. Let us stress that this method is a measurement of $\alpha$ rather than a measurement of $\gamma$, in contrast with the claims in Refs. [@Ciuchini:2006kv; @Gronau:2006qn].
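The cancellation at work in Eq. (\[eq:Eq4\]) can be checked in a toy setup: building $A^0$ from the canonical parametrisation with $P_{\rm EW}$ set to zero (the QCD penguin $P^{+-}$ drops out of the sum by construction), the phase of $R^0$ reduces to $2\alpha$ for any assumed hadronic values; the CKM angles and hadronic numbers below are purely illustrative.

```python
# Toy check: arg(R^0) = 2*alpha when P_EW = 0 in A^0 = A^{+-} + sqrt(2) A^{00}.
import numpy as np

beta, gamma = np.deg2rad(22.0), np.deg2rad(66.0)   # illustrative CKM angles
lam_u = 0.02 * np.exp(1j * gamma)                  # V_us V_ub^* carries phase +gamma (toy modulus)
lam_t = -0.04 + 0j                                 # V_ts V_tb^* taken real (toy)

T32 = 1.0 * np.exp(0.4j)                           # T^{+-} + T^{00}_C (toy hadronic value)
Pew = 0.0                                          # CPS/GPSZ assumption

A0    = lam_u * T32 + lam_t * Pew
A0bar = np.conj(lam_u) * T32 + np.conj(lam_t) * Pew
R0 = np.exp(-2j * beta) * A0bar / A0

print(np.degrees(np.angle(R0)) % 360)                          # phase of R^0
print(np.degrees(2 * (np.pi - beta - gamma)) % 360)            # 2*alpha, identical
```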
However, the method used to bound $P_{\rm EW}$ for the $\pi\pi$ system cannot be used directly in the $K^*\pi$ case. In the $\pi\pi$ case, $SU(2)$ symmetry guarantees that the matrix element with the combination of operators $O_1-O_2$ vanishes, so that it does not enter tree amplitudes. A similar argument would hold for $SU(3)$ symmetry in the case of the $K\pi$ system, but it does not for the vector-pseudoscalar $K^*\pi$ system. It is thus not possible to cancel hadronic matrix elements when considering $P_{\rm EW}/T_{3/2}$, which becomes a complex quantity suffering from (potentially large) hadronic uncertainties [@Gronau:2003yf; @Ciuchini:2006kv]. The size of the electroweak penguin (relative to the tree contributions), is parametrised as $$\frac{P_{\rm EW}}{T_{3/2}} = R \frac{1-r_{\rm VP}}{1+r_{\rm VP}} ,
\label{eq:PEWfromCPS}$$ where $R\simeq (1.35\pm 0.12)\%$ is the value obtained in the $SU(3)$ limit for $B\to \pi K$ (and identical to the one obtained from $B\to\pi\pi$ using the arguments in Refs. [@Buras:1998rb; @Neubert:1998pt; @Neubert:1998jq]), and $r_{\rm VP}$ is a complex parameter measuring the deviation of $P/T_{3/2}$ from this value corresponding to $$r_{\rm VP}=\frac{\langle K^*\pi(I=3/2)|Q_1-Q_2|B\rangle}{\langle K^*\pi(I=3/2)|Q_1+Q_2|B\rangle}\,.$$ Estimates on factorisation and/or $SU(3)$ flavour relations suggest $|r_{\rm VP}|\leq 0.05$ [@Ciuchini:2006kv; @Gronau:2006qn]. However it is clear that both approximations can easily be broken, suggesting a more conservative upper bound $|r_{\rm VP}|\leq 0.30$.
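To get a feeling for how the bound on $r_{\rm VP}$ translates into a range for $P_{\rm EW}/T_{3/2}$, one can scan the phase of $r_{\rm VP}$ at fixed modulus; the sketch below does this for the two assumed bounds quoted above.

```python
# Scan of P_EW/T_{3/2} = R (1 - r)/(1 + r) over the phase of r at |r| = 0.05, 0.30.
import numpy as np

R = 0.0135
for rmax in (0.05, 0.30):
    r = rmax * np.exp(1j * np.linspace(0, 2 * np.pi, 721))   # saturate the bound
    ratio = R * (1 - r) / (1 + r)
    print(rmax,
          "modulus in [%.4f, %.4f] %%" % (100 * abs(ratio).min(), 100 * abs(ratio).max()),
          "max phase %.1f deg" % np.degrees(np.abs(np.angle(ratio))).max())
```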
The presence of these hadronic uncertainties has important consequences for the method. Indeed, it turns out that including a non-vanishing $P_{\rm EW}$ completely disturbs the extraction of $\alpha$. The electroweak penguin can provide an ${\cal O}(1)$ contribution to $CP$-violating effects in charmless $b\rightarrow s$ processes, as its CKM coupling amplifies its contribution to the decay amplitude: $P_{\rm EW}$ is multiplied by a large CKM factor $V_{ts}V_{tb}^*=O(\lambda^2)$ compared to the tree-level amplitudes multiplied by a CKM factor $V_{us}V_{ub}^*=O(\lambda^4)$. Therefore, unless $P_{\rm EW}$ is particularly suppressed due to some specific hadronic dynamics, its presence modifies the CKM constraint obtained following this method in a very significant way.
It would be difficult to illustrate this point using the current data, due to the experimental uncertainties described in the next sections. We choose thus to discuss this problem using a reference scenario described in Tab. \[tab:IdealCase\], where the hadronic amplitudes have been assigned arbitrary (but realistic) values and they are used to derive a complete set of experimental inputs with arbitrary (and much more precise than currently available) uncertainties. As shown in App. \[App:Exp\_inputs\] (cf. Tab. \[tab:IdealCase\]), the current world averages for branching ratios and $CP$ asymmetries in $B^0\rightarrow K^{*+}\pi^-$ and $B^0\rightarrow K^{*0}\pi^0$ agree broadly with these values, which also reproduce the expected hierarchies among hadronic amplitudes, if we set the CKM parameters to their current values from our global fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite]. We choose a penguin parameter $P^{+-}$ with a magnitude $28$ times smaller than the tree parameter $T^{+-}$, and a phase fixed at $-7^\circ$. The electroweak $P_{\rm EW}$ parameter has a value $66$ times smaller in magnitude than the tree parameter $T^{+-}$, and its phase is arbitrarily fixed to $+15^\circ$ in order to get a good agreement with the current central values. Our results do not depend significantly on this phase, and a similar outcome occurs if we choose sets with a vanishing phase for $P_{\rm EW}$ (though the agreement with the current data will be less good).
We use the values of the observables derived with this set of hadronic parameters, and we perform a CPS/GPSZ analysis to extract a constraint on the CKM parameters. Fig. \[fig:RhoEta\_CPS\] shows the constraints derived in the $\bar{\rho}-\bar{\eta}$ plane. If we assume $P_{\rm EW}=0$ (upper panel), the extracted constraint is equivalent to a constraint on the CKM angle $\alpha$, as expected from Eq. (\[eq:Eq4\]). However, the confidence regions in the $\bar{\rho}-\bar{\eta}$ plane are very strongly biased, and the true values of the parameters are far from belonging to the 95% confidence regions. On the other hand, if we fix $P_{\rm EW}$ to its true value (with a magnitude of $0.038$), the bias is removed but the constraint deviates from a pure $\alpha$-like shape (for instance, it does not include the origin point $\bar\rho=\bar\eta=0$). We notice that the uncertainties on $R$ and, more significantly, $r_{VP}$, have an important impact on the precision of the constraint on $(\bar\rho,\bar\eta)$.
![ Constraints in the $\bar{\rho}-\bar{\eta}$ plane from the amplitude ratio $R^0$ method, using the arbitrary but realistic numerical values for the input parameters, detailed in the text. In the top panel, the $P_{\rm EW}$ hadronic parameter is set to zero. In the bottom panel, the $P_{\rm EW}$ hadronic parameter is set to its true generation value with different theoretical errors on $R$ and $r_{VP}$ parameters (defined in Eq. (\[eq:PEWfromCPS\])), either zero (green solid-line contour), 10% and 5% (blue dashed-line contour), and 10% and 30% (red solid-dashed-line contour). The parameters $\bar{\rho}$ and $\bar{\eta}$ are fixed to their current values from the global CKM fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite], indicated by the magenta point. []{data-label="fig:RhoEta_CPS"}](RhoEta_CPS_ZeroPew.eps "fig:"){width="7cm"} ![ Constraints in the $\bar{\rho}-\bar{\eta}$ plane from the amplitude ratio $R^0$ method, using the arbitrary but realistic numerical values for the input parameters, detailed in the text. In the top panel, the $P_{\rm EW}$ hadronic parameter is set to zero. In the bottom panel, the $P_{\rm EW}$ hadronic parameter is set to its true generation value with different theoretical errors on $R$ and $r_{VP}$ parameters (defined in Eq. (\[eq:PEWfromCPS\])), either zero (green solid-line contour), 10% and 5% (blue dashed-line contour), and 10% and 30% (red solid-dashed-line contour). The parameters $\bar{\rho}$ and $\bar{\eta}$ are fixed to their current values from the global CKM fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite], indicated by the magenta point. []{data-label="fig:RhoEta_CPS"}](RhoEta_CPS_FixPew_and_Vary_Rreal_and_rVP.eps "fig:"){width="7cm"}
This simple illustration with our reference scenario shows that the CPS/GPSZ method is limited both in robustness and accuracy due to the assumption of a negligible $P_{\rm EW}$: a small non-vanishing value breaks the relation between the phase of $R^0$ and the CKM angle $\alpha$, and therefore even a small uncertainty on the $P_{\rm EW}$ value would translate into large biases on the CKM constraints. It shows that this method would require a very accurate understanding of hadronic amplitudes in order to extract a meaningful constraint on the unitarity triangle, and that the presence of non-vanishing electroweak penguins dilutes the potential of this method significantly.
![ Top: constraints in the $\bar{\rho}-\bar{\eta}$ plane from the annihilation/exchange method, using the arbitrary but realistic numerical values for the input parameters detailed in the text. The green solid-line contour is the constraint obtained by fixing the $N^{0+}$ hadronic parameter to its generation value; the blue dotted-line contour is the constraint obtained by setting an upper bound on the $\left|N^{0+}/T^{+-}\right|$ ratio at twice its generation value. The parameters $\bar{\rho}$ and $\bar{\eta}$ are fixed to their current values from the global CKM fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite], indicated by the magenta point. Bottom: size of the $\beta - \beta_{\rm gen}$ 68% confidence interval vs the upper-bound on $|N^{0+}/T^{+-}|$ in units of its generation value. []{data-label="fig:RhoEta_All"}](RhoEta_All_N0pBound20Times.eps "fig:"){width="7cm"} ![ Top: constraints in the $\bar{\rho}-\bar{\eta}$ plane from the annihilation/exchange method, using the arbitrary but realistic numerical values for the input parameters detailed in the text. The green solid-line contour is the constraint obtained by fixing the $N^{0+}$ hadronic parameter to its generation value; the blue dotted-line contour is the constraint obtained by setting an upper bound on the $\left|N^{0+}/T^{+-}\right|$ ratio at twice its generation value. The parameters $\bar{\rho}$ and $\bar{\eta}$ are fixed to their current values from the global CKM fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite], indicated by the magenta point. Bottom: size of the $\beta - \beta_{\rm gen}$ 68% confidence interval vs the upper-bound on $|N^{0+}/T^{+-}|$ in units of its generation value. []{data-label="fig:RhoEta_All"}](beta_voverage_vs_N0pOTpmBound.eps "fig:"){width="8cm"}
Setting bounds on annihilation/exchange contributions {#subsec:N}
-----------------------------------------------------
As discussed in the previous paragraphs, the penguin contributions to $B\rightarrow K^*\pi$ decays are strongly CKM-enhanced, which affects the CPS/GPSZ method based on neglecting the electroweak penguin amplitude $P_{\rm EW}$. This method exhibits a strong sensitivity to small changes or uncertainties in the value assigned to the electroweak penguin contribution. An alternative and safer approach consists in constraining a tree amplitude, whose contribution is CKM-suppressed. Among the various hadronic amplitudes introduced, it seems appropriate to choose the annihilation amplitude $N^{0+}$, which is expected to be smaller than $T^{+-}$, and which could even be smaller than the colour-suppressed $T^{00}_{\rm C}$. Unfortunately, no direct, clean constraints on $N^{0+}$ can be extracted from data, and from the theoretical point of view, $N^{0+}$ is dominated by incalculable non-factorisable contributions in QCD factorisation [@Beneke:1999br; @Beneke:2000ry; @Beneke:2003zv; @Beneke:2006hg]. On the other hand, indirect upper bounds on $N^{0+}$ may be inferred either from the $B^+\to K^{*0} \pi^+$ decay rate or from the $U$-spin related mode $B^+\to K^{*0}K^+$.
This method, like the previous one, hinges on a specific assumption on hadronic amplitudes. Fixing $N^{0+}$ breaks the reparametrisation invariance in Sec. \[sec:Isospin\], and thus provides a way of measuring weak phases. We can compare the two approaches by using the same reference scenario as in Sec. \[subsec:CPS\], i.e., the values gathered in Tab. \[tab:IdealCase\]. We have an annihilation parameter $N^{0+}$ with a magnitude $18$ times smaller than the tree parameter $T^{+-}$, and a phase fixed at $108^\circ$. All $B\rightarrow K^*\pi$ physical observables are used as inputs. This time, all hadronic parameters are free to vary in the fits, except for the annihilation/exchange parameter $N^{0+}$, which is subject to two different hypotheses: either its value is fixed to its generation value, or the ratio $\left|N^{0+}/T^{+-}\right|$ is constrained in a range (up to twice its generation value).
The resulting constraints in the $\bar{\rho}-\bar{\eta}$ plane are shown on the upper plot of Fig. \[fig:RhoEta\_All\]. We stress that in this fit, the value of $N^{0+}$ is bound, but the other amplitudes (including $P_{\rm EW}$) are left free to vary. Using a loose bound on $\left|N^{0+}/T^{+-}\right|$ yields a less tight constraint, but in contrast with the CPS/GPSZ method, the CKM generation value is here included. One may notice that the resulting constraint is similar to the one corresponding to the CKM angle $\beta$. This can be understood in the following way. Let us assume that we neglect the contribution from $N^{0+}$. The amplitude to be considered becomes $$A'=A^{0+}=V_{ts}V_{tb}^*(-P^{+-}+P_{\rm EW}^{\rm C}),$$ and then, together with its $CP$-conjugate amplitude $\bar{A}'$, a convention-independent amplitude ratio $R'$ can be defined as $$R' = \frac{q}{p}\frac{\bar{A}'}{A'} = e^{-2i\beta}\,,$$ in agreement with the convention used to fix the phase of the $B$-meson state. This justifies the $\beta$-like shape of the constraint obtained when fixing the value of the annihilation parameter. The presence of the oscillation phase $q/p$ here, starting from the decay of a charged $B$, may seem surprising. However, one should keep in mind that the measurement of the $B^+\to K^{*0}\pi^+$ amplitude and of its $CP$ conjugate is not sufficient to determine the relative phase between $A'$ and $\bar{A}'$: this requires one to reconstruct the whole quadrilateral relation in Eq. (\[eq:isospinRelations\]), where the phases are provided by interferences between mixing and decay amplitudes in $B^0$ and $\bar{B}^0$ decays. In other words, the phase observables obtained from the Dalitz plot are always of the form of Eqs. (\[eq:phasediff1\])-(\[eq:phasediff2\]): their combination can only lead to a ratio of $CP$-conjugate amplitudes multiplied by the oscillation parameter $q/p$.
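The $e^{-2i\beta}$ value quoted above can be checked schematically. With the standard phase convention for $q/p$, and using the approximations that $V_{tb}$ and $V_{ts}$ carry a negligible weak phase while ${\rm arg}(V_{td})=-\beta$, one has $$\frac{q}{p}\simeq\frac{V_{tb}^*V_{td}}{V_{tb}V_{td}^*}\,,\qquad \frac{\bar{A}'}{A'}=\frac{V_{ts}^*V_{tb}}{V_{ts}V_{tb}^*}\,,\qquad R'=\frac{q}{p}\,\frac{\bar{A}'}{A'}\simeq\frac{V_{td}V_{ts}^*}{V_{td}^*V_{ts}}\simeq e^{-2i\beta}\,,$$ up to the tiny weak phase of $V_{ts}$.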
The lower plot of Fig. \[fig:RhoEta\_All\] describes how the constraint on $\beta$ loosens around its true value when the range allowed for $\left|N^{0+}/T^{+-}\right|$ is increased compared to its initial value ($0.143$). We see that the method is stable and continues to include the true value of $\beta$ even in the case of a mild constraint on $\left|N^{0+}/T^{+-}\right|$.
Constraints on hadronic parameters using current data {#sec:Hadronic}
=====================================================
As already anticipated in Sec. \[sec:Isospin\], a second strategy to exploit the data consists in assuming that the CKM matrix is already well determined from the CKM global fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite]. The measurements of $B\rightarrow K^\star\pi$ observables (isobar parameters) can then be used to extract constraints on the hadronic parameters in Eq. (\[eq:canonicalParametrisation\]).
Experimental inputs {#subsec:exp_inputs}
-------------------
For this study, the complete set of available results from the BaBar and Belle experiments is used. The level of detail for the publicly available results varies according to the decay mode in consideration. In most cases, at least one amplitude DP analysis of $B^0$ and $B^+$ decays is public [@Amhis:2016xyh], and at least one input from each physical observable is available. In addition, the conventions used in the various DP analyses are usually different. Ideally, one would like to have access to the complete covariance matrix, including statistical and systematic uncertainties, for all isobar parameters, as done for instance in Ref. [@Aubert:2009me]. Since such information is not always available, the published results are used in order to derive ad-hoc approximate covariance matrices, implementing all the available information (central values, total uncertainties, correlations among parameters). The inputs for this study are the following:
- Two three-dimensional covariance matrices, cf. Eq. (\[eq:KsPiPi\_inputs\]), from the BaBar time-dependent DP analysis of $B^0\rightarrow K^0_S\pi^+\pi^-$ in Ref. [@Aubert:2009me], and two three-dimensional covariance matrices from the Belle time-dependent DP analysis of $B^0\rightarrow K^0_S\pi^+\pi^-$ in Ref. [@Dalseno:2008wwa]. Both the BaBar and Belle analyses found two quasi-degenerate solutions each, with very similar goodness-of-fit merits. The combination of these solutions is described in App. \[App:comb\_inputs\], and is taken as input for this study.
- A five-dimensional covariance matrix, cf. Eq. (\[eq:KPiPi0\_inputs\]), from the BaBar $B^0\rightarrow K^+\pi^-\pi^0$ DP analysis [@BABAR:2011ae].
- A two-dimensional covariance matrix, cf. Eq. (\[eq:KPiPi\_inputs\]), from the BaBar $B^+\rightarrow K^+\pi^+\pi^-$ DP analysis [@Aubert:2008bj], and a two-dimensional covariance matrix from the Belle $B^+\rightarrow K^+\pi^+\pi^-$ DP analysis [@Garmash:2006bj].
- A simplified uncorrelated four-dimensional input, cf. Eq. (\[eq:K0PiPi0\_inputs\]), from the preliminary BaBar $B^+\rightarrow K^0_S\pi^+\pi^0$ DP analysis [@Lees:2015uun].
Besides the inputs described previously, there are other experimental measurements on different three-body final states performed in the quasi-two-body approach, which provide measurements of branching ratios and $CP$ asymmetries only. Such is the case of the BaBar result on the $B^+\to K^+\pi^0\pi^0$ final state [@Lees:2011aaa], where the branching ratio and the $CP$ asymmetry of the $B^+\to K^{*}(892)^+\pi^0$ contribution are measured. In this study, these two measurements are treated as uncorrelated, and they are combined with the inputs from the DP analyses mentioned previously.
These sets of experimental central values and covariance matrices are given in App. \[App:Exp\_inputs\], where the combinations of the results from BaBar and Belle are also described.
Finally, we notice that the time-dependent asymmetry in $B\to K_S\pi^0\pi^0$ has been measured [@Abe:2007xd; @Aubert:2007ub]. As these are global analyses integrated over the whole DP, we cannot take these measurements into account. In principle a time-dependent isobar analysis of the $K_S\pi^0\pi^0$ DP could be performed and it could bring some independent information on $B\to K^{*0}\pi^0$ intermediate amplitudes. Since this more challenging analysis has not been done yet, we will not consider this channel for the time being.
Selected results for $CP$ asymmetries and hadronic amplitudes
-------------------------------------------------------------
Using the experimental inputs described in Sec. \[subsec:exp\_inputs\], a fit to the complete set of hadronic parameters is performed. We discuss the fit results focusing on three aspects: the most significant direct $CP$ asymmetries, the significance of electroweak penguins, and the relative hierarchies of hadronic contributions to the tree amplitudes. As will be seen in the following, the fit results can be interpreted in terms of two sets of local minima, out of which one yields constraints on the hadronic parameters in better agreement with the expectations from CPS/GPSZ, the measured direct $CP$ asymmetries and the expected relative hierarchies of hadronic contributions.
### Direct $CP$ violation in $B^0\rightarrow K^{\star+}\pi^-$
The $B^0\rightarrow K^{\star+}\pi^-$ amplitude can be accessed both in the $B^0\rightarrow K^0_{\rm S}\pi^+\pi^-$ and $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz-plot analyses. The direct $CP$ asymmetry $A_{\rm CP}(B^0\rightarrow K^{\star+}\pi^-)$ has been measured by BaBar in both modes [@BABAR:2011ae; @Aubert:2009me] and by Belle in the $B^0\rightarrow K^0_{\rm S}\pi^+\pi^-$ mode [@Dalseno:2008wwa]. All three measurements yield a negative value: incidentally, this matches also the sign of the two-body $B^0\rightarrow K^+\pi^-$ $CP$ asymmetry, for which direct $CP$ violation is clearly established.
Using the amplitude DP analysis results from these three measurements as inputs, the combined constraint on $A_{\rm CP}(B^0\rightarrow K^{\star+}\pi^-)$ is shown in Fig. \[fig:ACP\_KstpPim\]. The combined value is 3.0 $\sigma$ away from zero, and the 68% confidence interval on this $CP$ asymmetry is $0.21\pm 0.07$ approximately. This result is to be compared with the $0.23\pm 0.06$ value provided by HFLAV [@Amhis:2016xyh]. The difference is likely to come from the fact that HFLAV performs an average of the $CP$ asymmetries extracted from individual experiments, while this analysis uses isobar values as inputs which are averaged over the various experiments before being translated into values for the $CP$ parameters: since the relationships between these two sets of quantities are non-linear, the two steps (averaging over experiments and translating from one type of observables to another) yield the same central values only in the case of very small uncertainties. In the current situation, where sizeable uncertainties affect the determinations from individual experiments, it is not surprising that minor discrepancies arise between our approach and the HFLAV result.
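The effect of this non-linearity can be illustrated with a small toy example (Python, with purely hypothetical numbers and helper names chosen only for illustration): two measurements of the modulus of a $CP$-conjugate amplitude ratio are either averaged first and then converted into an asymmetry, or converted first and then averaged, and the two procedures give visibly different central values once the uncertainties are sizeable.

```python
def acp_from_ratio(r):
    """CP asymmetry from the modulus r = |Abar/A| of the CP-conjugate amplitude ratio."""
    return (r**2 - 1.0) / (r**2 + 1.0)

# two hypothetical measurements of r = |Abar/A| with sizeable uncertainties
r1, s1 = 0.74, 0.09
r2, s2 = 1.03, 0.05

# route 1: average the isobar-level quantity, then convert it into an asymmetry
w1, w2 = 1.0 / s1**2, 1.0 / s2**2
r_avg = (w1 * r1 + w2 * r2) / (w1 + w2)
print("A_CP from the averaged ratio :", round(acp_from_ratio(r_avg), 3))

# route 2: convert each measurement, then average the asymmetries
a1, a2 = acp_from_ratio(r1), acp_from_ratio(r2)
e1 = 4 * r1 / (r1**2 + 1)**2 * s1    # linear error propagation, dA/dr = 4r/(r^2+1)^2
e2 = 4 * r2 / (r2**2 + 1)**2 * s2
v1, v2 = 1.0 / e1**2, 1.0 / e2**2
print("average of the A_CP values   :", round((v1 * a1 + v2 * a2) / (v1 + v2), 3))
```

The two routes can easily differ by a few times $10^{-2}$, which is the order of magnitude of the difference discussed above.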
As can be readily seen from Eq. (\[eq:BtoKstarPlusPiMinusAmplitude\]), a non-vanishing asymmetry in this mode requires a strong phase difference between the tree $T^{+-}$ and penguin $P^{+-}$ hadronic parameters that is strictly different from zero. Fig. \[fig:PpmOTpm\_BtoKstPi\] shows the two-dimensional constraint on the modulus and phase of the $P^{+-}/T^{+-}$ ratio. Two solutions with very similar $\chi^2$ are found, both incompatible with a vanishing phase difference. The first solution corresponds to a small (but non-vanishing) positive strong phase, with similar $\left|V_{ts}V_{tb}^\star P^{+-}\right|$ and $\left|V_{us}V_{ub}^\star T^{+-}\right|$ contributions to the total decay amplitude, and is called Solution I in the following. The other solution, denoted Solution II, corresponds to a larger, negative, strong phase, with a significantly larger penguin contribution. We notice that Solution I is closer to usual theoretical expectations concerning the relative size of penguin and tree contributions.
Let us stress that the presence of two solutions for $P^{+-}/T^{+-}$ is not related to the presence of ambiguities in the individual BaBar and Belle measurements for $B^+\to K^+\pi^+\pi^-$ and $B^0\to K^0_S\pi^+\pi^-$, since we have performed their combinations in order to select a single solution for each process. Therefore, the presence of two solutions in Fig. \[fig:PpmOTpm\_BtoKstPi\] is a global feature of our non-linear fit, arising from the overall structure of the current combined measurements (central values and uncertainties) that we use as inputs.
![ Constraint on the direct $CP$ asymmetry parameter $C(B^0\rightarrow K^{\star+}\pi^-) = -A_{\rm CP}(B^0\rightarrow K^{\star+}\pi^-)$ from BaBar data on $B^0\to K^0_S\pi^+\pi^-$ (red curve), Belle data on $B^0\to K^0_S\pi^+\pi^-$ (blue curve), BaBar data on $B^0\to K^+\pi^-\pi^0$ (green curve) and the combination of all these measurements (green shaded curve). The constraints are obtained using the observables described in the text. []{data-label="fig:ACP_KstpPim"}](Acp_B0toKstpPim_all.eps){width="8cm"}
![ Two-dimensional constraint on the modulus and phase of the $P^{+-}/T^{+-}$ ratio. For convenience, the modulus is multiplied by the ratio of CKM factors appearing in the tree and penguin contributions to the $B^0\rightarrow K^{\star+}\pi^-$ decay amplitude. []{data-label="fig:PpmOTpm_BtoKstPi"}](PpmOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="8cm"}
### Direct $CP$ violation in $B^+\rightarrow K^{\star+}\pi^0$
The $B^+\rightarrow K^{\star+}\pi^0$ amplitude can be accessed in a $B^+\rightarrow K^0_{\rm S}\pi^+\pi^0$ Dalitz-plot analysis, for which only a preliminary result from BaBar is available [@Lees:2015uun]. A large, negative $CP$ asymmetry $A_{\rm CP}(B^+\rightarrow K^{\star+}\pi^0) = -0.52\pm 0.14\pm 0.04 ^{+0.04}_{-0.02}$ is reported there with a 3.4 $\sigma$ significance. This $CP$ asymmetry has also been measured by BaBar through a quasi-two-body analysis of the $B^+\rightarrow K^+\pi^0\pi^0$ final state [@Lees:2011aaa], obtaining $A_{\rm CP}(B^+\rightarrow K^{\star+}\pi^0) = -0.06\pm 0.24\pm 0.04$. The combination of these two measurements yields $A_{\rm CP}(B^+\rightarrow K^{\star+}\pi^0) = -0.39\pm 0.12\pm 0.03$, with a 3.2 $\sigma$ significance.
In contrast with the $B^0\rightarrow K^{\star+}\pi^-$ case, in the canonical parametrisation Eq. (\[eq:canonicalParametrisation\]), the decay amplitude for $B^+\rightarrow K^{\star+}\pi^0$ includes several hadronic contributions both to the total tree and penguin terms, namely $$\begin{aligned}
\sqrt{2}A^{+0} & = & V_{us}V_{ub}^*T^{+0} + V_{ts}V_{tb}^*P^{+0} \\ \nonumber
& = & V_{us}V_{ub}^*(T^{+-}+T_{\rm C}^{00}-N^{0+}) \\ \nonumber
& &+ V_{ts}V_{tb}^*(P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C}) \ , \end{aligned}$$ and therefore no straightforward constraint on a single pair of hadronic parameters can be extracted, as several degenerate combinations can reproduce the observed value of the $CP$ asymmetry $A_{\rm CP}(B^+\rightarrow K^{\star+}\pi^0)$. This is illustrated in Fig. \[fig:POTp0\_BtoKstPi\], where six different local minima are found in the fit, all with similar $\chi^2$ values. The three minima with positive strong phases correspond to Solution I, while the three minima with negative strong phases correspond to Solution II. The relative size of the total tree and penguin contributions is bound within a relatively narrow range: we get $|P^{+0}/T^{+0}| \in (0.018,0.126)$ at $68\%$ C.L.
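The degeneracy can be made explicit with a generic two-amplitude exercise: for a total amplitude of the form $A=\lambda_u T + \lambda_t P$, many pairs of penguin-to-tree ratio and strong phase reproduce the same direct $CP$ asymmetry. The Python sketch below scans such pairs for an asymmetry of $-0.39$; the CKM factors, hadronic values and function names are illustrative only and are not those of the fit.

```python
import cmath, math

def direct_acp(lam_u, lam_t, T, P):
    """Direct CP asymmetry for A = lam_u*T + lam_t*P; under CP the weak phases
    (in lam_u, lam_t) are conjugated while the strong phases (in T, P) are not."""
    A    = lam_u * T + lam_t * P
    Abar = lam_u.conjugate() * T + lam_t.conjugate() * P
    return (abs(Abar)**2 - abs(A)**2) / (abs(Abar)**2 + abs(A)**2)

# illustrative CKM factors (magnitudes and weak phases indicative only)
lam_u = 8.0e-4 * cmath.exp(-1j * math.radians(70))   # ~ Vus Vub*
lam_t = complex(-4.0e-2, 0.0)                        # ~ Vts Vtb*, taken real here

target, T = -0.39, 1.0
for mag in [0.01 * k for k in range(1, 9)]:          # scan the hadronic ratio |P/T|
    # strong phase (in degrees) bringing the asymmetry closest to the target
    best = min(range(-180, 181),
               key=lambda d: abs(direct_acp(lam_u, lam_t, T,
                                            mag * cmath.exp(1j * math.radians(d))) - target))
    P = mag * cmath.exp(1j * math.radians(best))
    if abs(direct_acp(lam_u, lam_t, T, P) - target) < 0.02:
        print("|P/T| = %.2f with a strong phase of %+4d deg gives A_CP ~ %.2f"
              % (mag, best, target))
```

Every value of $|P/T|$ in this scan admits a strong phase reproducing the same asymmetry, which is why the $CP$ asymmetry alone cannot pin down the individual hadronic contributions.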
![ Constraint on the direct $CP$ asymmetry parameter $C(B^+\rightarrow K^{\star+}\pi^0) = -A_{\rm CP}(B^+\rightarrow K^{\star+}\pi^0)$ from BaBar data on $B^+\to K^0_S\pi^+\pi^0$ (red curve), BaBar data on $B^+\to K^+\pi^0\pi^0$ (blue curve) and the combination (green shaded curve). The constraints are obtained using the observables described in the text. []{data-label="fig:ACP_KstpPi0"}](Acp_BptoKstpPi0_all.eps){width="8cm"}
![ Top: two-dimensional constraint on the modulus and phase of the $(P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C})/(T^{+-}+T_{\rm C}^{00}-N^{0+})$ ratio. For convenience, the modulus is multiplied by the ratio of CKM factors appearing in the tree and penguin contributions to the $B^+\rightarrow K^{\star+}\pi^0$ decay amplitude. Bottom: one-dimensional constraint on the modulus of the $(P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C})/(T^{+-}+T_{\rm C}^{00}-N^{0+})$ ratio. []{data-label="fig:POTp0_BtoKstPi"}](POTp0_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"} ![ Top: two-dimensional constraint on the modulus and phase of the $(P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C})/(T^{+-}+T_{\rm C}^{00}-N^{0+})$ ratio. For convenience, the modulus is multiplied by the ratio of CKM factors appearing in the tree and penguin contributions to the $B^+\rightarrow K^{\star+}\pi^0$ decay amplitude. Bottom: one-dimensional constraint on the modulus of the $(P^{+-}+P_{\rm EW}-P_{\rm EW}^{\rm C})/(T^{+-}+T_{\rm C}^{00}-N^{0+})$ ratio. []{data-label="fig:POTp0_BtoKstPi"}](ModPOTp0_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"}
### Hierarchy among penguins: electroweak penguins {#sec:hierarchypenguins}
In Sec. \[subsec:CPS\], we described the CPS/GPSZ method designed to extract weak phases from $B\to\pi K$ assuming some control on the size of the electroweak penguin. According to this method, the electroweak penguin is expected to yield a small contribution to the decay amplitudes, with no significant phase difference. We are actually in a position to test this expectation by fitting the hadronic parameters using the BaBar and Belle data as inputs. Fig. \[fig:PewOT3o2\] shows the two-dimensional constraint on $r_{VP}$, in other words, on the $P_{\rm EW}/T_{3/2}$ ratio, showing two local minima. The CPS/GPSZ prediction is also indicated in this figure. In Fig. \[fig:POTvsrVP\], we provide the regions allowed for $|r_{VP}|$ and the modulus of the ratio $|P^{+-}/T^{+-}|$, exhibiting two favoured values, the smaller one being associated with Solution I and the larger one with Solution II. The latter corresponds to a very large electroweak penguin amplitude and is clearly incompatible with the CPS/GPSZ prediction by more than one order of magnitude. A better agreement, yet still marginal, is found for the smaller minimum that corresponds to Solution I: the central value for the ratio is about a factor of three larger than CPS/GPSZ, and a small, positive phase is preferred. For this minimum, an inflation of the uncertainty on $\left| r_{\rm VP}\right|$ up to $30\%$ would be needed to ensure proper agreement. In any case, it is clear that the data prefer a larger value of $|r_{\rm VP}|$ than the estimates originally proposed.
![ Two-dimensional constraint on real and imaginary parts on the $r_{VP}$ parameter defined in Eq. (\[eq:PEWfromCPS\]). The area encircled with the solid (dashed) red line corresponds to the CPS/GPSZ prediction, with a $5\%$ ($30\%$) uncertainty on the $r_{\rm VP}$ parameter. \[fig:PewOT3o2\]](ReImrVP_BtoKstPi_all_BaBarAndBelle_v2.eps){width="8cm"}
![ Two-dimensional constraint on $|r_{VP}|$ defined in Eq. (\[eq:PEWfromCPS\]) and ${\rm Log}_{10}\left(|P^{+-}/T^{+-}|\right)$. The vertical solid (dashed) red line corresponds to the CPS/GPSZ prediction, with a $5\%$ ($30\%$) uncertainty. \[fig:POTvsrVP\] ](ModrVP_vs_Log10ModPpmOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="8cm"}
Moreover, the contribution from the electroweak penguin is found to be about twice as large as the main penguin contribution $P^{+-}$. This is illustrated in Fig. \[fig:PewOP\], where only one narrow solution is found in the $P_{\rm EW}/P^{+-}$ plane, as both solutions I and II provide essentially the same constraint. The relative phase between these two parameters is bound to the interval $(-25,+10)^\circ$ at $95\%$ C.L. Additional tests allow us to demonstrate that this strong constraint on the relative $P_{\rm EW}/P^{+-}$ penguin contributions is predominantly driven by the $\varphi^{00,+-}$ phase differences measured in the Dalitz-plot analysis of $B^0\rightarrow K^+\pi^-\pi^0$ decays. The strong constraint on the $P_{\rm EW}/P^{+-}$ ratio is turned into a mild upper bound when removing the $\varphi^{00,+-}$ phase differences from the experimental inputs. The addition of these two observables as fit inputs increases the minimal $\chi^2$ by 7.7 units, which corresponds to a 2.6 $\sigma$ discrepancy. Since the latter is driven by a measurement from a single experiment, additional experimental results are needed to confirm such a large value for the electroweak penguin parameter.
![ Top: two-dimensional constraint on the modulus and phase of the complex $P_{\rm EW}/P^{+-}$ ratio. Bottom: constraint on the $\left|P_{\rm EW}/P^{+-}\right|$ ratio, using the complete set of experimental inputs (red curve), and removing the measurement of the $\varphi^{00,+-}$ phases from the $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz-plot analysis (green shaded curve). \[fig:PewOP\]](PewOPpm_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"} ![ Top: two-dimensional constraint on the modulus and phase of the complex $P_{\rm EW}/P^{+-}$ ratio. Bottom: constraint on the $\left|P_{\rm EW}/P^{+-}\right|$ ratio, using the complete set of experimental inputs (red curve), and removing the measurement of the $\varphi^{00,+-}$ phases from the $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz-plot analysis (green shaded curve). \[fig:PewOP\]](ModPewOPpm_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"}
In view of colour suppression, the electroweak penguin $P_{\rm EW}^{\rm C}$ is expected to yield a smaller contribution than $P_{\rm EW}$ to the decay amplitudes. This hypothesis is tested in Fig. \[fig:PewCOPew\], which shows that current data favours a similar size for the two contributions, and a small relative phase (up to $40^{\circ}$) between the colour-allowed and the colour-suppressed electroweak penguins. Both Solutions I and II show the same structure with four different local minima.
![ Top: two-dimensional constraint on the modulus and phase of the $P_{\rm EW}^{\rm C}/P_{\rm EW}$ ratio. Bottom: one-dimensional constraint on ${\rm Log}_{10}\left(\left|P_{\rm EW}^{\rm C}/P_{\rm EW}\right|\right)$, using the complete set of experimental inputs (red curve), and removing the measurement of the $\varphi^{00,+-}$ phases from the $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz-plot analysis (green shaded curve). []{data-label="fig:PewCOPew"}](PewCOPew_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"} ![ Top: two-dimensional constraint on the modulus and phase of the $P_{\rm EW}^{\rm C}/P_{\rm EW}$ ratio. Bottom: one-dimensional constraint on ${\rm Log}_{10}\left(\left|P_{\rm EW}^{\rm C}/P_{\rm EW}\right|\right)$, using the complete set of experimental inputs (red curve), and removing the measurement of the $\varphi^{00,+-}$ phases from the $B^0\rightarrow K^+\pi^-\pi^0$ Dalitz-plot analysis (green shaded curve). []{data-label="fig:PewCOPew"}](Log10ModPewCOPew_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="8cm"}
### Hierarchy among tree amplitudes: colour suppression and annihilation
![ Two-dimensional constraint on the modulus and phase of the $T^{00}_{\rm C}/T^{+-}$ (top) and $N^{0+}/T^{+-}$ (bottom) ratios. []{data-label="fig:T00OTpm_N0pOTpm"}](T00OTpm_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="7cm"} ![ Two-dimensional constraint on the modulus and phase of the $T^{00}_{\rm C}/T^{+-}$ (top) and $N^{0+}/T^{+-}$ (bottom) ratios. []{data-label="fig:T00OTpm_N0pOTpm"}](N0pOTpm_BtoKstPi_all_BaBarAndBelle.eps "fig:"){width="7cm"}
As already discussed, the hadronic parameter $T^{00}_{\rm C}$ is expected to be suppressed with respect to the main tree parameter $T^{+-}$. Also, the annihilation topology is expected to provide negligible contributions to the decay amplitudes. These expectations can be compared with the extraction of these hadronic parameters from data in Fig. \[fig:T00OTpm\_N0pOTpm\].
For colour suppression, the current data provides no constraint on the relative phase between the $T^{00}_{\rm C}$ and $T^{+-}$ tree parameters, and only a mild upper bound on the modulus can be inferred; the tighter constraint is provided by Solution I that excludes values of $|T^{00}_{\rm C}/T^{+-}|$ larger than $1.6$ at $95\%$ C.L. The constraint from Solution II is more than one order of magnitude looser.
Similarly, for annihilation, Solution I provides slightly tighter constraints on its contribution to the total tree amplitude with the bound $|N^{0+}/T^{+-}|<2.5$ at $95\%$ C.L., while the bound from Solution II is much looser.
Comparison with theoretical expectations {#QCDFcomparison}
----------------------------------------
![image](ReImN0pOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPewCOPew_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPewCOPpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPewCOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPpmOPew_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPewOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImPpmOTpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"} ![image](ReImT00OTpm_BtoKstPi_all_BaBarAndBelle.eps){width="6.5cm"}
Quantity Fit result QCDF
-------------------------------------------------------------------- ----------------------------------------- -------------------------------
$\displaystyle {\mathcal Re}\frac{N^{0+}}{T^{+-}}$ $(-5.31, 4.73)$ $0.011 \pm 0.027$
$\displaystyle {\mathcal Im}\frac{N^{0+}}{T^{+-}}$ $(-9.59, 7.73)$ $0.003\pm 0.028$
$\displaystyle {\mathcal Re}\frac{P_{\rm EW}^{\rm C}}{P_{\rm EW}}$ $(0.69,1.14)$ $0.17\pm 0.19$
$\displaystyle {\mathcal Im}\frac{P_{\rm EW}^{\rm C}}{P_{\rm EW}}$ $(-0.48,-0.28)~\cup~(-0.13,0.22)~\cup$ $-0.08\pm0.14$
$(0.34,0.60)$
$\displaystyle {\mathcal Re}\frac{P_{\rm EW}^{\rm C}}{P^{+-}}$ $(1.29,2.08)$ $-$
$\displaystyle {\mathcal Im}\frac{P_{\rm EW}^{\rm C}}{P^{+-}}$ $(-1.09,-0.75)~\cup~(-0.51,-0.10)~\cup$ $-$
$(-0.08,0.16)~\cup~(0.47,0.83)$
$\displaystyle {\mathcal Re}\frac{P_{\rm EW}^{\rm C}}{T^{+-}}$ $(-0.12,0.34)$ $0.0027\pm 0.0031$
$\displaystyle {\mathcal Im}\frac{P_{\rm EW}^{\rm C}}{T^{+-}}$ $(-0.42,0.05)$ $-0.0015^{+0.0024}_{-0.0025}$
$\displaystyle {\mathcal Re}\frac{P^{+-}}{P_{\rm EW}}$ $(0.49,0.56)$ $3.9^{+3.2}_{-3.3}$
$\displaystyle {\mathcal Im}\frac{P^{+-}}{P_{\rm EW}}$ $(-0.03,0.16)$ $1.8\pm 3.3$
$\displaystyle {\mathcal Re}\frac{P_{\rm EW}}{T^{+-}}$ $(0.0, 0.25)$ $0.0154^{+0.0059}_{-0.0060}$
$\displaystyle {\mathcal Im}\frac{P_{\rm EW}}{T^{+-}}$ $(-0.40,-0.09)~\cup~(-0.02,0.02)$ $-0.0014^{+0.0023}_{-0.0022}$
$\displaystyle {\mathcal Re}\frac{P^{+-}}{T^{+-}}$ $( 0.023,0.140)$ $0.053\pm0.039$
$\displaystyle {\mathcal Im}\frac{P^{+-}}{T^{+-}}$ $(-0.20,-0.04)~\cup~(0.0, 0.01)$ $0.016\pm0.044$
$\displaystyle {\mathcal Re}\frac{T^{00}_{\rm C}}{T^{+-}}$ $(-0.26,2.24)$ $0.13\pm0.17$
$\displaystyle {\mathcal Im}\frac{T^{00}_{\rm C}}{T^{+-}}$ $(-3.28,0.74)$ $-0.11\pm0.15$
We have extracted the values of the hadronic amplitudes from the data currently available. It may prove interesting to compare these results with theoretical expectations. For this exercise, we use QCD factorisation [@Beneke:1999br; @Beneke:2000ry; @Beneke:2003zv; @Beneke:2006hg] as a benchmark point, keeping in mind that other approaches (discussed in the introduction) are available. In order to keep the comparison simple and meaningful, we consider the real and imaginary part of several ratios of hadronic amplitudes.
We obtain our theoretical values in the following way. We follow Ref. [@Beneke:2003zv] for the expressions within QCD factorisation, and we use the same model for the power-suppressed and infrared-divergent contributions coming from hard scattering and weak annihilation: these contributions are formally $1/m_b$-suppressed but numerically non-negligible, and play a crucial role in some of the amplitudes. On the other hand, we update the hadronic parameters in order to take into account more recent determinations of these quantities, see App. \[app:QCDFinputs\]. We use the Rfit scheme to handle theoretical uncertainties [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite; @Charles:2016qtt] (in particular for the hadronic parameters and the $1/m_b$ power-suppressed contributions), and we compute only ratios of hadronic amplitudes using QCD factorisation. We stress that we provide the estimates within QCD factorisation simply to compare the results of our experimental fit for the hadronic amplitudes with typical theoretical expectations concerning the same quantities. In particular we neglect next-to-next-to-leading order corrections that have been partially computed in Refs. [@BHWS; @Bell:2007tv; @Bell:2009nk; @Beneke:2009ek; @Bell:2015koa], and we do not attempt to perform a fully combined fit of the theoretical predictions with the experimental data, as the large uncertainties would make the interpretation difficult.
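For readers unfamiliar with it, the Rfit treatment can be summarised by the following schematic one-dimensional $\chi^2$ contribution, in which a theoretical quantity is allowed to float freely within its assigned range and only the excursion beyond that range is penalised by the experimental uncertainty (a minimal sketch, not the actual CKMfitter implementation):

```python
def rfit_chi2(x_exp, sigma_exp, x_theo, delta_theo):
    """Schematic Rfit-like chi2: no penalty while the prediction can be brought
    within +/- delta_theo of the measurement, Gaussian penalty beyond that range."""
    excess = abs(x_exp - x_theo) - delta_theo
    return (excess / sigma_exp) ** 2 if excess > 0.0 else 0.0

# toy example: measurement 0.10 +/- 0.02, central prediction 0.05 with a flat
# theoretical range of +/- 0.03 -> chi2 = ((|0.10 - 0.05| - 0.03) / 0.02)**2 = 1.0
print(rfit_chi2(0.10, 0.02, 0.05, 0.03))
```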
Our results for the ratios of hadronic amplitudes are shown in Fig. \[fig:QCDFcomparison2D\] and in Tab. \[tab:QCDFcomparison1D\]. We notice that for most of the ratios, a good agreement is found. The global fit to the experimental data often has much larger uncertainties than the theoretical predictions: with better data in the future, we may be able to perform very non-trivial tests of the non-leptonic dynamics and the isobar approximation. The situation for $P_{\rm EW}^{\rm C}/P_{\rm EW}$ is slightly different, since the two determinations (experiment and theory) exhibit similar uncertainties and disagree with each other, providing an interesting test for QCD factorisation, which however goes beyond the scope of this study.
There are two cases where the theoretical output from QCD factorisation is significantly less precise than the constraints from the combined fit. For $P_{\rm EW}^C/P^{+-}$, both numerator and denominator can be (independently) very small in QCD factorisation, and numerical instabilities in this ratio prevent us from having a precise prediction. For $P^{+-}/P_{\rm EW}$, the impressively accurate experimental determination, as discussed in Sec. \[sec:hierarchypenguins\], is predominantly driven by the $\varphi^{00,+-}$ phase differences measured in the Dalitz-plot analysis of $B^0\rightarrow K^+\pi^-\pi^0$ decays. Removing this input yields a much milder constraint on $P^{+-}/P_{\rm EW}$. On the other hand, in QCD factorisation, the formally leading contributions to the $P^{+-}$ penguin amplitude are somewhat numerically suppressed, and compete with the model estimate of power corrections: due to the Rfit treatment used, the two contributions can either compensate each other almost exactly or add up coherently, leading to a $\sim\pm 100\%$ relative uncertainty, which is only in marginal agreement with the fit output. Thus we conclude that the $P^{+-}/P_{\rm EW}$ ratio is both particularly sensitive to the power corrections to QCD factorisation and experimentally well constrained, so that it can be used to provide an insight on non-factorisable contributions, provided one assumes negligible effects from New Physics.
Prospects for LHCb and Belle II {#sec:prospect}
===============================
![image](PpmOTpm_All_LHCbAndBelleII_2023.eps){width="7cm"} ![image](PewOT3o2_All_LHCbAndBelleII_2023.eps){width="7cm"} ![image](N0pOTpm_All_LHCbAndBelleII_2023.eps){width="7cm"} ![image](PewCOTpm_All_LHCbAndBelleII_2023.eps){width="7cm"} ![image](T00OTpm_All_LHCbAndBelleII_2023.eps){width="7cm"}
In this section, we study the impact of improved measurements of $K\pi\pi$ modes from the LHCb and Belle II experiments. During the first run of the LHC, the LHCb experiment has collected large datasets of B-hadron decays, including charmless $B^0,B^+,B_s$ meson decays into three-body modes. LHCb is currently collecting additional data in Run 2. In particular, due to the excellent performances of the LHCb detector for identifying charged long-lived mesons, the experiment has the potential for producing the most accurate charmless three-body results in the $B^+\rightarrow K^+\pi^-\pi^+$ mode, owing to high-purity event samples much larger than the ones collected by BaBar and Belle. Using $3.0\ {\rm fb}^{-1}$ of data recorded during the LHC Run 1, first results on this mode are already available [@Aaij:2014iva], and a complete amplitude analysis is expected to be produced in the short-term future. For the $B^0\rightarrow K^0_S\pi^+\pi^-$ mode, the event-collection efficiency is challenged by the combined requirements on reconstructing the $K^0_S\rightarrow \pi^+\pi^-$ decay and tagging the $B$ meson flavour, but nonetheless the $B^0\rightarrow K^0_S\pi^+\pi^-$ data samples collected by LHCb are already larger than the ones from BaBar and Belle. As it is more difficult to anticipate the reach of LHCb Dalitz-plot analyses for modes including $\pi^0$ mesons in the final state, the $B^0\rightarrow K^+\pi^-\pi^0$, $B^+\rightarrow K^0_S\pi^+\pi^0$, $B^+\rightarrow K^+\pi^0\pi^0$ and $B^0\rightarrow K_S^0\pi^0\pi^0$ channels are not considered here. In addition, LHCb also has the potential for studying $B_s$ decay modes, and LHCb can reach $B\to KK\pi$ modes with branching ratios out of reach for $B$-factories.
The Belle II experiment [@Urquijo:2015qsa], currently in the stages of construction and commissioning, will operate in an experimental environment very similar to that of the BaBar and Belle experiments. Therefore Belle II has the potential for studying all modes accessed by the $B$-factories, with expected sensitivities that should scale in proportion to its expected total luminosity (i.e., $50\ {\rm ab}^{-1}$). In addition, Belle II has the potential for accessing the $B^+\rightarrow K^+\pi^0\pi^0$ and $B^0\rightarrow K_S^0\pi^0\pi^0$ modes (for which the $B$-factories could not produce Dalitz-plot results), but these modes will provide low-accuracy information, redundant with some of the modes considered in this paper: therefore they are not included here.
Since both LHCb and Belle II have the potential for studying large, high-quality samples of $B^+\rightarrow K^+\pi^-\pi^+$, it is realistic to expect that the experiments will be able to extract a consistent, data-driven signal model to be used in all Dalitz-plot analyses, yielding systematic uncertainties significantly reduced with respect to the results from $B$-factories.
Finally, for LHCb, since this experiment cannot perform $B$-meson counting as in a $B$-factory environment, the branching fractions need to be normalised with respect to measurements performed at BaBar and Belle, until the advent of Belle II. This prospective study is therefore split into two periods: a first one based on the assumption of new results from LHCb Run1+Run2 only, and a second one using the complete set of LHCb and Belle II results. The corresponding inputs are gathered in App. \[app:refprosp\]. We use the reference scenario described in Tab. \[tab:IdealCase\] for the central values, so that we can guarantee the self-consistency of the inputs and we avoid reducing the uncertainties artificially because of barely compatible measurements (which would occur if we used the central values of the current data and rescaled the uncertainties). The expected uncertainties, obtained from the extrapolations discussed previously, are described in Tab. \[tab:LHCbAndBelleII\].
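To give an idea of how such extrapolations behave, the snippet below applies a naive $1/\sqrt{L}$ statistical scaling to a present uncertainty; the numbers are illustrative only, and the actual extrapolations used here (which also account for systematic effects) are those of App. \[app:refprosp\].

```python
import math

def scale_uncertainty(sigma_now, lumi_now, lumi_future):
    """Naive statistical scaling of an uncertainty with integrated luminosity."""
    return sigma_now * math.sqrt(lumi_now / lumi_future)

# e.g. an isobar-parameter uncertainty of 0.10 from about 1 ab^-1 of B-factory
# data, extrapolated to the nominal Belle II dataset of 50 ab^-1
print(scale_uncertainty(0.10, 1.0, 50.0))   # ~ 0.014
```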
The blue area in Fig. \[fig:LHCbAndBelleII2023\] illustrates the potential for the first step of our prospective study ($B$-factories and LHCb Run1+Run2). For the input values used in this prospective study, the modulus of the $P^{+-}/T^{+-}$ ratio will be constrained with a relative $10\%$ accuracy, and its complex phase will be constrained within $3$ degrees (we discuss 68% C.L. ranges in the following, whereas Fig. \[fig:LHCbAndBelleII2023\] shows 95% C.L. regions). Slightly tighter upper bounds on the $|T^{00}_{\rm C}/T^{+-}|$ and $|N^{0+}/T^{+-}|$ ratios may be set, although the relative phases of these ratios will remain very poorly constrained. Assuming that the electroweak penguin is in agreement with the CPS/GPSZ prediction, its modulus will be constrained within $45\%$ and its phase within $14$ degrees.
The addition of results from the Belle II experiment corresponds to the second step of this prospective study. As illustrated by the green area in Fig. \[fig:LHCbAndBelleII2023\], the uncertainties on the modulus and phase of the $P^{+-}/T^{+-}$ ratio will decrease by factors of $1.4$ and $2.5$, respectively. Owing to the addition of precision measurements by Belle II of the $B^0\to K^{*0}\pi^0$ Dalitz-plot parameters from the amplitude analysis of the $B^0\to K^+\pi^-\pi^0$ mode, the $T^{00}_{\rm C}/T^{+-}$ ratio can be constrained within a $22\%$ uncertainty for its modulus, and within $10$ degrees for its phase. Similarly, the uncertainties on the modulus and phase of the $P_{\rm EW}/T_{3/2}$ ratio will decrease by factors $2.7$ and $2.9$, respectively. The colour-suppressed electroweak penguin, for which only a mild upper bound on its modulus was achievable within the first step of the prospective study, can now be measured within a $22\%$ uncertainty for its modulus, and within $8$ degrees for its phase. Finally, the least stringent constraint will be achieved for the annihilation parameter. While its modulus can nevertheless be constrained between 0.3 and 1.5, the phase of this ratio may remain unconstrained in value, with just the sign of the phase being resolved. We add that one can also expect Belle II measurements for $B^+\to K^+\pi^0\pi^0$ and $B^0\to K_S\pi^0\pi^0$, however with larger uncertainties, so that we have not taken these decays into account.
In total, precise constraints on almost all hadronic parameters in the $B\rightarrow K^\star\pi$ system will be achieved using the Dalitz-plot results from the LHCb and Belle II experiments, with a resolution of the current phase ambiguities. These constraints can be compared with various theoretical predictions, providing an important tool for testing models of hadronic contributions to charmless $B$ decays.
Conclusion
==========
Non-leptonic B meson decays are very interesting processes both as probes of the weak interaction and as tests of our understanding of QCD dynamics. They have been measured extensively at $B$-factories as well as at the LHCb experiment, but this wealth of data has not been fully exploited yet, especially for the pseudoscalar-vector modes which are accessible through Dalitz-plot analyses of $B\to K\pi\pi$ modes. We have focused on the $B\to K^*\pi$ system which exhibits a large set of observables already measured. Isospin analysis allows us to express these decays in terms of the CKM parameters and six complex hadronic amplitudes, but reparametrisation invariance prevents us from extracting simultaneously information on the weak phases and the hadronic amplitudes needed to describe these decays. We have followed two different approaches to exploit this data: either we extracted information on the CKM phase (after setting a condition on some of the hadronic amplitudes), or we determined the hadronic amplitudes (once we set the CKM parameters to their values from the CKM global fit [@Charles:2004jd; @Charles:2015gya; @CKMfitterwebsite]).
In the first case, we considered two different strategies. We first reconsidered the CPS/GPSZ strategy proposed in Ref. [@Ciuchini:2006kv; @Gronau:2006qn], amounting to setting a bound on the electroweak penguin in order to extract an $\alpha$-like constraint. We used a reference scenario inspired by the current data but with consistent central values and much smaller uncertainties in order to probe the robustness of the CPS/GPSZ method: it turns out that the method is easily biased if the bound on the electroweak penguin is not correct, even by a small amount. Unfortunately, this bound is not very precise from the theoretical point of view, which casts some doubt on the potential of this method to constrain $\alpha$. We have then considered a more promising alternative, consisting in setting a bound on the annihilation contribution. We observed that we could obtain an interesting stable $\beta$-like constraint and we discussed its potential to extract confidence intervals according to the accuracy of the bound used for the annihilation contribution.
In a second stage, we discussed how the data constrain the hadronic amplitudes, assuming the values of the CKM parameters. We performed an average of BaBar and Belle data in order to extract constraints on various ratios of hadronic amplitudes, with the issue that some of these data contain several solutions to be combined in order to obtain a single set of inputs for the Dalitz-plot observables. The ratio $P^{+-}/T^{+-}$ is not very well constrained and exhibits two distinct preferred solutions, but it is not large and supports the expected penguin suppression. On the other hand, colour or electroweak suppression does not seem to hold, as illustrated by $|P_{\rm EW}/P^{+-}|$ (around 2), $|P_{\rm EW}^{\rm C}/P_{\rm EW}|$ (around 1) or $|T^{00}_{\rm C}/T^{+-}|$ (mildly favouring values around 1). We however recall that some of these conclusions are very dependent on the $\varphi^{00,+-}$ phase differences measured in $B^0\to K^+\pi^-\pi^0$: removing this input turns the ranges into mere upper bounds on these ratios of hadronic amplitudes.
For illustration purposes, we compared these results with typical theoretical expectations. We determined the hadronic amplitudes using an updated implementation of QCD factorisation. A good overall agreement between theory and experiment is found for most of the ratios of hadronic amplitudes, even though the experimental determinations often remain less accurate than the theoretical ones. Nevertheless, two quantities still feature interesting properties. The ratio $P^{+-}/P_{\rm EW}$ could provide interesting constraints on the models used to describe power-suppressed contributions in QCD factorisation, keeping in mind that the (precise) experimental determination of this ratio relies strongly on the $\varphi^{00,+-}$ phases measured by BaBar, as discussed in the previous paragraph. The ratio $P_{\rm EW}^C/P_{\rm EW}$ is determined with similar accuracies theoretically and experimentally, but the two determinations are not in good agreement, suggesting that this quantity could also be used to constrain QCD factorisation parameters.
Finally, we performed prospective studies, considering two successive stages based first on LHCb data from Run1 and Run2, then on the additional input from Belle II. Using our reference scenario and extrapolating the uncertainties of the measurements at both stages, we determined the confidence regions for the moduli and phases of the ratios of hadronic amplitudes. The first stage (LHCb only) would correspond to a significant improvement for $P^{+-}/T^{+-}$ and $P_{\rm EW}/T_{3/2}$, whereas the second stage (LHCb+Belle II) would yield tight constraints on $N^{0+}/T^{+-}$, $P_{\rm EW}^C/T^{+-}$ and $T^{00}_{\rm C}/T^{+-}$.
Non-leptonic $B$-meson decays remain an important theoretical challenge, and any contender should be able to explain not only the pseudoscalar-pseudoscalar modes but also the pseudoscalar-vector modes. Unfortunately, the current data do not permit such extensive tests, even though they hint at potential discrepancies with theoretical expectations concerning the hierarchies of hadronic amplitudes. However, our study suggests that a more thorough analysis of $B\to K\pi\pi$ Dalitz plots from LHCb and Belle II could allow for a precise determination of the hadronic amplitudes involved in $B\to K^*\pi$ decays thanks to the isobar approximation for three-body amplitudes. This will definitely shed some light on the complicated dynamics of the weak and strong interactions at work in pseudoscalar-vector modes, and it will provide important tests of our understanding of non-leptonic $B$-meson decays.
Acknowledgments
===============
We would like to thank all our collaborators from the CKMfitter group for useful discussions, and Reina Camacho Toro for her collaboration on this project at an early stage. This project has received funding from the European Union Horizon 2020 research and innovation programme under the grant agreements No 690575, No 674896 and No 692194. SDG acknowledges partial support from Contract FPA2014-61478-EXP.
Current experimental inputs {#App:Exp_inputs}
===========================
The full set of real-valued physical observables, derived from the experimental inputs from BaBar and Belle, is described in the following sections. The errors and correlation matrices include both statistical and systematic uncertainties.
[l|c|ccc]{} $B^0\rightarrow K^0_S\pi^+\pi^-$ & $~~~~~~~~~~$Global min$~~~~~~~~~~$ & ${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^-)$\
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $0.428 \pm 0.473$ & 1.00 & 0.90 & 0.02\
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $-0.690 \pm 0.302$ & & 1.00 & -0.06\
${\mathcal B}(K^{*+}\pi^-) (\times 10^{-6})$ & $8.290 \pm 1.189$ & & & 1.00\
[l|c|ccc]{} $B^0\rightarrow K^0_S\pi^+\pi^-$ & Local min ($\Delta {\rm NLL} = 0.16$) & ${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^-)$\
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $-0.819 \pm 0.116$ & 1.00 & -0.19 & -0.15\
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $-0.049 \pm 0.494$ & & 1.00 & -0.01\
${\mathcal B}(K^{*+}\pi^-) (\times 10^{-6})$ & $8.290 \pm 1.189$ & & & 1.00\
[l|c]{} $B^+\rightarrow K^+\pi^-\pi^+$ & Value\
$\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ & $1.033 \pm 0.047$\
${\mathcal B}(K^{*0}\pi^+) (\times 10^{-6})$ & $10.800 \pm 1.389$\
[l|c|cccccc]{} $B^0\rightarrow K^+\pi^-\pi^0$ & Value & $\left| \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right|$ & ${\mathcal Re}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Re}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ & ${\mathcal Im}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ & ${\mathcal B}(K^{*0}\pi^0)$\
$\left| \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right|$ & $0.742 \pm 0.091$ & 1.00 & 0.00 & 0.03 & -0.22 & -0.11 & -0.06\
${\mathcal Re}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ & $0.562 \pm 0.148$ & & 1.00 & 0.68 & 0.33 & -0.01 & 0.44\
${\mathcal Im}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ & $-0.227 \pm 0.296$ & & & 1.00 & -0.07 & 0.00 & -0.13\
${\mathcal Re}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ & $0.701 \pm 0.126$ & & & & 1.00 & 0.25 & 0.55\
${\mathcal Im}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ & $-0.049 \pm 0.376$ & & & & & 1.00 & -0.02\
${\mathcal B}(K^{*0}\pi^0) (\times 10^{-6})$ & $3.300 \pm 0.640$ & & & & & & 1.00\
[l|c|cccccc]{} $B^+\rightarrow K^0_S\pi^+\pi^0$ & Value & $\left| \frac{\overline{A}(K^{*-}\pi^0)}{A(K^{*+}\pi^0)} \right|$ & ${\mathcal Re}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ & ${\mathcal Im}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ & ${\mathcal Re}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^0)$\
$\left| \frac{\overline{A}(K^{*-}\pi^0)}{A(K^{*+}\pi^0)} \right|$ & $0.533 \pm 1.403$ & 1.00 & -0.26 & 0.01 & -0.70 & -0.22 & -0.16\
${\mathcal Re}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ & $1.415 \pm 6.952$ & & 1.00 & -0.23 & 0.12 & -0.51 & 0.90\
${\mathcal Im}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ & $-0.189 \pm 3.646$ & & & 1.00 & -0.39 & 0.23 & -0.28\
${\mathcal Re}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$& $-0.106 \pm 2.687$ & & & & 1.00 & 0.23 & 0.03\
${\mathcal Im}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$& $-0.851 \pm 4.278$ & & & & & 1.00 & -0.82\
${\mathcal B}(K^{*+}\pi^0) (\times 10^{-6})$ & $9.200 \pm 1.480$ & & & & & & 1.00\
[l|c]{} $B^+\to K^{*+}\pi^0$ in $B^+\to K^+\pi^0\pi^0$ & value\
${\mathcal B}(K^{*+}\pi^0)$ & $(8.2 \pm 1.5 \pm 1.1)\times10^{-6}$\
$A_{CP}(K^{*+}\pi^0)$ & $ -0.06 \pm 0.24 \pm 0.04$\
BaBar results {#App:babar_inputs}
--------
In this section, we describe the set of experimental inputs from the BaBar experiment.
- $B^0\rightarrow K^0_S\pi^+\pi^-$ [@Aubert:2009me]. Two almost degenerate solutions were found differing only by $0.16$ negative-log-likelihood ($\Delta {\rm NLL}$) units. The central values and correlation matrix of the measured observables for both solutions are shown in Tab. \[tab:KSPiPi\_babar\].
- $B^+\rightarrow K^+\pi^-\pi^+$ [@Aubert:2008bj]. The central values of the observables for this analysis are shown in Tab. \[tab:KPiPi\_babar\]. A linear correlation of $2\%$ was found between $\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ and ${\mathcal B}(K^{*0}\pi^+)$.
- $B^0\rightarrow K^+\pi^-\pi^0$ [@BABAR:2011ae]. The central values and correlation matrix of the measured observables for this analysis are shown in Tab. \[tab:KPiPi0\_babar\].
- $B^+\rightarrow K^0_S\pi^+\pi^0$ [@Lees:2015uun]. The central values and correlation matrix of the measured observables for this analysis are shown in Tab. \[tab:K0PiPi0\_babar\].
- $B^+\to K^{*+}(892)\pi^0$ quasi-two-body contribution to the $B^+\to K^+\pi^0\pi^0$ final state [@Lees:2011aaa]. The measured branching ratio and $CP$ asymmetry are shown in Tab. \[tab:KstPi0\] and they are used as uncorrelated inputs.
[l|c|ccc]{} $B^0\rightarrow K^0_S\pi^+\pi^-$ & $~~~~~~~~~$Global min$~~~~~~~~~$ & ${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^-)$\
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $0.790 \pm 0.145$ & 1.00 & 0.62 & -0.04\
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $-0.206 \pm 0.398$ & & 1.00 & 0.00\
${\mathcal B}(K^{*+}\pi^-) (\times 10^{-6})$ & $8.400 \pm 1.449$ & & & 1.00\
[l|c|ccc]{} $B^0\rightarrow K^0_S\pi^+\pi^-$ & Local min ($\Delta {\rm NLL} = 7.5$) & ${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^-)$\
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $0.808 \pm 0.110$ & 1.00 & 0.01 & -0.06\
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $0.010 \pm 0.439$ & & 1.00 & 0.00\
${\mathcal B}(K^{*+}\pi^-) (\times 10^{-6})$ & $8.400 \pm 1.449$ & & & 1.00\
$B^+\rightarrow K^+\pi^-\pi^+$ value
------------------------------------------------------------------------------ ---------------------
$\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ $0.861 \pm 0.059$
${\mathcal B}(K^{*0}\pi^+) (\times 10^{-6})$ $9.670 \pm 1.061$
: Central values of the observables from the Belle $B^+\rightarrow K^+\pi^-\pi^+$ analysis.[]{data-label="tab:KPiPi_belle"}
Belle results {#App:belle_inputs}
-------------
In this section, we describe the set of experimental inputs from the Belle experiment.
- $B^0\rightarrow K^0_S\pi^+\pi^-$ [@Dalseno:2008wwa]. Two solutions were found differing by $7.5$ $\Delta {\rm NLL}$. The central values and correlation matrix of the measured observables for both solutions are shown in Tab. \[tab:KSPiPi\_belle\].
- $B^+\rightarrow K^+\pi^-\pi^+$ [@Garmash:2006bj]. The central values of the observables for this analysis are shown in Tab. \[tab:KPiPi\_belle\]. A nearly vanishing correlation was found between $\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ and ${\mathcal B}(K^{*0}\pi^+)$.
Combined BaBar and Belle results {#App:comb_inputs}
--------------------------------
The BaBar and Belle results for the $B^0\rightarrow K^0_S\pi^+\pi^-$ and $B^+\rightarrow K^+\pi^-\pi^+$ analyses shown previously have been combined in the usual way for sets of independent measurements. The combination for the $B^+\rightarrow K^+\pi^-\pi^+$ mode is straightforward, as both results exhibit only one solution, as shown in Fig. \[fig:Combination\_babarbelle\_KPiPi\]. The resulting central values are shown in Tab. \[tab:KPiPiKSPiPi\_babarbelle\]. A vanishing linear correlation is found between $\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ and ${\mathcal B}(K^{*0}\pi^+)$.
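For concreteness, the short sketch below spells out what is meant here by the usual combination of independent measurements: a $\chi^2$-minimising, inverse-covariance weighted average. The numerical inputs of the example are illustrative placeholders only (they are not the actual BaBar and Belle central values and correlations, which are listed in the tables of this appendix).

```python
import numpy as np

def combine(measurements):
    """chi^2-minimising combination of independent measurements of the same
    observable vector. Each measurement is a (central_values, covariance) pair:
        C_comb = (sum_i V_i^-1)^-1,   x_comb = C_comb . sum_i V_i^-1 . x_i
    """
    weights = [np.linalg.inv(np.asarray(V)) for _, V in measurements]
    C_comb = np.linalg.inv(sum(weights))
    x_comb = C_comb @ sum(W @ np.asarray(x) for (x, _), W in zip(measurements, weights))
    return x_comb, C_comb

def cov(sig1, sig2, rho):
    """2x2 covariance matrix built from two uncertainties and a linear correlation."""
    return np.array([[sig1**2, rho * sig1 * sig2],
                     [rho * sig1 * sig2, sig2**2]])

# Placeholder observables: (|Abar(K*0bar pi-)/A(K*0 pi+)|, B(K*0 pi+) x 1e6)
exp1 = ([0.90, 9.7], cov(0.06, 1.1, 0.02))
exp2 = ([0.86, 10.1], cov(0.06, 1.0, 0.00))

x, C = combine([exp1, exp2])
print("combined values       :", x)
print("combined uncertainties:", np.sqrt(np.diag(C)))
```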
The combination of the BaBar and Belle measurements for the $B^0\rightarrow K^0_S\pi^+\pi^-$ mode is more complicated as the results feature several solutions which are relatively close in units of $\Delta {\rm NLL}$. In order to combine these measurements we proceed as follows:
- We combine each solution of the BaBar analysis with each of the Belle solutions.
- In the goodness of fit of the combination ($\chi^2_{\rm min}$), we add the $\Delta {\rm NLL}$ of each BaBar and Belle solution. In the case of the global minimum the corresponding $\Delta {\rm NLL}$ is zero.
- Finally, we take the envelope of the four combinations as the final result.
We find the following $\chi^2_{\rm min}$ for the four combinations: 1.1, 8.7, 9.5 and 98.3. As the combination closest to the global minimum differs from it by 7.6 units in $\chi^2_{\rm min}$, we have decided to focus on the global minimum for the phenomenological analysis. The combination for this global minimum is shown in Fig. \[fig:Combination\_babarbelle\_KSPiPi\]. The resulting central values and covariance matrix are shown in Tab. \[tab:KPiPiKSPiPi\_babarbelle\].
![Contours at 1 (solid) and 2 (dotted) $\sigma$ in the $\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ vs ${\mathcal B}(K^{*0}\pi^+)$ plane for the BaBar (black) and Belle (red) results, as well as the combination (blue).[]{data-label="fig:Combination_babarbelle_KPiPi"}](Inputs_BpToKpPimPip_Comb_BaBarAbdBelle.eps){width="8.5cm"}
![image](Inputs_B0ToK0PipPim_BaBar_Sol1_Belle_Sol1_1.eps){width="5.9cm"} ![image](Inputs_B0ToK0PipPim_BaBar_Sol1_Belle_Sol1_2.eps){width="5.9cm"} ![image](Inputs_B0ToK0PipPim_BaBar_Sol1_Belle_Sol1_3.eps){width="5.9cm"}
[l|c]{} $B^+\rightarrow K^+\pi^-\pi^+$ & Value\
$\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ & $0.965 \pm 0.037$\
${\mathcal B}(K^{*0}\pi^+) (\times 10^{-6})$ & $10.062 \pm 0.835$\
[l|c|ccc]{} $B^0\rightarrow K^0_S\pi^+\pi^-$ & Value & ${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & ${\mathcal B}(K^{*+}\pi^-)$\
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $0.698 \pm 0.120$ & 1.00 & 0.58 & -0.01\
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ & $-0.506 \pm 0.146$ & & 1.00 & -0.09\
${\mathcal B}(K^{*+}\pi^-) (\times 10^{-6})$ & $8.340 \pm 0.910$ & & & 1.00\
These combined results for the $B^0\rightarrow K^0_S\pi^+\pi^-$ and $B^+\rightarrow K^+\pi^-\pi^+$ modes are used, together with the BaBar results for the $B^0\rightarrow K^+\pi^-\pi^0$ and $B^+\rightarrow K^0_S \pi^+\pi^0$ modes, as inputs for the phenomenological analysis using the current experimental measurements.
Two-body non leptonic amplitudes in QCD factorisation {#app:QCDFinputs}
=====================================================
We compute the $B\to K^*\pi$ amplitudes in the framework of QCD factorisation, using the results of Ref. [@Beneke:2003zv]. We take the semileptonic $B\to\pi$ and $B\to K^*$ form factors from computations based on Light-Cone Sum Rules [@Ball:2004rg; @Straub:2015ica]. The parameters for the light-meson distribution amplitudes that enter hard-scattering contributions are consistently taken from the last two references. On the other hand, the first inverse moment of the $B$-meson distribution amplitude $\lambda_B$ is taken from Ref. [@Braun:2003wx]. Quark masses are taken from the review by the FLAG group [@Aoki:2016frl]. Our updated inputs are summarised in Table \[tab:QCDFinputs\].
Input Value Input Value
----------------- ------------------------- ----------------------- -----------------------
$\alpha_1(K^*)$ $0.06\pm 0\pm 0.04$ $\alpha_1(K^*,\perp)$ $0.04\pm 0\pm 0.03$
$\alpha_2(K^*)$ $0.16\pm 0\pm 0.09$ $\alpha_2(K^*,\perp)$ $0.10\pm 0\pm 0.08$
$f_\perp(K^*)$ $0.159\pm 0\pm 0.006$ $A_0[B\to K^*](0)$ $0.356\pm 0\pm 0.046$
$\alpha_2(\pi)$ $0.062\pm 0\pm 0.054$ $F_0[B\to\pi](0)$ $0.258\pm 0\pm 0.031$
$\lambda_B$ $0.460\pm 0\pm 0.110$ $\bar m_b$ $4.17$
$\bar m_s$ $0.0939\pm 0\pm 0.0011$ $m_q/m_s$ $\sim 0$
: Input values for the hadronic parameters that enter QCD factorisation predictions: moments of the distribution amplitudes for mesons, decay constants, form factors and quark masses. Dimensionful quantities are in GeV. The $\pm 0$ in the second position means that all uncertainties are considered to be of theoretical origin and are treated according to the Rfit approach. See the text for references.[]{data-label="tab:QCDFinputs"}
We stress that the calculations of Ref. [@Beneke:2003zv] correspond to Next-to-Leading Order (NLO). Since then, some NNLO contributions have been computed [@BHWS; @Bell:2007tv; @Bell:2009nk; @Beneke:2009ek; @Bell:2015koa], that we neglect in view of the sizeable uncertainties on the input parameters: this is sufficient for our illustrative purposes (see Section \[QCDFcomparison\]).
Reference scenario and prospective studies {#app:refprosp}
==========================================
Some of the experimental results collected in App. \[App:Exp\_inputs\] are affected by large uncertainties, and the central values are not always fully consistent with SM expectations. This is not a problem when we want to extract values of the hadronic parameters from the data, but it makes the discussion of the accuracy of specific models (say, for the extraction of weak angles) and of prospective studies assuming improved experimental measurements rather unclear, see Secs. \[sec:CKM\] and \[sec:prospect\].
For this reason, we design a reference scenario described in Tab. \[tab:IdealCase\]. The values of the hadronic parameters are chosen to roughly reproduce the current best averages of branching fractions and $CP$ asymmetries in $B\rightarrow K^*\pi$. As most observable phase differences among these modes are poorly constrained by the results currently available, we do not attempt to reproduce their central values and we use the values resulting from the hadronic parameters. The hadronic amplitudes are constrained to respect the naive assumptions: $|P_{\rm EW}/T_{3/2}| \simeq 1.35\%$, $|P^C_{\rm EW}| < |P_{\rm EW}|$ and $|T^{00}_{\rm C}| < |T^{+-}|$. The best values of the hadronic parameters yield the values of branching ratios and $CP$ asymmetries gathered in Tab. \[tab:IdealCase\]. As can be seen, the overall agreement is fair, but it is not good for all observables. Indeed, as discussed in Sec. \[sec:Hadronic\], the current data do not favour all the hadronic hierarchies that we have imposed to obtain our reference scenario in Tab. \[tab:IdealCase\].
For the studies of different methods to extract CKM parameters described in Sec. \[sec:CKM\], we fit the values of hadronic parameters by assigning small, arbitrary, uncertainties to the physical observables: $\pm 5\%$ for branching ratios, $\pm 0.5\%$ for $CP$ asymmetries, and $\pm 5^\circ$ for interference phases.
For the prospective studies described in Sec. \[sec:prospect\], we estimate future experimental uncertainties at two different stages. We first consider a list of expected measurements from LHCb, using the combined Run1 and Run2 data. We then reassess the expected results including Belle II measurements. Our method to project uncertainties in the two stages is based on the statistical scaling of data samples ($1/\sqrt{N_{\rm evts}}$), corrected for additional factors due to particular detector performances and analysis technique features, as described below.
LHCb Run1 and Run2 data will significantly increase the statistics mainly for the fully charged final states $B^0\rightarrow K^0_S(\rightarrow \pi^+\pi^-)\pi^+\pi^-$ and $B^+\rightarrow K^+\pi^-\pi^+$, with an expected increase of about $3$ and $40$, respectively [@B0toKspipi_LHCb:2012iva; @BtoKpipi_LHCb:2013iva]. For these modes, we assume a signal-to-background ratio similar to the ones measured at $B$ factories (this may represent an underestimation of the potential sensitivity of LHCb data, but this assumption has a very minor impact on the results of our prospective study). The statistical scaling factor thus defined can be applied as such to direct $CP$ asymmetries, but some additional aspects must be considered in the scaling of uncertainties for other observables. For time-dependent $CP$ asymmetries, the difference in flavour-tagging performances (the effective tagging efficiency $Q$) should be taken into account. In the $B$-factory environment, a quality factor $Q_{\rm B-factories} \sim 30$ [@FavourTagging_BaBar:2009iva; @FavourTagging_Belle:2012iva] was achieved, while for LHCb a smaller value is used ($Q_{\rm LHCb} \sim 3$ [@FavourTagging_LHCb:2012iva]), which entails an additional factor $(Q_{\rm B-factories}/Q_{\rm LHCb})^{1/2} \sim 3.2$ in the scaling of uncertainties. For branching ratios, LHCb is not able to directly count the number of $B$ mesons produced, and it is necessary to resort to a normalisation using final states for which the branching ratio has been measured elsewhere (mainly at $B$-factories). This additional source of uncertainty is taken into account in the projection of the error. Finally, in our prospective studies, we adopt the pessimistic view of neglecting potential measurements from LHCb for modes with $\pi^0$ mesons in the final state (e.g., $B^0\rightarrow K^+\pi^-\pi^0$ and $B^+\rightarrow K^0_S\pi^+\pi^0$), as it is difficult to anticipate the evolution in the performances for $\pi^0$ reconstruction and phase space resolution.
Belle II [@Urquijo:2015qsa] expects to surpass by a factor of $\sim 50$ the total statistics collected by the $B$-factories. As the experimental environments will be very similar, we just scale the current uncertainties by this statistical factor.
Starting from the statistical uncertainties from BaBar and scaling them according to the above procedure, we obtain our projections of uncertainties on physical observables, shown in Tab. \[tab:LHCbAndBelleII\], where the current uncertainties are compared with the projected ones for the first ($B$-factories combined with LHCb Run1 and Run2) and second (adding Belle II) stages described previously.
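The projection procedure described above can be summarised in a short, schematic sketch. The mode-dependent statistics gains, the flavour-tagging quality factors and the Belle II factor are the ones quoted in the text; the treatment of signal-to-background and of the branching-ratio normalisation is only crude here, so the sketch is not expected to reproduce Tab. \[tab:LHCbAndBelleII\] in detail, and the function and variable names are ours.

```python
import math

# Factors quoted in the text (B-factory statistics taken as the reference).
LHCB_STAT_GAIN = {"B0toKSpipi": 3.0, "BptoKpipi": 40.0}   # expected event gains at LHCb
BELLE2_STAT_GAIN = 50.0                                    # Belle II vs B factories
Q_BFACTORIES, Q_LHCB = 30.0, 3.0                           # effective tagging efficiencies

def add_in_quadrature(sigma_a, sigma_b):
    """Uncertainty of the weighted average of two independent measurements."""
    return 1.0 / math.sqrt(1.0 / sigma_a**2 + 1.0 / sigma_b**2)

def project(sigma_now, mode, time_dependent=False, lhcb_usable=True):
    """Project a current (B-factory) uncertainty to the LHCb and Belle II stages."""
    stage1 = sigma_now
    if lhcb_usable:
        sigma_lhcb = sigma_now / math.sqrt(LHCB_STAT_GAIN[mode])
        if time_dependent:  # penalty for the poorer flavour tagging at LHCb
            sigma_lhcb *= math.sqrt(Q_BFACTORIES / Q_LHCB)
        stage1 = add_in_quadrature(sigma_now, sigma_lhcb)
    sigma_belle2 = sigma_now / math.sqrt(BELLE2_STAT_GAIN)
    stage2 = add_in_quadrature(stage1, sigma_belle2)
    return stage1, stage2

# Example: a time-dependent observable from B0 -> KS pi+ pi- with current error 0.11
print(project(0.11, "B0toKSpipi", time_dependent=True))
# Example: a pi0 mode that LHCb is assumed not to measure
print(project(0.30, "B0toKSpipi", lhcb_usable=False))
```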
[lcc|lcc]{} Hadronic amplitudes & Magnitude & Phase ($^\circ$) & Observable & Measurement & Value\
$T^{+-}$ & 2.540 & 0.0 & ${\mathcal B}(B^0\rightarrow K^{*+}\pi^-)$ & $8.4 \pm 0.8$ & $7.1$\
$T^{00}_{\rm C}$ & 0.762 & 75.8 & ${\mathcal B}(B^0\rightarrow K^{*0}\pi^0)$ & $3.3 \pm 0.6$ & $1.6$\
$N^{0+}$ & 0.143 & 108.4 & ${\mathcal B}(B^+\rightarrow K^{*+}\pi^0)$ & $8.2 \pm 1.8$ & $8.5$\
$P^{+-}$ & 0.091 & -6.5 & ${\mathcal B}(B^+\rightarrow K^{*0}\pi^+)$ & $10.1^{+0.8}_{-0.9}$ & $10.9$\
$P_{\rm EW}$ & 0.038 & 15.2 & $A_{CP}(B^0\rightarrow K^{*+}\pi^-)$ & $-0.23 \pm 0.06$ & $-0.129$\
$P_{\rm EW}^{\rm C}$ & 0.029 & 101.9 & $A_{CP}(B^0\rightarrow K^{*0}\pi^0)$ & $-0.15 \pm 0.13$ & $+0.465$\
$\left|\frac{V_{ts}V_{tb}^*P^{+-}}{V_{us}V_{ub}^*T^{+-}}\right|$ & 1.809 & & $A_{CP}(B^+\rightarrow K^{*+}\pi^0)$ & $-0.39 \pm 0.12$ & $-0.355$\
$\left|T^{00}_{\rm C}/T^{+-}\right|$ & 0.300 & & $A_{CP}(B^+\rightarrow K^{*0}\pi^+)$ & $+0.038 \pm 0.042$ & $+0.039$\
$\left|N^{0+}/T^{00}_{\rm C}\right|$ & 0.187 & & & &\
$\left|P_{\rm EW}/P^{+-}\right|$ & 0.421 & & & &\
$\left|P_{\rm EW}/(T^{+-} + T^{00}_{\rm C})\right|/R$ & 1.009 & & & &\
$\left|P_{\rm EW}^{\rm C}/P_{\rm EW}\right|$ & 0.762 & & & &\
Observable Analysis Current uncertainty LHCb (Run1+Run2) LHCb+Belle II
------------------------------------------------------------------------------------------------------ ---------------------------------- --------------------- ------------------ ---------------
${\mathcal Re}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ $B^0\rightarrow K^0_S\pi^+\pi^-$ $0.11$ $0.04$ $0.01$
${\mathcal Im}\left[ \frac{q}{p} \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right]$ $B^0\rightarrow K^0_S\pi^+\pi^-$ $0.16$ $0.11$ $0.02$
${\mathcal B}(K^{*+}\pi^-)$ $B^0\rightarrow K^0_S\pi^+\pi^-$ $0.69$ $0.32$ $0.09$
$\left| \frac{\overline{A}(K^{*-}\pi^+)}{A(K^{*+}\pi^-)} \right|$ $B^0\rightarrow K^+\pi^-\pi^0$ $0.06$ $0.06$ $0.01$
${\mathcal Re}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ $B^0\rightarrow K^+\pi^-\pi^0$ $0.11$ $0.11$ $0.02$
${\mathcal Im}\left[ \frac{A(K^{*0}\pi^0)}{A(K^{*+}\pi^-)} \right]$ $B^0\rightarrow K^+\pi^-\pi^0$ $0.23$ $0.23$ $0.03$
${\mathcal Re}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ $B^0\rightarrow K^+\pi^-\pi^0$ $0.10$ $0.10$ $0.01$
${\mathcal Im}\left[ \frac{\overline{A}(\overline{K}^{*0}\pi^0)}{\overline{A}(K^{*-}\pi^+)} \right]$ $B^0\rightarrow K^+\pi^-\pi^0$ $0.30$ $0.30$ $0.04$
${\mathcal B}(K^{*0}\pi^0) $ $B^0\rightarrow K^+\pi^-\pi^0$ $0.35$ $0.35$ $0.05$
$\left| \frac{\overline{A}(\overline{K}^{*0}\pi^-)}{A(K^{*0}\pi^+)} \right|$ $B^+\rightarrow K^+\pi^-\pi^+$ $0.04$ $0.005$ $0.004$
${\mathcal B}(K^{*0}\pi^+)$ $B^+\rightarrow K^+\pi^-\pi^+$ $0.81$ $0.50$ $0.11$
$\left| \frac{\overline{A}(K^{*-}\pi^0)}{A(K^{*+}\pi^0)} \right|$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.15$ $0.15$ $0.02$
${\mathcal Re}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.16$ $0.16$ $0.02$
${\mathcal Im}\left[ \frac{A(K^{*+}\pi^0)}{A(K^{*0}\pi^+)} \right]$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.30$ $0.30$ $0.04$
${\mathcal Re}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.21$ $0.21$ $0.03$
${\mathcal Im}\left[ \frac{\overline{A}(K^{*-}\pi^0)}{\overline{A}(\overline{K}^{*0}\pi^-)} \right]$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.13$ $0.13$ $0.02$
${\mathcal B}(K^{*+}\pi^0)$ $B^+\rightarrow K^0_S\pi^+\pi^0$ $0.92$ $0.92$ $0.13$
[99]{} A. J. Bevan [*et al.*]{} \[BaBar and Belle Collaborations\], Eur. Phys. J. C [**74**]{} (2014) 3026 doi:10.1140/epjc/s10052-014-3026-9 \[arXiv:1406.6311 \[hep-ex\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Eur. Phys. J. C [**73**]{} (2013) no.4, 2373 doi:10.1140/epjc/s10052-013-2373-2 \[arXiv:1208.3355 \[hep-ex\]\]. N. Cabibbo, Phys. Rev. Lett. [**10**]{} (1963) 531. doi:10.1103/PhysRevLett.10.531 M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{} (1973) 652. doi:10.1143/PTP.49.652 O. Deschamps [*et al.*]{}, work in progress.
J. Charles [*et al.*]{} \[CKMfitter Group\], Eur. Phys. J. C [**41**]{} (2005) 1 \[arXiv:hep-ph/0406184\]. Updates and numerical results on the CKMfitter group web site: `http://ckmfitter.in2p3.fr/`
J. Charles [*et al.*]{}, Phys. Rev. D [**91**]{} (2015) no.7, 073007 doi:10.1103/PhysRevD.91.073007 \[arXiv:1501.05013 \[hep-ph\]\]. P. Koppenburg and S. Descotes-Genon, arXiv:1702.08834 \[hep-ex\]. J. Charles, S. Descotes-Genon, Z. Ligeti, S. Monteil, M. Papucci and K. Trabelsi, Phys. Rev. D [**89**]{} (2014) no.3, 033016 doi:10.1103/PhysRevD.89.033016 \[arXiv:1309.2293 \[hep-ph\]\]. A. Lenz, U. Nierste, J. Charles, S. Descotes-Genon, H. Lacker, S. Monteil, V. Niess and S. T’Jampens, Phys. Rev. D [**86**]{} (2012) 033008 doi:10.1103/PhysRevD.86.033008 \[arXiv:1203.0238 \[hep-ph\]\]. A. Lenz [*et al.*]{}, Phys. Rev. D [**83**]{} (2011) 036004 doi:10.1103/PhysRevD.83.036004 \[arXiv:1008.1593 \[hep-ph\]\]. O. Deschamps, S. Descotes-Genon, S. Monteil, V. Niess, S. T’Jampens and V. Tisserand, Phys. Rev. D [**82**]{} (2010) 073012 doi:10.1103/PhysRevD.82.073012 \[arXiv:0907.5135 \[hep-ph\]\].
M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Phys. Rev. Lett. [**83**]{} (1999) 1914 doi:10.1103/PhysRevLett.83.1914 \[hep-ph/9905312\]. M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Nucl. Phys. B [**591**]{} (2000) 313 doi:10.1016/S0550-3213(00)00559-9 \[hep-ph/0006124\]. M. Beneke and M. Neubert, Nucl. Phys. B [**675**]{} (2003) 333 doi:10.1016/j.nuclphysb.2003.09.026 \[hep-ph/0308039\]. M. Beneke, J. Rohrer and D. Yang, Nucl. Phys. B [**774**]{} (2007) 64 doi:10.1016/j.nuclphysb.2007.03.020 \[hep-ph/0612290\]. H. n. Li, Phys. Rev. D [**66**]{} (2002) 094010 doi:10.1103/PhysRevD.66.094010 \[hep-ph/0102013\]. H. n. Li and K. Ukai, Phys. Lett. B [**555**]{} (2003) 197 doi:10.1016/S0370-2693(03)00049-2 \[hep-ph/0211272\]. H. n. Li, Prog. Part. Nucl. Phys. [**51**]{} (2003) 85 doi:10.1016/S0146-6410(03)90013-5 \[hep-ph/0303116\]. A. Ali, G. Kramer, Y. Li, C. D. Lu, Y. L. Shen, W. Wang and Y. M. Wang, Phys. Rev. D [**76**]{} (2007) 074018 doi:10.1103/PhysRevD.76.074018 \[hep-ph/0703162 \[HEP-PH\]\]. H. n. Li, CERN Yellow Report CERN-2014-001, pp.95-135 doi:10.5170/CERN-2014-001.95 \[arXiv:1406.7689 \[hep-ph\]\]. W. F. Wang and H. n. Li, Phys. Lett. B [**763**]{} (2016) 29 doi:10.1016/j.physletb.2016.10.026 \[arXiv:1609.04614 \[hep-ph\]\]. C. W. Bauer, D. Pirjol and I. W. Stewart, Phys. Rev. D [**67**]{} (2003) 071502 doi:10.1103/PhysRevD.67.071502 \[hep-ph/0211069\]. M. Beneke and T. Feldmann, Nucl. Phys. B [**685**]{} (2004) 249 doi:10.1016/j.nuclphysb.2004.02.033 \[hep-ph/0311335\]. C. W. Bauer, D. Pirjol, I. Z. Rothstein and I. W. Stewart, Phys. Rev. D [**70**]{} (2004) 054015 doi:10.1103/PhysRevD.70.054015 \[hep-ph/0401188\]. C. W. Bauer, I. Z. Rothstein and I. W. Stewart, Phys. Rev. D [**74**]{} (2006) 034010 doi:10.1103/PhysRevD.74.034010 \[hep-ph/0510241\]. T. Becher, A. Broggio and A. Ferroglia, Lect. Notes Phys. [**896**]{} (2015) doi:10.1007/978-3-319-14848-9 \[arXiv:1410.1892 \[hep-ph\]\]. S. Descotes-Genon and C. T. Sachrajda, Nucl. Phys. B [**625**]{} (2002) 239 doi:10.1016/S0550-3213(02)00017-2 \[hep-ph/0109260\]. M. Ciuchini, E. Franco, G. Martinelli, M. Pierini and L. Silvestrini, Phys. Lett. B [**515**]{} (2001) 33 doi:10.1016/S0370-2693(01)00700-6 \[hep-ph/0104126\]. M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Phys. Rev. D [**72**]{} (2005) 098501 doi:10.1103/PhysRevD.72.098501 \[hep-ph/0411171\]. A. V. Manohar and I. W. Stewart, Phys. Rev. D [**76**]{} (2007) 074002 doi:10.1103/PhysRevD.76.074002 \[hep-ph/0605001\]. H. n. Li and S. Mishima, Phys. Rev. D [**83**]{} (2011) 034023 doi:10.1103/PhysRevD.83.034023 \[arXiv:0901.1272 \[hep-ph\]\]. F. Feng, J. P. Ma and Q. Wang, arXiv:0901.2965 \[hep-ph\]. M. Beneke, G. Buchalla, M. Neubert and C. T. Sachrajda, Eur. Phys. J. C [**61**]{} (2009) 439 doi:10.1140/epjc/s10052-009-1028-9 \[arXiv:0902.4446 \[hep-ph\]\]. T. Becher and G. Bell, Phys. Lett. B [**713**]{} (2012) 41 doi:10.1016/j.physletb.2012.05.016 \[arXiv:1112.3907 \[hep-ph\]\]. M. Beneke, Nucl. Part. Phys. Proc. [**261-262**]{} (2015) 311 doi:10.1016/j.nuclphysbps.2015.03.021 \[arXiv:1501.07374 \[hep-ph\]\]. B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev. D [**80**]{} (2009) 112001 doi:10.1103/PhysRevD.80.112001 \[arXiv:0905.3615 \[hep-ex\]\]. B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev. D [**78**]{}, 012004 (2008) doi:10.1103/PhysRevD.78.012004 \[arXiv:0803.4451 \[hep-ex\]\]. J. P. Lees [*et al.*]{} \[ Collaboration\], Phys. Rev. D [**83**]{}, 112010 (2011) doi:10.1103/PhysRevD.83.112010 \[arXiv:1105.0125 \[hep-ex\]\]. J. P. 
Lees [*et al.*]{} \[BaBar Collaboration\], arXiv:1501.00705 \[hep-ex\]. J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**84**]{}, 092007 (2011) doi:10.1103/PhysRevD.84.092007 \[arXiv:1109.0143 \[hep-ex\]\].
A. Garmash [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett. [**96**]{} (2006) 251803 doi:10.1103/PhysRevLett.96.251803. J. Dalseno [*et al.*]{} \[Belle Collaboration\], Phys. Rev. D [**79**]{} (2009) 072004 doi:10.1103/PhysRevD.79.072004 \[arXiv:0811.3665 \[hep-ex\]\].
B. Bhattacharya, M. Gronau and J. L. Rosner, Phys. Lett. B [**726**]{} (2013) 337 doi:10.1016/j.physletb.2013.08.062 \[arXiv:1306.2625 \[hep-ph\]\]. B. Bhattacharya and D. London, JHEP [**1504**]{} (2015) 154 doi:10.1007/JHEP04(2015)154 \[arXiv:1503.00737 \[hep-ph\]\].
B. Bhattacharya, M. Gronau, M. Imbeault, D. London and J. L. Rosner, Phys. Rev. D [**89**]{} (2014) no.7, 074043 doi:10.1103/PhysRevD.89.074043 \[arXiv:1402.2909 \[hep-ph\]\], and references therein J. H. Alvarenga Nogueira [*et al.*]{}, arXiv:1605.03889 \[hep-ex\]. K. Abe [*et al.*]{} \[Belle Collaboration\], arXiv:0708.1845 \[hep-ex\]. B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**76**]{} (2007) 071101 doi:10.1103/PhysRevD.76.071101 \[hep-ex/0702010\]. L. A. Pérez Pérez, “Time-Dependent Amplitude Analysis of $B^0 \to K_s \pi^+ \pi^-$ decays with the Experiment and constraints on the [CKM]{} matrix using the $B \to K^*\pi$ and $B \to \rho K$ modes,” TEL-00379188. Y. Nir and H. R. Quinn, Phys. Rev. Lett. [**67**]{} (1991) 541. doi:10.1103/PhysRevLett.67.541 M. Gronau and J. Zupan, Phys. Rev. D [**71**]{} (2005) 074017 doi:10.1103/PhysRevD.71.074017 \[hep-ph/0502139\]. F. J. Botella and J. P. Silva, Phys. Rev. D [**71**]{} (2005) 094008 doi:10.1103/PhysRevD.71.094008 \[hep-ph/0503136\]. M. Beneke and S. Jäger, Nucl. Phys. B [**768**]{} (2007) 51 doi:10.1016/j.nuclphysb.2007.01.016 \[hep-ph/0610322\].
G. Bell and V. Pilipp, Phys. Rev. D [**80**]{} (2009) 054024 doi:10.1103/PhysRevD.80.054024 \[arXiv:0907.1016 \[hep-ph\]\]. G. Bell, M. Beneke, T. Huber and X. Q. Li, Phys. Lett. B [**750**]{} (2015) 348 doi:10.1016/j.physletb.2015.09.037 \[arXiv:1507.03700 \[hep-ph\]\].
F. J. Botella, D. London and J. P. Silva, Phys. Rev. D [**73**]{} (2006) 071501 doi:10.1103/PhysRevD.73.071501 \[hep-ph/0602060\].
M. Gronau and D. London, Phys. Rev. Lett. [**65**]{} (1990) 3381. doi:10.1103/PhysRevLett.65.3381 A. J. Buras and R. Fleischer, Eur. Phys. J. C [**11**]{} (1999) 93 doi:10.1007/s100529900201, 10.1007/s100520050617 \[hep-ph/9810260\]. M. Neubert and J. L. Rosner, Phys. Lett. B [**441**]{} (1998) 403 doi:10.1016/S0370-2693(98)01194-0 \[hep-ph/9808493\]. M. Neubert and J. L. Rosner, Phys. Rev. Lett. [**81**]{} (1998) 5076 doi:10.1103/PhysRevLett.81.5076 \[hep-ph/9809311\]. M. Gronau, Phys. Rev. Lett. [**91**]{} (2003) 139101 doi:10.1103/PhysRevLett.91.139101 \[hep-ph/0305144\]. M. Ciuchini, M. Pierini and L. Silvestrini, Phys. Rev. D [**74**]{} (2006) 051301 doi:10.1103/PhysRevD.74.051301 \[hep-ph/0601233\]. M. Gronau, D. Pirjol, A. Soni and J. Zupan, Phys. Rev. D [**75**]{} (2007) 014002 doi:10.1103/PhysRevD.75.014002 \[hep-ph/0608243\]. Y. Amhis [*et al.*]{}, arXiv:1612.07233 \[hep-ex\]. J. Charles, S. Descotes-Genon, V. Niess and L. Vale Silva, arXiv:1611.04768 \[hep-ph\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. D [**90**]{}, no. 11, 112004 (2014) doi:10.1103/PhysRevD.90.112004 \[arXiv:1408.5373 \[hep-ex\]\].
P. Urquijo, Nucl. Part. Phys. Proc. [**263-264**]{}, 15 (2015). doi:10.1016/j.nuclphysbps.2015.04.004. Belle II web page: https://www.belle2.org/
R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett. [**111**]{} (2013) 101801 \[arXiv:1306.1246 \[hep-ex\]\].
LHCb-CONF-2012-023, July 6, 2012. LHCb-CONF-2012-026, July 11, 2012. B. Aubert [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**79**]{}, (2009) 072009 doi:10.1103/PhysRevD.79.072009 \[arXiv:0902.1708 \[hep-ex\]\]. I. Adachi [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett. [**108**]{}, 171802 (2012) doi:10.1103/PhysRevLett.108.171802 \[arXiv:1201.4643 \[hep-ex\]\]. P. Ball and R. Zwicky, Phys. Rev. D [**71**]{} (2005) 014029 doi:10.1103/PhysRevD.71.014029 \[hep-ph/0412079\]. A. Bharucha, D. M. Straub and R. Zwicky, JHEP [**1608**]{} (2016) 098 doi:10.1007/JHEP08(2016)098 \[arXiv:1503.05534 \[hep-ph\]\].
V. M. Braun, D. Y. Ivanov and G. P. Korchemsky, Phys. Rev. D [**69**]{} (2004) 034014 doi:10.1103/PhysRevD.69.034014 \[hep-ph/0309330\].
S. Aoki [*et al.*]{}, Eur. Phys. J. C [**77**]{} (2017) no.2, 112 doi:10.1140/epjc/s10052-016-4509-7 \[arXiv:1607.00299 \[hep-lat\]\].
G. Bell, talk given at the workshop *Future Challenges in Non-Leptonic B Decays: Theory and Experiment*, Bad Honnef (Germany), 10-12 February 2016.
G. Bell, Nucl. Phys. B [**795**]{} (2008) 1 doi:10.1016/j.nuclphysb.2007.09.006 \[arXiv:0705.3127 \[hep-ph\]\]. G. Bell, Nucl. Phys. B [**822**]{} (2009) 172 doi:10.1016/j.nuclphysb.2009.07.012 \[arXiv:0902.1915 \[hep-ph\]\]. M. Beneke, T. Huber and X. Q. Li, Nucl. Phys. B [**832**]{} (2010) 109 doi:10.1016/j.nuclphysb.2010.02.002 \[arXiv:0911.3655 \[hep-ph\]\].
---
abstract: 'We admit that the vacuum is not empty but is filled with continuously appearing and disappearing virtual fermion pairs. We show that if we simply model the propagation of the photon in vacuum as a series of transient captures within the virtual pairs, we can derive the finite light velocity $c$ as the average delay on the photon propagation. We then show that the vacuum permittivity $\epsilon_0$ and permeability $\mu_0$ originate from the polarization and the magnetization of the virtual fermions pairs. Since the transit time of a photon is a statistical process within this model, we expect it to be fluctuating. We discuss experimental tests of this prediction. We also study vacuum saturation effects under high photon density conditions.'
address:
-
- 'LAL, Univ Paris-Sud, CNRS/IN2P3, Orsay, France.'
author:
- 'Marcel Urban[^1], François Couchot, Xavier Sarazin'
title: 'A mechanism giving a finite value to the speed of light, and some experimental consequences'
---
Introduction
============
The speed of light in vacuum $c$, the vacuum permittivity $\epsilon_0$ and the vacuum permeability $\mu_0$ are widely considered as being fundamental constants and their values, escaping any physical explanation, are commonly assumed to be invariant in space and time. In this paper, we propose a mechanism based upon a “natural” quantum vacuum description which leads to sensible estimations of these three electromagnetic constants, and we start drawing some consequences of this perspective.
The idea that the vacuum is a major partner of our world is not new. It plays a part for instance in the Lamb shift [@lamb], the variation of the fine structure constant with energy [@bare-charge], and the electron [@magnetic-moment] and muon [@Davier] anomalous magnetic moments. But these effects, coming from the so-called vacuum polarization, are second order corrections. Quantum Electrodynamics is a perturbative approach to the electromagnetic quantum vacuum. This paper is concerned with the description of the fundamental, non perturbed, vacuum state.
While we were writing this paper, Ref. [@Leuchs] proposed a similar approach to give a physical origin to $\epsilon_0$ and $\mu_0$. Although this derivation is different from the one we propose in this paper, the original idea is the same: [“The physical electromagnetic constants, whose numerical values are simply determined experimentally, could emerge naturally from the quantum theory” [@Leuchs]]{}. We do not know of any other paper proposing a direct derivation of $\epsilon_0$ and $\mu_0$ or giving a mechanism based upon the quantum vacuum leading to $c$.
The most important consequence of our model is that $c$, $\epsilon_0$ and $\mu_0$ are not fundamental constants but are observable parameters of the quantum vacuum: they can vary if the vacuum properties vary in space or in time.
The paper is organized as follows. First we describe our model of the quantum vacuum filled with virtual charged fermion pairs and we show that, by modeling the propagation of the photon in this vacuum as a series of interactions with virtual pairs, we can derive its velocity. Then we show how $\epsilon_0$ and $\mu_0$ might originate from the electric polarization and the magnetization of these virtual pairs. Finally we present two experimental consequences that could be at variance with the standard views and in particular we predict statistical fluctuations of the transit time of photons across a fixed vacuum path.
An effective description of quantum vacuum {#sec:model}
==========================================
The vacuum is assumed to be filled with virtual charged fermion pairs (particle-antiparticle). The other vacuum components are assumed not to be connected with light propagation (we do not consider intermediate bosons, nor supersymmetric particles). All known species of charged fermions are taken into account: the three families of charged leptons $e$, $\mu$ and $\tau$ and the three families of quarks ($u$, $d$), ($c$, $s$) and ($t$, $b$), including their three color states. This gives a total of $21$ pair species, noted $i$.
A virtual pair is assumed to be the product of the fusion of two virtual photons of the vacuum. Thus its total electric charge and total color are null, and we suppose also that the spins of the two fermions of a pair are antiparallel. The only quantity which is not conserved is therefore the energy and this is, of course, the reason for the limited lifetime of the pairs. We assume that first order properties can be deduced by taking the pairs to be created with an average energy, rather than integrating over the full probability density of the pair kinetic energy. Likewise, we will neglect the total momentum of the pair.
We describe this vacuum in terms of five quantities for each pair species: average energy, lifetime, density, size of the pairs and cross section with photons.
We use the notation $Q_i = q_i/e$, where $q_i$ is the modulus of the $i$-kind fermion electric charge and $e$ the modulus of the electron charge.
The average energy $W_i$ of a pair is taken proportional to its rest mass energy $2W_i^0$, where $W_i^0$ is the fermion $i$ rest mass energy: $$\begin{aligned}
\label{eq:energy}
W_i\ = K_W\ 2 W_i^0 ,\end{aligned}$$ where $K_W$ is an unknown constant, assumed to be independent from the fermion type. We take $K_W$ as a free parameter, greater than unity. The value of $K_W$ could be calculated if we knew the energy spectrum of the virtual photons together with their probability to create virtual pairs.
The pair lifetime $\tau_i$ follows from the Heisenberg uncertainty principle $(W_i\tau_i=\hbar/2)$. So $$\begin{aligned}
\label{eq:tau}
\tau_i = \frac{1}{K_W}\frac{\hbar}{4 W_i^0} .\end{aligned}$$ We assume that the virtual pair densities $N_i$ are driven by the Pauli Exclusion Principle. Two pairs containing two identical virtual fermions in the same spin state cannot show up at the same time at the same place. However at a given location we may find 21 pairs since different fermions can superpose spatially. In solid state physics the successful determination of Fermi energies [@Kittel] implies that one electron spin state occupies a hyper volume $h^3$. We assume that concerning the Pauli principle, the virtual fermions are similar to the real ones. Noting $\Delta x_i$ the spacing between identical virtual $i-$type fermions and $p_i$ their average momentum, the one dimension hyper volume is $p_i\Delta x_i$ and dividing by $h$ should give the number of states which we take as one per spin degree of freedom. The relation between $p_i$ and $\Delta x_i$ reads $p_i\Delta x_i/h = 1$, or: $$\begin{aligned}
\label{eq:deltax}
\Delta x_i=\frac{2 \pi \hbar}{p_i}\, .\end{aligned}$$
We can express $\Delta x_i$ as a function of $W_i$ if we suppose relativity to hold for the virtual pairs $$\begin{aligned}
\label{eq:dx}
\Delta x_i =\frac{2\pi \hbar c}{\sqrt{(W_i/2)^2-(W_i^0)^2}} = \frac{{2\pi\lambda_C}_i}{\sqrt{K_W^2-1}} ,\end{aligned}$$ where ${\lambda_C}_i$ is the Compton length associated to fermion $i$.
We write the density as $$\begin{aligned}
\label{eq:density}
N_i \approx \frac{1}{\Delta x_i^3} = \left(\frac{\sqrt{K_W^2-1}}{{2\pi\lambda_C}_i}\right)^3 .\end{aligned}$$
Each pair can only be produced in two fermion-antifermion spin combinations: up-down and down-up. We define $N_i$ as the density of pairs for a given spin combination. It is very sensitive to $K_W$, being zero for pairs having no internal kinetic energy.
The separation between the fermion and the antifermion in a pair is noted $\delta_i$. This parameter has to do with the physics of the virtual pairs. We assume it does not depend upon the fermion momentum. We will use the Compton wavelength of the fermion $\lambda_{C_i}$ as this scale: $$\begin{aligned}
\label{eq:compton-length}
\delta_i \approx {{\lambda}_C}_i .\end{aligned}$$
The interaction of a real photon with a virtual pair must not exchange energy or momentum with the vacuum. For instance, Compton scattering is not possible. To estimate this interaction probability, we start from the Thomson cross-section $\sigma_{Thomson}= {8 \pi}/{3}\ \alpha^2 {\lambda^2_{C}}_i$ which describes the interaction of a photon with a free electron. The factor $\alpha^2$ corresponds to the probability $\alpha$ that the photon is temporarily absorbed by the real electron times the probability $\alpha$ that the real electron releases the photon. However, in the case of the interaction of a photon with a virtual pair, the second $\alpha$ factor must be ignored since the photon is released with a probability equal to 1 as soon as the virtual pair disappears. Therefore the cross-section $\sigma$ for a real photon to interact and to be trapped by a virtual pair of fermions will be expressed as $$\begin{aligned}
\label{eq:sigma}
\sigma_i \approx \left(\frac{8 \pi}{3} \alpha\ Q_i^2 {\lambda^2_C}_i\right)\times 2 .\end{aligned}$$ The photon interacts equally with the fermion and the antifermion, which explains the factor $2$. A photon of helicity $1$ ($-1$ respectively) can interact only with a fermion or an antifermion with helicity $-1/2$ ($+1/2$ respectively) to flip temporarily its spin to helicity $+1/2$ ($-1/2$ respectively). During such a photon capture by a pair, both fermions are in the same helicity state and cannot couple to another incoming photon in the same helicity state as the first one.
Derivation of the light velocity in vacuum {#sec:speedoflight}
==========================================
We propose in this section a mechanism which leads to a **finite** speed of light. The propagation of the photon in vacuum is modeled as a series of interactions with the virtual fermions or antifermions present in the pairs. When a real photon propagates inside the vacuum, it interacts and is temporarily captured by a virtual pair during a time of the order of the lifetime $\tau_i$ of the virtual pair. As soon as the virtual pair disappears, it releases the photon to its initial energy and momentum state. The photon continues to propagate with a **bare** velocity $c_0$ which is assumed to be much greater than $c$. Then it interacts again with a virtual pair and so on. The delay on the photon propagation produced by these successive interactions implies that the velocity of light is finite.
The mean free path of the photon between two successive interactions with a $i-$type pair is: $$\begin{aligned}
\label{eq:freepath}
\Lambda_i = \frac{1}{\sigma_i N_i}\ ,\end{aligned}$$ where $\sigma_i$ is the cross-section for the photon capture by the virtual $i-$type pair and $N_i$ is the numerical density of virtual $i-$type pairs.
Travelling a distance $L$ in vacuum leads on average to $N_{stop,i}$ interactions on the $i-$kind pairs. One has: $$\begin{aligned}
\label{eq:Nstop}
N_{stop,i} = \frac{L}{\Lambda_i} = L{\sigma_i N_i}\ .\end{aligned}$$
Each kind of fermion pair contributes in reducing the speed of the photon. So, if the mean photon stop time on a $i-$type pair is $\tau_i$, the mean time $\overline{T}$ for a photon to cross a length $L$ is assumed to be: $$\begin{aligned}
\label{eq:Tbar}
\overline{T} = L/c_0 + \sum_{i}{N_{stop,i}\tau_i}\ .\end{aligned}$$
The bare velocity $c_0$ is the velocity of light in an **empty** vacuum with no virtual particles. We assume that $c_0$ is infinite, which is equivalent to say that the time does not flow in an empty vacuum (with a null zero point energy). There are no “natural” time or distance scales in such an empty vacuum, whereas the $\tau_i$ and $\Delta x_i$ scales allow to build a speed scale. So, the total delay reduces to: $$\begin{aligned}
\label{eq:Tbar2}
\overline{T} = \sum_{i}{N_{stop,i}\tau_i}\ .\end{aligned}$$ So, a photon, although propagating at the speed of light, is at any time resting on one fermion pair.
Using Eq. (\[eq:Nstop\]), we obtain the photon velocity $\tilde{c}$ as a function of three parameters of the vacuum model: $$\begin{aligned}
\label{eq:c-1}
\tilde{c} = \frac{L}{\overline{T} }= \frac{1}{\sum_{i}{\sigma_i N_i \tau_i}} .\end{aligned}$$
We notice that the cross-section $\sigma_i$ in (\[eq:sigma\]) does not depend upon the energy of the photon. It implies that the vacuum is not dispersive as it is experimentally observed. Using Eq. (\[eq:tau\]), (\[eq:density\]) and (\[eq:sigma\]), we get the final expression: $$\begin{aligned}
\label{eq:c-2}
\tilde{c} = \frac{K_W}{\left(K_W^2-1\right)^{3/2}}\ \frac{\ 6\pi^2}{\alpha \hbar \sum_{i}{Q_i^2/({\lambda_C}_iW_i^0)}} .\end{aligned}$$
${\lambda_C}_iW_i^0/\hbar$ is equal to the speed of light: $$\begin{aligned}
\label{eq:lcwi}
{{\lambda_C}_iW_i^0}/{\hbar}=\frac{\hbar}{m_i c}\ m_i c^2 \frac{1}{\hbar} = c .\end{aligned}$$ So $$\begin{aligned}
\label{eq:c-3}
\tilde{c} = \frac{K_W}{\left(K_W^2-1\right)^{3/2}}\ \frac{\ 6\pi^2}{\alpha \sum_{i}{Q_i^2}}\ c .\end{aligned}$$
The photon velocity depends only on the electrical charge units $Q_i$ of the virtual charged fermions present in vacuum. It depends neither upon their masses, nor upon the vacuum energy density.
The sum in Eq. (\[eq:c-3\]) is taken over all pair types. Within a generation the absolute values of the electric charges are 1, 2/3 and 1/3 in units of the positron charge. Thus for one generation the sum writes $(1+3 \times(4/9+1/9))$. The factor 3 is the number of colours. Each generation contributes equally, hence for the three families of the standard model: $$\begin{aligned}
\label{eq:sommeq2}
\sum_{i}{Q_i^2} = 8 .\end{aligned}$$
One obtains $$\begin{aligned}
\label{eq:c-4}
\tilde{c} = \frac{K_W}{(K_W^2-1)^{3/2}}\frac{3\pi^2} {4 \alpha}\ c .\end{aligned}$$
The calculated light velocity $\tilde{c}$ is equal to the observed value $c$ when $$\begin{aligned}
\label{eq:cadoublev}
\frac{K_W}{(K_W^2-1)^{3/2}}=\frac{4\alpha}{3\pi^2} ,\end{aligned}$$ which is obtained for $K_W \approx 31.9\,$, greater than one as required.
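Since the left-hand side of Eq. (\[eq:cadoublev\]) is monotonically decreasing for $K_W>1$, the constraint can be solved by a simple bisection; the short sketch below reproduces $K_W \approx 31.9$.

```python
import math

alpha  = 1.0 / 137.035999
target = 4.0 * alpha / (3.0 * math.pi**2)

def f(kw):
    # K_W / (K_W^2 - 1)^(3/2) - 4 alpha / (3 pi^2); monotonically decreasing for kw > 1
    return kw / (kw**2 - 1.0)**1.5 - target

lo, hi = 1.0 + 1e-9, 1.0e3   # f(lo) > 0 and f(hi) < 0 bracket the root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)

print("K_W =", 0.5 * (lo + hi))   # ~31.9
```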
The average speed of the photon in our medium being $c$, the photon propagates, on average, along the light cone. As such, the effective average speed of the photon is independent of the inertial frame as demanded by relativity. This mechanism relies on the notion of an absolute frame for the vacuum at rest. It satisfies special relativity only in the Lorentz-Poincaré sense.
Derivation of the vacuum permittivity {#sec:permittivity}
=====================================
Consider a parallel-plate capacitor with a gas inside. When the pressure of the gas decreases, the capacitance decreases too, until there are no more molecules in between the plates. The strange thing is that the capacitance is not zero when we hit the vacuum. In fact the capacitance has a very sizeable value, as if the vacuum were a usual material body. The dielectric constant of a medium comes from the existence of opposite electric charges that can be separated under the influence of an applied electric field $\vec{E}$. Furthermore the separation of the opposite charges stays finite because they are bound in a molecule. These opposite translations result in opposite charges appearing on the dielectric surfaces facing the metallic plates. This leads to a decrease of the effective charge, which implies a decrease of the voltage across the dielectric slab and finally an increase of the capacitance. In our model of the vacuum the virtual pairs are the pairs of opposite charges, and the separation stays finite because the electric field acts only during the lifetime of the pairs. In an absolutely **empty** vacuum the induced charges would be null because there would be no charges to be separated, and the capacitance of our parallel-plate capacitor would go to zero once all molecules of the gas were removed. We will see in this section that introducing our vacuum filled with virtual fermions causes electric charges to be separated and to appear at the level of $5\, 10^7$ electron charges per $m^2$ under an electric stress $E = 1\ V/m$.
We assume that every fermion-antifermion virtual pair of the $i$-kind bears a mean electric dipole $d_i$ given by: $$\begin{aligned}
\label{eq:elecdipole}
\vec{d_i} = Q_i e \vec{\delta_i} .\end{aligned}$$ where $\delta_i$ is the average size of the pairs. If no external electric field is present, the dipoles point randomly in any direction and their resulting average field is zero. We propose to give a physical interpretation of the observed vacuum permittivity $\epsilon_0$ as originating from the mean polarization of these virtual fermions pairs in presence of an external electric field $\vec{E}$. This polarization would show up due to the dipole lifetime dependence on the electrostatic coupling energy of the dipole to the field. In a field homogeneous at the $\delta_i$ scale, this energy is $d_i E \cos \theta$ where $\theta$ is the angle between the virtual dipole and the electric field $\vec{E}$. The electric field modifies the pair lifetimes according to their orientation: $$\begin{aligned}
\label{eq:taudipel}
\tau_i(\theta)= \frac{\hbar/2} {W_i - d_i E \cos \theta} .\end{aligned}$$
Since it costs less energy to produce such an elementary dipole aligned with the field, this configuration lasts a bit longer than the others, leading to an average dipole different from zero. This average dipole $\langle D_i \rangle$ is aligned with the electric field $\vec{E}$. Its value is obtained by integration over $\theta$ with a weight proportional to the pair lifetime: $$\begin{aligned}
\label{eq:D}
\langle D_i \rangle = \frac{\int_0^{\pi} d_i\ \cos\theta\ \tau_i(\theta)\ 2\pi \sin\theta\ d\theta}{\int_0^{\pi} \tau_i(\theta)\ 2\pi \sin\theta\ d\theta} .\end{aligned}$$
To first order in $E$, one gets: $$\begin{aligned}
\label{eq:polar}
\langle D_i \rangle = d_i \frac{d_i E}{3 W_i} = \frac{d_i^2}{3W_i} E .\end{aligned}$$
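One can check numerically that the lifetime-weighted average of Eq. (\[eq:D\]) indeed reduces to $d_i^2E/(3W_i)$ at first order in $E$; the sketch below uses arbitrary illustrative units with $d\,E \ll W$.

```python
import math

def average_dipole(d, W, E, n=200_000):
    """<D> = int d cos(t) tau(t) sin(t) dt / int tau(t) sin(t) dt,
    with tau(t) proportional to 1 / (W - d E cos(t)); the hbar/2 factor cancels."""
    num = den = 0.0
    for k in range(n):
        theta = (k + 0.5) * math.pi / n           # midpoint rule on [0, pi]
        tau = 1.0 / (W - d * E * math.cos(theta))
        weight = tau * math.sin(theta)
        num += d * math.cos(theta) * weight
        den += weight
    return num / den

d, W, E = 1.0, 100.0, 1.0e-3                       # illustrative values, d*E << W
print("numerical average   :", average_dipole(d, W, E))
print("first-order formula :", d * d * E / (3.0 * W))
```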
We estimate the permittivity $\tilde{\epsilon}_{0,i}$ due to $i$-type fermions using the relation $P_i=\tilde{\epsilon}_{0,i}E$, where the polarization $P_i$ is equal to the dipole density $P_i=2 N_i \langle D_i \rangle$, since the two spin combinations contribute. Thus: $$\begin{aligned}
\label{eq:epsi}
\tilde{\epsilon}_{0,i} =2 N_i \frac{\langle D_i \rangle}{E} =2 N_i \frac{d_i^2}{3W_i} =2 N_i e^2 \frac{Q_i^2 \delta_i^2}{3W_i} .\end{aligned}$$
Each species of fermions increases the induced polarization and therefore the vacuum permittivity. By summing over all pair species, one gets the estimation of the vacuum permittivity: $$\begin{aligned}
\label{eq:epsi0}
\tilde{\epsilon}_{0} = e^2 \sum_{i}{2 N_i Q_i^2 \frac{\delta_i^2}{3W_i}} .\end{aligned}$$
We can write that permittivity as a function of our units, using Eq. (\[eq:energy\]), (\[eq:density\]), (\[eq:compton-length\]) and (\[eq:lcwi\]): $$\begin{aligned}
\label{eq:epsi0bis}
\tilde{\epsilon}_{0} = \frac{(K_W^2-1)^{3/2}}{K_W}\frac{e^2}{24 \pi^3\hbar c} \sum_{i}{Q_i^2} .\end{aligned}$$
The sum is again taken over all pair types. From Eq. (\[eq:sommeq2\]) one gets: $$\begin{aligned}
\label{eq:permittivity}
\tilde{\epsilon}_0 = \frac{(K_W^2-1)^{3/2}}{K_W} \frac{e^2}{3\pi^3 \hbar c}\, .\end{aligned}$$
And, from Eq. (\[eq:cadoublev\]) one gets: $$\begin{aligned}
\label{eq:permittivity}
\tilde{\epsilon}_0 = {\left(\frac{4\alpha}{3\pi^2}\right)}^{-1} \frac{e^2}{3\pi^3 \hbar c}=\frac{e^2}{4\pi\hbar c\alpha} =8.85\, 10^{-12} F/m\end{aligned}$$
It is remarkable that Eq. (\[eq:cadoublev\]) obtained from the derivation of the speed of light leads to a calculated permittivity $\tilde{\epsilon}_0$ exactly equal to the observed value of $\epsilon_0$.
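The closed form $\tilde{\epsilon}_0 = e^2/(4\pi\hbar c\alpha)$ can be checked directly against tabulated constants; a minimal numerical sketch (rounded SI values):

```python
import math

e, hbar, c, alpha = 1.602176634e-19, 1.054571817e-34, 2.99792458e8, 7.2973525693e-3
print(e**2 / (4.0 * math.pi * hbar * c * alpha))   # ~8.854e-12 F/m
```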
Derivation of the vacuum permeability {#sec:permeability}
=====================================
The vacuum acts as a highly paramagnetic substance. When a torus of a material is energized through a winding carrying a current $I$, there is a resulting magnetic flux density $B$ which is expressed as: $$\begin{aligned}
\label{eq:mu-1}
B = \mu_0 n I + \mu_0 M .\end{aligned}$$ where $n$ is the number of turns per unit of length, $nI$ is the magnetic intensity in $A/m$ and $M$ is the corresponding magnetization induced in the material and is the sum of the induced magnetic moments divided by the corresponding volume. In an experiment where the current $I$ is kept a constant and where we lower the quantity of matter in the torus, $B$ decreases. As we remove all matter, $B$ gets to a non zero value: $B = \mu_0 n I$ showing experimentally that the vacuum is paramagnetic with a vacuum permeability $\mu_0 = 4\pi\ 10^{-7} {N/A^2}$.
We propose to give a physical interpretation to the observed vacuum permeability as originating from the magnetization of the charged virtual fermions pairs under a magnetic stress, following the same procedure as in the former section.
Each charged virtual fermion carries a magnetic moment proportional to the Bohr magneton: $$\begin{aligned}
\label{eq:magneton}
\mu_i = \frac{eQ_i\ c{\lambda_C}_i}{2} .\end{aligned}$$
Since the total spin of the pair is zero, and since fermion and antifermion have opposite charges, each pair carries twice the magnetic moment of one fermion. The coupling energy of a $i$-kind pair to an external magnetic field $\vec{B}$ is then $-2 \mu_i B \cos \theta$ where $\theta$ is the angle between the magnetic moment and the magnetic field $\vec{B}$. The pair lifetime is therefore a function of the orientation of its magnetic moment with respect to the applied magnetic field: $$\begin{aligned}
\label{eq:taumag}
\tau_i(\theta)= \frac{\hbar/2}{W_i - 2 \mu_i B \cos \theta} .\end{aligned}$$
As in the electrostatic case, pairs with a dipole moment aligned with the field last a bit longer than anti-aligned pairs. This leads to a non zero average magnetic moment $<\mathcal{M}_i>$ for the pair, aligned with the field and given, to first order in $B$, by: $$\begin{aligned}
\label{eq:magnet}
<\mathcal{M}_i> = \frac{4\mu_i^2}{3W_i} B .\end{aligned}$$
The volume magnetic moment is $M_i = {2 N_i <\mathcal{M}_i>}$, since one takes into account the two spin states per cell.
The contribution $\tilde{\mu}_{0,i}$ of the $i$-type fermions to the vacuum permeability is given by $ B=\tilde{\mu}_{0,i}M_i $ or ${1}/{\tilde{\mu}_{0,i}}={M_i}/{B}$.
This leads to the estimation of the vacuum permeability $$\begin{aligned}
\label{eq:permeability-1}
\frac{1}{\tilde{\mu}_0}=\sum_{i}{\frac{M_i}{B}} = \sum_{i}{\frac{8 N_i\mu_i^2}{3W_i}}= c^2 e^2\sum_{i}{\frac{2 N_iQ_i^2{\lambda^2_C}_i}{3W_i}} .\end{aligned}$$
Using Eq. (\[eq:energy\]), (\[eq:density\]) and (\[eq:lcwi\]) and summing over all pair types, one obtains $$\begin{aligned}
\label{eq:permeability-3}
\tilde{\mu}_0 = \frac{K_W}{(K_W^2-1)^{3/2}} \frac{24\pi^3\hbar}{c\,e^2 \sum_{i}{Q_i^2}}= \frac{K_W}{(K_W^2-1)^{3/2}} \frac{3\pi^3 \hbar}{ c\,e^2}\, .\end{aligned}$$
Using the $K_W$ value constrained by the calculus of $c$ (\[eq:cadoublev\]), we end up with: $$\begin{aligned}
\label{eq:permeability-4}
\tilde{\mu}_0 = \frac{4\pi\alpha}{3} \frac{3\ \hbar}{c\,e^2} =\frac{4\pi\alpha\hbar}{c\,e^2} = 4\pi 10^{-7}N/A^2 .\end{aligned}$$
It is again remarkable that Eq. (\[eq:cadoublev\]) obtained from the derivation of the speed of light leads to a calculated permeability $\tilde{\mu}_0$ equal to the right $\mu_0$ value.
We notice that the permeability and the permittivity do not depend upon the masses of the fermions, as in Ref. [@Leuchs]. The electric charges and the number of species are the only important parameters. This is at variance with the common idea that the energy density of the vacuum is the dominant factor [@Latorre].
This expression, combined with the expression (\[eq:permittivity\]) of the calculated permittivity, verifies the Maxwell relation, typical of wave propagation, $ \tilde{\epsilon}_0 \tilde{\mu}_0 = 1/c^2$, although our mechanism for a finite $c$ is purely corpuscular.
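Likewise, the closed form $\tilde{\mu}_0 = 4\pi\alpha\hbar/(c\,e^2)$ and the Maxwell relation can be checked numerically; a minimal sketch:

```python
import math

e, hbar, c, alpha = 1.602176634e-19, 1.054571817e-34, 2.99792458e8, 7.2973525693e-3

mu0_tilde  = 4.0 * math.pi * alpha * hbar / (c * e**2)
eps0_tilde = e**2 / (4.0 * math.pi * hbar * c * alpha)

print(mu0_tilde)                        # ~1.2566e-6 N/A^2, i.e. ~4 pi 10^-7
print(eps0_tilde * mu0_tilde * c**2)    # ~1, i.e. eps0 mu0 = 1/c^2
```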
A generalized model
====================
We have shown that our model of vacuum leads to coherent calculated values of $c$, $\epsilon_0$ and $\mu_0$ equal to the observed values if we assume that the virtual fermion pairs are produced with an average energy which is about 30 times their rest mass.
This solution corresponds to some **natural** hypotheses for the density and the size of the virtual pair, the cross-section with real photons and their capture time. We can generalize the model by introducing the free parameters $K_N$, $K_{\delta}$, $K_{\sigma}$ and $K_\tau$ in the expressions (\[eq:density\]), (\[eq:compton-length\]), (\[eq:sigma\]) and (\[eq:Tbar\]) of the physical quantities: $$\begin{aligned}
\label{eq:1}
N_i = K_N \left( \frac{\sqrt{K_W^2-1}}{2\pi{\lambda_C}_i} \right)^3\end{aligned}$$ $$\begin{aligned}
\label{eq:2}
\delta_i = K_{\delta}\, {\lambda_C}_i\end{aligned}$$ $$\begin{aligned}
\label{eq:3}
\sigma_i = K_{\sigma} \frac{16 \pi}{3} \alpha\, Q_i^2 {\lambda^2_C}_i\end{aligned}$$ $$\begin{aligned}
\label{eq:4}
\overline{T} = \sum_{i}{N_{stop,i}K_\tau\tau_i}\ .\end{aligned}$$ $K_N$, $K_{\delta}$, $K_{\sigma}$ and $K_\tau$ are assumed to be universal factors independent of the fermion species. Their values are expected to stay close to $1$. Among the model parameters, $K_W$ is the only unconstrained unknown.
The general solutions of the calculation of $c$ (\[eq:c-4\]), $\epsilon_0$ (\[eq:permittivity\]) and $\mu_0$ (\[eq:permeability-4\]) read now: $$\begin{aligned}
\label{eq:5}
\tilde{c} = \frac{1}{K_N K_{\sigma}K_\tau } \frac{K_W}{(K_W^2-1)^{3/2}}\frac{3\pi^2} {4 \alpha}\ c\, ,\end{aligned}$$ $$\begin{aligned}
\label{eq:6}
\tilde{\epsilon}_0 = K_N K_{\delta}^2 \frac{(K_W^2-1)^{3/2}}{K_W} \frac{e^2}{3\pi^3 \hbar c}\, ,\end{aligned}$$ $$\begin{aligned}
\label{eq:7}
\tilde{\mu}_0 = \frac{1}{K_N} \frac{K_W}{(K_W^2-1)^{3/2}} \frac{3\pi^3 \hbar}{c\,e^2}\, ,\end{aligned}$$ from which one can get, for instance:
- first $\tilde{\epsilon}_0\tilde{\mu}_0=K_{\delta}^2/c^2$ which implies $K_{\delta} = 1$ ,
- then either $\tilde{\epsilon}_0$ or $\tilde{\mu}_0$ fixes $\frac{1}{K_N} \frac{K_W}{(K_W^2-1)^{3/2}} = \frac{4 \alpha}{3\pi^2}\ $
- which applied to $\tilde{c}$ gives $K_{\sigma}K_\tau = 1$.
So, the model parameters satisfy: $$\begin{aligned}
\label{eq:9}
K_{\delta} = 1\ , K_{\sigma}K_\tau = 1\ ,
\frac{1}{K_N} \frac{K_W}{(K_W^2-1)^{3/2}} = \frac{4\alpha}{3\pi^2}\ .
\end{aligned}$$
$K_{\delta}$ is precisely constrained to its first guess value. More relations or observables are required to extract the other quantities and check this vacuum model. A measurement of the expected fluctuations of the speed of light, and a measurement of speed of light variations with light intensity, as discussed in the following sections, could bring such relations.
Transit time fluctuations {#sec:prediction transit}
=========================
Prediction
----------
Quantum gravity theories that include stochastic fluctuations of the metric of compactified dimensions predict a fluctuation $\sigma_t$ of the propagation time of photons [@Yu-Ford]. However, the observable effects are expected to be too small to be experimentally tested. It has also been recently predicted that the non commutative geometry at the Planck scale should produce a spatially coherent space-time jitter [@Hogan].
In our model we also expect fluctuations of the speed of light $c$. Indeed in the mechanism proposed here $c$ is due to the effect of successive interactions and transient captures of the photon with the virtual particles in the vacuum. Thus statistical fluctuations of $c$ are expected, due to the statistical fluctuations of the number of interactions $N_{stop}$ of the photon with the virtual pairs and to the capture time fluctuations.
The propagation time of a photon which crosses a distance $L$ of vacuum is $$\begin{aligned}
\label{eq:transit}
t = \sum_{i,k}{ t_{i,k}} ,\end{aligned}$$ where $t_{i,k}$ is the duration of the $k^{th}$ interaction on an $i$-kind pair. As in section \[sec:speedoflight\], let $ N_{stop,i}$ be the mean number of such interactions. The variance of $t$ due to the statistical fluctuations of $N_{stop,i}$ is: $$\begin{aligned}
\label{eq:sig1}
\sigma_{t,N}^2 = \sum_{i}{N_{stop,i} K_\tau^2\tau_i^2} .\end{aligned}$$ The photon may arrive on a virtual pair any time between its birth and its death. If we assume a flat probability distribution between $0$ and $\tau_i$, the mean value of $t_{i,k}$ is $\tau_i/2$, so one has $K_\tau=1/2$. The variance of the stop time is $(K_\tau\tau_i)^2/3$: $$\begin{aligned}
\label{eq:sig2}
\sigma_{t,\tau}^2 = \sum_{i}{N_{stop,i} \frac{(K_\tau\tau_i)^2}{3}} .\end{aligned}$$ Then $$\begin{aligned}
\label{eq:fluctu}
\sigma_t^2 = \sum_{i}{N_{stop,i} (K_\tau\tau_i)^2(1+\frac{1}{3}})= \frac{4\,K_\tau^2}{3}\sum_{i}{N_{stop,i} \tau_i^2}.\end{aligned}$$ And, using Eq. (\[eq:Nstop\]): $$\begin{aligned}
\label{eq:fluctuation-0}
\sigma_t^2 = \frac{4 K_\tau^2L}{3} \sum_{i}{\sigma_i N_i \tau_i^2} . \end{aligned}$$ Once reduced, the current term of the sum is proportional to ${\lambda_C}_i$. Therefore the fluctuations of the propagation time are dominated by virtual $e^+e^-$ pairs. Neglecting the other fermion species, and using $\sigma_e N_eK_\tau\tau_e=1/(8c)$, one gets : $$\begin{aligned}
\label{eq:formulesigmat2}
\sigma_t^2 = \frac{K_\tau\tau_eL}{6c}= \frac{K_\tau{\lambda_C}_eL}{24 K_W c^2} .\end{aligned}$$ So $$\begin{aligned}
\label{eq:fluctuation}
{\sigma_t} = \sqrt{\frac{L}{c}}\sqrt{\frac{{\lambda_C}_e}{c}}\sqrt{\frac{K_\tau}{K_W}}\frac{1}{\sqrt{24}} .\end{aligned}$$
For the simple solution of the vacuum model where $K_W=31.9$ and $K_\tau=1/2$ the predicted fluctuation is: $$\begin{aligned}
\label{eq:fluctuation-2}
\sigma_t \approx 53 \ as\ m^{-1/2} \sqrt{L(m)} .\end{aligned}$$ This corresponds, for instance for a $1\ m$ long travel, to an average of $8\, 10^{13}$ stops by $e^+e^-$ pairs, during which the photon stays on average $5\, 10^{-24} s$ (it spends $7/8$ of its time trapped on pairs of the other species). Fluctuations of both quantities lead to this $50\ as$ expected dispersion on the photon transit time, which represents a $1.5\,10^{-8}$ relative fluctuation over a meter.
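Equation (\[eq:fluctuation\]) is straightforward to evaluate; the sketch below reproduces the $\sim 53\ as\ m^{-1/2}$ figure and the average stop numbers quoted above for the same assumptions $K_W=31.9$ and $K_\tau=1/2$, the stop rate per unit length following from $\sigma_e N_e K_\tau\tau_e = 1/(8c)$.

```python
import math

c      = 2.99792458e8        # m/s
lam_Ce = 3.8615926796e-13    # reduced Compton wavelength of the electron, m
K_W, K_tau = 31.9, 0.5

def sigma_t(L):
    """Predicted transit-time dispersion (s) over a vacuum path of length L (m), Eq. (fluctuation)."""
    return math.sqrt(L / c) * math.sqrt(lam_Ce / c) * math.sqrt(K_tau / K_W) / math.sqrt(24.0)

tau_e       = lam_Ce / (4.0 * K_W * c)        # e+e- pair lifetime, Eq. (tau)
stops_per_m = K_W / (2.0 * K_tau * lam_Ce)    # from sigma_e N_e K_tau tau_e = 1/(8c)

print(sigma_t(1.0))          # ~5.3e-17 s, i.e. ~53 as over 1 m
print(stops_per_m)           # ~8e13 stops on e+e- pairs per metre
print(K_tau * tau_e)         # ~5e-24 s average stop duration
```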
This prediction must be modulated by the remaining degree of freedom on $K_N$ or $K_W$, but the mechanism would lose its physical basis if $\sigma_t$ did not have that order of magnitude.
A positive measurement of $\sigma_t$, apart from being a true revolution, would tighten our understanding of the fundamental constants in the vacuum, by fixing the ratio $K_\tau/K_W$.
The experimental way to test fluctuations is to measure a possible time broadening of a light pulse travelling a distance $L$ of vacuum. This may be done using observations of brief astrophysical events, or dedicated laboratory experiments.
Constraints from astrophysical observations
-------------------------------------------
The very bright GRB 090510, detected by the Fermi Gamma-ray Space Telescope [@Abdo] at the MeV and GeV energy scales, presents short spikes in the $8~keV - 5~MeV$ energy range, with the narrowest widths of the order of $10\,ms$. Observation of the optical afterglow a few days later by ground-based spectroscopic telescopes gives a common redshift of $z = 0.9$. This corresponds to a distance, using standard cosmological parameters, of about $2\, 10^{26} m$. Translated into our model, this sets a limit of about $0.7\, fs\, m^{-1/2}$ on $c$ fluctuations. It is important to notice that there is no expected dispersion of the bursts in the interstellar medium at this energy scale.
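The quoted limit simply follows from dividing the narrowest observed spike width by the square root of the propagation distance; a one-line check using the numbers above:

```python
import math

spike_width = 10e-3     # s, narrowest GRB 090510 spikes
distance    = 2e26      # m, distance quoted for z = 0.9
print(spike_width / math.sqrt(distance))   # ~7e-16 s m^-1/2, i.e. ~0.7 fs m^-1/2
```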
If we move six orders of magnitude down in distance we arrive at the kpc scale and at pulsars. Short microbursts contained in main pulses from the Crab pulsar have recently been observed with the Arecibo Observatory telescope at 5 GHz [@Crab-pulsar-2010]. The frequency-dependent delay caused by dispersive propagation through the interstellar plasma is corrected using a coherent dispersion removal technique. The mean time width of these microbursts after dedispersion is about 1 $\mu$s, much larger than the expected broadening caused by interstellar scattering. If this unexplained broadening were not correlated with the emission properties, it could come from $c$ fluctuations of about $0.2\, fs\, m^{-1/2}$.
In these observations of the Crab pulsar, some very sporadic pulses with a duration of less than $1 ns$ have been observed at 9 GHz [@Crab-pulsar-2007]. This is 3 orders of magnitude smaller than the usual pulses. These nanoshots can occasionally be extremely intense, exceeding $2\, MJy$, and have an unresolved duration of less than $0.4\, ns$ which corresponds to a light-travel size $c\delta t \approx 12\, cm$. From this the implied brightness temperature is $2\ 10^{41} K$. Alternatively we might assume the emitting structure is moving outward with a Lorentz factor $\gamma_b \approx 10^2 - 10^3$. In that case, the size estimate increases to $10^3 - 10^5\, cm$, and the brightness temperature decreases to $10^{35} - 10^{37}\, K$. We recall that the Compton temperature is $10^{12}\, K$ and that the Planck temperature is $10^{32}\, K$ so the phenomenon, if real, would be way beyond known physics. We emphasize also two features. Firstly, these nanoshots are contained in a single time bin (2 ns at 5 GHz and 0.4 ns at 9 GHz) corresponding to a time width less than $2/\sqrt{12} \approx 0.6\, ns$ at $5~\,GHz$ and $0.4/\sqrt{12} \approx 0.1\,ns$ at $9~\,GHz$, below the expected broadening caused by interstellar scattering. Secondly, their frequency distributions appear to be almost monoenergetic and very unusual, since the shorter the pulse the narrower its reconstructed energy spectrum.
Constraints from Earth bound experiments
----------------------------------------
The very fact that the predicted statistical fluctuations should grow like the square root of the distance implies the exciting idea that experiments on Earth can compete with astrophysical constraints: going from the kpc scale down to a few hundred meters, that is a distance reduction by a factor of $10^{17}$, we still expect fluctuations in the $fs$ range.
An experimental setup using femtosecond laser pulses sent into a rather long multi-pass cavity equipped with metallic mirrors could be able to detect such a phenomenon.
Attosecond laser pulse generation and characterization might, by itself, already allow setting the best limit on $\sigma_t$. This limit would be of the order of our predicted value in the simplest version of the model [@atto-1]. However, for the time being, an unambiguous measurement of the pulse time spread is available only for $fs$ pulses, through the correlation of the short pulses in a non-linear crystal.
Modification of the light speed in extremely intense light pulses {#sec:vacmod}
=================================================================
The vacuum, considered as a peculiar medium, should be able to undergo changes. This is suspected to be the case in the Casimir effect which predicts a pressure to be present between electrically neutral conducting surfaces [@Casimir]. This force has been observed in the last decade by several experiments [@Casimir-exp-1] and is interpreted as arising from the modification of the zero-point energies of the vacuum due to the presence of material boundaries.
The vacuum can also be seen as the triggering actor in the spontaneous decay of excited atomic states through a virtual photon stimulating the emission [@Purcell]. In that particular case experimentalists were able to change the vacuum, producing a huge increase [@Goy] or a decrease [@Hulet] of the spontaneous emission rate by a modification of the virtual photon density.
This model predicts that the local vacuum is also modified by a light beam because of photon capture by virtual pairs, which in a sense pumps the vacuum.
Let us apply the mechanism exposed in section \[sec:speedoflight\] to the propagation of a pulse when photon densities are not negligibly small compared to the $e^+e^-$ pair density.
If the pulse is fully circularly polarized, all its photons bear the same helicity. So, a photon caught by a pair makes that pair transparent to the other incoming photons, until it jumps onto another one.
If the photon density is $N_\gamma$, the fraction of $i-$type species masked this way is, to first order in $N_\gamma$, equal to: $$\begin{aligned}
\label{eq:masked}
\Delta N_i/N_i=N_\gamma K_\sigma \sigma_i c K_\tau\tau_i .\end{aligned}$$ So, from (\[eq:9\]) $$\begin{aligned}
\label{eq:masked-2}
\Delta N_i=N_\gamma N_i \sigma_i c\tau_i .\end{aligned}$$ The remaining densities available to interact with photons are $N_i-\Delta N_i$. So the speed of light is given by : $$\begin{aligned}
\label{eq:cstar}
\tilde{c}^* = \frac{1}{\sum_{i}{\sigma_i (N_i-\Delta N_i)\tau_i}} =\frac{1}{\sum_{i}{\sigma_i N_i
\tau_i(1-N_\gamma\sigma_i c\tau_i )}} .\end{aligned}$$
$\sigma_i \tau_i$ being proportional to ${{\lambda^3_C}_i}$, we keep only the $e^+e^-$ contribution in the corrective term. Using (\[eq:c-1\]), we obtain $$\begin{aligned}
\label{eq:cstar-2}
\tilde{c}^* =\frac{ c}{1- N_\gamma N_e\sigma_e^2 c^2\tau_e^2}\, .\end{aligned}$$
Noticing that $N_e\sigma_e \tau_e = 1/8c$, this reduces to: $$\begin{aligned}
\label{eq:cstar-3}
\tilde{c}^* =\frac{ c}{1-N_\gamma/(64 N_e)}\, .\end{aligned}$$
So, one ends up with: $$\begin{aligned}
\label{dcc}
\frac{\delta{c}}{c} =\frac{N_\gamma}{64 N_e}\, ,\end{aligned}$$ which shows that $c$ would be an increasing function of the photon densities. This anti Kerr effect is directly related to the $e^+e^-$ pair density. One can express it as a function of $K_W$, using (\[eq:1\]) and (\[eq:9\]): $$\begin{aligned}
\label{dcckw}
\frac{\delta{c}}{c} =\frac{N_\gamma\pi\alpha{{\lambda^3_C}_e}}{6K_W}\, .\end{aligned}$$
This prediction could in principle be tested in a dedicated laboratory experiment where a very intense pump pulse would be used to stress the vacuum and change the transit time of a weak probe pulse going in the same direction and having the same circular polarization (or going in the opposite direction with the opposite polarization). Other helicity combinations would give no effect on the transit time.
Let us convert Eq. \[dcckw\] into numbers. Using $K_W=31.9$, $N_e$ amounts to: $$\begin{aligned}
\label{eq:densnum-e+e-}
N_e \approx 2\ 10^{39}\ e^+e^-/m^3 \,.\end{aligned}$$
Now, the photon density in a pulse of power $P$, frequency $\omega$ and section $S$ is: $$\begin{aligned}
\label{eq:densphot}
N_\gamma = \frac{P}{\hbar\omega S c} = \frac{P\lambda}{2\pi\hbar S c^2}\,.\end{aligned}$$ A petawatt source at $\lambda\approx 0.5\,\mu m$, such as the one of Ref. [@Bayramian], makes it possible to reach focused irradiances of the order of $P/S=10^{23}\,W/cm^2$ in a volume of a few $\lambda^3$. This means photon densities in the range: $$\begin{aligned}
N_\gamma = \frac{10^{27}\times 0.5\, 10^{-6}}{6.6\, 10^{-34}\times 9\, 10^{16}}\approx 10^{37}\ \gamma/ m^{3} \,,\label{eq:densphot-num}\end{aligned}$$ leading to: $$\begin{aligned}
\label{eq:ratio}
N_\gamma/N_e \approx 5\,10^{-3}\,,\end{aligned}$$ and a relative effect on $c$ of $8\,10^{-5}$ over a short length. This would affect the transit time of a probe pulse focused through the same volume. Creating a $1\ fs$ advance on that probe pulse would need a common travel with such a pump pulse over a distance of $4\ mm$. This seems a difficult challenge, but we think that this matter deserves a specific study to build a real proposal for testing this prediction.
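The figures above are easy to reproduce; the sketch below takes the pair density directly from Eq. (\[eq:densnum-e+e-\]) rather than recomputing it from $K_W$, and all other numbers follow from Eqs. (\[dcc\]) and (\[eq:densphot\]).

```python
import math

hbar, c = 1.055e-34, 2.998e8     # SI units

N_e = 2e39                       # virtual e+e- pair density, Eq. (densnum-e+e-)

# Photon density of a focused petawatt pulse, Eq. (densphot)
P_over_S = 1e27                  # W/m^2, i.e. 10^23 W/cm^2
lam      = 0.5e-6                # m
N_gamma  = P_over_S * lam / (2 * math.pi * hbar * c**2)

dc_over_c = (N_gamma / N_e) / 64.0      # anti-Kerr effect, Eq. (dcc)
L_1fs     = c * 1e-15 / dc_over_c       # common travel needed for a 1 fs advance of the probe

print(f"N_gamma      = {N_gamma:.1e} photons/m^3")
print(f"N_gamma/N_e  = {N_gamma / N_e:.1e}")
print(f"delta c / c  = {dc_over_c:.1e}")
print(f"L for 1 fs   = {L_1fs * 1e3:.1f} mm")
```

The small differences with the rounded values quoted above simply come from keeping $N_\gamma \approx 8\,10^{36}\ m^{-3}$ instead of rounding it to $10^{37}\ m^{-3}$.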
Conclusions
===========
We describe the ground state of the unperturbed vacuum as containing a finite density of charged virtual fermions. Within this framework, the finite speed of light is due to successive transient captures of the photon by these virtual particles. $\epsilon_0$ and $\mu_0$ also originate simply from the electric polarization and from the magnetization of these virtual particles when the vacuum is stressed by an electrostatic or a magnetostatic field, respectively. Our calculated values for $c$, $\epsilon_0$ and $\mu_0$ are equal to the measured values when the virtual fermion pairs are produced with an average energy of about $30$ times their rest mass. This model is self-consistent and it proposes a quantum origin for the three electromagnetic constants. The propagation of a photon being a statistical process, we predict fluctuations of the speed of light. It is shown that this could be within the grasp of present-day experimental techniques and we plan to assemble such an experiment. Another prediction is light propagation faster than $c$ in a high-density photon beam.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors thank N. Bhat, J.P. Chambaret, I. Cognard, J. Haïssinski, P. Indelicato, J. Kaplan, P. Wolf and F. Zomer for fruitful discussions, and J. Degert, E. Freysz, J. Oberlé and M. Tondusson for their collaboration on the experimental aspects. This work has benefited from GRAM[^2] funding.
[00]{} W.E. Lamb and R.C. Retherford, Phys. Rev. 72 (1947) 241-243. I. Levine et al., Phys. Rev. Lett. 78 (1997) 424-427. J. Schwinger, Phys. Rev. 73 (1948) 416-417. M. Davier, A. Hoecker, B. Malaescu and Z. Zhang, Eur.Phys.J. C71 (2011) 1515. Ch. Kittel, Elementary Solid State Physics, John Wiley & Sons (1962) 120. G. Leuchs, A.S. Villar and L.L. Sanchez-Soto, Appl. Phys. B 100 (2010) 9-13. Yu H. and Ford L.H., Phys. Lett. B 496 (2000) 107-112. C. J. Hogan, ArXiv.org 1002.4880 (2011). Abdo A.A. et al., Nature 462 (2009) 331-334. Crossley J.H. et al., The Astrophysical Journal 722 (2010) 1908-1920. Hankins T.H. and Eilek J.A., The Astrophysical Journal 670 (2007) 693-701. H. Casimir, Phys. Rev., 73 (1948) 360. S.K. Lamoreaux, Rep. Prog. Phys. 68 (2005) 201. E.M. Purcell, Phys. Rev. 69 (1946) 681. P. Goy, J.M. Raymond, M. Gross and S. Haroche Phys. Rev. Lett. 50 (1983) 1903. R. G. Hulet, E.S. Hilfer and D. Kleppner, Phys. Rev. Lett. 55 (1985) 2137. J. I. Latorre et al., Nuclear Physics B437 (1995) 60. E. Goulielmakis et al., Science 320, 1614 (2008). A. Bayramian et al., JOSA B 25 - Issue 7 (2008) B57-B61.
[^1]: urban@lal.in2p3.fr
[^2]: CNRS INSU/INP program with CNES & ONERA participations (Action Spécifique “Gravitation, Références, Astronomie, Métrologie”)
---
abstract: 'We study the process of single ionization of Li in collisions with H$^+$ and O$^{8+}$ projectile ions at 6 MeV and 1.5-MeV/amu impact energies, respectively. Using the frameworks of the independent-electron model and the impact parameter picture, fully (FDCS) and doubly (DDCS) differential cross sections are evaluated in the continuum distorted-wave with eikonal initial-state approximation. Comparisons are made with the recent measurements of LaForge *et al* \[J. Phys. B **46** 031001 (2013)\] for the DDCS and Hubele *et al* \[Phys. Rev. Lett. **110** 133201 (2013)\] for the FDCS, respectively. For O$^{8+}$ impact, inclusion of the heavy-particle (NN) interaction in the calculations is crucial, and effects of polarization due to the presence of the projectile ion must also be taken into account to obtain very good agreement with the measured data. Our calculation reproduces the satellite peak structure seen in the FDCS for the Li(2s) measurement, which we explain as being formed by a combination of the binary and NN interactions.'
author:
- 'L. Gulyás'
- 'S. Egri'
- 'T. Kirchner'
bibliography:
- '/home/gulyasl/TEX/cikkek/BIB/correlation.bib'
- '/home/gulyasl/TEX/cikkek/BIB/Li-FDCS.bib'
title: 'Differential cross sections for single ionization of Li in collisions with fast protons and O$^{8+}$ ions'
---
Introduction
============
The study of single and multiple ionization of simple atoms by fast bare ion impact provides an excellent opportunity to explore mechanisms leading to the break-up of few-body Coulomb systems [@sto97; @M97; @COLTRIM]. In the last few decades, thanks to the development of cold-target recoil-ion momentum spectroscopy (COLTRIMS) [@U03], there have been very intense efforts to explore the different mechanisms in fine detail, see [@schulz05; @schulz09; @Shoffler09] and references therein. The very recent implementation of laser cooling in a magneto-optical trap combined with a reaction microscope (MOTReMi) has opened up the possibility of studying collision processes with state-prepared target atoms [@Fischer2012].
Studying simple collision systems has the advantage that complete kinematic information on the processes can be obtained experimentally. Fully differential cross sections can be determined, whose interpretation offers a real challenge for theoretical modelling. In recent years, intense discussions have been generated e.g. on the role of the nucleus-nucleus (NN) interaction or on projectile coherence effects which remain hidden in most of the less differential measurements [@Schulz07; @Egodapitiya11; @Sharma14]. The decisive role of the NN interaction has also been demonstrated in a recent kinematically complete experiment for single ionization in an initial-state selective study of O$^{8+}$-Li(2s),Li(2p) collisions [@Hubele2013; @LaForge2013]. Significant initial state dependence has been reported for the doubly differential cross section as a function of electron energy and transverse momentum transfer. The experimental data were confronted with predictions from continuum distorted wave with eikonal initial state (CDW-EIS) calculations and, surprisingly, a classical description of the NN interaction provided the best agreement. In subsequent theoretical studies based on the close-coupling approach (CC) [@Ciappina13] and the coupled-pseudostate approximation (CP) [@WW2014] noticeable effects of the NN interaction have been confirmed and reasonable agreement between experiment and theory was concluded.
Distortion or polarization of an atomic electron orbital by other target electron(s) or by the incident charged particle is one of the most difficult tasks to deal with in the theoretical descriptions. The problem can be addressed in the distorted wave formalism [@crorev92] or in terms of a polarization potential [@Joachain75]. To describe the polarization potential, several functional forms have been suggested [@Nakanishi86; @Zhang1992; @Mitroy2010]. All of them have the $V_{pol}\approx -\alpha / 2r^4$ asymptotic behaviour at large distances $r$ from the target, where $\alpha$ is the dipole polarizability constant. However, at shorter distances, the exact form of the potential is not available, which is the main reason for the existence of several analytical expressions. They are usually obtained by fits to experimental data or by using some reasonable assumption on the behaviour of the potential at short distances. A large number of theoretical studies have been devoted to selecting and testing polarization potentials in the area of electron-atom scattering, see e.g. [@Nakanishi86; @Zhang1992; @Mitroy2010; @Yurova2014] and references therein. However, little is known about the effects of $V_{pol}$ in heavy particle collisions, especially at high projectile energies.
In this contribution we apply the CDW-EIS method [@GFS95; @gul08] to calculate the differential cross sections for single ionization of Li in collisions with 6 MeV H$^+$ and 1.5 MeV/amu O$^{8+}$ projectile ions and discuss results in comparison with experimental data of LaForge *et al* [@LaForge2013] and Hubele *et al.* [@Hubele2013] and with available theoretical results. Using the independent electron picture, the one-electron transition amplitudes are determined in the CDW-EIS model. Once the impact-parameter-dependent transition amplitudes are available, the effects of the NN interaction are taken into account by a phase factor. No exact form of this factor is available; however, different assumptions imposed on it proved to be useful for exploring the mechanisms of the underlying processes. Special attention is paid to the role of $V_{pol}$. Different analytical forms are applied and a characteristic role of $V_{pol}$ is found at large momentum transfer values.
The article is organized as follows. In Sec. \[sec:theory\] we summarize the main points of our theoretical description. In Sec. \[sec:res.\] the results are discussed. A summarizing discussion is provided in Sec. \[sec:conc.\]. Atomic units characterized by $\hbar = m_e = e = 4\pi \epsilon_0 = 1$ are used unless otherwise stated.
Theory {#sec:theory}
======
In order to reduce the very challenging many-electron treatment to a much simpler one-electron description, we consider only one electron in Li as active over the course of the collision while the others remain bound in their initial state. The non-active electrons are taken into account by an effective potential $V_{\mathrm{Li}}$ that represents the interactions in the (1s$^2$2s) ground state configuration. This potential is obtained from the exchange-only version of the optimized potential method (OPM) of density functional theory, i.e. it includes the electron-nucleus Coulomb interaction, screening, and exchange terms exactly and exhibits the correct -1/r$_T$ behaviour, but it neglects electron correlation ($\mathrm{r}_T$ denotes the position vector of the electron relative to the target nucleus) [@Engel-opm]. The above assumption on the description of the target is the essential point in the application of the independent electron model (IEM), where electrons are considered to evolve independently, and it makes it possible to reduce the treatment of a many-electron collision problem to a three-body system [@M97].
In the following we consider a three-body collision, where a bare projectile ionises a target initially consisting of a bound system of an electron and a core represented by the $V_{\mathrm{Li}}$ interaction potential. Furthermore, we apply the impact parameter method, where the projectile follows a straight-line trajectory ${\mathbf{R}}={\boldsymbol{\rho}}+ {\mathbf{v}}t$, characterized by the constant velocity ${\mathbf{v}}$ and the impact parameter ${\boldsymbol{\rho}}\equiv (\rho,\varphi_\rho)$ [@mcd70]. The one-electron Hamiltonian has the form $$h(t)= -\frac{1}{2} \Delta_{\mathbf{r}_T} + V_{\mathrm{Li}}(|\mathbf{r}_T|) - \frac{Z_P}{|\mathbf{r}_P|},
\label{el-ham}$$ where $\mathbf{r}_P$ denotes the position vector of the electron relative to the projectile nucleus having nuclear charge $Z_P$. The single particle scattering equation is solved within the framework of the CDW-EIS approximation, where unperturbed atomic orbitals in both the incoming and outgoing channels have been evaluated on the same $V_{\mathrm{Li}}$ potential, see refs. [@crorev92; @GFS95; @fain96; @gul08] for more details. Here we note that a similar formalism, including transition amplitudes from a basis generator method calculation, was used in our recent study of excitation and ionization in the 1.5-MeV/amu O$^{8+}$ - Li collision system [@gul14].
The doubly differential cross section (DDCS) differential in energy of the emitted electron $E_e$ (=$k_e^2/2$; ${\mathbf{k}}_e \equiv (k_e,\theta_e, \varphi_e)$ is the electron momentum) and in the transverse component (${\boldsymbol{\eta}}\equiv (\eta, \varphi_{\eta})$) of the projectile’s momentum transfer ${\mathbf{q}}= {\mathbf{k}}_i - {\mathbf{k}}_f = -{\boldsymbol{\eta}}+ \Delta E /v$ is given as $$ \frac{ \mathrm{d} \sigma^{2}}{ \mathrm{d} E_e \mathrm{d}\eta} = k_e \eta
\int_{-1}^{+1} \mathrm{d} (\cos \theta_e) \int_{0}^{2\pi} \mathrm{d} \varphi_e
\int_{0}^{2\pi} \mathrm{d} \varphi_{\eta}
|\mathcal{R}_{i {\mathbf{k}}}({\boldsymbol{\eta}})|^2,
\label{ddcs}$$ where $\Delta E= E_e-\varepsilon_i$, $\varepsilon_i$ is the binding energy of the electron in the initial state, ${\mathbf{k}}_{i (f)}$ stands for the projectile momentum before (after) the collision and $\mathcal{R}_{i {\mathbf{k}}}({\boldsymbol{\eta}})$ is the transition matrix.
In equation (\[ddcs\]) the projectile’s momentum transfer and consequently the projectile scattering is defined by the interaction of the projectile with the active electron. However, the scattering of the projectile also depends on its interaction with the target core, (so-called NN interaction). We approximate this interaction by using the potential $$V_{\mathrm{NN}}(R)=Z_P Z_T/R + V_s(R) + V_{pol}(R),
\label{VZZ}$$ where $$V_s(R)= Z_P \sum_{i=1}^{2}\left< \psi^i_{1s} |- 1/|{\bf R} - {\bf r_i}| |\psi^i_{1s} \right>.
\label{scren}$$ describes the interaction between the projectile and the passive electrons. In (\[scren\]) $\psi^i_{1s}$ is approximated by a hydrogenlike wave function ($\psi^i_{1s}=N_ie^{-z_e^i r_T}$) with effective charge $z^i_e(=2.65)$ (Slaters’s rule, [@Slater30]). Taking $z^1_e=z^2_e$, we obtain $$ V_s(R) = - 2 Z_P \left[ 1/R - ((1+z_e R)/R)e^{-2z_eR} \right].
\label{screnz}$$ On the accuracy of (\[screnz\]) we note that we have evaluated $V_s(R)$ with the Li(1s) OPM orbital and have compared it with (\[screnz\]). The difference is very small, which is supported by the fact that the OPM Hartree potential $V_{H}^{OPM}$ for the Li$^+$(1s$^2$) configuration has the limit: $\lim_{R \to 0} V_H^{OPM}(R) \to$ 5.375 which corresponds to $z_e$=2.687. It can be checked that $\lim_{R \to 0} V_{NN}(R) \to 3 Z_P/R$ and $\lim_{R \to \infty} V_{NN}(R) \to Z_P/R$ for $Z_T$=3.
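As a quick numerical check of Eq. (\[screnz\]) and of the quoted limits, the short script below (a sketch in atomic units, with $Z_P=8$, $Z_T=3$ and $z_e=2.65$ as in the text) evaluates $R\,V_{NN}(R)$, which should tend to $Z_PZ_T=24$ at small $R$ and to $Z_P=8$ at large $R$:

```python
import math

Z_P, Z_T, z_e = 8.0, 3.0, 2.65    # projectile charge, target charge, Slater effective charge

def V_s(R):
    """Screening by the two passive 1s electrons, Eq. (screnz)."""
    return -2.0 * Z_P * (1.0 / R - ((1.0 + z_e * R) / R) * math.exp(-2.0 * z_e * R))

def V_NN(R):
    """Internuclear potential, Eq. (VZZ), here without the polarization term."""
    return Z_P * Z_T / R + V_s(R)

for R in (1e-3, 1e-2, 1.0, 5.0, 50.0):
    print(f"R = {R:8.3f} a.u.   R*V_NN = {R * V_NN(R):7.3f}")
# R*V_NN goes from ~24 (bare nuclei) at small R to 8 (fully screened Li+ core) at large R.
```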
$V_{pol}(R)$ in (\[VZZ\]) accounts for the (adiabatic) polarization or distortion of the core electrons by the incident charged particle [@Joachain75; @Yurova2014]. [ Its use is based on the idea that the electric field of the projectile at distance R gives rise to an instantaneous (first-order) distortion of the core-electron orbitals, thereby modifying the interaction of those electrons with the projectile. Polarization potentials have been used in many studies up to fairly high projectile energies [@Mitroy2010; @Yurova2014; @Nakanishi86; @Schrader79]; in particular in [@WW2014], in which the CP method was applied to the collision system considered in this work. ]{} Different types of analytical approximations are available for $V_{pol}(R)$ and we consider the following frequently used three forms: $$V_{pol}(R)=-\frac{\alpha Z_P^2}{2(R^2+d^2)^2},
\label{VP}$$ where $\alpha$ is the atomic dipole polarizability parameter [@Mitroy2010] and $d$ is a “cut-off” parameter whose value is taken as the average radius of the Li$^+$(1s$^2$) ion (d=0.57 a.u. for the present case) [@Zhang1992]; $$V_{pol}(R)=-\frac{\alpha Z_P^2}{2R^4}(1-\exp (-(R/r_d)^6) ),
\label{VPExp}$$ with $r_d=$0.47 [@Bottcher1971]; and $$V_{pol}(R)=-\frac{\alpha Z_P^2}{2R^4}(1-e_n({\xi})\exp(-\xi))^m,
\label{VPExp-e}$$ with $\xi=R/r_o$ where $e_n({\xi})$ is the truncated exponential function $e_n=\sum_{i=0}^{n}(\xi)^i/i!$, $r_o$=0.116, $m=$6 and $n$=2 [@Nakanishi86]. All these polarization potentials have the form $V_{pol}(R)\approx - \alpha /2R^4$ at large distances and differ in the short-range limit due to the “cut-off” parameters or functions which contain parameters estimated on some reasonable assumptions.
Effects of the NN interaction on the scattering process can be investigated by solving the Schrödinger equation for the Hamiltonian (\[el-ham\]) with inclusion of the potential (\[VZZ\]). However, the solution simplifies remarkably if one considers that (\[VZZ\]) depends on $R$ alone and so $V_{NN}$ can be removed from (\[el-ham\]) by a phase transformation. The transition matrix $\mathcal{R}_{i {\mathbf{k}}}({\boldsymbol{\eta}})$ that takes the internuclear interaction into account can then be expressed as [@mcd70] $$\begin{array}{c}
\mathcal{R}_{i {\mathbf{k}}}({\boldsymbol{\eta}}) =\frac{1}{2 \pi} \int {\mathrm{d}{\boldsymbol{\rho}}\;}e^{i {\boldsymbol{\eta}}\cdot {\boldsymbol{\rho}}}
a_{i {\mathbf{k}}}({\boldsymbol{\rho}})
\label{rifn}
\end{array}$$ with $ a_{i {\mathbf{k}}}({\boldsymbol{\rho}})= e^{i \delta(\rho)} \mathcal{A}_{i {\mathbf{k}}}({\boldsymbol{\rho}})$, where $\mathcal{A}_{i {\mathbf{k}}}({\boldsymbol{\rho}})$ is the transition amplitude calculated without the internuclear interaction, and the phase due to (\[VZZ\]) is expressed as $$ \delta(\rho) =- \int_{-\infty}^{+\infty} \mathrm{d} t V_{NN}(R(t)).
\label{nnphase}$$
Results {#sec:res.}
=======
Doubly differential cross sections for the ionization of Li (2s) and Li(2p)
---------------------------------------------------------------------------
In Figures (\[fig1\])-(\[fig6\]) we compare our CDW-EIS results of $\mathrm{d} \sigma^{2} / \mathrm{d} E_e \mathrm{d}\eta$ for proton and $O^{8+}$ impact on Li(2s) and Li(2p) with measurements of LaForge $\textit{et al}$ [@LaForge2013] and Hubele *et al* [@Hubele2013] for the electron ejection energies of 2, 10 and 20 eV. The experimental data of [@LaForge2013] and [@Hubele2013] are not on the absolute scale, only the relative normalisation was fixed for different $E_e$ and for Li(2p) relative to Li(2s). Following previous works [@LaForge2013; @Ciappina13; @WW2014] we have fixed the absolute scale in the Figures by normalising the data to our CDW-EIS cross sections for 2eV ejection from Li(2s) at $\eta$=0.65 a.u.
### H$^+$ projectile impact at 6 MeV
It is clear from Figs (\[fig1\]) -(\[fig3\]) (a) and (b) that the role of the internuclear interaction is negligible in the whole range of $\eta$ where the measurements have been taken when the projectile is a 6 MeV proton. This has also been noted in [@LaForge2013; @WW2014]. We have also evaluated cross sections with $V_{NN}=Z_P Z_{eff}/R$, and $Z_{eff}$=3.0; 1.0; and 1.35. The first and second choice correspond to a close and a distant collision of the heavy particles, respectively, and the last one is an intermediate situation in which $Z_{eff}$ is obtained from Slater’s screening rule. DDCS’s evaluated with these NN interactions are not presented in the Figures (\[fig1\])-(\[fig3\]) (a) and (b) as they are almost the same as those obtained with or without (\[VZZ\]). The indifference of the DDCS to the form of NN interaction is explained by the almost constant character of the internuclear phase over the region of $\rho$ (see (\[nnphase\])), where $\mathcal{A}_{i {\mathbf{k}}}({\boldsymbol{\rho}})$ has significant values. Panels (a) and (b) of Figures (\[fig1\]) -(\[fig3\]) also show results of CDW-EIS calculations by LaForge $\textit{et al}$ [@LaForge2013]. Their results are almost the same as the present ones for all $E_e$’s when the shapes of the curves are considered. Slight differences appear mostly at small $\eta$ values and for those $\eta$’s where the DDCS’s have maxima. As noted above the NN interaction is negligible for proton impact, so the difference between the two CDW-EIS calculations lies in the description of the target orbitals. However, it must be noted that the above discrepancies are within the experimental uncertainties.
### O$^{8+}$ projectile impact at 1.5 MeV/amu
The picture becomes more complicated for O$^{8+}$ ion impact, see the lower panels ((c) and (d)) in Figures (\[fig1\]) -(\[fig3\]). The role of the internuclear interaction is more evident for this projectile than for the H$^+$ ion. DDCS’s evaluated with and without the NN interaction differ considerably. $\delta(\rho)$, see (\[nnphase\]), oscillates rapidly with $\rho$ when $Z_p$=8 and the DDCS is sensitive to the form of the NN interaction. Let us first consider results evaluated with the NN interaction where the polarization potential is set to zero, $V_{pol}$=0 in (\[VZZ\]). Compared to the measurements, calculations with the more sophisticated screening potential (\[scren\]) present reasonable agreement. For all $E_e$ and for both 2s and 2p the agreement is very good from medium to low $\eta$ values and discrepancies appear only in the large $\eta$ region where the calculations overestimate the measurements. Calculations with $V_{NN}=Z_P Z_T/R$ (not shown in the Figures) fail in almost the entire range of $\eta$ showing the important screening role of the passive electrons. At the same time when the internuclear interaction is taken into account by $V_{NN}=Z_P/R$ the calculations result in DDCS which are very similar to those obtained with (\[VZZ\]), for the case of $V_{pol}$=0. This becomes clear if we consider the range of $\rho$ over which $ \mathcal{A}_{i {\mathbf{k}}}({\boldsymbol{\rho}})$ has significant values. This range extends to $\rho \approx$ 25-30 a.u., however, $V_{NN}(R)$ of (\[VZZ\]) reaches its asymptotic limit and behaves as $Z_P/R$ when $\rho \geq$ 1-2 a.u. So the DDCS is governed by the asymptotic form of the NN interaction in almost the entire range of impact parameters. [ Here we note that similar findings on the role $V_s$ in (\[VZZ\]) were reported in [@Voitkiv09] for the case of 100 MeV/amu C$^{6+}$ and 1 GeV/amu U$^{92+}$ impact on helium. ]{}
Let us now consider results obtained with (\[VZZ\]), i.e. the full form of the NN interaction potential, in which $V_{pol}$ is also taken into account. It is clear from the lower panels of Figures (\[fig1\]) -(\[fig3\]) that $V_{pol}$ has a negligible effect at low $\eta$ values, however, it plays a drastic role in the large $\eta$ region. Very good results are found in the whole $\eta$ region for all electron energies when $V_{pol}$ takes the form (\[VP\]). Taking (\[VPExp\]) for $V_{pol}$ also results in good agreement below $\eta \leq$ 2 a.u., while above this value these calculations overestimate the experiment. We note that a $V_{pol}$ the same as or similar to that in Eq. (\[VPExp\]) was probably used in [@WW2014], which is supported by the fact that the results reported in that work are consistent with the present ones. Results with $V_{pol}$ of (\[VPExp-e\]) are very close to those obtained with $V_{pol}$=0.
As for the case of H$^+$ impact, panels (c) and (d) of Figs. (\[fig1\])-(\[fig3\]) also present the CDW-EIS results of LaForge *et al* [@LaForge2013]. Their CDW-EIS calculations, in which the NN interaction was taken into account classically, fail to reproduce the measured data mostly at low $\eta$ values. LaForge *et al* [@LaForge2013] also presented results where the NN interaction was accounted for quantum mechanically in terms of the eikonal approximation. However, they found that the classical treatment was more adequate, especially at large $\eta$ with increasing E$_{e}$. Given the good results of the present calculations we see no reason to perform similar calculations with a classical inclusion of the NN interaction.
Hitherto, we have discussed DDCS results by considering only their shapes. Obviously, confronting a calculation with a measurement that is lacking an absolute normalization might influence the assessment of the validity of the theory. In Figures (\[fig1\])-(\[fig3\]), we have normalized the measured DDCS to our calculation at $\eta$=0.65 a.u. for Li(2s) at $E_e$=2 eV. Normalization at a different $\eta$ might modify the judgement of the theory. This is especially true when relative cross sections for the different ejection energies are considered. These comments mainly apply to the case of O$^{8+}$ impact, see Figs. (\[fig2\]) and (\[fig3\]), where shifted experimental data showing the “best visual fit” are also presented with open symbols. The shifts correspond to factors of about 2-5 depending on the collision parameters. A similar drift in the relative normalisation of the DDCS for O$^{8+}$ impact has also been noted in [@WW2014].
### Exploring the role of the polarization potential
Deviations of DDCS’s at large $\eta$ obtained with the different forms of $V_{pol}$ can be explored by considering the potentials and the internuclear phases ($\delta_{pol}$) evaluated from them alone (see (\[nnphase\])). Figure (\[fig4\]) (a) shows $V_{pol}$ of (\[VP\])- (\[VPExp-e\]) as a function of $t$ for three different $\rho$ values; note that $\rho$ and $t$ are related by ${\mathbf{R}}={\boldsymbol{\rho}}+{\mathbf{v}}t$. This Figure indicates that considerable differences among the potentials appear only for $\rho \leq$ 1 a.u. $V_{pol}$ of (\[VP\]) contains a “cut-off” parameter for which we take the 1s shell radius d=0.57 a.u. Different criteria for d have also been proposed in [@Yurova2014] and [@Zhang1992]. They result in d=0.3 and 0.93, respectively. $V_{pol}$ evaluated with these values of d are not represented in (\[fig4\]) (a). However, deviations related to the use of different d parameters in (\[VP\]) can be assessed from Figure (\[fig4\]) (b). Figure (\[fig4\]) (b) presents $\delta_{pol}$ evaluated with $V_{pol}$ of (\[VP\]) using three different values for d in comparison with those obtained from $V_{pol}$ of (\[VPExp\]) and (\[VPExp-e\]). It is clear that the phases are the same for all forms of polarization potential when $\rho \geq$ 1, however, the deviations are significant at small $\rho$ due to different cutting procedures in (\[VP\]) and different forms of $V_{pol}$. Apart from the very low $\rho$ region, which is unimportant for the DDCS, $V_{pol}$ of (\[VP\]) with d=0.3 provides nearly the same $\delta_{pol}$ as that of (\[VPExp\]). At the same time $\delta_{pol}$ provided by $V_{pol}$ of (\[VP\]) with d=0.93 and of (\[VPExp-e\]) have very small values. This fact explains the small differences between DDCS’s with (\[VPExp-e\]) and without polarization potential observed in Figures (\[fig1\])-(\[fig3\]) (c) and (d). It is also obvious that deviations in $\delta_{pol}$ and so in $V_{pol}$ are manifested in the DDCS in the large $\eta$ region. This can be observed in Figure (\[fig5\]) where the DDCS evaluated with different forms of $V_{pol}$ is presented for O$^{8+}$ + Li(2s) collisions for $E_e$=2 eV.
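The qualitative behaviour of $\delta_{pol}$ discussed above is straightforward to reproduce by direct quadrature of Eq. (\[nnphase\]) restricted to $V_{pol}$, with $R(t)=\sqrt{\rho^2+(vt)^2}$. The sketch below does this for the three forms (\[VP\])-(\[VPExp-e\]); the projectile velocity $v\approx 7.75$ a.u. (1.5 MeV/amu) and the Li$^+$ dipole polarizability $\alpha\approx 0.19$ a.u. are values assumed for this illustration, so only the relative behaviour of the three curves should be compared with Figure (\[fig4\]) (b).

```python
import math
from scipy.integrate import quad

Z_P, v, alpha = 8.0, 7.75, 0.19        # charge; velocity and polarizability assumed for this sketch

def V_pol_cut(R, d=0.57):              # Eq. (VP)
    return -alpha * Z_P**2 / (2.0 * (R**2 + d**2)**2)

def V_pol_exp(R, r_d=0.47):            # Eq. (VPExp)
    return -alpha * Z_P**2 / (2.0 * R**4) * (1.0 - math.exp(-(R / r_d)**6))

def V_pol_trunc(R, r_o=0.116, n=2, m=6):   # Eq. (VPExp-e)
    xi  = R / r_o
    e_n = sum(xi**k / math.factorial(k) for k in range(n + 1))
    return -alpha * Z_P**2 / (2.0 * R**4) * (1.0 - e_n * math.exp(-xi))**m

def delta_pol(rho, V):
    """Polarization part of the eikonal phase, Eq. (nnphase), along R(t) = sqrt(rho^2 + (v t)^2)."""
    val, _ = quad(lambda t: V(math.sqrt(rho**2 + (v * t)**2)), 0.0, math.inf, limit=200)
    return -2.0 * val                  # factor 2 from the t -> -t symmetry

for rho in (0.2, 0.5, 1.0, 2.0, 5.0):
    phases = [delta_pol(rho, V) for V in (V_pol_cut, V_pol_exp, V_pol_trunc)]
    print(f"rho = {rho:4.1f}  delta_pol = " + "  ".join(f"{p:8.4f}" for p in phases))
# The three forms approach each other as rho grows and differ most below rho ~ 1 a.u.
```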
In [@WW2014] ionization of Li was discussed within the framework of the coupled pseudostate (CP) model. Their results show similarly good agreement with the measurement as the present ones at low and medium $\eta$ values when the shapes of the DDCS’s are considered. A similar study within the framework of the time-dependent close-coupling (TDCC) method was performed for $O^{8+}$ impact only in [@Ciappina13]. Reasonable agreement, especially for the Li(2p) target, was observed. Results of these calculations are not included in Figures (\[fig1\])-(\[fig3\]) for the sake of clarity. However, in Figures (\[fig5-a\]) and (\[fig6\]) we give a comparison of the present, the CP, and the TDCC results for some selected collision parameters. In Figure (\[fig5-a\]) we compare the present DDCS results for O$^{8+}$ + Li(2s) collisions at $E_e$=2 eV with the CP and the TDCC results. The DDCS from the TDCC calculation is normalized to the present data at $\eta$=0.65 a.u. while that from the CP is shown on an absolute scale. In [@WW2014] the polarization potential was given by (\[VPExp\]) and the agreement between their and our results obtained with $V_{pol}$ of (\[VPExp\]) is good except around $\eta$=2 a.u. In [@Ciappina13] the NN interaction was taken into account by the Coulomb repulsion with an effective charge and those results seem to be comparable to our results without $V_{pol}$.
Similarly good agreement between our and the CP results of [@WW2014] can be observed in Figure (\[fig6\]) showing the DDCS for 1.5 MeV/amu O$^{8+}$ Li(2p$_{0,1}$) collisions at $E_e$=20 eV. It is seen that CP and CDW-EIS calculations agree well in the binary region and slight discrepancies appear at low and high $\eta$ values (see Figure (\[fig6\]) (a)). It is important to note the good agreement on the absolute scale. Figure (\[fig6\]) (b) shows results of calculations in which the NN interaction is neglected. Besides the present CDW-EIS results, results of a first Born (B1) calculation performed by us and a B1 calculation from [@WW2014] are also presented. The two B1 calculations are in very good agreement and differ from the CDW-EIS cross sections only at $\eta \le$ 0.5 a.u. values. This tells us that, except for the very low region of $\eta$, the collision with the O$^{8+}$ projectile is still in the B1 regime. Including the NN interaction in a treatment where $ \mathcal{A}_{i {\mathbf{k}}}({\boldsymbol{\rho}})$ is evaluated in B1 yields a similar good account of the measurements as the CDW-EIS model (except at very low $\eta$).
Fully differential cross sections
---------------------------------
A more detailed analysis can be performed on the level of the fully differential cross section (FDCS) $$ \frac{ \mathrm{d} \sigma^{3}}{ \mathrm{d} E_e \mathrm{d}\Omega_e \mathrm{d}\Omega_{f} } = k_e
|\mathcal{R}_{i {\mathbf{k}}}({\boldsymbol{\eta}})|^2,
\label{fdcs}$$ where $\mathrm{d}\Omega_{f}$($\theta_f,\phi_f$) and $\mathrm{d}\Omega_{e}$($\theta_e,\phi_e$) denote the solid angles for the scattered projectile and emitted electron, respectively.
Figure (\[fig7\]) presents FDCS’s for 1.5 MeV/amu O$^{8+}$ - Li(2s,2p) collisions as functions of $\phi_{e}$ when $\theta_{e}$=90$^o$, E$_{e}$=1.5 eV and q was set to 0.3 and 1.0 a.u. for 2p and 2s, respectively. The Y-Z plane is fixed by the incoming and scattered projectile’s momenta with the positive Z axis pointing into the incident projectile direction. This Cartesian coordinate system is completed with an X axis to form a right-handed system. $\phi_e$ is measured in the normal way with respect to the X axis, that is Figure (\[fig7\]) presents electron ejection cross sections in a plane perpendicular to the projectile direction. Let us first discuss results obtained with different forms of the polarization potential.
### Effects of polarization in the ionization of Li(2s) and Li(2p) at different q
Calculations for Li(2p) are carried out for q=0.3 a.u. and as can be expected the cross sections are not very sensitive to the form of the $V_{pol}$ interaction (only NN with (\[VP\]) are presented in Figure (\[fig7\]) (a) and (b)). At the same time for Li(2s) where q=1 a.u., see Figure (\[fig7\]) (c), the FDCS is very sensitive to the form of $V_{pol}$. The calculation with (\[VP\]) for the polarization potential, which showed a good account of the DDCS (see Figures (\[fig1\])-(\[fig3\])), reproduces the main characteristics of the measured distribution, however, it has defects when finer details are considered. Calculations with other forms of $V_{pol}$ are less satisfactory. Calculations performed with an NN interaction potential using the effective target ion charge Z$_{eff}$=1.0 reveal that the collision parameters of Figure (\[fig7\]) correspond to the distant collision regime. Similar calculations with Z$_{eff}$=1.34 reveal better agreement in shape, but a further increase of Z$_{eff}$ cannot be justified as it adversely affects the absolute values of the cross section. A detailed analysis shows that the relative magnitudes of the peaks in Figure (\[fig7\]) (c) depend strongly on the character of the transition amplitude at around $\rho \approx $ 1 a.u., where the polarization potentials change drastically due to the cutting procedures. Figure (\[fig7\]) shows also CP results of [@WW2014] which, especially for Li(2s) are in better agreement with the experiment than the present calculations. Differences between the CP and CDW-EIS results appear not only in the shapes of the FDCS’s but also on the absolute scale. The latter is unexpected if one recalls the good account of the DDCS by both methods, see Figure (\[fig5\]). We think that the observed discrepancy is related to the slightly different account of the NN interaction in the two calculations and this difference is probably emphasized with the decrease of $E_e$.
Figure (\[fig8\]) shows the fully differential angular distribution of electrons ejected in 1.5 MeV/amu O$^{8+}$ - Li(2s) collisions. $E_e$ is fixed at 1.5 eV and $q$=1.0 a.u. Figure (\[fig8\]) (a) presents the FDCS evaluated without internuclear interaction, while (b)-(d) are obtained from calculations including the NN interaction with $V_{pol}$ of (\[VP\]), (\[VPExp\]) and (\[VPExp-e\]), respectively. Considerable differences can be observed between results with and without NN interaction. The most characteristic differences are the sharpening of the FDCS in the direction of $\textbf{q}$ (positive Y axis) and the appearance of the wings in the $\pm$ X directions (perpendicular to the scattering plane) caused by the NN interaction. It is obvious from Figure (\[fig8\]) that the two small peaks in Figure (\[fig7\]) (c) at $\phi \approx$30$^o$ and 150$^o$ are due to the wings whereas calculations without NN interaction only give rise to the central peak at $\phi$=90$^o$. This has already been reported in [@Hubele2013] and [@WW2014]. At the same time considerable discrepancies are visible among results with different $V_{pol}$. The node of the 2s orbital is at around $r_T\approx$ 1 a.u. This is the distance where the differences in the polarization potentials are emphasized. Moreover, the phase due to $V_{pol}$ affects the full NN phase only for $\rho \leq$ 1 a.u. Accordingly, the strong variation of the FDCS with $V_{pol}$ supports the idea that the shape of the wings is determined by the low $\rho$ character of the NN interaction.
Weaker deviations appear between calculations with and without NN interaction for the Li(2p) ionization FDCS at q=0.3 a.u., see Figure (\[fig9\]). This FDCS is not symmetric with respect to the collision plane even for the calculation without NN interaction and the NN interaction further emphasizes this asymmetry.
### Satellite peaks in the ionization of Li(2s)
Finally let us turn our attention to the satellite peak structure or the presence of the wings in the FDCS of Figures (\[fig7\]) (c) and (\[fig8\]), respectively. First of all we note that slow electrons are usually ejected in distant collisions between the projectile and target, where the three-body dipole interaction dominates [@sto97]. At the same time for high impact velocities and large projectile charges the two-body binary encounter mechanism plays an important role and manifests itself as a sharp peak at $\theta_e \approx$ 90$^o$ in the angular distribution. This is well seen in Figure (\[fig10\]) (a) where the DDCS versus $\theta_e$ is presented for $E_e$=1.5 eV. Results only due to the dipole interaction are derived from a CDW-EIS calculation where only the $l$=0,1 partial waves from the expansion of the wave function for the final state have been taken into account [@GFS95]. The definite role of the binary mechanism is well seen at $\theta_e \approx$ 90$^o$ in the Figure. The FDCS’s obtained only with the dipole interaction and with all interaction terms (including dipole and binary) are displayed in Figure (\[fig10\]) (b). This Figure shows that the multiple peak structure in the FDCS appears only when binary and NN interactions are taken into account in the calculation. The binary interaction describes a head-on collision between the electron and the projectile, which obviously is important in the region where the electron density is significant. Test calculations demonstrated that the multiple peak structure reduces to a single peak characteristic of the projectile-electron interaction, when the NN interaction is neglected in a $\rho$=\[1-5\] a.u. window in the calculation. This region of $\rho$ is comparable to the extension of the electron cloud for the 2s orbital. Moreover, we have performed calculations where Z$_p$ in the NN interaction has been varied. No satellite peaks are obtained for Z$_p \leq$ 3, for which the shape of the FDCS is almost the same as in a calculation without the NN interaction. An analysis of the classical deflection function revealed that the impact parameter that corresponds to Coulomb scattering of the projectile (with Z$_p$=8) from the target nucleus at q=1 a.u. is at $\rho \approx$ 2.5 a.u. For $Z_p$=1 this region shifts to the much lower value $\rho \approx$ 0.2 a.u. where the electron density is negligible. These results confirm the idea that the presence of the satellite peak or the wing structure in the FDCS is due to the combination of the NN interaction and the binary collision mechanisms. This idea is further supported by the fact that a calculation performed at $q$=1 a.u. for the case of the 2p orbital shows similar satellite peak structures as discussed for 2s in Figure (\[fig7\]) (c).
Summary and conclusion {#sec:conc.}
======================
In this paper we applied the continuum distorted wave with eikonal initial state approximation to describe ionisation of Li under the impact of 6 MeV H$^+$ and 1.5 MeV/amu O$^{8+}$ ions. Doubly and fully differential cross sections have been evaluated within the framework of the independent electron approximation. The effect of the internuclear interaction (NN) has been taken into account by a phase factor.
The NN interaction has been found to be unimportant for proton projectiles. At the same time, when describing collisions with O$^{8+}$ ions, the inclusion of the NN interaction potential in the calculation cannot be avoided for a proper account of the processes at play. The NN interaction potential is made up of the Coulomb interaction of the heavy nuclei and two further terms due to the screening by the passive electrons and the polarization of the target by the incident projectile ion. The dominant effect is provided by the Coulomb interaction term; however, the other terms must also be included for a proper description. A characteristic role of $V_{pol}$ has been found in the DDCS for high $\eta$ values. $V_{pol}$ is not uniquely defined for short distances and accordingly different approaches and different forms of $V_{pol}$ are available in the literature. Three different forms of $V_{pol}$ have been used in the present study and a very good reproduction of the measured DDCS has been obtained when $V_{pol}$ is given by (\[VP\]). $V_{pol}$ of (\[VP\]) depends on a cut-off parameter and its proper value might differ for the processes and systems under study. Note that we used d=0.57 for our best results, while values for d=0.3-0.93 have also been recommended in various studies in the field of electron-atom collisions [@Zhang1992; @Mitroy2010; @Yurova2014]. In the case of the FDCS, deviations among the different forms of the NN interaction potential are more pronounced.
The satellite peak structure observed in the FDCS for the O$^{8+}$ - Li(2s) system was attributed to the nodal structure of the 2s orbital in [@Hubele2013] and [@WW2014]. Our study offers an alternative explanation, namely that the satellite structure is due to a combination of the NN interaction and the binary interaction mechanism.
Acknowledgements
================
This work was supported by the Natural Sciences and Engineering Research Council of Canada and by the Hungarian Scientific Research Fund (OTKA Grant No. K 109440). We thank Eberhard Engel for making his atomic structure calculations available to us and Daniel Fischer for discussions on the data of Ref. [@LaForge2013].
---
abstract: |
We introduce a new class of adaptive methods for optimization problems posed on the cone of convex functions. Among the various mathematical problems which possess such a formulation, the Monopolist problem [@Rochet:1998uj; @Ekeland:2010tl] arising in economics is our main motivation.
Consider a two dimensional domain $\OmegaC$, sampled on a grid $\OmegaD$ of $N$ points. We show that the cone $\operatorname{Conv}(\OmegaD)$ of restrictions to $\OmegaD$ of convex functions on $\OmegaC$ is typically characterized by $\approx N^2$ linear inequalities; a direct computational use of this description therefore has a prohibitive complexity. We thus introduce a hierarchy of sub-cones $\operatorname{Conv}(\cV)$ of $\operatorname{Conv}(\OmegaD)$, associated to stencils $\cV$ which can be adaptively, locally, and anisotropically refined. We show, using the arithmetic structure of the grid, that the trace $U_{|X}$ of any convex function $U$ on $\Omega$ is contained in a cone $\operatorname{Conv}(\cV)$ defined by only $\cO( N \ln^2 N)$ linear constraints, in average over grid orientations.
Numerical experiments for the Monopolist problem, based on adaptive stencil refinement strategies, show that the proposed method offers an unrivaled accuracy/complexity trade-off in comparison with existing methods. We also obtain, as a side product of our theory, a new average complexity result on edge flipping based mesh generation.
author:
- 'Jean-Marie Mirebeau[^1]'
bibliography:
- 'SelectedPapers.bib'
title: |
Adaptive, Anisotropic and Hierarchical\
cones of Discrete Convex functions[^2]
---
A number of mathematical problems can be formulated as the optimization of a convex functional over the *cone of convex functions* on a domain $\OmegaC$ (here compact and two dimensional): $$\operatorname{Conv}(\OmegaC):=\{ \uC : \OmegaC \to \R; \, \uC \text{ is convex}\}.$$ This includes optimal transport, as well as various geometrical conjectures such as Newton’s problem [@LachandRobert:2005bi; @MERIGOT:tr]. We choose for concreteness to emphasize an economic application: the Monopolist (or Principal Agent) problem [@Rochet:1998uj], in which the objective is to design an optimal product line, and an optimal pricing catalog, so as to maximize profit in a captive market. The following minimal instance is numerically studied in [@Aguilera:2008uq; @Ekeland:2010tl; @Oberman:2011wy] and on Figure \[fig:Monopolist\]. With $\OmegaC = [1,2]^2$ $$\label{eq:PrincipalAgent}
\min \left\{ \int_{\OmegaC} \left( \frac 1 2 \|\nabla \uC(z)\|^2 -\<\nabla \uC(z),z\>+ \uC(z)\, \right) dz; \, \uC \in \operatorname{Conv}(\OmegaC), \, \uC \geq 0\right\}.$$ We refer to the numerical section §\[sec:Numerics\], and to [@Rochet:1998uj] for the economic model details; let us only say here that the Monopolist’s optimal product line is $\{\nabla U(z); \, z \in \Omega\}$, and that the optimal prices are given by the Legendre-Fenchel dual of $U$. Consider the following three regions, defined for $k\in \{0,1,2\}$ (implicitly excluding points $z \in \Omega$ close to which $U$ is not smooth) $$\label{def:OmegaK}
\Omega_k := \{z \in \Omega; \, \operatorname{rank}(\operatorname{Hessian}U(z) ) = k\}.$$ Strong empirical evidence suggests that these three regions have a non-empty interior, although no qualitative mathematical theory has yet been developed for these problems. The optimal product line observed numerically, Figure \[fig:Monopolist\], confirms a qualitative (and conjectural) prediction of the economic model [@Rochet:1998uj] called “bunching”: low-end products are less diverse than high-end ones, down to the topological sense. (The monopolist willingly limits the variety of cheap products, because they may compete with the more expensive ones, on which he has a higher margin.)
![Numerical approximation $U$ of the solution of the classical Monopolist’s problem \[eq:PrincipalAgent\], computed on a $50 \times 50$ grid. Left: level sets of $U$, with $U=0$ in white. Center left: level sets of $\det(\operatorname{Hessian}U)$ (with again $U=0$ in white); note the degenerate region $\Omega_1$ where $\det(\operatorname{Hessian}U)=0$. Center right: distribution of products sold by the monopolist. Right: profit margin of the monopolist for each type of product (margins are low on the one dimensional part of the product line, at the bottom left). Color scales on Figure \[fig:PARotated\].[]{data-label="fig:Monopolist"}](\pathPic/Convex/PrincipalAgent/50/LevelLines.jpg "fig:"){width="3.8cm"} ![](\pathPic/Convex/PrincipalAgent/50/HessianOmega.png "fig:"){width="3.8cm"} ![](\pathPic/Convex/PrincipalAgent/Rotated2/ProductDensity_S50_T0.png "fig:"){width="3.8cm"} ![](\pathPic/Convex/PrincipalAgent/Rotated2/Margin_S50_T0.png "fig:"){width="3.8cm"}
We aim to address numerically optimization problems posed on the cone of convex functions, through numerical schemes which preserve the rich qualitative properties of their solutions, and have a moderate computational cost. In order to highlight the specificity of our approach, we review the existing numerical methods for these problems, which fall into the following categories. We denote by $X$ a grid sampling of the domain $\Omega$, and by $\operatorname{Conv}(X)$ the cone of discrete (restrictions of) convex functions $$\label{def:ConvX}
\operatorname{Conv}(X) := \{ U_{|X}; \, U \in \operatorname{Conv}(\Omega)\}.$$
- (Interior finite element methods) For any triangulation $\cT$ of $X$, consider the cone $$\operatorname{Conv}(\cT) := \{ u : X \to \R; \, \interp_\cT u \in \operatorname{Conv}(\Omega)\}.$$ A natural but *invalid* numerical method for such problems is to fix a-priori a family $(\cT_h)_{h>0}$ of regular triangulations of $\Omega$, where $h>0$ denotes mesh scale, and to optimize the functional of interest over the associated cones. Indeed, the union of the cones $\operatorname{Conv}(\cT_h)$ is *not* dense in $\operatorname{Conv}(\Omega)$, see [@Chone:2001fa]. Let us also mention that for a given generic $u \in \operatorname{Conv}(X)$, there exists *only one* triangulation $\cT$ of $X$ such that $u \in \operatorname{Conv}(\cT)$, see §\[sec:DelaunayIntro\].
- (Global constraints methods) The functional of interest, suitably discretized, is minimized over the cone $\operatorname{Conv}(X)$ of discrete convex functions [@Carlier:2001tq], or alternatively [@Ekeland:2010tl] on the augmented cone $$\label{def:GradConvX}
\operatorname{GradConv}(X) := \{ (U_{|X}, \nabla U_{|X}); \, U \in \operatorname{Conv}(\Omega)\},$$ where $\nabla U$ denotes an arbitrary element of the subgradient of the convex map $U$.
Both $\operatorname{Conv}(X)$ and $\operatorname{GradConv}(X)$ are characterized by a family of long range linear inequalities, with domain wide supports, and of cardinality growing *quadratically* with $N := \#(X)$, see §\[sec:Characterization\] and [@Ekeland:2010tl]. Despite rather general convergence results, these two methods are impractical due to their expensive numerical cost, in terms of both computation time and memory.
- (Local constraints methods) Another cone $\operatorname{Conv}'(X)$ is introduced, usually satisfying neither $\operatorname{Conv}(X) {\subseteq}\operatorname{Conv}'(X)$ nor $\operatorname{Conv}'(X) {\subseteq}\operatorname{Conv}(X)$, but typically characterized by relatively few constraints, with short range supports. Oberman et al. [@Oberman:2011wi; @Oberman:2011wy] use $\cO(N)$ linear constraints, with $N := \#(X)$. Merigot et al. [@MERIGOT:tr] use slightly more linear constraints, but provide an efficient optimization algorithm based on proximal operators. Aguilera et al. [@Aguilera:2008uq] consider $\cO(N)$ constraints of semi-definite type. (A minimal sketch of a discretization of this type is given after this review.)
Some of these methods benefit from convergence guarantees [@MERIGOT:tr; @Aguilera:2008uq] as $N \to \infty$. Our numerical experiments with [@Aguilera:2008uq; @Oberman:2011wi; @Oberman:2011wy] show however that they suffer from accuracy issues, see §\[sec:Comparison\] and Figure \[fig:BadHessian\], which limits the usability of their results.
- (Geometric methods) A polygonal convex set can be described as the convex hull of a finite set of points, or as an intersection of half-spaces. Geometric methods approximate a convex function $U$ by representing its epigraph $\{ (z,t); \, z \in \Omega, \, t \geq U(z)\}$ under one of these forms. Energy minimization is done by adjusting the positions of the points, or the coefficients of the affine forms defining the half-spaces, see [@Wachsmuth:2013ta; @LachandRobert:2005bi].
The main drawback of these methods lies in the optimization procedure, which is quite non-standard. Indeed the discretized functional is generally non-convex, and the polygonal structure of the represented convex set changes topology during the optimization.
We propose an implementation of the constraint of convexity via a limited (typically quasi-linear) number of linear inequalities, featuring both short range and domain wide supports, which are selected locally and anisotropically in an adaptation loop using a-posteriori analysis of solutions to intermediate problems. Our approach combines the accuracy of global constraint methods, with the limited cost of local constraint ones, see §\[sec:Comparison\]. It is based on a family of sub-cones $$\operatorname{Conv}(\cV) {\subseteq}\operatorname{Conv}(X),$$ each defined by some linear inequalities associated to a family $\cV$ of *stencils*, see Definition \[def:Cones\]. A family of stencils $\cV = (\cV(x))_{x \in X}$ assigns to each point $x\in X$ a collection of offsets $e \in \cV(x)$, pointing to selected neighbors $x+e$ of $x$ and satisfying minor structure requirements, see Definition \[def:Stencil\]. The cones satisfy the hierarchy property $\operatorname{Conv}(\cV \cap \cV') = \operatorname{Conv}(\cV) \cap \operatorname{Conv}(\cV')$, see Theorem \[th:Hierarchy\]. Most elements of $\operatorname{Conv}(X)$ belong to a cone $\operatorname{Conv}(\cV)$ defined by only $\cO(N \ln^2 N)$ linear inequalities, in a sense made precise by Theorem \[th:Avg\]. Regarding both stencils and triangulations as directed graphs on $X$, we show in Theorem \[th:Decomp\] (under a minor technical condition) that the cone $\operatorname{Conv}(\cV)$ is the union of the cones $\operatorname{Conv}(\cT)$ associated to triangulations $\cT$ included in $\cV$. Our hierarchy of cones has similarities with, but also striking differences from, other multiscale constructions (wavelets, adaptive finite elements) used in numerical analysis, as discussed in the conclusion.
The minimizer $u \in \operatorname{Conv}(X)$ of a given convex energy $\cE$ can be obtained without ever listing the inequalities defining $\operatorname{Conv}(X)$ (which would often not fit into computer memory for the problem sizes of interest), but by solving only a small sequence of optimization problems over sub-cones $\operatorname{Conv}(\cV_i)$ associated to stencils $\cV_i$, designed through adaptive refinement strategies. Our numerical experiments give, we believe, unprecedented numerical insight into the qualitative behavior of the monopolist problem and its variants. Thanks to the adaptivity of our scheme, this accuracy is not at the expense of computation time or memory usage. See §\[sec:Numerics\].
Main results
============
The constructions and results developed in this paper apply to an arbitrary convex and compact domain $\Omega{\subseteq}\R^2$, discretized on an orthogonal grid of the form: $$\label{def:ParametrizedDiscreteDomain}
\OmegaC \cap h R_\theta (\xi+ \Z^2),$$ where $h>0$ is a scale parameter, $R_\theta$ is the rotation of angle $\theta\in \R$, and $\xi\in \R^2$ is an offset. The latter two parameters are used in our main approximation result, Theorem \[th:Avg\], heuristically to eliminate by averaging the influence of rare unfavorable cases in which the hessian of the approximated convex function is degenerate in a direction close to the grid axes. For simplicity, and up to a linear change of coordinates, we assume unless otherwise mentioned that these parameters take their canonical values: $$\OmegaD := \OmegaC \cap \Z^2.$$
The choice of a grid discretization provides arithmetic tools that would not be available for an unstructured point set.
1. An element $e=(\alpha,\beta) \in \Z^2$ is called irreducible iff $\gcd(\alpha,\beta)=1$.
2. A basis of $\Z^2$ is a pair $(f,g) \in (\Z^2)^2$ such that $|\det(f,g)| = 1$. A basis $(f,g)$ of $\Z^2$ is direct iff $\det(f,g) = 1$, and acute iff $\<f,g\> \geq 0$.
Considering special (non-canonical) bases of $\Z^d$ is relevant when discretizing anisotropic partial differential equations on grids, such as anisotropic diffusion [@Fehrenbach:2013ut], or anisotropic eikonal equations [@Mirebeau2012anisotropic]. In this paper, and in particular in the next proposition, we rely on a specific two dimensional structure called the Stern-Brocot tree [@Graham:1994uv], also used in numerical analysis for anisotropic diffusion [@Bonnans:2004ud], and eikonal equations of Finsler type [@Mirebeau:2012wm].
\[prop:Parents\] The map $
(f,g) \mapsto e := f+g
$ defines a bijection between direct acute bases $(f,g)$ of $\Z^2$, and irreducible elements $e\in \Z^2$ such that $\|e\| > 1$. The elements $f,g$ are called the parents of $e$. (Unit vectors have no parents.)
Existence, for a given irreducible $e$ with $\|e\|>1$, of the direct acute basis $(f,g)$ such that $e=f+g$. We assume without loss of generality that $e=(\alpha,\beta)$ has non-negative coordinates. Since $\gcd(\alpha,\beta)=1$ and $\|e\|>1$ we obtain that $\alpha \geq 1$ and $\beta \geq 1$. Classical results on the Stern-Brocot tree [@Graham:1994uv] state that the irreducible positive fraction $\alpha/\beta$ can be written as the *mediant* $(\alpha'+\alpha'')/(\beta'+\beta'')$ of two irreducible fractions $\alpha'/\beta'$, $\alpha''/\beta''$ (possibly equal to $0$ or $+\infty$), with $\alpha',\beta',\alpha'',\beta'' \in \Z_+$ and $\alpha' \beta''-\beta'\alpha''=1$. Setting $f = (\alpha',\beta')$ and $g=(\alpha'',\beta'')$ concludes the proof.
Uniqueness. Assume that $e=f+g=f'+g'$, where $(f,g)$, $(f',g')$ are direct acute bases of $\Z^2$. One has $\det(f,e)=\det(f,f+g) = 1$, and likewise $\det(f',e)=1$. Hence $\det(f-f',e)=0$, and therefore $f' = f+k e$ for some scalar $k$, which is an integer since $e$ is irreducible. Subtracting we obtain $g' = e-f' = g-k e$, and therefore $$\begin{aligned}
\<f',g'\> = \<f+k e, g-k e\> &= \<(k+1) f+k g, -k f +(1-k) g\>\\
&= -k (k+1) \|f\|^2 - k (k-1) \|g\|^2 + \<f,g\> (1-2 k^2). \end{aligned}$$ This expression is negative unless the integer $k$ is zero, hence $f=f'$, and $g=g'$.
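For concreteness, the parent pair of a given irreducible vector can be computed by brute force over the finitely many candidate decompositions; the following Python sketch (the function name `parents` and the search bound are ours, for illustration only) returns the unique direct acute basis summing to $e$.

```python
from math import gcd

def parents(e):
    """Parents of an irreducible e with ||e|| > 1: the unique direct acute
    basis (f, g) of Z^2 with f + g = e (Proposition prop:Parents)."""
    a, b = e
    assert gcd(abs(a), abs(b)) == 1 and a * a + b * b > 1
    m = abs(a) + abs(b)                      # any parent satisfies ||f|| <= ||e|| <= m
    for fa in range(-m, m + 1):
        for fb in range(-m, m + 1):
            ga, gb = a - fa, b - fb          # candidate g = e - f
            if fa * gb - fb * ga == 1 and fa * ga + fb * gb >= 0:
                return (fa, fb), (ga, gb)    # det(f, g) = 1 and <f, g> >= 0

print(parents((3, 2)))    # ((2, 1), (1, 1))
print(parents((5, -2)))   # ((2, -1), (3, -1))
```

By the uniqueness part of the proposition, the first pair found is the only one, so the search order does not matter.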
Characterization of discrete convexity by linear inequalities {#sec:Characterization}
-------------------------------------------------------------
We introduce some linear forms on the vector space $\cF(X) := \{u : X \to \R\}$, whose non-negativity characterizes restrictions of convex maps. The convex hulls of their supports are, respectively, a segment, a triangle, and a parallelogram, see Figure \[fig:ConstraintTypes\].
\[def:Constraints\] For each $x \in \FixedGrid$, consider the following linear forms of $u \in \cF(\Z^2)$.
1. (Segments) For any irreducible $e \in \FixedGrid$: $$S_x^e (u) := u(x+e)-2 u(x)+u(x-e).$$
2. (Triangles) For any irreducible $e \in \FixedGrid$, with $\|e\|>1$, of parents $f,g$: $$T_x^e (u) := u(x+e)+u(x-f)+u(x-g) - 3 u(x).$$
3. (Parallelograms) For any irreducible $e \in \FixedGrid$, with $\|e\|>1$, of parents $f,g$: $$P_x^e(u) := u(x+e) - u(x+f) - u(x+g) + u(x).$$
A linear form $L$ among the above can be regarded as a finite weighted sum of Dirac masses. In this sense we define the support $\operatorname{supp}(L) {\subseteq}\FixedGrid$, $\#\operatorname{supp}(L) \in \{3,4\}$. The linear form $L$ is also defined on $\cF(X)$ whenever $\operatorname{supp}(L) {\subseteq}X$.
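As a quick illustration, here is a minimal Python sketch (helper names ours) evaluating the three linear forms on a discrete map stored as a dictionary, together with a sanity check on the quadratic map $q(x)=\frac 1 2 \|x\|^2$ and the offset $e=(2,1)$, whose parents are $(1,0)$ and $(1,1)$.

```python
def S(u, x, e):
    """Segment form S_x^e(u) = u(x+e) - 2 u(x) + u(x-e)."""
    return u[x[0]+e[0], x[1]+e[1]] - 2 * u[x] + u[x[0]-e[0], x[1]-e[1]]

def T(u, x, e, f, g):
    """Triangle form T_x^e(u) = u(x+e) + u(x-f) + u(x-g) - 3 u(x), (f, g) parents of e."""
    return u[x[0]+e[0], x[1]+e[1]] + u[x[0]-f[0], x[1]-f[1]] + u[x[0]-g[0], x[1]-g[1]] - 3 * u[x]

def P(u, x, e, f, g):
    """Parallelogram form P_x^e(u) = u(x+e) - u(x+f) - u(x+g) + u(x)."""
    return u[x[0]+e[0], x[1]+e[1]] - u[x[0]+f[0], x[1]+f[1]] - u[x[0]+g[0], x[1]+g[1]] + u[x]

# Sanity check on q(z) = |z|^2 / 2, sampled on a small grid around the origin.
q = {(i, j): (i * i + j * j) / 2 for i in range(-4, 5) for j in range(-4, 5)}
x, e, f, g = (0, 0), (2, 1), (1, 0), (1, 1)
print(S(q, x, e), T(q, x, e, f, g), P(q, x, e, f, g))   # 5.0 4.0 1.0, all non-negative
```

The value $P_x^e(q)=1$ equals $\<f,g\>$ here, which is the reason why the quadratic map belongs to every cone $\operatorname{Conv}(\cV)$, as discussed below.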
![The supports, and weights, of the different linear forms $S_x^e$, $T_x^e$, $P_x^e$.[]{data-label="fig:ConstraintTypes"}](\pathPic/ConvexityConstraints/ConstraintTypes/ConstraintSze.pdf "fig:"){width="3.5cm"} ![](\pathPic/ConvexityConstraints/ConstraintTypes/ConstraintTze.pdf "fig:"){width="4cm"} ![](\pathPic/ConvexityConstraints/ConstraintTypes/ConstraintPze.pdf "fig:"){width="4cm"}
If $u \in \operatorname{Conv}(\OmegaD)$, then by an immediate convexity argument one obtains $S_x^e(u) \geq 0$ and $T_x^e(u) \geq 0$, whenever these linear forms are supported on $\OmegaD$. As shown in the next result, this provides a minimal characterization of $\operatorname{Conv}(\OmegaD)$ by means of linear inequalities. The linear forms $P_x^e$ will on the other hand be used to define strict sub-cones of $\operatorname{Conv}(\OmegaD)$. The following result corrects[^3] Corollary 4 in [@Carlier:2001tq].
\[th:AllConstraints\]
- The cone $\operatorname{Conv}(\OmegaD)$ is characterized by the non-negativity of the linear forms $S_x^e$ and $T_x^e$, introduced in Definition \[def:Constraints\], which are supported in $\OmegaD$.
- If one keeps only one representative among the identical linear forms $S_x^e$ and $S_x^{-e}$, then the above characterization of $\operatorname{Conv}(\OmegaD)$ by linear inequalities is minimal.
For any given $x \in \OmegaD$, the number of linear inequalities $S_x^e$ (resp. $T_x^e$) appearing in the characterization of $\operatorname{Conv}(\OmegaD)$ is bounded by the number of irreducible elements $e \in \Z^2$ such that $x+e \in X$. Hence the $N$-dimensional cone $\operatorname{Conv}(\OmegaD)$ is characterized by at most $2 N^2$ linear inequalities, where $N := \#(\OmegaD)$.
If all the elements of $\OmegaD$ are *aligned*, this turns out to be an overestimate: one easily checks that exactly $N-1$ inequalities of type $S_x^e$ remain, and no inequalities of type $T_x^e$. This favorable situation does not extend to the two-dimensional case, however, because irreducible elements arise frequently in $\FixedGrid$, with positive density [@Hardy:1979vq]: $$\frac 6 {\pi^2} = \lim_{n \to \infty} n^{-2} \# \left\{(i,j) \in \{1,\cdots,n\}^2; \, \gcd(i,j)=1\right\}.$$ If the domain $\OmegaC$ has a non-empty interior, then one easily checks from this point that the minimal description of $\operatorname{Conv}(\OmegaD)$ given in Theorem \[th:AllConstraints\] involves no fewer than $c N^2$ linear constraints[^4], where the constant $c > 0$ depends on the domain shape but not on its scale (or equivalently, not on the grid scale $h$ in ). This quadratic number of constraints, announced in the description of global constraint methods in the introduction, is a strong drawback for practical applications, which motivates the construction of adaptive sub-cones of $\operatorname{Conv}(X)$ in the next subsection.
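The density quoted above is easy to confirm numerically; the following short sketch (the value $n=1000$ is an arbitrary choice) estimates the proportion of coprime pairs in $\{1,\cdots,n\}^2$ and compares it with $6/\pi^2$.

```python
from math import gcd, pi

n = 1000
coprime = sum(1 for i in range(1, n + 1) for j in range(1, n + 1) if gcd(i, j) == 1)
print(coprime / n ** 2, 6 / pi ** 2)   # both values are approximately 0.608
```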
Several works addressing optimization problems posed on the cone of convex functions [@Carlier:2001tq; @Oberman:2011wi] have in the past omitted all or part of the linear constraints $T_x^e$, $x\in X$, $e\in \Z^2$ irreducible with $\|e\|>1$. We consider in Appendix \[sec:Directional\] this weaker notion of discrete convexity, introducing the cone $\operatorname{DConv}(X)$ of directionally convex functions, defined by the non-negativity of only $S_x^e$, $x \in X$, $e \in \Z^2$ irreducible.
We show that elements of $\operatorname{DConv}(X)$ cannot in general be extended into globally convex functions, but that one can extend their restriction to a grid coarsened by a factor $2$. We also introduce a hierarchy of sub-cones of $\operatorname{DConv}(X)$, similar to the one presented in the next subsection.
Hierarchical cones of discrete convex functions
-----------------------------------------------
We introduce in this section the notion of stencils $\cV = (\cV(x))_{x \in X}$ on $X$, and discuss the properties (hierarchy, complexity) of cones $\operatorname{Conv}(\cV)$ attached to them. The following family $\cV_{\max}$ of sets is referred to as the “maximal stencils”: for all $x \in X$ $$\label{def:MaximalStencils}
\cV_{\max}(x) := \{ e \in \Z^2 \text{ irreducible}; \, x+e \in X\}.$$ The convex cone generated by a subset $A$ of a vector space is denoted by $\operatorname{Cone}(A)$, with the convention $\operatorname{Cone}(\emptyset) = \{0\}$.
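As a small illustration (the helper name `maximal_stencil` is ours), the maximal stencil of a grid point can be enumerated directly from the definition above:

```python
from math import gcd

def maximal_stencil(x, X):
    """V_max(x): the irreducible offsets e such that x + e lies in X."""
    return [(a - x[0], b - x[1]) for (a, b) in X
            if (a, b) != x and gcd(abs(a - x[0]), abs(b - x[1])) == 1]

X = [(i, j) for i in range(5) for j in range(5)]    # a 5 x 5 grid
print(len(maximal_stencil((2, 2), X)))              # 16 irreducible offsets at the center point
```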
\[def:Stencil\] A family $\cV$ of stencils on $\OmegaD$ (or just: “Stencils on $X$”) consists, for each $x \in \OmegaD$, of a collection $\cV(x) {\subseteq}\cV_{\max}(x)$ (the stencil at $x$) of irreducible elements of $\Z^2$, satisfying the following properties:
- (Stability) Any parent $f \in \cV_{\max}(x)$, of any $e \in \cV(x)$, satisfies $f \in \cV(x)$.
- (Visibility) One has $\operatorname{Cone}(\cV(x)) = \operatorname{Cone}(\cV_{\max}(x))$.
The set of candidates for refinement $\Candidates(x)$ consists of all elements $e\in \cV_{\max}(x) \sm \cV(x)$ whose two parents $f,g$ belong to $\cV(x)$.
![Left: a maximal stencil at a point of a domain. Center: some minimal stencils. Right: some adaptively generated stencils used in the numerical resolution of .[]{data-label="fig:Stencils"}](\pathPic/ConvexityConstraints/Stencils/MaximalStencil.pdf "fig:"){width="4cm"} ![](\pathPic/ConvexityConstraints/Stencils/MinimalStencils.pdf "fig:"){width="4cm"} ![](\pathPic/Convex/PrincipalAgent/20/Stencils/Default_2.pdf "fig:"){width="4cm"}
In other words, a stencil $\cV(x)$ at a point $x\in \Omega$ contains the parents of its members whenever possible (Stability), and covers all possible directions (Visibility). By construction, these properties are still satisfied by the *refined stencil* $\cV(x) \cup \{e\}$, for any candidate for refinement $e \in \Candidates(x)$. The collection $\Candidates(x)$ is easily recovered from $\cV(x)$, see Proposition \[prop:Trigo\].
\[def:Cones\] We attach to a family $\cV$ of stencils on $\OmegaD$ the cone $\operatorname{Conv}(\cV) {\subseteq}\cF(\OmegaD)$, characterized by the non-negativity of the following linear forms: for all $x\in X$
1. $S_x^e$, for all $e \in \cV(x)$ such that $\operatorname{supp}(S_x^e) {\subseteq}\OmegaD$.
2. $T_x^e$ for all $e \in \cV(x)$, with $\|e\|>1$, such that $\operatorname{supp}(T_x^e) {\subseteq}\OmegaD$.
3. $P_x^e$ for all $e \in \Candidates(x)$ (by construction $\operatorname{supp}(P_x^e) {\subseteq}X$).
When discussing unions, intersections, and cardinalities, we (abusively) identify a family $\cV$ of stencils on $\OmegaD$ with a subset of $X \times \Z^2$: $$\label{StencilIdentification}
\cV \approx \{(x,e); \, x\in \OmegaD, \, e \in \cV(x)\}.$$ Note that the cone $\operatorname{Conv}(\cV)$ is defined by at most $3\#(\cV)$ linear inequalities. The sets $\cV_{\max}$ are clearly stencils on $X$, which are maximal for inclusion, and by Theorem \[th:AllConstraints\] we have $\operatorname{Conv}(\cV_{\max}) = \operatorname{Conv}(X)$. The cone $\operatorname{Conv}(\cV)$ always contains the quadratic function $q(x) := \frac 1 2 \|x\|^2$, for any family $\cV$ of stencils. Indeed, the inequalities $S_x^e(q) \geq 0$, $x \in X$, $e \in \cV(x)$, and $T_x^e(q) \geq 0$, $\|e\|>1$, hold by convexity of $q$. In addition for all $e \in \Candidates(x)$, of parents $f,g$, one has $$P_x^e(q) = \frac 1 2 \left( \|x+f+g\|^2 - \|x+f\|^2 -\|x+g\|^2+\|x\|^2 \right) = \<f,g\> \geq 0,$$ since the basis $(f,g)$ of $\FixedGrid$ is acute by definition, see Proposition \[prop:Parents\].
\[th:Hierarchy\] The union $\cV \cup \cV'$, and the intersection $\cV \cap \cV'$ of two families $\cV, \cV'$ of stencils are also families of stencils on $X$. In addition $$\begin{aligned}
\label{StencilIntersection}
\operatorname{Conv}(\cV) \cap \operatorname{Conv}( \cV') &= \operatorname{Conv}(\cV \cap \cV'), \\
\label{StencilUnion}
\operatorname{Conv}(\cV) \cup \operatorname{Conv}( \cV')&{\subseteq}\operatorname{Conv}(\cV \cup \cV') .\end{aligned}$$
As a result, if two families of stencils $\cV, \cV'$ satisfy $\cV {\subseteq}\cV'$, then $$\operatorname{Conv}(\cV) {\subseteq}\operatorname{Conv}(\cV') {\subseteq}\operatorname{Conv}(X).$$ The left inclusion follows from , and the right inclusion from applied to $\cV'$ and $\cV_{\max}$. The intersection rule also implies the existence of stencils $\cV_{\min}$ minimal for inclusion, which are illustrated on Figure \[fig:Stencils\] and characterized in Proposition \[prop:MinimalStencilsCharacterization\].
\[rem:OptStrategy\] For any $u \in \operatorname{Conv}(X)$, there exists (by the intersection rule) a unique smallest (for inclusion) family of stencils $\cV$ such that $u \in \operatorname{Conv}(\cV)$. If $u$ is the minimizer of an energy $\cE$ on $\operatorname{Conv}(X)$, then it can be recovered by minimizing $\cE$ on the smaller cone $\operatorname{Conv}(\cV)$, defined by $\cO(\#(\cV))$ linear constraints. Algorithm \[algo:SubCones\] in §\[sec:Numerics\] attempts to find these smallest stencils $\cV$ (or slightly larger ones), starting from $\cV_{\min}$ and performing successive adaptive refinements.
In the rest of this subsection, we fix a grid scale $h>0$ and consider for all $\theta \in \R$, and all $\xi \in \R^2$, the grid $$\label{def:XThetaXi}
X_\theta^\xi := \Omega \cap h R_\theta (\xi+\Z^2).$$ The notions of stencils and related cones trivially extend to this setting, see §\[sec:Avg\] for details. We denote by $|\Omega|$ the domain area, and by $\diam(\Omega) := \max \{\|y-x\|; \, x,y \in \Omega\}$ its diameter. We also introduce rescaled variants, defined for $h>0$ by $$|\Omega|_h := h^{-2} |\Omega|, \qquad \diam_h(\Omega) := h^{-1} \diam(\Omega).$$ For any parameters $\theta, \xi$, one has denoting $N := \#(X_\theta^\xi)$ (with underlying constants depending only on the shape of $\Omega$) $$\label{eq:ApproxN}
|\Omega|_h \approx N, \qquad \diam_h(\Omega) \approx \sqrt{N}.$$
\[prop:WorstCase\] Let $X := X_\theta^\xi$, for some grid position parameters $\theta\in \R$, $\xi\in \R^2$, and let $N := \#(X)$. Let $u \in \operatorname{Conv}(X)$, and let $\cV$ be the minimal stencils on $X$ such that $u \in \operatorname{Conv}(\cV)$. Then $\#(\cV) \leq C N \diam_h(\Omega)$, for some universal constant $C$ (i.e. independent of $\Omega,h,\theta, \xi,u$).
Combining this result with we see that an optimization strategy as described in Remark \[rem:OptStrategy\] should heuristically not require solving optimization problems subject to more than $N \diam_h(\Omega) \approx N^{\frac 3 2}$ linear constraints. This is already a significant improvement over the $\approx N^2$ linear constraints defining $\operatorname{Conv}(X)$. The typical situation is, however, even more favorable: on average over randomized grid orientations $\theta$ and offsets $\xi$, the restriction to $X_\theta^\xi$ of a convex map $U : \Omega \to \R$ (e.g. the global continuous solution of the problem of interest) belongs to a cone $\operatorname{Conv}(\cV_\theta^\xi)$ defined by a quasi-linear number $\cO(N \ln^2 N)$ of linear inequalities.
\[th:Avg\] Let $U \in \operatorname{Conv}(\Omega)$, and let $\cV_\theta^\xi$ be the minimal stencils on $X_\theta^\xi$ such that $U_{|X_\theta^\xi} \in \operatorname{Conv}(\cV_\theta^\xi)$, for all $\theta \in \R$, $\xi \in \R^2$. Assuming $\diam_h(\Omega) \geq 2$, one has for some universal constant $C$ (i.e. independent of $h,\Omega,U$): $$\label{eq:Avg}
\int_{[0,1]^2} \int_0^{\pi/2} \#(\cV_\theta^\xi) \, d \theta \, d \xi \leq C \, |\Omega|_h \, (\ln\diam_h(\Omega))^2.$$
Stencils and triangulations {#sec:DelaunayIntro}
---------------------------
We discuss the connections between stencils and triangulations, which provide in Theorem \[th:Decomp\] a new insight into the hierarchy of cones $\operatorname{Conv}(\cV)$, and yield in Theorem \[th:Delaunay\] a new result in computational geometry as a by-product of our theory. We assume in this subsection and §\[sec:Delaunay\] that the discrete domain convex hull, denoted by $\operatorname{Hull}(X)$, has a non-empty interior. All triangulations considered in this paper are implicitly assumed to cover $\operatorname{Hull}(X)$ and to have $X$ as collection of vertices.
\[def:TinV\] Let $\cT$ be a triangulation, and let $\cV$ be a family of stencils on $X$. We write $\cT \prec \cV$ iff the directed graph associated to $\cT$ is included in the one associated to $\cV$. In other words iff for any edge $[x,x+e]$ of $\cT$, one has $e \in \cV(x)$.
The next result provides a new interpretation of our approach to optimization problems posed on the cone of convex functions, as a relaxation of the naïve (and, without this modification, flawed) method via interior finite elements.
\[th:Decomp\] Let $\cV$ be a family of stencils on $X$. If $\operatorname{Conv}(\cV)$ has a non-empty interior, then $$\label{eq:partitionV}
\operatorname{Conv}(\cV) = \bigcup_{\cT \prec \cV} \operatorname{Conv}(\cT).$$
Delaunay triangulations are a fundamental concept in discrete geometry [@Edelsbrunner:1986wt]. We consider in this paper a slight generalization in which the lifting map needs not be the usual paraboloid, but can be an arbitrary convex function, see Definition \[def:Delaunay\]. Within this paper Delaunay triangulations are simultaneously (i) a theoretical tool for proving results, notably Proposition \[prop:WorstCase\] and Theorem \[th:Decomp\], (ii) an object of study, since in Theorem \[th:Delaunay\] we derive new results on the cost of their construction, and (iii) a numerical post-processing tool providing global convex extensions of elements of $\operatorname{Conv}(X)$, see Figure \[fig:Flipping\] and Remark \[rem:Subgradients\].
\[def:Delaunay\] We say that $\cT$ is a $u$-Delaunay triangulation iff $u \in \operatorname{Conv}(\cT)$; equivalently the piecewise linear interpolation $\interp_\cT u : \operatorname{Hull}(X) \to \R$ is convex. We refer to $u$ as the *lifting map*.
A $q$-Delaunay triangulation, with $q(x) := \frac 1 2 \|x\|^2$, is simply called a Delaunay triangulation.
Two dimensional Delaunay triangulations, and three dimensional convex hulls, have well known links [@Edelsbrunner:1986wt]. Indeed $\cT$ is a $u$-Delaunay triangulation iff the map $x \in \operatorname{Hull}(X) \mapsto (x, \interp_\cT u(x)) \in \R^3$ spans the bottom part of the convex envelope $K$ of the lifted set $\{(x,u(x)); \, x \in X\}$. As a result of this interpretation, we find that (i) any element $u \in \operatorname{Conv}(X)$ admits a $u$-Delaunay triangulation, and (ii) generic elements of $\operatorname{Conv}(X)$ admit exactly one $u$-Delaunay triangulation (whenever all the faces of $K$ are triangular). In particular, the union is disjoint up to a set of Hausdorff dimension $N-1$.
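This lifting construction is easy to reproduce with off-the-shelf convex hull software; the sketch below (helper name ours, relying on scipy and joggling the input so that all facets of the lifted hull are triangles) returns the lower facets of the lifted point set, i.e. a $u$-Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def u_delaunay(points, u):
    """Triangles of a u-Delaunay triangulation of the given planar points,
    obtained as the lower facets of the convex hull of the lifted points (x, u(x))."""
    lifted = np.column_stack([points, u])
    hull = ConvexHull(lifted, qhull_options="QJ")   # joggle input: all facets simplicial
    lower = hull.equations[:, 2] < 0                # outward normal pointing downwards
    return hull.simplices[lower]

# With the lifting map q(x) = |x|^2 / 2 one recovers a standard Delaunay triangulation.
pts = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
q = 0.5 * (pts ** 2).sum(axis=1)
print(len(u_delaunay(pts, q)))   # 18 triangles: 2N - B - 2 with N = 16 points, B = 12 on the boundary
```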
![Delaunay triangulations associated to numerical solutions of (variants of) the monopolist problem, see \[sec:Monopolist\]. Corresponding convex functions shown on Figure \[fig:Monopolist\] (left), Figure \[fig:Bundles\] (left), and Figure \[fig:Assets\] (left) respectively. Right: illustration of edge flipping. The diagonal $[x,t]$ shared by two triangles $T:=[x,y,t]$ and $T':=[x,z,t]$ can be flipped into $[y,z]$ if $T \cup T'$ is convex. Right, below: the piecewise linear interpolation of a discrete map $u$ is made convex by this flip.[]{data-label="fig:Flipping"}](\pathPic/Convex/Meshes/PrincipalAgent_Mesh20.pdf "fig:"){width="3.9cm"} ![](\pathPic/Convex/Meshes/PALinear_Mesh20.pdf "fig:"){width="3.9cm"} ![](\pathPic/Convex/Meshes/Assets_Value_Mesh20.pdf "fig:"){width="3.9cm"}
Any two triangulations $\cT, \cT'$ of $X$ can be transformed into one another through a sequence of elementary modifications called *edge flips*, see Figure \[fig:Flipping\]. The minimal number of such operations is called the edge flipping distance between $\cT$ and $\cT'$. Edge flipping is a simple, robust and flexible procedure, which is used in numerous applications ranging from fluid dynamics simulation [@Dobrzynski:2008ix] to GPU accelerated image vectorization [@Qi:2012fd]. Sustained research has been devoted to estimating edge flipping distances within families of triangulations of interest [@Hurtado:1999jj], although flipping distance bounds are usually quadratic in the number of vertices.
\[th:Delaunay\] Let $\cV$ be a family of stencils on $X$, and let $u \in \operatorname{Conv}(\cV)$. Then any standard Delaunay triangulation of $X$ can be transformed into a $u$-Delaunay triangulation via a sequence of $C\#(\cV)$ edge flips. The constant $C$ is universal (in particular it is independent of $u, \cV, \Omega$).
Combining this result with Theorem \[th:Avg\], we obtain that only $\cO(N \ln^2 N)$ edge flips are required to construct a $U$-Delaunay triangulation of $X$, with $N := \#(X)$, for any convex function $U \in \operatorname{Conv}(\Omega)$, in an average sense over grid orientations. Note that (more complex and specialized) convex hull algorithms [@PrincetonUniversityDeptofComputerScience:1991uv] could also be used to produce a $U$-Delaunay triangulation, at the slightly lower cost $\cO(N \ln N)$. Theorem \[th:Delaunay\] should be understood as a first step in understanding the typical behavior of edge flipping.
Outline
-------
We prove in §\[sec:ConstraintConvexity\] the characterization of discrete convexity by linear constraints of Theorem \[th:AllConstraints\]. The hierarchy properties of Theorem \[th:Hierarchy\] are established in §\[sec:Hierarchy\]. Triangulation related arguments are used in §\[sec:Delaunay\] to show Proposition \[prop:WorstCase\] and Theorems \[th:Decomp\] and \[th:Delaunay\]. The average cardinality estimate of Theorem \[th:Avg\] is proved in §\[sec:Avg\]. Numerical experiments, and algorithmic details, are presented in §\[sec:Numerics\]. Finally, the weaker notion of directional convexity is discussed in Appendix \[sec:Directional\].
Characterization of convexity via linear constraints {#sec:ConstraintConvexity}
====================================================
This section is devoted to the proof of Theorem \[th:AllConstraints\], which characterizes discrete convex functions $u \in \operatorname{Conv}(X)$ via linear inequalities. The key ingredient is its generalization, in [@Carlier:2001tq], to arbitrary unstructured finite point sets $X'$ (in contrast with the grid structure of $X$).
\[th:ConvexityUnstructured\] Let $\OmegaC$ be a convex domain, and let $\OmegaD' {\subseteq}\OmegaC$ be an arbitrary finite set. The cone $\operatorname{Conv}(\OmegaD')$, of all restrictions to $\OmegaD'$ of convex functions on $\OmegaC$, is characterized by the following inequalities, none of which can be removed:
- For all $x,y,z \in \OmegaD'$, all $\alpha \in ]0,1[$, $\beta := 1-\alpha$, such that $z = \alpha x+ \beta y$ and $[x,y] \cap \OmegaD' = \{x,y,z\}$: $$\label{SegmentUnstructured}
\alpha u(x) + \beta u(y) \geq u(z).$$
- For all $p,q,r,z \in \OmegaD'$, all $\alpha, \beta, \gamma\in \R_+^*$ with $\alpha+\beta+\gamma=1$, such that $z = \alpha p+ \beta q+\gamma r$, the points $p,q,r$ are not aligned and $[p,q,r]\cap \OmegaD' = \{p,q,r,z\}$: $$\label{TriangleUnstructured}
\alpha u(p) + \beta u(q) + \gamma u(r) \geq u(z).$$
In the following, we establish Theorem \[th:AllConstraints\] by specializing Theorem \[th:ConvexityUnstructured\] to the grid $\OmegaD := \OmegaC \cap \Z^2$, and identifying and with the constraints $S_z^e(u) \geq 0$ and $T_z^e(u) \geq 0$, in Propositions \[prop:Seg\] and \[prop:Tri\] respectively. Note that the cases and are unified in [@Carlier:2001tq], although we separated them in the above formulation for clarity. For $x_1, \cdots, x_n \in \R^2$, we denote $$[x_1, \cdots, x_n] := \operatorname{Hull}(\{x_1, \cdots, x_n\}).$$
\[prop:Seg\] Let $x,y,z \in \OmegaD$, $\alpha \in ]0,1[$, $\beta:=1-\alpha$, be such that $z = \alpha x+ \beta y$ and $[x,y] \cap \OmegaD = \{x,y,z\}$. Then $\alpha = \beta = 1/2$ and there exists an irreducible $e\in \FixedGrid$ such that $x=z+e$, $y=z-e$. (Thus $\alpha u(x)+ \beta u(y)-u(z) = S_z^e(u)/2$.)
We define $e := x-z \in \FixedGrid$, and assume for contradiction that $e$ is not irreducible: $e=k e'$, for some integer $k \geq 2$ and some $e' \in \FixedGrid$. Then $z+e' \in [z,x] \cap \Z^2 {\subseteq}[x,y] \cap X$, which is a contradiction. Thus $e$ is irreducible, and likewise $f := y-z \in \FixedGrid$ is irreducible. Observing that $f$ is negatively proportional to $e$, namely $f = -(\alpha/ \beta) e$, we obtain that $f=-e$, which concludes the proof.
The next lemma, used in Proposition \[prop:Tri\] to identify the constraints , provides an alternative characterization of the parents of an irreducible vector, see Proposition \[prop:Parents\].
\[lem:SumParents\] Let $e,f,g\in \Z^2$ be such that $e=f+g$, $\|e\| \geq \max \{\|f\|,\|g\|\}$, and $\det(f,g)=1$. Then $f,g$ are the parents of $e$.
Since $\det(f,e) = \det(f,f+g)=1$, the vector $e$ is irreducible. In addition, $\|e\|>1$, since otherwise $e,f,g$ would be three pairwise linearly independent unit vectors in $\Z^2$. Let $f_0,g_0$ be the parents of $e$. Observing that $\det(f,e) = \det(f_0,e)=1$, we obtain that $f = f_0+k e$ for some scalar $k$, which must be an integer since $e$ is irreducible. If $k > 0$ then $\|f\|^2 = k^2 \|e\|^2 + 2 k \<f_0,e\> +\|f_0\|^2 > \|e\|^2$ which is a contradiction (recall that $\<f_0,e\> = \<f_0,f_0+g_0\> \geq 0$). If $k<0$, then observing that $g = e-f = g_0-k e$ we obtain likewise a contradiction. Hence $k=0$ and $f,g$ are the parents of $e$, which concludes the proof.
\[corol:SumParent\] Let $(f,g)$ be a basis of $\Z^2$ which is *not* acute. If $\|f\| \geq \|g\|$, then $f+g$ is a parent of $f$, and otherwise it is a parent of $g$.
Up to exchanging the roles of $f,g$, we may assume that $\det(f,g)=1$. Denoting $m := \max\{\|f\|,\|g\|,\|f+g\|\}$, we have by Lemma \[lem:SumParents\] three possibilities: (i) $\|f+g\|=m$, and $f,g$ are the parents of $f+g$, (ii) $\|f\|=m$, and $-g,f+g$ are the parents of $f$, (iii) $\|g\|=m$, and $f+g,-f$ are the parents of $g$. Excluding (i), since $(f,g)$ is not an acute basis, we conclude the proof.
\[prop:Tri\] Let $p,q,r,z \in \OmegaD$, $\alpha, \beta, \gamma \in \R_+^*$, with $\alpha+\beta+\gamma=1$, be such that $z = \alpha p+ \beta q + \gamma r$, the points $p,q,r$ are not aligned, and $[p,q,r] \cap \OmegaD = \{p,q,r,z\}$. Then $\alpha = \beta = \gamma = 1/3$ and, up to permuting $p,q,r$, there exists an irreducible $e\in \FixedGrid$ with $\|e\| > 1$, of parents $f,g$, such that $p =z+e$, $q=z-f$, and $r=z-g$. (Thus $\alpha u(p) + \beta u(q) + \gamma u(r) - u(z) = T_z^e(u)/3$.)
Let $e:=p-z$, $f:=z-q$, $g:=z-r$. Up to permuting $p,q,r$, we may assume that $\|e\| \geq \max \{\|f\|, \|g\|\}$ and $\det(f,g) \geq 0$. Note that $f$ and $g$ are not collinear since $z$ lies in the interior of $[p,q,r]$. We claim that $(f,g)$ is a basis of $\FixedGrid$. Indeed, otherwise, the triangle $[0,f,g]$ would contain an element $e' \in \FixedGrid$ distinct from its vertices. Since $\OmegaC$ is convex, this implies $[p,q,r] \cap \OmegaD {\supseteq}\{p,q,r,z,z+e'\}$, which is a contradiction.
Thus $(f,g)$, and likewise $(e,f)$ and $(e,g)$, are bases of $\FixedGrid$, and therefore $$\label{eq:Det}
|\det(e,f)| = |\det(f,g)| = |\det(g,e)|=1.$$ Injecting in the above equation the identity $e = (\beta/\alpha) f + (\gamma/ \alpha) g$, we obtain that $|\beta/\alpha |=1$ and $|\gamma/\alpha| = 1$. Thus $\alpha = \beta = \gamma = 1/3$ since these coefficients are positive and sum to one. Finally, we have $e = f+g$, $\|e\| \geq \max \{\|f\|, \|g\|\}$, and $f,g$ is a direct basis. Using Lemma \[lem:SumParents\] we conclude as announced that $f,g$ are the parents of $e$.
Hierarchy of the cones $\operatorname{Conv}(\cV)$ {#sec:Hierarchy}
=================================================
This section is devoted to the proof of Theorem \[th:Hierarchy\], which is split into two parts: the proof that an intersection (or union) of stencils is still a stencil, and the hierarchy properties and .
An intersection of stencils is still a stencil
----------------------------------------------
Let $\cV, \cV'$ be families of stencils on $X$. Property (Stability) of stencils immediately holds for the intersection $\cV \cap \cV'$ and the union $\cV \cup \cV'$, while property (Visibility) is also clear for the union $\cV \cup \cV'$. In order to establish property (Visibility) for the intersection $\cV \cap \cV'$, we identify in Proposition \[prop:MinimalStencils\] a family $\cV_{\min}$ of stencils which is included in any other. From $\cV_{\min} {\subseteq}\cV$ and $\cV_{\min} {\subseteq}\cV'$ we obtain $\cV_{\min} {\subseteq}\cV \cap \cV'$, so that (Visibility) for $\cV_{\min}$ implies the same property for $\cV \cap \cV'$.
\[def:CyclicOrder\] The cyclic strict trigonometric order on $\R^2 \sm \{0\}$ is denoted by $\prec$.
In other words $e_1 \prec e_2 \prec e_3$ iff there exist $\theta_1, \theta_2, \theta_3 >0$ such that $\theta_1+\theta_2+\theta_3 = 2 \pi$ and $e_{i+1}/\|e_{i+1}\| = R_{\theta_i} e_i / \|e_i\|$ for all $1 \leq i \leq 3$, with $e_4 := e_1$. The following lemma, and Corollary \[corol:ChildrenInCone\], discuss the combination of the cyclic ordering with the notions of parents (and children) of an irreducible vector.
\[def:Ancestors\] For any irreducible $e \in \Z^2$, let $\operatorname{Anc}(e)$ be the smallest set containing $e$ and the parents of any element $e' \in \operatorname{Anc}(e)$ such that $\|e'\|>1$.
\[lem:ConsecutiveParents\] Let $e \in \Z^2\sm \{0\}$, let $(f,g)$ be a direct basis of $\Z^2$ such that $f \prec e \prec g$, and let us consider the triangle $T := [e,f,g]$. Then (i) $f+g \in T$. If in addition $e$ is irreducible, $\|e\|>1$ and (ii.a) $\<f,g\> \geq 0$ or (ii.b) $e \notin \operatorname{Anc}(f) \cup \operatorname{Anc}(g)$, then the parents of $e$ also belong to $T$.
Point (i). By construction, we have $e=\alpha f+\beta g$ for some positive integers $\alpha, \beta$. One easily checks that $e + (\beta-1) f + (\alpha-1) g = (\alpha+\beta-1) (f+g)$. This expression of $f+g$ as a weighted barycenter of the points $e,f,g$ establishes (i).
Points (ii.a) and (ii.b). We fix $e$ and show these points by *decreasing* induction on the integer $k = \<f,g\>$. Initialization: Assume that $k = \<f,g\> \geq \frac 1 2 \|e\|^2$. Then $\|e\|^2 = \|\alpha f + \beta g\|^2 \geq 2 \alpha \beta \<f,g\> \geq 2 \<f,g\>$, which is a contradiction. No basis $(f,g)$ satisfies simultaneously $f\prec e \prec g$ and $\<f,g\> = k$. The statement is vacuous, hence true.
Case $k=\<f,g\> \geq 0$. If $e=f+g$, then $f,g$ are the parents of $e$, and the result follows. Otherwise, we have either $f \prec e \prec (f+g)$ or $(f+g)\prec e \prec g$. Since $\<f,f+g\> > \<f,g\>$ and $\<f+g,g\> > \<f,g\>$, we may apply our induction hypothesis to the bases $(f,f+g)$ and $(f+g,g)$ which satisfy (ii.a). Thus the parents of $e$ belong to $T_1 := [e,f,f+g]$ or $T_2 := [e,g,f+g]$. Finally, Point (i) implies that $f+g \in T$, thus $T_1 \cup T_2 {\subseteq}T$ which concludes the proof of this case.
Case $k = \<f,g\> < 0$. Assumption (ii.b) must hold, since (ii.a) contradicts this case. By corollary \[corol:SumParent\], $f+g$ is a parent of $f$ or of $g$. Hence $e \neq f+g$ and $\operatorname{Anc}(f+g) {\subseteq}\operatorname{Anc}(f) \cup \operatorname{Anc}(g)$. We apply our induction hypothesis to the bases $(f,f+g)$ and $(f+g,g)$ which satisfy (ii.b), and conclude the proof similarly to the case $k \geq 0$.
\[lem:AllChildren\] Consider an irreducible $e \in \Z^2$, $\|e\|>1$, and let $f,g$ be its parents. The children of $e$ (i.e. the vectors $e' \in \Z^2$ of which $e$ is a parent) have the form $f+k e$ and $k e + g$, $k \geq 1$.
Let $e'$ be a child of $e$, and let $f'$, $g'$ be its parents. Without loss of generality, we assume that $g'=e$, so that $e'=f'+e$. Then $\det(f',e) = 1 = \det(f,e)$, thus $f' = f+(k-1) e$ for some scalar $k$, which is an integer since $e$ is irreducible, and therefore $e' = f+k e$. Since $0 \leq \<f',e\> = \<f,e\> + (k-1) \|e\|^2$, and $\<f,e\> < \|e\|^2$ (recall that $\|f\| < \|e\|$), one has $k \geq 1$. The result follows.
\[corol:ChildrenInCone\] Let $e \in \Z^2\sm \{0\}$, let $(f,g)$ be a direct acute basis of $\Z^2$ such that $f \prec e \prec g$. Then any child $e'$ of $e$ (i.e. $e$ is a parent of $e'$) satisfies $f \prec e' \prec g$.
Let $K := \operatorname{Cone}(\{f,g\})$, and let $\interior K$ be its interior. Let also $f',g'$ be the parents of $e$. By assumption $e \in \interior K$, and by Lemma \[lem:ConsecutiveParents\] (ii.a) one has $f',g' \in [e,f,g] {\subseteq}K$. Thus $f'+ k e, k e+g' \in \interior K$ for any integer $k \geq 1$, which by Lemma \[lem:AllChildren\] concludes the proof.
\[lem:ConsecutiveBasis\] Let $\cV$ be a family of stencils on $\OmegaD$, let $x \in \OmegaD$, and let $f,g$ be two trigonometrically consecutive elements of $\cV(x)$. Then either (i) $(f,g)$ form a direct acute basis, or (ii) no element $e \in \cV_{\max}(x)$ satisfies $f \prec e \prec g$.
We distinguish three cases, depending on the value of $\det(f,g)$. In case $\det(f,g) \leq 0$, property Visibility of stencils implies (ii).
Case $\det(f,g)=1$. Assuming that (i) does not hold, Corollary \[corol:SumParent\] implies that $f+g$ is a parent of $f$ or of $g$. Assuming that (ii) does not hold, we have $f+g \in [e,f,g]$ for some $e\in \cV_{\max}(x)$ by Lemma \[lem:ConsecutiveParents\] (i), thus $f+g \in \cV_{\max}(x)$ by convexity of $\Omega$, thus $f+g \in \cV(x)$ by (Stability), which contradicts our assumption that $f,g$ are trigonometrically consecutive in $\cV(x)$.
Case $k:=\det(f,g) > 1$. We assume without loss of generality that $\|f\| \geq \|g\|$, hence $\|f\|^2 \geq \det(f,g) > 1$. Let $f',g'$ be the parents of $f$, so that $g = \alpha f' + \beta g'$ for some $\alpha, \beta \in \Z$. We obtain $k = \det(f,\alpha f' + \beta g') = \beta-\alpha$. If $\alpha=0$ or $\beta=0$, then $g$ is not irreducible, which is a contradiction. If $\alpha$ and $\beta$ have the same sign, then $\|g\|>\|f\|$, which again is a contradiction. Hence $\alpha, \beta$ have opposite signs, and since $\beta-\alpha=k>1$ we obtain $\beta>0>\alpha$. Finally we have $g=\alpha (f-g')+\beta g'$, thus $g - \alpha f = (\beta - \alpha) g'$, and therefore $g' \in [0,f,g]$. By convexity $g' \in \cV_{\max}(x)$, by (Stability) $g' \in \cV(x)$, which contradicts our assumption that $f,g$ are trigonometrically consecutive in $\cV(x)$. This concludes the proof.
\[prop:MinimalStencils\] For all $x \in X$, define $\cV_{\min}(x)$ as the collection of all $e \in \cV_{\max} (x)$ which have none or just one parent in $\cV_{\max}(x)$ (this includes all unit vectors in $\cV_{\max}(x)$). Then $\cV_{\min} := (\cV_{\min}(x))_{x \in X}$ is a family of stencils, which is contained in any other family of stencils.
Property (Stability) of stencils. Consider $x\in X$ and $e \in \cV_{\min}(x)$. Assume for contradiction that $e$ has one parent $e' \in \cV_{\max}(x)$ which is not an element of $\cV_{\min}(x)$. Hence $e'$ has two parents $f,g \in \cV_{\max}(x)$. By Corollary \[corol:ChildrenInCone\] we have $f \prec e \prec g$, thus by Lemma \[lem:ConsecutiveParents\] (ii.a) the *two* parents of $e$ belong to the triangle $[e,f,g]$, hence also to $\cV_{\max}(x)$ by convexity of $\Omega$. This contradicts our assumption that $e \in \cV_{\min}(x)$.
Property (Visibility). We consider $e\in \cV_{\max}(x)$, and prove by induction on the norm $\|e\|$ that $e\in K := \operatorname{Cone}(\cV_{\min}(x))$. If $\|e\|=1$ or if none or just one parent of $e$ belongs to $\cV_{\max}(x)$, then $e \in \cV_{\min}(x) {\subseteq}K$. If both parents $f,g$ of $e$ belong to $\cV_{\max}(x)$, then by induction $f,g \in K$, and by additivity $f+g \in K$, which concludes the proof.
Minimality for inclusion of $\cV_{\min}$. Let $\cV$ be a family of stencils, let $x \in X$, $e \in \cV_{\min}(x)$, and let us assume for contradiction that $e \notin \cV(x)$. By property (Visibility) of stencils, the vector $e$ belongs to the cone generated by two elements $f,g \in \cV(x)$, which can be chosen trigonometrically consecutive in $\cV(x)$. By lemma \[lem:ConsecutiveBasis\], $(f,g)$ is a direct acute basis of $\Z^2$. By Lemma \[lem:ConsecutiveParents\] (ii.a) the parents of $e$ belong to the triangle $[e,f,g]$, hence to $\cV_{\max}(x)$ by convexity of $\Omega$, which contradicts the definition of $\cV_{\min}(x)$.
\[prop:Trigo\] Let $\cV$ be a family of stencils on $\OmegaD$, and let $x \in \OmegaD$. Then the parents $f,g$, of any candidate for refinement $e \in \Candidates(x)$, are consecutive elements of $\cV(x)$ in trigonometric order.
Since $e \notin \cV(x)$, there exist by (Visibility) two elements $f,g \in \cV(x)$ such that $f\prec e \prec g$, and which we can choose trigonometrically consecutive in $\cV(x)$. By Lemma \[lem:ConsecutiveBasis\], $(f,g)$ is a direct acute basis. By Lemma \[lem:ConsecutiveParents\] (ii.a) the parents $f',g'$ of $e$ satisfy $f \preceq f' \prec g' \preceq g$. Recalling that $f',g' \in \cV(x)$, by definition of $\hat \cV(x)$, we obtain $f=f'$ and $g=g'$, which concludes the proof.
Combining and intersecting constraints {#sec:CombiningIntersecting}
--------------------------------------
The following characterization of the cones $\operatorname{Conv}(\cV)$ implies the announced hierarchy properties.
\[prop:ConvLComplement\] For any family $\cV$ of stencils on $X$ one has $$\operatorname{Conv}(\cV) = \{u \in \operatorname{Conv}(X); \, P_x^e(u) \geq 0 \text{ for all } x \in X, \, e \in \cV_{\max}(x) \sm \cV(x)\}. $$
Before turning to the proof of this proposition, we use it to conclude the proof of Theorem \[th:Hierarchy\]. The sub-cone $\operatorname{Conv}(\cV)$ of $\operatorname{Conv}(\OmegaD)$ is characterized by the non-negativity of a family of linear forms indexed by $\cV_{\max} \sm \cV$, with the convention . Observing that $$\cV_{\max} \sm (\cV \cup \cV') = (\cV_{\max} \sm \cV) \cap (\cV_{\max} \sm \cV'), \qquad \cV_{\max} \sm (\cV \cap \cV') = (\cV_{\max} \sm \cV) \cup (\cV_{\max} \sm \cV'),$$ we find that $\operatorname{Conv}(\cV \cup \cV')$ is characterized, as a subset of $\operatorname{Conv}(\OmegaD)$, by the intersection of the families of constraints defining $\operatorname{Conv}(\cV)$ and $\operatorname{Conv}( \cV')$, while $\operatorname{Conv}(\cV\cap \cV')$ is defined by their union. Hence we conclude as announced $$\operatorname{Conv}(\cV \cup \cV') \supseteq \operatorname{Conv}(\cV) \cup \operatorname{Conv}( \cV'), \qquad \operatorname{Conv}(\cV \cap \cV') = \operatorname{Conv}(\cV) \cap \operatorname{Conv}( \cV').$$
We proceed by decreasing induction on the cardinality $\#(\cV)$.
Initialization. If $\#(\cV) = \#(\cV_{\max})$, then $\cV = \cV_{\max}$, and therefore $\operatorname{Conv}(\cV) = \operatorname{Conv}(\cV_{\max}) = \operatorname{Conv}(\OmegaD)$ and $\cV_{\max} \sm \cV = \emptyset$. The result follows.
Induction. Assume that $\#(\cV) < \#(\cV_{\max})$, thus $\cV \subsetneq \cV_{\max}$. Let $x\in \OmegaD$ and $e \in \cV_{\max}(x) \sm \cV(x)$ be such that $\|e\|$ is minimal. Since $e \notin \cV_{\min}(x) {\subseteq}\cV(x)$, the two parents $f,g$ of $e$ belong to $\cV_{\max}(x)$. Since $\|e\| > \max \{\|f\|,\|g\|\}$, and by minimality of the norm of $e$, we have $f,g \in \cV(x)$. Hence $e$ is a candidate for refinement: $e\in \hat \cV(x)$.
Consider the extended stencils $\cV'$ defined by $\cV'(x) := \cV(x) \cup \{e\}$, and $\cV'(y) := \cV(y)$ for all $y \in \OmegaD\sm \{x\}$. Let $\cL$ and $\cL'$ be the collections of linear forms enumerated in Definition \[def:Cones\], whose non-negativity respectively defines the cones $\operatorname{Conv}(\cV)$ and $\operatorname{Conv}(\cV')$ as subsets of $\cF(X)$. Let also $\cL_0 := \cL \cap \cL'$. Since $e\in \hat \cV(x)$ we have $\cL = \cL_0 \cup \{P_x^e\}$. Using Proposition \[prop:Trigo\] we obtain $\hat \cV'(x) \sm \hat \cV(x) {\subseteq}\{e+f,e+g\}$, hence $\cL'$ is the union of $\cL_0$ and of those of the following constraints which are supported on $\OmegaD$: $$\label{AdditionalForms}
S_x^e, \ T_x^e, \ P_x^{e+f}, \ P_x^{e+g}.$$ We next show that $\operatorname{Cone}(\cL) = \operatorname{Cone}(\cL' \cup \{P_x^e\})$, by expressing the linear forms in terms of the elements of $\cL$.
- If $S_x^e$ is supported on $X$, then $-e \in \cV_{\max}(x)$. Assuming that $-e \in \cV(x)$, we obtain $S_x^e = S_x^{-e} \in \cL$. On the other hand, assuming that $-e \notin \cV(x)$, we obtain $-f,-g \in \cV_{\max}(x)$ by Proposition \[prop:MinimalStencils\], since otherwise $-e \in \cV_{\min}(x) {\subseteq}\cV(x)$. Therefore $S_x^f, S_x^g$ are supported on $X$, hence they belong to $\cL$. By minimality of the norm of $e$, we have $-f,-g \in \cV(x)$, hence $-e \in \hat \cV(x)$ and therefore $P_x^{-e} \in \cL$. As a result $S_x^e = P_x^e+P_x^{-e}+S_x^f+S_x^g \in \operatorname{Cone}(\cL)$.
- If $T_x^e$ is supported on $X$, then $-f, -g \in \cV_{\max}(x)$. Therefore $S_x^f, S_x^g$ are supported on $X$, hence they belong to $\cL$. As a result $T_x^e = P_x^e+S_x^f+S_x^g \in \operatorname{Cone}(\cL)$.
- If $P_x^{e+f}$ is supported on $X$, then $x+e+f \in \OmegaD$, thus $f\in \cV_{\max}(x+e)$ and therefore $f\in \cV(x+e)$ by minimality of $\|e\|$. The linear form $S_{x+e}^f$ belongs to $\cL$, since it has support $\{x+g,x+e,x+e+f\} {\subseteq}\OmegaD$. Observing that the parents of $e+f$ are $e$ and $f$, we find that $P_x^{e+f} = P_x^e + S_{x+e}^f \in \operatorname{Cone}(\cL)$. The case of $P_x^{e+g}$ is similar.
Denoting by $K^*$ the dual cone of a cone $K$, we obtain $$\operatorname{Conv}(\cV) =
\operatorname{Cone}(\cL)^* =
\operatorname{Cone}(\cL' \cup \{P_x^e\})^* =
\{u \in \operatorname{Conv}(\cV'); \, P_x^e(u) \geq 0\}.$$ Applying the induction hypothesis to $\cV'$, we conclude the proof.
Stencils and triangulations {#sec:Delaunay}
===========================
Using the interplay between stencils $\cV$ and triangulations $\cT$, we prove Proposition \[prop:WorstCase\] and Theorems \[th:Decomp\], \[th:Delaunay\]. By convention, all stencils $\cV$ are on $X$, and all triangulations $\cT$ have $X$ as vertices and cover $\operatorname{Hull}(X)$.
Minimal stencils containing a triangulation
-------------------------------------------
We characterize in Proposition \[prop:VofT\] the minimal stencils $\cV$ containing a triangulation, in the sense of Definition \[def:TinV\], and we estimate their cardinality, proving Proposition \[prop:WorstCase\]. Along the way, we establish in Proposition \[prop:UinConvV\] “half” (one inclusion) of the decomposition of $\operatorname{Conv}(\cV)$ announced in Theorem \[th:Decomp\].
\[lem:EdgeIneq\] Let $\cT$ be a triangulation, and let $u \in \operatorname{Conv}(\cT)$. Let $p,q,r \in X$. Assume that $[p,q]$ is an edge of $\cT$, and that $s := p+q-r \in X$. Then $
u(r)+u(s) \geq u(p)+u(q).
$
The interpolating function $U := \interp_\cT u$ is convex on $\operatorname{Hull}(X)$, and linear on the edge $[p, q]$. Introducing the edge midpoint $m := (p+q)/2 = (r+s)/2$ we obtain $u(p)+u(q) = 2 U(m) \leq u(r)+u(s)$, as announced.
The inequalities $u(r)+u(s) \geq u(p)+u(q)$ identified in the previous lemma are closely tied with the linear constraints $P_x^e$, since $[p,q,r,s]$ is a parallelogram, as shown in the next lemma. The set $\operatorname{Anc}(e)$ of ancestors of an irreducible vector $e\in \Z^2$ was introduced in Definition \[def:Ancestors\].
\[lem:PHierarchy\] Let $e \in \Z^2$ be irreducible, with $\|e\|> 1$, and let $(f,g)$ be a direct basis such that $f \prec e \prec g$ and $e \notin \operatorname{Anc}(f) \cup \operatorname{Anc}(g)$. Let $x \in X$ be such that $f,g,f+g,e \in \cV_{\max}(x)$. If $u \in \operatorname{Conv}(X)$ satisfies $u(x)+u(x+f+g) \geq u(x+f)+u(x+g)$, then $P_x^e(u) \geq 0$.
Without loss of generality, up to adding a global affine map to $u$, we may assume that $u(x+e) = u(x+f)=u(x+g)=0$. Denoting by $f',g'$ the parents of $e$, we have by Lemma \[lem:ConsecutiveParents\] (ii.b) $f',g', f+g \in [e,f,g]$, hence by convexity $u(x+f'), u(x+g'), u(x+f+g) \leq 0$. Our hypothesis implies $u(x) \geq -u(x+f+g) \geq 0$, therefore $P_x^e(u) = u(x)-u(x+f')-u(x+g')+u(x+e) \geq 0$.
\[prop:UinConvV\] If a triangulation $\cT$, and stencils $\cV$, satisfy $\cT \prec \cV$, then $\operatorname{Conv}(\cT) {\subseteq}\operatorname{Conv}(\cV)$.
The inequalities $S_x^e(u) \geq 0$, and $T_x^e(u) \geq 0$, for $x \in X$, $e \in \cV(x)$, hold by convexity of $u$. We thus consider an arbitrary refinement candidate $e \in \hat \cV(x)$, $x\in X$, and establish below that $P_x^e(u) \geq 0$.
Since the triangulation $\cT$ covers $\operatorname{Hull}(X)$, there exists a triangle $T \in \cT$, containing $x$, and such that $e \in \operatorname{Cone}(T-x)$. Since $\cT \prec \cV$ and $e \notin \cV(x)$, the segment $[x,x+e]$ is not an edge of $\cT$. Denoting the vertices of $T$ by $[x,x+f,x+g]$ we have $f \prec e \prec g$. Since $e \in \hat \cV(x)$, one has $e \notin (\operatorname{Anc}(f) \cup \operatorname{Anc}(g)) {\subseteq}\cV(x)$. Applying Lemma \[lem:EdgeIneq\] to the edge $[x+f,x+g]$ we obtain $u(x)+u(x+f+g) \geq u(x+f)+u(x+g)$. Finally, Lemma \[lem:PHierarchy\] implies $P_x^e(u) \geq 0$ as announced.
\[prop:VofT\] Let $\cT$ be a triangulation, and for all $x \in X$ let $V_x$ be the collection of all $e\in \Z^2$ such that $[x,x+e]$ is an edge of $\cT$. The minimal family of stencils $\cV$ satisfying $\cT \prec \cV$ is given by $$\cV(x) := \cV_{\max}(x) \cap \bigcup_{e \in V_x} \operatorname{Anc}(e).$$
The family of sets $\cV$ satisfies the (Stability) property by construction. Since the triangulation $\cT$ covers $\operatorname{Hull}(X)$, the sets $(V_x)_{x\in X}$ satisfy the (Visibility) property, hence also the larger sets $\cV(x) {\supseteq}V_x$.
Minimality. Consider *arbitrary* stencils $\cV$ such that $\cT \prec \cV$. Let also $x \in X$, $e \in V_x$, $e' \in \cV_{\max}(x) \cap \operatorname{Anc}(e)$, and let us assume for contradiction that $e' \notin \cV(x)$. By property (Visibility) there exist $f,g$, trigonometrically consecutive elements of $\cV(x)$, such that $f\prec e' \prec g$ (where $\prec$ refers to the cyclic trigonometric order, see Definition \[def:CyclicOrder\]). By Lemma \[lem:ConsecutiveBasis\], $(f,g)$ is a direct acute basis of $\Z^2$. By Corollary \[corol:ChildrenInCone\], and an immediate induction argument, we have $f \prec e \prec g$, hence $e \notin \cV(x)$, which contradicts our assumption that $\cT \prec \cV$.
Given a triangulation $\cT$, our next objective is to estimate the cardinality of the minimal stencils $\cV$ such that $\cT \prec \cV$. We begin by counting the ancestors of an irreducible vector.
\[lem:AncestorsSimple\]
1. Let $(f,g)$ be an acute basis of $\Z^2$. Then either (i) $f$ is a parent of $g$, (ii) $g$ is a parent of $f$, or (iii) $\|f\|=\|g\|=1$.
2. For any irreducible $e \in \Z^2$ one has $\#(\operatorname{Anc}(e)) \leq \|e\|_\infty+2$, where $\|(\alpha,\beta)\|_\infty := \max\{|\alpha|, |\beta|\}$.
Point 1. If $\<f,g\> > 0$, then applying Corollary \[corol:SumParent\] to the non-acute basis $(f,-g)$ we find that either (i) $(f,g-f)$ are the parents of $g$, or (ii) $(f-g,g)$ are the parents of $f$. On the other hand if $\<f,g\>=0$, then $1 = |\det(f,g)| = \|f\| \|g\|$, hence (iii) $\|f\|=\|g\|=1$.
Before proving Point 2, we introduce the cone $K$ generated by $(1,0),(1,1)$, so that $\|(\alpha, \beta)\|_\infty = \alpha$ for any $(\alpha,\beta) \in K$. If $e\in \Z^2$ irreducible belongs to the interior of $K$, then its parents $f,g \in K$, and we have $\|e\|_\infty = \|f\|_\infty+\|g\|_\infty$.
Point 2 is proved by induction on $\|e\|_\infty$. It is immediate if $\|e\|_\infty = 1$, hence we may assume that $\|e\|_\infty \geq 2$, and denote its parents by $f,g$. We have $\|e\|_\infty = \|f\|_\infty+ \|g\|_\infty$, since without loss of generality we may assume that $e\in K$. Applying Point 1 we find that either (i) $\operatorname{Anc}(e) = \operatorname{Anc}(g) \cup \{e\}$, (ii) $\operatorname{Anc}(e) = \operatorname{Anc}(f) \cup \{e\}$, or (iii) $\|f\|=\|g\|=1$, so that $\|e\|_\infty=1$, a case which we have excluded. Thus $\#\operatorname{Anc}(e) \leq \max\{\# \operatorname{Anc}(f), \#\operatorname{Anc}(g)\} +1$, which implies the announced result by induction.
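Both the recursion defining $\operatorname{Anc}(e)$ and the bound of Point 2 are easy to check on examples; the sketch below recomputes parents by brute force (as in the earlier snippet) and compares $\#\operatorname{Anc}(e)$ with $\|e\|_\infty + 2$.

```python
from math import gcd

def parents(e):
    a, b = e
    m = abs(a) + abs(b)
    for fa in range(-m, m + 1):
        for fb in range(-m, m + 1):
            ga, gb = a - fa, b - fb
            if fa * gb - fb * ga == 1 and fa * ga + fb * gb >= 0:
                return (fa, fb), (ga, gb)

def ancestors(e):
    """Anc(e): e, together with the ancestors of its parents when ||e|| > 1."""
    anc = {e}
    if e[0] ** 2 + e[1] ** 2 > 1:
        f, g = parents(e)
        anc |= ancestors(f) | ancestors(g)
    return anc

e = (8, 5)
print(len(ancestors(e)), max(abs(e[0]), abs(e[1])) + 2)   # 7 and 10, consistent with Point 2
```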
\[prop:VTCard\] Let $\cT$ be a triangulation, and let $\cV$ be the minimal family of stencils such that $\cT \prec \cV$. Then $\#(\cV) \leq 6 (N-2) (\diam(\OmegaC) + 2)$, with $N:=\#(X)$. A sharper estimate holds for (standard) Delaunay triangulations: $\#(\cV) \leq 6 (N-2)$.
Let $E,F$ be respectively the number of edges and faces of $\cT$, where faces refer to both triangles and the infinite exterior face. By Euler’s theorem, $N-E+F=2$. Since each edge is shared by two faces, and each face has at least three edges, one gets $2 E \geq 3 F$, hence $E \leq 3 (N-2)$, and therefore, with the notation $V_x$ of Proposition \[prop:VofT\], $$\label{eq:Euler}
\sum_{x\in X} \#(V_x) = 2 E \leq 6(N-2).$$ Combining lemma \[lem:AncestorsSimple\], Proposition \[prop:VofT\], and observing that any edge $[x,x+e]$ of $\cT$ satisfies $\|e\|_\infty \leq \diam(\Omega)$, we obtain $\#(\cV(x)) \leq \#(V_x) (\diam(\Omega)+2)$, which in combination with implies the first estimate on $\#(\cV)$.
In the case of a Delaunay triangulation, we claim that $\cV(x) = V_x$. Indeed, consider an edge $[x,x+e]$ of $\cT$, and a parent $f\in \cV_{\max}(x)$ of $e$. Since $\cT$ covers $\operatorname{Hull}(X)$, it contains a triangle $[x,x+e,x+f']$ with $f$ and $f'$ on the same side of the edge $[x,x+e]$. Thus the determinants $\det(e,f)$ and $\det(e,f')$ have the same sign, and therefore the same value since their magnitude is $1$. As a result $f'=k e + f$, for some integer $k$. Since $\cT$ is Delaunay, the point $x+f$ is outside of the circumcircle of $[x,x+e,x+f']$. This property is equivalent to the non-positivity of the following determinant, called the in-circle predicate: assuming without loss of generality that $\det(e,f)=1$ so that the vertices $(x,x+e,x+f')$ are in trigonometric order $$\begin{aligned}
\nonumber
\det
\left(
\begin{array}{ccc}
e_1 & e_2 & \|e\|^2\\
f_1 & f_2 & \|f\|^2 \\
k e_1+f_1 & k e_2+ f_2 & \| k e + f\|^2
\end{array}
\right)
&=
\det
\left(
\begin{array}{ccc}
e_1 & e_2 & \|e\|^2\\
f_1 & f_2 & \|f\|^2\\
0 & 0 & \| k e + f\|^2 - k \|e\|^2 - \|f\|^2
\end{array}
\right),\\
\nonumber
& = \| k e + f\|^2 - k \|e\|^2 - \|f\|^2,\\
\label{inCircle}
& = k(k-1)\|e\|^2 + 2 k \<e,f\>,\end{aligned}$$ where we denoted $e=(e_1,e_2)$, $f=(f_1,f_2)$. Observing that $0 < \<e,f\> \leq \|e\| \|f\| < \|e\|^2$, we find that is non-positive only for $k=0$. Thus $f=f'$, hence $f \in V_x$, and therefore $\cV(x)=V_x$ as announced. Finally, the announced estimate of $\#(\cV)$ immediately follows from .
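The algebra above can also be verified symbolically; the following sketch (using sympy) checks that the $3\times 3$ in-circle determinant equals $\det(e,f)$ times the quantity $k(k-1)\|e\|^2+2k\<e,f\>$, so that it coincides with the last expression above under the normalization $\det(e,f)=1$.

```python
import sympy as sp

e1, e2, f1, f2, k = sp.symbols('e1 e2 f1 f2 k')
M = sp.Matrix([
    [e1, e2, e1**2 + e2**2],
    [f1, f2, f1**2 + f2**2],
    [k*e1 + f1, k*e2 + f2, (k*e1 + f1)**2 + (k*e2 + f2)**2],
])
reduced = k*(k - 1)*(e1**2 + e2**2) + 2*k*(e1*f1 + e2*f2)
print(sp.expand(M.det() - (e1*f2 - e2*f1) * reduced))   # 0, confirming the reduction
```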
Let us conclude the proof of Proposition \[prop:WorstCase\]. Let $u \in \operatorname{Conv}(X)$, and let $\cV_u$ be the minimal stencils such that $u \in \operatorname{Conv}(\cV_u)$. Let $\cT$ be a $u$-Delaunay triangulation, and let $\cV_\cT$ be the minimal stencils such that $\cT \prec \cV_\cT$. By Proposition \[prop:UinConvV\] we have $u \in \operatorname{Conv}(\cT) {\subseteq}\operatorname{Conv}(\cV_\cT)$, hence $\cV_u {\subseteq}\cV_\cT$. Estimating $\#(\cV_\cT)$ with Proposition \[prop:VTCard\], we obtain as announced $\#(\cV_u) \leq 6 (N-2) (\diam(\Omega)+2)$.
Decomposition of the cone $\operatorname{Conv}(\cV)$, and edge-flipping distances
---------------------------------------------------------------------------------
We conclude in this section the proof of Theorem \[th:Decomp\], and establish the complexity result Theorem \[th:Delaunay\] on the edge-flipping generation of $u$-Delaunay triangulations.
We say (abusively) that a discrete map $u : X \to \R$ is generic iff, for all $x\in X$ and all $e \in \Z^2$ such that the linear form $P_x^e$ is supported on $X$, one has $P_x^e(u) \neq 0$.
Generic elements are dense in $\operatorname{Conv}(X)$, since this set is convex, has non-empty interior, and since non-generic elements lie on a union of hyperplanes. The quadratic function $q(x) := \frac 1 2 \|x\|^2$ is not generic however, since choosing $e=(1,1)$ one gets $P_x^e(q) = 0$.
\[lem:genericTV\] Consider stencils $\cV$, a generic $u \in \operatorname{Conv}(\cV)$, and an $u$-Delaunay triangulation $\cT$. Then $\cT \prec \cV$.
Consider an edge $[x,x+e]$ of $\cT$. If the linear form $P_x^e$ is not supported on $X$, then $e \in \cV_{\min}(x) {\subseteq}\cV(x)$ by Proposition \[prop:MinimalStencils\]. On the other hand if $P_x^e$ is supported on $X$, then $P_x^e(u) \leq 0$ by Lemma \[lem:EdgeIneq\]. By genericity of $u$, we have $P_x^e(u) < 0$, hence $e \in \cV(x)$ by Proposition \[prop:ConvLComplement\]. This concludes the proof.
We established in Proposition \[prop:UinConvV\] that $\operatorname{Conv}(\cV) {\supseteq}\cup_{\cT \prec \cV} \operatorname{Conv}(\cT)$. The next corollary, stating the reverse inclusion, concludes the proof of Theorem \[th:Decomp\].
\[corol:DecompSub\] If $\operatorname{Conv}(\cV)$ has a non-empty interior, then $\operatorname{Conv}(\cV) {\subseteq}\cup_{\cT \prec \cV} \operatorname{Conv}(\cT)$.
The set $K := \cup_{\cT \prec \cV} \operatorname{Conv}(\cT)$ contains all generic elements of $\operatorname{Conv}(\cV)$, by Lemma \[lem:genericTV\]. Observing that $K$ is closed, and recalling that generic elements are dense in $\operatorname{Conv}(\cV)$, we obtain the announced inclusion.
The next lemma characterizes the obstructions to the convexity of the piecewise linear interpolant $\interp_\cT u$ of a convex function $u \in \operatorname{Conv}(X)$ on a triangulation $\cT$. See also Figure \[fig:Flipping\] (right).
\[lem:BadEdge\] Consider $u \in \operatorname{Conv}(X)$, and a triangulation $\cT$ which is *not* $u$-Delaunay. Then there exists $x\in X$, and a direct basis $(f,g)$ of $\Z^2$, such that the triangles $[x,x+f,x+g]$ and $[x+f,x+g,x+f+g]$ belong to $\cT$, and satisfy $u(x)+u(x+f+g) < u(x+f)+u(x+g)$.
Since convexity is a local property, there exist two triangles $T,T' \in \cT$, sharing an edge, such that the interpolant $\interp_\cT u$ is not convex on $T \cup T'$ (i.e. convexity fails across the edge shared by $T$ and $T'$). Up to a translation of the domain, we may assume that $T=[0,f,g]$ and $T'=[f,g,e]$, for some $e,f,g \in \Z^2$. The pair $(f,g)$ is a basis of $\Z^2$ because the triangle $T$ contains no point of $\Z^2$ except its vertices; up to exchanging $f$ and $g$ we may assume that it is a direct basis. Up to adding an affine function to $u$, we may assume that $u$ vanishes at the vertices $0,f,g$ of $T$.
If $f$ lies in the triangle $[0,e,g]$, then since $u\in \operatorname{Conv}(X)$, and recalling that $u(0)=u(f)=u(g)=0$, we obtain $u(e) \geq 0$. This implies that $\interp_\cT u$ is convex on $T \cup T'$, which contradicts our assumption; hence $f \notin [0,e,g]$. Likewise $g \notin [0,f,e]$, thus $f \prec e \prec g$ and therefore $\det(f,e)> 0$ and $\det(e,g)>0$. We next observe that $$\det(f,g) + \det(g-e,f-e) = \det(f,e)+\det(e,g).$$ The four members of this equation are integers, the two on the left being equal to $2|T| = 2|T'| = 1$, and the two on the right being positive. Hence $\det(f,e)=\det(e,g)=1$, and therefore $e=f+g$ as announced. From this point, the inequality $u(x)+u(x+f+g) < u(x+f)+u(x+g)$ is easily checked to be equivalent to the non-convexity of $\interp_\cT u$ on $T \cup T'$.
\[prop:Flipping\] Consider stencils $\cV$, a triangulation $\cT \prec \cV$, and $u \in \operatorname{Conv}(\cV)$. Define a sequence of triangulations $\cT_0 := \cT,\ \cT_1,\ \cT_2 \cdots$ as follows: if $\cT_i$ is $u$-Delaunay, then the sequence ends, otherwise $\cT_{i+1}$ is obtained by flipping an arbitrary edge of $\cT_i$ satisfying Lemma \[lem:BadEdge\]. Then the sequence is finite, say $\cT_0, \cdots, \cT_n$ with at most $\#(\cV)$ edge flips, and $\cT_i \prec \cV$ for all $0 \leq i \leq n$.
Proof that $\cT_i \prec \cV$, by induction on $i \geq 0$. Initialization: $\cT_0 := \cT \prec \cV$ by assumption. Induction: adopting the notations of Lemma \[lem:BadEdge\], the “flipped” edge $[x+f,x+g]$ of $\cT_i$ is replaced with $[x,x+e]$ in $\cT_{i+1}$, with $e:=f+g$. We only need to check that $e \in \cV(x)$, and for that purpose we distinguish two cases. If the basis $(f,g)$ is acute, then $f,g$ are the parents of $e$, and we have $P_x^e(u) < 0$ by Lemma \[lem:BadEdge\]. This implies $e \in \cV(x)$ by Proposition \[prop:ConvLComplement\]. On the other hand, if the basis $(f,g)$ is not acute, then by Corollary \[corol:SumParent\] the vector $e$ is a parent of either $f$ or $g$, thus $e \in \cV(x)$ by property (Stability) of stencils.
Bound on the number $n$ of edge flips. For all $0 \leq i < n$ one has $\interp_{\cT_{i+1}} u \leq \interp_{\cT_i} u$ on $\operatorname{Hull}(X)$, and this inequality is strict at the common midpoint of the flipped edges $[x_i+f_i, x_i+g_i]$ and $[x_i,x_i+e_i]$, with the above conventions. Hence the edge $[x_i,x_i+e_i]$ appears in the triangulation $\cT_{i+1}$ but not in any of the $\cT_j$, for all $0 \leq j \leq i$. It follows that $i \mapsto (x_i,e_i)$ is injective, and since $e_i \in \cV(x_i)$ this implies $n \leq \#(\cV)$.
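For illustration, the flipping procedure of Proposition \[prop:Flipping\] can be sketched in a few lines of Python (an illustrative fragment, not the implementation used in §\[sec:Numerics\]): triangles are stored as sets of grid points, a quadrilateral violating the criterion of Lemma \[lem:BadEdge\] is detected by checking that the two triangles tile a parallelogram and that the values of $u$ at the endpoints of the absent diagonal sum to less than at the present one, and its diagonal is then flipped.

```python
from itertools import combinations

def bad_quad(u, T1, T2):
    """If T1, T2 share an edge [p, q], tile a parallelogram, and the values of u
    at the opposite vertices a, b satisfy u(a) + u(b) < u(p) + u(q)
    (the criterion of Lemma [lem:BadEdge]), return (p, q, a, b)."""
    shared = T1 & T2
    if len(shared) != 2:
        return None
    p, q = sorted(shared)
    (a,) = T1 - shared
    (b,) = T2 - shared
    if (a[0] + b[0], a[1] + b[1]) != (p[0] + q[0], p[1] + q[1]):
        return None          # the two triangles do not tile a parallelogram
    if u[a] + u[b] < u[p] + u[q]:
        return p, q, a, b
    return None

def flip_to_u_delaunay(u, triangles):
    """Greedy edge flipping; termination follows from Proposition [prop:Flipping]."""
    triangles = set(triangles)
    while True:
        for T1, T2 in combinations(triangles, 2):
            quad = bad_quad(u, T1, T2)
            if quad is not None:
                p, q, a, b = quad
                triangles -= {T1, T2}     # remove the flipped edge [p, q]
                triangles |= {frozenset({a, b, p}), frozenset({a, b, q})}
                break
        else:
            return triangles              # no bad quad remains: u-Delaunay

# Tiny example: the unit square, with values of u favoring the other diagonal.
u = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 1.0, (1, 1): 0.0}
T = [frozenset({(0, 0), (1, 0), (0, 1)}), frozenset({(1, 0), (0, 1), (1, 1)})]
print(flip_to_u_delaunay(u, T))
```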
We finally prove Theorem \[th:Delaunay\]. Consider a Delaunay triangulation $\cT$, and the minimal stencils $\cV_{\cT}$ such that $\cT \prec \cV_\cT$. Let also $u \in \operatorname{Conv}(X)$, and let $\cV_u$ be the minimal stencils such that $u \in \operatorname{Conv}(\cV_u)$. Then, by Proposition \[prop:Flipping\], $\cT$ can be transformed into an $u$-Delaunay triangulation via $\#(\cV_\cT \cup \cV_u)$ edge flips. Furthermore $\#(\cV_\cT) = \cO(\#(X))$ by Proposition \[prop:VTCard\] and $\#(\cV_u) \geq \#(X)$, as follows e.g. from property (Visibility) of stencils. Thus $\#(\cV_\cT \cup \cV_u) = \cO(\#(\cV_u))$, and the result follows.
Average case estimate of the cardinality of minimal stencils {#sec:Avg}
============================================================
The minimal stencils $\cV$, such that the cone $\operatorname{Conv}(\cV)$ contains a given discrete convex map, admit a simple characterization described in the following proposition.
\[prop:MinimalStencilsCharacterization\] Let $u\in \operatorname{Conv}(X)$, and let $\cV$ be the minimal stencils on $X$ such that $u \in \operatorname{Conv}(\cV)$. For any $x \in X$, and any irreducible $e \in \Z^2$ with $\|e\|>1$, one has:
$e \in \cV(x)$ $\Leftrightarrow$ ($P_x^e$ is not supported on $X$, or $P_x^e(u) < 0$).
Proof of implication $\Leftarrow$. If $P_x^e$ is not supported on $X$, then $e \in \cV_{\min}(x) {\subseteq}\cV(x)$ by Proposition \[prop:MinimalStencils\]. On the other hand if $P_x^e(u) < 0$, then $e \in \cV(x)$ by Proposition \[prop:ConvLComplement\].
Proof of implication $\Rightarrow$. Consider $x\in X$, $e \in \cV(x)$, with $\|e\| > 1$, and such that $P_x^e$ is supported on $X$. Assume for contradiction that $P_x^e(u) \geq 0$, and denote by $f,g$ the parents of $e$. Let $E := \{e' \in \cV(x); \, f \prec e' \prec g\}$. The parents of any $e' \in E$ belong to $E \cup \{f,g\}$ by Lemma \[lem:ConsecutiveParents\] (ii), and one has $P_x^{e'} (u) \geq 0$ by Lemma \[lem:PHierarchy\]. Defining new stencils by $\cV'(x) := \cV(x) \sm E$, and $\cV'(y) := \cV(y)$ for $y \neq x$, we contradict the minimality of $\cV$.
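This characterization lends itself to a direct computation of the minimal stencil at a point. The sketch below is illustrative only: the parents of a vector are found by brute force rather than by the Stern-Brocot recursion, and the four unit vectors are assumed to belong to every stencil (the criterion above only concerns $\|e\|>1$).

```python
from math import gcd

def parents(e):
    """Brute-force search of the parents f, g of an irreducible e with |e| > 1:
    f + g = e, det(f, e) = 1, and both f, g strictly shorter than e."""
    n2 = e[0]**2 + e[1]**2
    r = int(n2**0.5) + 1
    for f0 in range(-r, r + 1):
        for f1 in range(-r, r + 1):
            f, g = (f0, f1), (e[0] - f0, e[1] - f1)
            if (f[0]*e[1] - f[1]*e[0] == 1
                    and f[0]**2 + f[1]**2 < n2 and g[0]**2 + g[1]**2 < n2):
                return f, g
    raise ValueError("e must be irreducible with |e| > 1")

def minimal_stencil(X, u, x):
    """Minimal V(x): each irreducible e with x + e in X such that P_x^e is not
    supported on X, or P_x^e(u) < 0; unit vectors are assumed to always belong."""
    V = set()
    for y in X:
        e = (y[0] - x[0], y[1] - x[1])
        if e == (0, 0) or gcd(abs(e[0]), abs(e[1])) != 1:
            continue                     # not irreducible
        if e[0]**2 + e[1]**2 == 1:
            V.add(e)                     # unit vector (assumed always included)
            continue
        f, g = parents(e)
        xf = (x[0] + f[0], x[1] + f[1])
        xg = (x[0] + g[0], x[1] + g[1])
        if xf not in X or xg not in X:
            V.add(e)                     # P_x^e not supported on X
        elif u[x] - u[xf] - u[xg] + u[y] < 0:
            V.add(e)                     # P_x^e(u) < 0
    return V

# Example: a 5x5 grid and a strongly anisotropic convex map.
X = {(i, j) for i in range(5) for j in range(5)}
u = {p: (p[0] - 2*p[1])**2 + 0.01*(p[0]**2 + p[1]**2) for p in X}
print(sorted(minimal_stencil(X, u, (2, 2))))
```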
The rest of this section is devoted to the proof of Theorem \[th:Avg\], and for that purpose we consider the rotated and translated grids $X_\theta^\xi$, defined in . For simplicity, but without loss of generality, we assume a unit grid scale $h:=1$. For each rotation angle $\theta \in \R$, and each offset $\xi \in \R^2$, we introduce an affine transform $A_\theta^\xi$: for all $x\in \R^2$ $$A_\theta^\xi (x) := R_\theta (\xi+x).$$ For any set $E {\subseteq}\R^2$, and any affine transform $A$, we denote $A(E) := \{A(e); \, e\in E\}$. For instance, the displaced grids are given by $X_\theta^\xi := \Omega \cap A_\theta^\xi(\Z^2)$.
The maximal stencils on the grid $X_\theta^\xi$ are defined by: for all $x\in X_\theta^\xi$ $$\cV_{\max}^{\theta,\xi}(x) := \{e \in \Z^2 \text{ irreducible}; \ x+ R_\theta e\in X_\theta^\xi\}.$$ A family $\cV_\theta^\xi$ of stencils on $X_\theta^\xi$ is a collection of sets $\cV_\theta^\xi(x) {\subseteq}\cV_{\max}^{\theta,\xi}(x)$, $x \in X_\theta^\xi$ which satisfies the usual (Stability) and (Visibility) properties of Definition \[def:Stencil\] (replacing, obviously, instances of $\cV_{\max}$ with $\cV_{\max}^{\theta, \xi}$). For $x \in X_\theta^\xi$, and $e \in \cV_{\max}^{\theta,\xi}(x)$, we consider the linear forms $S_{x, \theta}^e(u) := u(x+R_\theta e)-2 u(x) +u(x-R_\theta e)$, and likewise $T_{x, \theta}^e$, $P_{x, \theta}^e$, which are used to define cones $\operatorname{Conv}(\cV_\theta^\xi) {\subseteq}\operatorname{Conv}(X_\theta^\xi)$. In a nutshell, when embedding a stencil element $e \in \cV_\theta^\xi (x) {\subseteq}\Z^2$, where $x\in X_\theta^\xi$, into the physical domain $\Omega$ (e.g. considering $x+R_\theta e \in X_\theta^\xi$), one should never forget to apply the rotation $R_\theta$.
Consistently with the notations of Theorem \[th:Avg\], we consider a fixed convex map $U\in \operatorname{Conv}(\Omega)$, and study the smallest stencils $\cV_\theta^\xi {\subseteq}\Z^2$ on $X_\theta^\xi$ such that the restriction of $U$ to $X_\theta^\xi$ belongs to $\operatorname{Conv}(\cV_\theta^\xi)$. The *midpoints* $m = x+R_\theta e/2$ of “stencil edges” $[x,x+R_\theta e]$, $x\in X_\theta^\xi$, $e\in \cV_\theta^\xi(x)$, play a central role in our proof.
For any $m \in \Omega$, and any irreducible $e \in \Z^2$, let $$\Lambda_m^e := \{(\theta, \xi)\in [0, \pi/2[ \times [0,1[^2; \, m = x+R_\theta e/2, \text{ for some } x \in X_\theta^\xi \text{ such that } e \in \cV_\theta^\xi(x)\}.$$
We introduce offset grids, of points with half-integer coordinates $$\cZ : =(\textstyle {\frac 1 2},0) +\Z^2, \qquad \cZ' : =(0,\textstyle {\frac 1 2}) +\Z^2, \qquad \cZ'' : =(\textstyle {\frac 1 2},\textstyle {\frac 1 2}) +\Z^2.$$ For any $x,y \in \Z^2$ with $x-y$ irreducible, the midpoint $(x+y)/2$ of the segment $[x,y]$ belongs to the disjoint union $\cZ \sqcup \cZ' \sqcup \cZ''$.
\[lem:HalfGrid\] For any $m \in \Omega$ and any $(\theta,\xi) \in \Lambda_m^e$, one has $m \in A_\theta^\xi (\cZ \sqcup \cZ' \sqcup \cZ'')$.
Let $x \in X_\theta^\xi$, and $e \in \cV_\theta^\xi(x)$, be such that $m = x+R_\theta e/2$. Observing that the coordinates of $e$ are not both even, since $e$ is irreducible, we obtain $e/2 \in \cZ \sqcup \cZ' \sqcup \cZ''$. Adding $R_\theta (e/2)$ to $x \in A_\theta^\xi (\Z^2)$ yields as announced a point $m\in A_\theta^\xi (\cZ \sqcup \cZ' \sqcup \cZ'')$.
For any point $m \in \R^2$, and any angle $\theta$, there exists exactly one offset $\xi\in [0,1[^2$ such that $m \in A_\theta^\xi(\cZ)$; and likewise for $\cZ'$, $\cZ''$. Hence the set $\Lambda_m^e$ contains redundant information, which motivates the following definition: for any $m \in \Omega$, and any irreducible $e \in \Z^2$ $$\label{def:ThetaME}
\Theta_m^e := \{ \theta \in [0, \pi/2[; \, \exists \xi \in [0,1[^2, \, (\theta, \xi) \in \Lambda_m^e \text{ and } m \in A_\theta^\xi (\cZ) \}.$$ and similarly we define $\Theta_m'^e$, $\Theta_m''^e$, by replacing $\cZ$ with $\cZ'$, $\cZ''$ respectively in \[def:ThetaME\]. By convention, $\Theta_m^e = \Theta_m'^e=\Theta_m''^e=\emptyset$ for *non* irreducible vectors $e \in \Z^2$. The following lemma accounts in analytical terms for a simple combinatorial identity: one can count stencil edges by looking at their endpoints or their midpoints.
The following integrals are equal: $$\label{eq:MeasEqual}
\int_{[0,1]^2} \int_0^{\frac \pi 2} \#(\cV_\theta^\xi) \, d \theta d \xi = \sum_{e \in \Z^2} \int_{m \in \Omega} (|\Theta_m^e| + |\Theta_m'^e| + |\Theta_m''^e|) \,dm,$$ where $|\Theta|$ denotes the Lebesgue measure of a Borel set $\Theta {\subseteq}\R$.
Consider $m \in \Omega$, $e \in \Z^2$, and $\theta \in \Theta_m^e$. Then there exists a unique $\xi \in [0,1[^2$ such that $m \in A_\theta^\xi(\cZ)$. This uniquely determines the point $x := m-\frac 1 2 R_\theta e \in X_\theta^\xi$ such that $e \in \cV_\theta^\xi(x)$. Likewise for $\Theta_m'^e$, $\Theta_m''^e$. Conversely, the data of $\theta$, $\xi$, $x\in X_\theta^\xi$ and $e \in \cV_\theta^\xi(x)$ uniquely determines $m := x+R_\theta e/2$, and also by Lemma \[lem:HalfGrid\] a unique set among $\Theta_m^e$, $\Theta_m'^e$, $\Theta_m''^e$ containing $\theta$. As a result the left- and right-hand sides of \[eq:MeasEqual\] are just two different expressions of the measure of $$\{ (m,e, i, \theta); \, \theta \in \Theta_m^{(i)e}\} {\subseteq}\Omega \times \Z^2 \times \{0,1,2\} \times [0,\pi/2[ ,
$$ where $\Theta_m^{(0)e} := \Theta_m^e$, $\Theta_m^{(1)e} := \Theta_m'^e$, and $\Theta_m^{(2)e} := \Theta_m''^e$. Implicitly, we equipped $\Z^2$ and $\{0,1,2\}$ with the counting measure, and $[0,\pi/2[$ and $\Omega$ with the Lebesgue measure (which in the latter case is preserved by the rotations $R_\theta$).
In order to estimate \[eq:MeasEqual\], we bound in the next lemma the size of the sets $\Theta_m^e$, $\Theta_m'^e$, $\Theta_m''^e$.
\[lem:DoubleQuad\] Let $e\in \Z^2$ be irreducible, with $\|e\|>1$, of parents $f,g$. Let also $m \in \Omega$. Then for any $\theta, \vp \in \Theta_m^e$, one has $\sin |\theta-\vp| \leq 2/ \min\{\<e,f\>, \<e,g\>\}$. Likewise for $\Theta_m'^e$, $\Theta_m''^e$.
Without loss of generality, we may assume that $m$ is the origin of $\R^2$. Let $Q$ be the parallelogram of vertices $\{\pm R_\theta e, \pm R_\vp e\}$; note that $\frac 1 2 Q {\subseteq}\Omega$. A point $x \in \R^2$ belongs to $Q$ iff $$\label{eq:InQ}
|\det(x, R_\theta e \pm R_\vp e)| \leq |\det(R_\theta e, R_\vp e)| = \|e\|^2 \sin |\theta- \vp|.$$ Indeed $\sin |\theta-\vp| = |\sin(\theta-\vp)|$, since $\theta, \vp \in [0, \pi/2[$ by construction. We assume without loss of generality that $\<e,f\> \leq \<e,g\>$. Introducing $h:=e- 2 f = g-f$ we observe that $\<e,h\> \geq 0$, and compute $$\begin{aligned}
|\det(R_\theta h, R_\theta e)| &= |\det(h,e)| = |\det(e-2 f,e)| = 2.\\
|\det(R_\theta h, R_\vp e)| & \leq |\det(h,e)| \cos(\vp-\theta) + |\<h,e\>| \sin |\vp-\theta| \leq 2+( \|e\|^2 - 2 \<e,f\>) \sin | \vp-\theta|.\end{aligned}$$ In the second line, we used the identity $\sin(a+b) = \sin(a) \cos(b) + \cos(a) \sin(b)$, where $a$ denotes the angle between $e$ and $h$, and $b:=\vp-\theta$. Combining these two estimates with \[eq:InQ\], and assuming for contradiction that $\sin |\theta-\vp| \geq 2 / \<e,f\>$, we obtain that $R_\theta h \in Q$. By symmetry, $-R_\theta h \in Q$, and likewise $\pm R_\vp h \in Q$.
In the following, we denote $x := -R_\theta e/2$, $y := -R_\vp e/2$, $p := \pm R_\theta h/2$, $q := \pm R_\vp h/2$, where the signs for $p$ and $q$ are chosen so that $p,q \in [x,-x,y]$. Denoting by $\alpha, \beta, \gamma$ (resp. $\alpha', \beta', \gamma'$) the barycentric coordinates of $p$ (resp. $q$) in this triangle, convexity implies $$\begin{aligned}
\label{eq:xqqr}
\uC(p) &\leq \alpha \uC(x) + \beta \uC(-x)+ \gamma \uC (y),\\
\label{eq:yqqr}
\uC(q) &\leq \alpha' \uC(x) + \beta' \uC(-x)+ \gamma' \uC (y).\end{aligned}$$ Let $\xi\in [0,1[^2$ be such that $m \in A_\theta^\xi(\cZ)$. Then $x \in X_\theta^\xi$, $e \in \cV_\theta^\xi(x)$, and $m=x+R_\theta e/2$ (recall that we fixed $m=0$). Using the characterization of minimal stencils of Proposition \[prop:MinimalStencilsCharacterization\], we obtain $$\label{eq:PUNeg}
0 > P_{x, \theta}^e(U) = U(x) - U(x+R_\theta f)-U(x+R_\theta g) + U(x + R_\theta e),$$ provided this linear form is supported on $X_\theta^\xi$. Note that $x+R_\theta e = -x \in X_\theta^\xi$, that $x+R_\theta f = \ve p$, and that $x+R_\theta g = -\ve p$ for some $\ve \in \{-1,1\}$. Since $\pm p \in \frac 1 2 Q {\subseteq}\Omega$ this confirms that \[eq:PUNeg\] is supported on $X_\theta^\xi := \Omega \cap A_\theta^\xi (\Z^2)$. Inserting in \[eq:PUNeg\] the values of $-x,p,-p$, and proceeding likewise for $y$ and $q$, we obtain $$\begin{aligned}
\label{eq:ppxx}
\uC(x) + \uC(-x) &< U(p)+U(-p)\\
\label{eq:qqyy}
\uC(y) + \uC(-y) &< U(q)+U(-q).\end{aligned}$$ Up to adding an affine map to $\uC$, we may assume that $\uC (x) = \uC(-x) = 0$, and $\uC(p) = \uC(-p)$. From \[eq:ppxx\] we obtain $\uC(p) > 0$. Hence also $\uC(y)>0$ using \[eq:xqqr\], and therefore $\uC(q) \leq \gamma' \uC(y) \leq \uC(y)$ using \[eq:yqqr\]. Likewise $\uC(-q) \leq \uC(-y)$, which contradicts \[eq:qqyy\] and concludes the proof.
We finally conclude the proof of Theorem \[th:Avg\], by combining \[eq:MeasEqual\] with the next lemma.
For any $m\in \Omega$, with $r := \max\{1,\diam(\Omega)\}$, one has (and likewise for $\Theta_m'^e$, $\Theta_m''^e$) $$\label{eq:SumOmega}
\sum_{e \in \Z^2} | \Theta_m^e| \leq 2 \pi + 4 \pi^2 (1+\ln r)^2.$$
Note that $\Theta_m^e {\subseteq}[0,\pi/2[$ for any $e\in \Z^2$, and that $\Theta_m^e = \emptyset$ if $e$ is not irreducible, or if $\|e\| > r$. For any two vectors $e,e' \in \Z^2$, we write $e' \lhd e$ iff $e'$ is a parent of $e$ (which implies that $e,e'$ are irreducible, and that $\|e\| >1$). Isolating the contributions to \[eq:SumOmega\] of the four unit vectors, and applying Lemma \[lem:DoubleQuad\] to other vectors, we thus obtain $$\label{eq:SumVariablesExchange}
\sum_{e \in \Z^2} |\Theta_m^e| \leq 4 \times \frac \pi 2 + \sum_{\|e\| \leq r} \sum_{e' \lhd \, e} \arcsin\left(\frac 2 {\<e,e'\>}\right) \leq 2 \pi + \sum_{e'\in \Z^2} \sum_{\substack{e \, \rhd e'\\ \|e\| \leq r}} \frac \pi {\<e,e'\>},$$ where we used the concavity bound $\arcsin(x) \leq \frac \pi 2 x$ for all $x \in [0,1]$ (and slightly abused notations for arguments of $\arcsin$ larger than $1$). Consider a fixed irreducible $e' \in \Z^2$, and denote by $f,g$ its parents if $\|e'\| > 1$, or the two orthogonal unit vectors if $\|e'\|=1$, so that $\det(f,e') = 1 = \det(e',g)$. If $e \in \Z^2$ is such that $e' \lhd e$, then $|\det(e,e')|= 1$; assuming $\det(e,e')=1$ (resp. $-1$) we obtain that $e = f + k e'$ (resp. $e=g+k e'$) for some scalar $k$ which must be (i) an integer since $e'$ is irreducible, (ii) non-negative since $\<e,e'\> \geq 0$, and (iii) positive since $\|e'\| < \|e\|$. Assuming $\|e\| \leq r$, we obtain in addition $k\|e'\| \leq r$, thus $k\leq r$ and $\|e'\| \leq r$. As a result $$\label{eq:SumEp}
\sum_{\substack{e \, \rhd e'\\ \|e\| \leq r}} \frac 1 {\<e,e'\>} \leq \sum_{1 \leq k \leq r} \left(\frac 1 {\<f+k e', e'\>} + \frac 1 {\<g+k e', e'\>}\right) \leq \frac 2 {\|e'\|^2} \sum_{1 \leq k \leq r} \frac 1 k. $$ Inserting \[eq:SumEp\] into \[eq:SumVariablesExchange\] yields the product of the following two sums, which are easily bounded via comparisons with integrals: isolating the terms for $k=1$, and for all $\|e'\| \leq \sqrt 2$ $$\sum_{1\leq k \leq r} \frac 1 k \leq 1+\int_1^r \frac {dt} t = 1+\ln r, \qquad
\sum_{\substack{0 < \|e'\| \leq r\\ \text{irreducible}}} \frac 1 {\|e'\|^2} \leq 4+4 \times \frac 1 2 + \int_{1 \leq \|x\| \leq r} \frac {dx} {\|x\|^2} \leq 6 + 2 \pi \ln r.$$ Noticing that $2 \pi \geq 6$, we obtain \[eq:SumOmega\] as announced.
Numerical experiments {#sec:Numerics}
=====================
Our numerical experiments cover the classical formulation [@Rochet:1998uj] of the monopolist problem, as well as several variants, including lotteries [@Manelli:2006ib; @Thanassoulis:2004uy], or the pricing of risky assets [@Carlier:2007gy]. We choose to emphasize this application in view of its appealing economic interpretation, and the often surprising qualitative behavior. Our algorithm can also be applied in a straightforward manner to the computation of projections onto the cone of convex functions defined on some square domain, with respect to various norms as considered in [@Carlier:2001tq; @MERIGOT:tr; @Oberman:2011wi; @Oberman:2011wy] (this amounts to denoising under a convexity prior). It may however not be perfectly adequate for the investigation of geometric conjectures [@LachandRobert:2005bi; @Wachsmuth:2013ta; @MERIGOT:tr], due to the use of a grid discretization.
The hierarchical cones of discrete convex functions introduced in this paper are combined with a simple yet adaptive and anisotropic stencil refinement strategy, described in §\[sec:strategy\]. The monopolist model is introduced in §\[sec:Monopolist\], and illustrated with numerous experiments. We compare in §\[sec:Comparison\] our implementation of the constraint of convexity, with alternative methods proposed in the literature, in terms of computation time, memory usage, and solution quality.
Stencil refinement strategy {#sec:strategy}
---------------------------
We introduce two algorithms whose purpose is to minimize a given lower semi-continuous proper convex functional $\cE : \cF(\OmegaD) \to \R \cup \{+\infty\}$, on the $N$-dimensional cone $\operatorname{Conv}(\OmegaD)$, $N := \#(X)$, without ever listing the $\cO(N^2)$ linear constraints which characterize this cone. They both generate an increasing sequence of stencils $\cV_0 \subsetneq \cV_1 \subsetneq \cdots \subsetneq \cV_n$ on $\OmegaD$, and minimizers $(u_i)_{i=0}^n$ of $\cE$ on cones defined by $\cO(\#(\cV_i))$ linear constraints. The subscript $i$ refers to the loop iteration count in Algorithms \[algo:SubCones\] and \[algo:SuperCones\], and the loop ends when the stencils are detected to stabilize: $\cV_n = \cV_{n+1}$. The final map $u_n$ is guaranteed to be a global minimizer of $\cE$ on $\operatorname{Conv}(\OmegaD)$.
Our first algorithm is based on an increasing sequence $\operatorname{Conv}(\cV_0) {\subseteq}\cdots {\subseteq}\operatorname{Conv}(\cV_n)$ of sub-cones of $\operatorname{Conv}(\OmegaD)$. If constraints of type $P_x^e$, $x \in X$, $e \in \cV_i(x)$ are *active* for the minimizer $u_i$ of $\cE$ on $\operatorname{Conv}(\cV_i)$ (i.e. the corresponding Lagrange multipliers are positive), then refined stencils $\cV_{i+1}$ are adaptively generated from $\cV_i$; otherwise $u_i$ is the global minimizer of $\cE$ on $\operatorname{Conv}(X)$, and the method ends. Note that the optimization of $\cE$ on $\operatorname{Conv}(\cV_{i+1})$ can be hot-started from the previous minimizer $u_i \in \operatorname{Conv}(\cV_i) {\subseteq}\operatorname{Conv}(\cV_{i+1})$.
Start with the minimal stencils: $\cV \leftarrow \cV_{\min}$. (See Proposition \[prop:MinimalStencils\])\
**Until** the stencils $\cV$ stabilize\
**Find** a minimizer $u$ of the energy $\cE$ on $\operatorname{Conv}(\cV)$,\
and extract the Lagrange multipliers $\lambda$ associated to the constraints $P_x^e$, $x\in X$, $e \in \hat \cV(x)$.\
**Set** $\cV(x) \leftarrow \cV(x) \cup \{e \in \hat \cV(x);\, \lambda(P_x^e) > 0\}$, for all $x\in \OmegaD$.
For any family $\cV$ of stencils on $X$, we denote by $\operatorname{Conv}'(\cV) {\subseteq}\cF(X)$ the cone defined by the non-negativity of: for all $x \in X$, and all $e\in \cV(x)$, the linear forms $S_x^e$ and $T_x^e$ if $\|e\|>1$, provided they are supported on $\OmegaD$. Note that $\operatorname{Conv}(\cV) {\subseteq}\operatorname{Conv}(X) {\subseteq}\operatorname{Conv}'(\cV)$.
Algorithm \[algo:SuperCones\] is based on a *decreasing* sequence $\operatorname{Conv}'(\cV_0) {\supseteq}\cdots {\supseteq}\operatorname{Conv}'(\cV_n)$ of super-cones of $\operatorname{Conv}(\OmegaD)$. The minimizer $u_i$ of $\cE$ on $\operatorname{Conv}'(\cV_i)$ may not belong to $\operatorname{Conv}(\OmegaD)$, let alone to $\operatorname{Conv}(\cV_i)$, and in particular some of the values $P_x^e(u_i)$, $x\in \OmegaD$, $e \in \hat \cV_i(x)$, may be negative. In that case, refined stencils $\cV_{i+1}$ are adaptively generated from $\cV_i$; otherwise, $u_i$ is the global minimizer of $\cE$ on $\operatorname{Conv}(\OmegaD)$, and the method ends.
![ Top: Algorithm 2, for the classical monopolist problem on $[1,2]^2$ with a $20 \times 20$ grid, converges in $2$ stencils refinement steps (using the extended candidates $\hat \cV_\rho$, $\rho:=1.5$). Bottom: $4$ refinement steps are needed with a different density of customers, uniform on the square $[1,2]^2$ rotated by $\pi/12$. Top right: $u$-Delaunay triangulations associated with the respective discrete solutions, for illustration of Theorem \[th:Decomp\]. []{data-label="fig:StencilRefinement"}](\pathPic/Convex/PrincipalAgent/20/Stencils/Default_0.pdf){width="3cm"}
Start with the minimal stencils $\cV$.\
**Until** the stencils $\cV$ stabilize\
**Find** a minimizer $u$ of the energy $\cE$ on $\operatorname{Conv}'(\cV)$.\
**Set** $\cV(z) \leftarrow \cV(z) \cup \{e \in \hat \cV(z); \, P_z^e(u) < 0\}$, for all $z\in \OmegaD$.
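In Python-like terms, the listing above can be summarized by the following skeleton (an illustrative sketch only: the optimizer and the enumeration of candidates, e.g. $\hat \cV$ or $\hat \cV_\rho$, are supplied by the caller, while the actual experiments rely on the Mosek-based implementation described in §\[sec:Monopolist\]).

```python
def super_cone_refinement(minimize, candidates, V_min):
    """Skeleton of the stencil refinement loop. Supplied by the caller:
    `minimize(V)` returns a minimizer u (a dict on X) of E over Conv'(V);
    `candidates(V, x)` yields triples (e, f, g), with e a candidate direction
    at x and f, g its parents; `V_min` maps each x in X to its minimal stencil."""
    V = {x: set(s) for x, s in V_min.items()}
    while True:
        u = minimize(V)
        stable = True
        for x in V:
            for e, f, g in candidates(V, x):
                xe = (x[0] + e[0], x[1] + e[1])
                xf = (x[0] + f[0], x[1] + f[1])
                xg = (x[0] + g[0], x[1] + g[1])
                if u[x] - u[xf] - u[xg] + u[xe] < 0:   # P_x^e(u) < 0
                    V[x].add(e)
                    stable = False
        if stable:                                     # stencils stabilized
            return u, V
```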
Algorithms \[algo:SubCones\] and \[algo:SuperCones\] are provided “as is”, without any complexity guarantee. Our numerical experiments are based on Algorithm \[algo:SuperCones\], because the numerical test “$P_x^e(u) < 0$” turned out to be more robust than “$\lambda(P_x^e)>0$”. In order to limit the number $n$ of stencil refinement steps, we use a slightly extended set $\hat \cV_\rho(x)$ of candidates for refinement, with $\rho := 1.5$, see Definition \[def:ExtendedCandidates\] below (note that $\hat \cV_1 = \hat \cV$). With this modification, $n$ remained below 10 in all our experiments. The constructed stencils were generally sparse, highly anisotropic, and almost minimal for the discrete problem solution $u\in \operatorname{Conv}(X)$ eventually found, see Figure \[fig:StencilRefinement\]. Observation of Figure \[fig:Comparison2\] suggests that $n$ grows logarithmically with the problem dimension $N$, and that the final stencil cardinality $\#(\cV_n)$ depends quasi-linearly on $N$, as could be expected in view of Remark \[rem:OptStrategy\] and Theorem \[th:Avg\]. However, we could not establish such complexity estimates mathematically.
\[def:ExtendedCandidates\] Let $\cV$ be a family of stencils, let $\rho \geq 1$, and let $x \in X$. A vector $e \in \cV_{\max}(x) \sm \cV(x)$, of parents $f,g$, belongs to the extended candidates $\hat \cV_\rho(x)$ iff there exists trigonometrically consecutive $f',g' \in \cV(x)$ such that $f' \preceq f \prec g \preceq g'$ and $\|f\| \|g\| \leq \rho \|f'\| \|g'\|$.
The monopolist problem {#sec:Monopolist}
----------------------
A monopolist has the ability to produce goods, which have two characteristics and may thus be represented by a point $q \in \R^2$. The manufacturing cost $\operatorname{Cost} : \R^2 \to \R\cup \{+\infty\}$ is known and fixed a priori. Infinite costs account for products which are “meaningless”, or impossible to build. The selling price $\pi : \R^2 \to \R\cup \{+\infty\}$ is fixed unilaterally by the monopolist except for the “null” product $(0,0)$, which must be available for free ($\pi(0,0) \leq 0$). The characteristics of the consumers are also represented by a point $z\in \R^2$, and the utility of product $q$ to consumer $z$ is modeled by the scalar product between their characteristics $$\label{eq:Utility}
\cU(q,z) := \<q,z\>.$$ More general utility pairings $\cU$ are considered in [@Figalli:2011tz], yet the numerical implementation of the resulting optimization problems remains out of reach, see [@MERIGOT:tr] for a discussion. All consumers $z$ are rational, “screen” the proposed price catalog $\pi$, and choose the product of maximal *net* utility $\<q,z\> - \pi(q)$ (i.e. raw utility minus price). Introducing the Legendre-Fenchel dual $U$ of the prices $\pi$: for all $z\in \R^2$ $$U(z): = \pi^*(z) := \sup_{q\in \R^2} \, \<q,z\> - \pi(q),$$ we observe that the optimal product[^5] for consumer $z$ is $\nabla U(z)$, which is sold at the price $$\label{eq:PriceU}
\pi(\nabla U(z)) = \<\nabla U(z), z\> - U(z).$$ The net utility function $U$ is convex and non-negative by construction, and uniquely determines the products bought and their prices. Conversely, any convex non-negative $U$ defines prices $\pi = U^*$ satisfying the admissibility condition $\pi(0) \leq 0$, and such that $\pi^* = U^{**} = U$. The distribution of the characteristics of the potential customers is known to the monopolist, in the form of a bounded measure $\mu$ on $\R^2$. He aims to maximize his total profit: the integrated difference (sales margin) between the selling price \[eq:PriceU\] and the production cost $$\label{eq:TotalProfit}
\sup\left\{ \int_{\R^2} \Big(\<\nabla U(z),z\> - U(z) - \operatorname{Cost}(\nabla U(z))\Big) d\mu(z); \, \, U \in \operatorname{Conv}(\R^2), \, U \geq 0\right\}$$ If production costs are convex, then this amounts to maximizing a concave functional of $U$ under convex constraints; see [@Carlier:2001gv] for precise existence results. If $U$ maximizes \[eq:TotalProfit\], then an optimal catalog of prices is given by $U^*$. Quantities of particular economic interest are the monopolist margin, and the distribution of product sales: $$\label{eq:ProductMarginDistribution}
{\rm Margin} = U^* - \operatorname{Cost}, \qquad
{\rm SalesDistribution} = (\nabla U)_{\#} \mu,$$ where $\#$ denotes the push forward operator on measures. The regions defined by $\{U=0\}$ and $\{\det(\operatorname{Hessian}U)=0\}$ are also important, as they correspond to different categories of customers, see below. We present numerical results for three instances of the monopolist problem, associated to different product costs. These three models are clearly simplistic idealizations of real economy. Their interest lies in their striking qualitative properties, which are stable and are expected to transfer to more complex models.
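As a toy illustration of this screening mechanism (with a hypothetical, hand-picked price catalog), the snippet below lets each consumer $z$ pick the product maximizing $\<q,z\> - \pi(q)$ and checks the price identity \[eq:PriceU\].

```python
# A hypothetical finite price catalog: products q with prices pi(q); the null product is free.
catalog = [((0.0, 0.0), 0.0), ((1.0, 0.5), 0.9), ((1.5, 1.5), 2.2), ((2.0, 2.0), 3.4)]

def choice(z):
    """Rational consumer z picks the product of maximal net utility <q, z> - pi(q)."""
    net = [(q[0]*z[0] + q[1]*z[1] - p, q, p) for q, p in catalog]
    return max(net)            # (U(z), chosen product, paid price)

for z in [(1.0, 1.0), (1.3, 1.8), (2.0, 2.0)]:
    U, q, p = choice(z)
    assert abs(p - (q[0]*z[0] + q[1]*z[1] - U)) < 1e-12   # the price identity above
    print(f"consumer {z}: buys {q} at price {p}, net utility {U:.2f}")
```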
For implementation purposes, we observe that the maximum profit is unchanged if one considers $U$ only defined on a convex set $K \supseteq \operatorname{supp}(\mu)$, and imposes the additional constraint $\operatorname{Cost}(\nabla U(x)) < \infty$ for all $x \in K$. The chosen discrete domain is a square grid $X$, such that $\operatorname{supp}(\mu) {\subseteq}\operatorname{Hull}(X)$. The measure $\mu$ is represented by non-negative weights $(\mu_x)_{x\in X}$, set to zero outside $\operatorname{supp}(\mu)$. The integral appearing in \[eq:TotalProfit\] is discretized using finite differences, see [@Carlier:2001tq] for convergence results. The resulting convex program is solved by combining Mosek software’s interior point (for linear problems) or conic (for quadratic[^6] problems) optimizer, with the stencil refinement strategy of Algorithm \[algo:SuperCones\], §\[sec:strategy\].
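For concreteness, the following sketch sets up a simplified variant of this discretization with the cvxpy modeling library (a hypothetical helper script, assuming cvxpy is available; it is not the implementation used here): it imposes only nearest-neighbour and diagonal second-difference constraints, i.e. it optimizes over a super-cone of $\operatorname{Conv}(X)$ akin to $\operatorname{DConv}(\cV_1)$ rather than the full cone, and approximates $\nabla U$ by forward differences.

```python
import numpy as np
import cvxpy as cp

n = 30
h = 1.0 / (n - 1)
z1, z2 = np.meshgrid(np.linspace(1, 2, n), np.linspace(1, 2, n), indexing="ij")

U = cp.Variable((n, n), nonneg=True)            # net utility, with U >= 0
Ux = (U[1:, :-1] - U[:-1, :-1]) / h             # forward differences for grad U
Uy = (U[:-1, 1:] - U[:-1, :-1]) / h
z1i, z2i = z1[:-1, :-1], z2[:-1, :-1]

# profit density <grad U, z> - U - Cost(grad U), with Cost(q) = |q|^2 / 2
profit = cp.sum(cp.multiply(Ux, z1i) + cp.multiply(Uy, z2i)
                - U[:-1, :-1] - 0.5 * (cp.square(Ux) + cp.square(Uy))) * h * h

constraints = [Ux >= 0, Uy >= 0]    # products have non-negative characteristics
# axis-aligned and diagonal second differences >= 0 (directional convexity only)
constraints += [U[2:, :] - 2 * U[1:-1, :] + U[:-2, :] >= 0,
                U[:, 2:] - 2 * U[:, 1:-1] + U[:, :-2] >= 0,
                U[2:, 2:] - 2 * U[1:-1, 1:-1] + U[:-2, :-2] >= 0,
                U[2:, :-2] - 2 * U[1:-1, 1:-1] + U[:-2, 2:] >= 0]

cp.Problem(cp.Maximize(profit), constraints).solve()
print("approximate monopolist profit:", profit.value)
```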
#### Classical model.
The produced goods are cars (for concreteness), whose characteristics $q=(q_1,q_2)$ are non-negative and account for the engine horsepower $q_1$ and the upholstery quality $q_2$. Production cost is quadratic: $\operatorname{Cost}(q) := \frac 1 2 \|q\|^2$ for all $q \in \R_+^2$, and $\operatorname{Cost}(q)=+\infty$ otherwise (cars with negative characteristics are unfeasible). Consumer characteristics $x=(x_1,x_2)$ are their appetite $x_1$ for car performance, and $x_2$ for comfort, consistently with \[eq:Utility\]. The qualitative properties of this model are the following [@Rochet:1998uj]: denoting by $U$ a solution of \[eq:TotalProfit\], and ignoring regularity issues in this heuristic discussion
- (Desirability of exclusion) The optimal monopolist strategy often involves neglecting a positive proportion of potential customers, who “buy” the null product $0$ at price $0$. In other words, the solution of \[eq:TotalProfit\] satisfies $U=0$ on an open subset of $\operatorname{supp}(\mu)$, hence also $\nabla U=0$. The economic interpretation is that introducing (low end) products attractive to this population would reduce overall profit, because other customers, currently buying expensive high-margin products, would change their minds and buy these instead.
- (Bunching) “Wealthy” customers $z$ generally buy products which are specifically designed for them, in the sense that $\nabla U$ is a local diffeomorphism close to $z$. “Poor” potential customers are excluded from the trade: $U=0$ close to $z$, see the previous point. There also exists an intermediate category of customers characterized by $\det(\operatorname{Hessian}U)=0$ close to $z$, so that the same product $q=\nabla U(z)$ is bought by a one dimensional “bunch” of customers $(\nabla U)^{-1}\{q\}$. The image of this category of customers, by $\nabla U$, is a one dimensional product line. From an economic point of view, the optimal strategy limits the variety of intermediate range products in order, again, to avoid competing with high margin sales.
Considering, as in [@Rochet:1998uj; @MERIGOT:tr], a uniform density of customers on the square $[1,2]^2$, we illustrate[^7] on Figure \[fig:Monopolist\] the estimated solution $U$ (left), $\det(\operatorname{Hessian}U)$ (center left), the sales distribution (center right) and the monopolist margin (right), see \[eq:ProductMarginDistribution\]. The phenomena of exclusion $U=0$ and of bunching $\det(\operatorname{Hessian}U)=0$ are visible (center left subfigure) as a white triangle and as the darkest level set of $\det(\operatorname{Hessian}U)$ respectively. The image by $\nabla U$ of customers subject to bunching appears (center right subfigure) as a one dimensional red structure in the product sales distribution.
We also consider variants where the density of customers is uniform on the square $[1,2]^2$ *rotated* by an angle $\theta\in [0, \pi/4]$ around its center, see Figures \[fig:DTGC\] (left) and \[fig:PARotated\]. Our experiments suggest that exclusion occurs iff $\theta \in [0, \theta_0]$, with $\theta_0 \approx 0.47$ rad. Bunching is always present, yet two regimes can be distinguished: the one dimensional product line, associated to the bunching phenomenon, is included in the boundary of the two dimensional one iff $\theta\in [\theta_1, \pi/4]$, with $\theta_0 < \theta_1 \approx 0.55$ rad. Proving mathematically this qualitative behavior is an open problem.
![ Left: domain $[1,2]^2$ (thick black), and rotated domains used in numerical experiments for the classical principal agent problem, see Figure \[fig:PARotated\]. We computed a minimizer $u \in \operatorname{Conv}(X)$ of a discretization of the classical monopolist problem on $[1,2]^2$, see \[eq:TotalProfit\], on a $20 \times 20$ grid $X$, and an $u$-Delaunay triangulation $\cT$, see Figure \[fig:Flipping\]. Center left: subgradients cells $\partial_x U$, $x \in X$, with $U := \interp_\cT u$. Center right: the gradients $\nabla \interp_T u$, $T \in \cT$ (vertices of the previous cells). Right: the less precise numerical method OF$_3$, see \[sec:Comparison\], thickens the product line and hides the bunching phenomenon. []{data-label="fig:DTGC"}](\pathPic/Convex/PrincipalAgent/Rotated2/Domains.pdf){width="3.8cm"}
\[rem:Subgradients\] Studying the “bunching” phenomenon requires to estimate the hessian determinant $\det(\operatorname{Hessian}U) \geq 0$ of the solution $U$ of , and to visualize the degenerate region $\det(\operatorname{Hessian}U)=0$. The hessian determinant also appears in the density of product sales . These features need to be extracted from a minimizer $u \in \operatorname{Conv}(X)$ of a finite differences discretization of , which is a delicate problem since (i) the hessian determinant is a “high order” quantity, and (ii) equality to zero is a numerically unstable test. Naïvely computing a discrete hessian $H_u$ via second order finite differences, we obtain an oscillating, non-positive and overall imprecise approximation $\det(H_u)$, see Figure \[fig:BadHessian\] (right).
The following approach gave better results, see Figure \[fig:BadHessian\] (left): compute the largest convex $\hat U: \operatorname{Hull}(X) \to \R$ such that $\hat U \leq u$ on $X$ (if $u \in \operatorname{Conv}(X)$, then $\hat U=\interp_\cT u$ for any $u$-Delaunay triangulation). Then for all $x \in X \sm \partial \operatorname{Hull}(X)$ $$h^2 \det(\operatorname{Hessian}U (x)) \approx | \{\nabla U (x+e); \, \|e\|_\infty \leq h/2\} | \approx | \{\nabla \hat U (x+e);\, \|e\|_\infty \leq h/2\} | = |\partial_x \hat U|,$$ where $\partial$ denotes the sub-gradient (set-valued) operator on convex functions, and $|\cdot|$ the two dimensional Lebesgue measure. The sub-gradient sets $\partial_x \hat U$ are illustrated on Figure \[fig:DTGC\].
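One possible implementation of this estimator is sketched below (an illustrative fragment assuming scipy is available; the envelope $\hat U$ is obtained from the lower convex hull of the lifted points $(x, u(x))$, which produces the same envelope as a $u$-Delaunay triangulation).

```python
import numpy as np
from scipy.spatial import ConvexHull

def subgradient_cell_areas(points, values):
    """Area of the subgradient cell of the convex envelope at each input point:
    the convex hull of the gradients of the incident lower-hull facets."""
    lifted = np.column_stack([points, values])
    hull = ConvexHull(lifted, qhull_options="Qt")        # triangulated output
    grads = {i: [] for i in range(len(points))}
    for simplex, (nx, ny, nz, _) in zip(hull.simplices, hull.equations):
        if nz < -1e-12:                                  # lower (downward-facing) facet
            g = (-nx / nz, -ny / nz)                     # gradient of the affine facet
            for i in simplex:
                grads[i].append(g)
    areas = np.zeros(len(points))
    for i, gs in grads.items():
        try:
            areas[i] = ConvexHull(np.array(gs)).volume   # in 2D, .volume is the area
        except Exception:
            areas[i] = 0.0                               # degenerate cell
    return areas

# Check on a quadratic with Hessian [[1, 0.5], [0.5, 2]], of determinant 1.75:
# interior cells should have area close to 1.75 * h^2.
n = 20
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(1, 2, n), np.linspace(1, 2, n), indexing="ij")
pts = np.column_stack([x.ravel(), y.ravel()])
vals = 0.5 * pts[:, 0]**2 + 0.5 * pts[:, 0] * pts[:, 1] + pts[:, 1]**2
areas = subgradient_cell_areas(pts, vals).reshape(n, n)
print("mean interior cell area / h^2 =", areas[1:-1, 1:-1].mean() / h**2)   # about 1.75
```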
#### Product bundles and lottery tickets.
Two types of products $P_1$, $P_2$ are considered, which the consumer of characteristics $x=(x_1,x_2)\in \R^2$ values at $x_1$ and $x_2$ respectively. The two products are indivisible, and consumers are not interested in buying more than one of each. The monopolist sells them in bundles $q = (q_1,q_2) \in \{0,1\}^2$ whose characteristics are the presence ($q_i=1$) of product $P_i$, or its absence ($q_i=0$), for $i\in \{1,2\}$. In order to maximize profit, the monopolist also considers probabilistic bundles, or lottery tickets, $q\in [0,1]^2$ for which the product $P_i$ has the probability $q_i$ of being present. This is consistent with \[eq:Utility\], provided consumers are risk neutral. Production costs are neglected, so that $\operatorname{Cost}(q) = 0$ if $q\in [0,1]^2$ and $\operatorname{Cost}(q) = +\infty$ otherwise. Three different customer densities were considered, see below. The qualitative property of interest is the presence, or not, of probabilistic bundles in the monopolist’s optimal strategy.
- Uniform customer density on $[0,1]^2$. We recover the known exact minimizer [@Manelli:2006ib]: $$U(x,y) := \max\{0,\, x-a,\, y-a,\, x+y-b\}, \text{ with } a:=2/3, \text{ and } b:=(4-\sqrt 3)/2,$$ up to numerical accuracy, see Figure \[fig:Bundles\] (left). This optimal strategy does *not* involve lottery tickets: $\nabla U(x,y) \in \{0,1\}^2$, wherever this gradient is defined. The uselessness of lottery tickets is known for similar 1D problems and was thought to extend to higher dimension, until the following two counter-examples were independently found [@Manelli:2006ib; @Thanassoulis:2004uy].
- Uniform customer density on the triangle $T := \{(x,y) \in [0,1]^2; \, x+y/2\geq 1\}$. The monopolist strategy associated to $$\label{eq:BestBundleT}
U(x,y) := \max \{0, \, x+y/2-1, \, x+y-b\}, \text{ with } b=1+1/(2\sqrt 3),$$ which involves the lottery ticket $(1,1/2)$, yields better profits than any strategy restricted to deterministic bundles [@Manelli:2006ib]. The triangle $T$, and the numerical best $U$, are illustrated on Figure \[fig:Bundles\] (center). These experiments suggest that \[eq:BestBundleT\] is a[^8] globally optimal solution.
- Uniform customer density on the kite shaped domain $\{(x,y) \in [0,1]^2;\, x+y/2 \geq 1 \text{ or } x/2+y \geq 1\}$, see Figure \[fig:Bundles\] (right). The optimal monopolist strategy is proved in [@Thanassoulis:2004uy] to involve probabilistic bundles, but it is not identified. Our numerical experiments suggest that it has the form $$U(x,y) := \max \{0, \, x+y/2-1, \, x/2+y-1, \, x+y-b\},$$ which involves the lottery tickets $(1,1/2)$ and $(1/2,1)$. Under this assumption, the optimal value $b=1+1/(3\sqrt 2)$ is easily computed.
Numerous qualitative questions remain open. Is there a distribution of customers for which the optimal strategy involves a continuum of distinct lottery tickets $\{(1,\alpha);\, \alpha_0\leq \alpha \leq \alpha_1\}$ ?
![Three dimensional plot of the optimal $U$ for the product-bundles variant of the monopolist problem, with respect to various customer distributions. Left: uniform distribution on $[0,1]^2$. Center, and right: distribution uniform on the illustrated black polygon.[]{data-label="fig:Bundles"}](\pathPic/Convex/PrincipalAgentLinear/PALinear_3D_100_Square.png){width="3cm"}
#### Pricing of risky assets.
A more complex economic model is considered in [@Carlier:2007gy], where financial products, characterized by their expected gain and their variability, are sold to agents characterized by their risk aversion and their initial risk exposure. We do not give the details of this model here, but simply point out that it fits in the general framework of \[eq:TotalProfit\] with the cost function $\operatorname{Cost}(a,b) := - \alpha (\xi a + \sqrt{-(a^2+b)})$, if $a^2+b \leq 0$, and $+\infty$ otherwise, where $\xi \in \R$ and $\alpha \geq 0$ are parameters, see Example 3.2 in [@Carlier:2007gy]. Observing that, for $a^2+b < 0$ $$\det(\operatorname{Hessian}\operatorname{Cost}(a,b)) = \frac 1 {4 (a^2+b)^2},$$ we easily obtain that this cost is convex[^9]. The lack of smoothness of the square-root appearing in the cost function is a potential issue for numerical implementation, hence the problem is reformulated using an additional variable $V$ subject to an (optimizer friendly) conic constraint $$\label{eq:Assets}
\max \left\{ \int \left(\<\nabla U,z\> - U + \alpha\xi\, \partial_x U + \alpha V\right) d \mu ; \, U \in \operatorname{Conv}_0(\R^2), \, V^2 + (\partial_x U)^2+ \partial_y U \leq 0\right\}.$$ A numerical solution, presented Figure \[fig:Assets\], displays the same qualitative properties (Desirability of exclusion, Bunching) as the classical monopolist problem with quadratic cost.
![Optimal pricing of risky assets, with the parameters $\alpha=1$, $\xi=1$, and a uniform customer density on $[1,2]^2$. Left: level sets of the estimated solution $U$ of \[eq:Assets\], with exclusion region $U=0$ in white. Center left: level sets of $\det(\operatorname{Hessian}U)$, the darkest one is the estimated bunching region $\det(\operatorname{Hessian}U)=0$. Center right, and right (detail): optimal product line, colored with the monopolist margin.[]{data-label="fig:Assets"}](\pathPic/Convex/Assets_Value/Paper/Solution.png){width="3.8cm"}
Comparison with alternative methods {#sec:Comparison}
-----------------------------------
We compare our implementation of the constraint of convexity with alternative methods that have been proposed in the literature. The compared algorithms are the following:
- (Adaptive constraints) The optimization strategy ($\operatorname{Conv}$) described in Algorithm \[algo:SuperCones\] section §\[sec:strategy\], based on the hierarchy of cones $\operatorname{Conv}(\cV)$, and used in our numerical experiments §\[sec:Monopolist\]. The adaptation ($\operatorname{DConv}$) of this strategy to the hierarchy $\operatorname{DConv}(\cV)$ of cones of “directionally convex” functions, see Appendix \[sec:Directional\].
- (Local constraints) The approach of Aguilera and Morin (AM, [@Aguilera:2008uq]) based on semi-definite programming. A method of Oberman and Friedlander (OF$_2$, OF$_3$, [@Oberman:2011wi]), where OF$_k$ refers to minimization over the cone $\operatorname{DConv}(\cV_k)$ associated to the fixed stencil $\cV_k(x) := \{e \in \cV_{\max}(x); \, \|e\| \leq k\}$. A modification of OF$_3$ by Oberman (Ob$_3$, [@Oberman:2011wy]), with additional constraints ensuring that the output is truly convex.
- (Global constraints) Direct minimization over the full cone $\operatorname{Conv}(X)$, as proposed by Carlier, Lachand-Robert and Maury (CLRM, [@Carlier:2001tq]). Minimization over $\operatorname{GradConv}(X)$, see , following[^10] Ekeland and Moreno (EM, [@Ekeland:2010tl]).
The numerical test chosen is the classical model of the monopolist problem, with quadratic cost, on the domain $[1,2]^2$, see Figure \[fig:Monopolist\] and §\[sec:Monopolist\]. This numerical test case is classical and also considered in [@Ekeland:2010tl; @MERIGOT:tr; @Oberman:2011wi]. It is discretized on a $n \times n$ grid, for different values of $n$ ranging from $10$ to $100$ ($10$ to $50$ for global constraint methods due to memory limitations).
The number of linear constraints of the optimization problems assembled by the methods is shown on Figure \[fig:Comparison1\] (center left). For adaptive strategies, this number corresponds to the final iteration. The semi-definite approach AM is obviously excluded from this comparison. Two groups are clearly separated: Adaptive and Local methods on one side, with quasi-linear growth, and Global methods on the other side, with quadratic growth. Let us emphasize that, despite the similar cardinalities, many constraints of the adaptive methods are not local, see Figure \[fig:StencilRefinement\]. The method $\operatorname{DConv}$ generally uses the least number of constraints, followed by OF$_2$ and then $\operatorname{Conv}$.
\[def:Defect\] The convexity defect of a discrete map $u : X \to \R$, is the smallest $\ve\geq 0$ such that $u+\ve q \in \operatorname{Conv}(X)$. The directional convexity defect of $u$ is the smallest $\ve \geq 0$ such that $u+\ve q \in \operatorname{DConv}(X)$, see Appendix \[sec:Directional\].
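Since $S_x^e(q) = \|e\|^2 > 0$, the directional convexity defect restricted to a set of constraints admits the closed form $\max(0, \max_{x,e} -S_x^e(u)/\|e\|^2)$. The sketch below is illustrative only: it examines the second differences along a fixed finite set of directions, and hence computes a lower bound on the directional convexity defect.

```python
def directional_defect(X, u,
                       directions=((1, 0), (0, 1), (1, 1), (1, -1), (2, 1), (1, 2))):
    """Smallest eps >= 0 such that S_x^e(u + eps*q) >= 0 for the listed directions,
    with q(x) = |x|^2 / 2, hence S_x^e(q) = |e|^2."""
    eps = 0.0
    for x in X:
        for e in directions:
            xp = (x[0] + e[0], x[1] + e[1])
            xm = (x[0] - e[0], x[1] - e[1])
            if xp in X and xm in X:
                S = u[xp] - 2 * u[x] + u[xm]
                eps = max(eps, -S / (e[0]**2 + e[1]**2))
    return eps

# Example: a convex map spoiled by a bump at one grid point.
X = {(i, j) for i in range(10) for j in range(10)}
u = {p: p[0]**2 + p[1]**2 + (3.0 if p == (5, 5) else 0.0) for p in X}
print(directional_defect(X, u))   # positive: the bump at (5, 5) breaks convexity
```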
Figure \[fig:Comparison1\] displays the convexity defect of the discrete solutions produced by the different algorithms, at several resolutions. This quantity stabilizes at a positive value for the methods OF$_2$ and OF$_3$, which betrays their non-convergence as $n\to \infty$. We expect the convexity defect of the method AM to tend to zero, as the resolution increases, since this method benefits from a convergence guarantee [@Aguilera:2008uq]; for practical resolutions, it remains rather high however. Other methods, except $\operatorname{DConv}$, have a convexity defect several orders of magnitude smaller, and which only reflects the numerical precision of the optimizer (some of the prescribed linear constraints are slightly violated by the optimizer’s output). Finally, the method $\operatorname{DConv}$ has a special status since it often exhibits a large convexity defect, but its *directional* convexity defect vanishes (up to numerical precision).
We attempt on Figure \[fig:BadHessian\] to extract, with the different numerical methods, the regions of economic interest: the region $\{U=0\}$ of potential customers excluded from the trade, and the region $\{\det (\operatorname{Hessian}U)=0\}$ of customers subject to bunching. While the features extracted from the method $\operatorname{Conv}$ are (hopefully) convincing, the coordinate bias of the method OF$_2$ is apparent, whereas the method AM does not recover the predicted triangular shape of the set of excluded customers [@Rochet:1998uj]. The other methods $\operatorname{DConv}$, CLRM, EM, (not shown) perform similarly to $\operatorname{Conv}$; the method OF$_3$ (not shown) works slightly better than OF$_2$, but still suffers from coordinate bias. The method Ob$_3$ (not shown) seems severely inaccurate[^11]: indeed the hessian matrix condition number with Ob$_k$, $k \geq 1$, cannot drop below $\approx 1/k^2$, see [@Oberman:2011wy], which is incompatible with the bunching phenomenon, see the solution gradients on Figure \[fig:DTGC\].
For each method we compute exactly the monopolist profit , associated with the largest global map $U \in \operatorname{Conv}(\Omega)$ satisfying $U \leq u$ on $X$, where $u\in \cF(X)$ is the method’s discrete output. It is compared on Figure \[fig:Comparison1\] with the best possible profit (which is not known, but was extrapolated from the numerical results). The convergence rate is numerically estimated at $n^{-1.1}$ for all methods[^12] except (i) the semi-definite approach AM, for which we find $n^{-0.75}$, and (ii) the method Ob$_3$, for which the energy does not seem to decrease.
In terms of computation time[^13], three groups of methods can be distinguished. Global methods suffer from a huge memory cost in addition to their long run times. Methods using a (quasi)-linear number of constraints have comparable run times, thanks to the limited number of stencil refinement steps of the adaptive ones (their computation time might be further reduced by the use of appropriate hot starts for the consecutive subproblems). Finally, the semi-definite programming based method AM is surprisingly fast[^14], although this is at the expense of accuracy, see above. For $n=100$, the method CLRM would use $27\times 10^6$ linear constraints, which with our equipment simply do not fit in memory. The proposed method $\operatorname{Conv}$ selects in $5$ refinement steps a subset containing $\approx 0.4 \%$ of these constraints ($100\times 10^3$), and which is by construction guaranteed to include all the active ones; it completes in $6$ minutes on a standard laptop. In summary, adaptive methods combine the accuracy and convergence guarantees of methods based on global constraints, with the speed and low memory usage of those based on local constraints.
Conclusion and perspectives {#conclusion-and-perspectives .unnumbered}
===========================
In this paper we introduced a new hierarchy of discrete spaces, used to adaptively solve optimization problems posed on the cone of convex functions. The comparison with existing hierarchies of spaces, such as wavelets or finite element spaces on adaptively refined triangulations, is striking in its similarities as much as in its differences. The cones $\operatorname{Conv}(\cV)$ (resp. adaptive wavelet or finite element spaces) are defined through linear inequalities (resp. bases), which become increasingly global (resp. local) as the adaptation loop proceeds. Future directions of research include improving the algorithmic guarantees, developing more applications of the method such as optimal transport, and generalizing the constructed cones of discrete convex functions to unstructured or three dimensional point sets.
#### Acknowledgement.
The author thanks Pr Ekeland and Pr Rochet for introducing him to the monopolist problem, and the Mosek team for their free release policy for public research.
![Comparison of different numerical methods for the classical Monopolist problem (panels: legend, number of constraints, convexity defect, directional convexity defect, profit convergence, number of iterations, computation time).[]{data-label="fig:Comparison1"}](\pathPic/Convex/Efficiency/PrincipalAgent/Legend_LongNames.pdf "fig:"){width="3.9cm"}
$n=50$ $\sm$ Method $\operatorname{Conv}$ $\operatorname{DConv}$ AM OF$_2$ OF$_3$ Ob$_3$ CLRM EM
----------------------------------------- ----------------------- ------------------------ ------ -------- -------- -------- ------ -------
Constraints $\times 10^{-3}$ 24 10 NA 18 35 72 1738 3803
Defect $\times 10^{3}$ 0.03 3.8 46 59 14 0 0.02 0.01
Profit under estimation $\times 10^{3}$ 0.11 0.11 0.57 0.11 0.11 10 0.12 0.11
Computation time 18s 13s 1.7s 3.8s 6.8s 20s 391s 2070s
\[fig:Comparison2\]
![ Level set $U<10^{-4}$, in white, approximating the region $U=0$ of excluded customers. Other level sets: $\{k\eta \leq \det(\operatorname{Hessian}U) \leq (k+1) \eta\}$, with $\eta=0.07$. The dark blue one, for $k=0$, approximates the region $\det(\operatorname{Hessian}U) = 0$ of customers subject to bunching. Hessian determinant extracted via subgradient measures, see Remark \[rem:Subgradients\], and the numerical methods $\operatorname{Conv}$ (left), OF$_2$ (center left) and AM (center right). Right: extraction by taking the determinant of a finite differences discrete Hessian (and here the method $\operatorname{Conv}$); this naïve procedure is unstable and produces negative values (in red). []{data-label="fig:BadHessian"}](\pathPic/Convex/Efficiency/PrincipalAgent/HessianComparisons/Hessian_Adaptive_50.png "fig:"){width="3.9cm"}
$\phantom{bla bla } \theta = \pi/8$ $\theta=13\pi/80$ $\theta=\pi/5$ $\theta=\pi/4$\
![ Results of the classical principal agent model, with a uniform density of customers on the domain obtained by rotating the square $[1,2]^2$ around its center $(3/2,3/2)$ by the indicated angle $\theta$. From top to bottom: level sets of $U$ (vanishing set indicated in white), level sets of $\det(\operatorname{Hessian}U)$ (allows to discriminate between the three categories of customers), density of products bought (which explodes on some curves), margin of the monopolist. []{data-label="fig:PARotated"}](\pathPic/Convex/PrincipalAgent/Rotated2/Solution_S50_T8.png "fig:"){width="3.6cm"}
Directional convexity {#sec:Directional}
=====================
We introduce and discuss a weak notion of discrete convexity, which involves slightly fewer linear constraints than and seems sufficient to obtain convincing numerical results, see §\[sec:Comparison\].
\[def:DConv\] We denote by $\operatorname{DConv}(X)$ the collection of elements in $\cF(X)$ on which all the linear forms $S_x^e$, $x \in X$, $e\in \Z^2$ irreducible, supported on $X$, take non-negative values.
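For concreteness, the membership test behind this definition can be sketched as follows, assuming (consistently with the identities used in the proofs below) that $S_x^e(u)$ is the centered second difference $u(x+e)-2u(x)+u(x-e)$; the actual linear forms are those defined earlier in the paper, so this is only an illustrative reimplementation.

    from math import gcd

    def in_dconv(u, radius):
        """Check non-negativity of u(x+e) - 2u(x) + u(x-e) for every irreducible
        direction e with coordinates at most `radius` in absolute value, at every
        grid point x where the stencil is supported.  For an exact test on a
        finite grid, `radius` should be at least the diameter of the grid.
        Here u is a dict mapping integer points (i, j) to values."""
        dirs = [(i, j) for i in range(-radius, radius + 1)
                       for j in range(-radius, radius + 1)
                if (i, j) != (0, 0) and gcd(abs(i), abs(j)) == 1]
        for x in u:
            for (i, j) in dirs:
                p, m = (x[0] + i, x[1] + j), (x[0] - i, x[1] - j)
                if p in u and m in u and u[p] - 2.0 * u[x] + u[m] < 0.0:
                    return False
        return True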
Some elements of $\operatorname{DConv}(X)$ cannot be extended into global convex maps on $\operatorname{Hull}(X)$. Their existence follows from the second point of Theorem \[th:AllConstraints\] (minimality of the collection of constraints $S_x^e$, $T_x^e$), but for completeness we give (without proof) a concrete example.
Let $u \in \cF(\Z^2)$ be defined by $u(1,1) = 1$, $u(-1,0)=u(0,-1) = -1$, and $u(x) := 2\|x\|^2$ for other $x \in \Z^2$. Then for all $x \in \Z^2$, and all irreducible $e \in \Z^2$, one has $S_x^e (u) \geq 1$, and $T_x^e(u) \geq 2$ if $\|e\|>1$, with the exception $T_0^{(1,1)}(u) = -1$.
Elements of $\operatorname{DConv}(\Z^2)$ are nevertheless “almost” convex, in the sense that their restriction to a coarsened grid is convex.
If $u \in \operatorname{DConv}(X)$, then $u_{|X'} \in \operatorname{Conv}(X')$, with $X' := X \cap 2 \Z^2$.
Let $x \in X$, and let $e \in \Z^2$, $\|e\|>1$, be irreducible and of parents $f,g$. Assuming that $x+2 e, x-2 f, x-2 g \in X$, and observing that $x \pm e \in X$ by convexity, we obtain $$u(x+2 e)+u(x-2 f) +u(x- 2 g) - 3 u(x) = 2 S_x^e(u) + S_{x+e}^e(u) + S_{x-e}^{f-g}(u) \geq 0.$$ Likewise $u(x+2 e)- 2 u(x) + u(x-2 e) = S_{x-e}^e(u)+2 S_x^e(u)+S_{x+e}^e(u) \geq 0$.
The cone $\operatorname{DConv}(X)$ of directionally convex functions admits, just like $\operatorname{Conv}(X)$, a hierarchy of sub-cones $\operatorname{DConv}(\cV)$ associated to stencils.
Let $\cV$ be a family of stencils on $X$, and let $u \in \cF(X)$. The cone $\operatorname{DConv}(\cV)$ is defined by the non-negativity of the following linear forms: for all $x \in X$
- For all $e \in \cV(x)$, the linear form $S_x^e$, if supported on $X$.
- For all $e \in \hat \cV(x)$, the linear form $
H_x^e := P_x^e + P_x^{-e}
$, if supported on $X$.
<!-- -->
- For any stencils $\cV, \cV'$, one has $\operatorname{DConv}(\cV) {\subseteq}\operatorname{DConv}(X)$, $\operatorname{DConv}(\cV) \cap \operatorname{DConv}(\cV') = \operatorname{DConv}(\cV \cap \cV')$, and $\operatorname{DConv}(\cV) \cup \operatorname{DConv}(\cV') {\subseteq}\operatorname{DConv}(\cV \cup \cV')$.
- For any stencils $\cV$, one has $$\label{eq:DConvH}
\operatorname{DConv}(\cV) = \{u \in \operatorname{DConv}(X); \, H_x^e(u) \geq 0 \text{ for all } x \in X, \, e \in \cV_{\max}(x) \sm \cV(x)\}.$$
As observed in §\[sec:CombiningIntersecting\], the second point of this proposition implies the first one. We denote by $\mP(\cV)$ the identity , and prove it by decreasing induction over $\#(\cV)$. Since $\mP(\cV_{\max})$ clearly holds, we consider stencils $\cV \subsetneq \cV_{\max}$.
Let $x \in X$, $e \in \cV_{\max}(x) \sm \cV(x)$, be such that $\|e\|$ is minimal. Similarly to Proposition \[prop:ConvLComplement\] we find that $e$ belongs to the set $\hat \cV(x)$ of candidates for refinement at $x$, and define stencils $\cV'$ by $\cV'(x) := \cV(x) \cup \{e\}$, and $\cV'(y) := \cV(y)$ for $y \neq x$. The cones $\operatorname{DConv}(\cV)$ and $\operatorname{DConv}(\cV')$ are defined by a common collection of constraints, with the addition respectively of $H_x^e$ for $\operatorname{DConv}(\cV)$, and $S_x^e$, $H_x^{e+f}$, $H_x^{e+g}$ for $\operatorname{DConv}(\cV')$. Expressing the latter linear forms as combinations of those defining $\operatorname{DConv}(\cV)$ $$S_x^e = H_x^e + S_x^f + S_x^g,\quad H_x^{e+f} = H_x^e + S_{x+e}^f + S_{x-e}^f, \quad H_x^{e+g} = H_x^e + S_{x+e}^g + S_{x-e}^g,$$ and observing that $\mP(\cV')$ holds by induction, we conclude the proof of $\mP(\cV)$.
[^1]: CNRS, University Paris Dauphine, UMR 7534, Laboratory CEREMADE, Paris, France.
[^2]: This work was partly supported by ANR grant NS-LBR ANR-13-JS01-0003-01.
[^3]: Precisely, the constraints $T_x^e$ were omitted in [@Carlier:2001tq] for $\|e\|>\sqrt 2$.
[^4]: This number of constraints is empirically (and slightly erroneously) estimated to $\cO(N^{1.8})$ in [@Carlier:2001tq].
[^5]: Strictly speaking, the optimal product $Q(z)$ is an element of the subgradient $\partial_z U$, which (Lebesgue-)almost surely is a singleton $\{\nabla U(z)\}$. Hence we may write in terms of $\nabla U(z)$, provided the density $\mu$ of customers is absolutely continuous with respect to the Lebesgue measure.
[^6]: Following the indications of Mosek’s user manual, quadratic functionals are implemented under the form of linear functionals involving auxiliary variables subject to conic constraints.
[^7]: With this customer density, [@Rochet:1998uj] expected the bunching region to be triangular, and the image $\nabla U$ to be the union of the segment $[(0,0),(1,1)]$ and of the square $[1,2]^2$. After discussion with the author, and in view of the numerical experiments, we believe that these predictions are erroneous.
[^8]: Optimal solutions of are not uniquely determined outside the customer density support $T=\operatorname{supp}(\mu)$.
[^9]: This property was not noticed in the original work [@Carlier:2007gy].
[^10]: We use the description of $\operatorname{GradConv}(X)$ by $\cO(N^2)$ linear constraints given in [@Ekeland:2010tl], but (for simplicity) not their energy discretization, nor their method for globally extending elements $u \in \operatorname{GradConv}(X)$.
[^11]: The methods (Ob$_k$)$_{k\geq 1}$ are closely related to our approach since they produce outputs with zero convexity defect (up to numerical precision), and the number of linear constraints only grows linearly with the domain cardinality: $\cO(k^2 N)$, with $N:= \#(X)$. We suspect that better results could be obtained with these methods by selecting adaptively and locally the integer $k$.
[^12]: The (presumed) non-convergence of the methods OF$_2 $ and OF$_3$ is not visible in this graph.
[^13]: Experiments conducted on a 2.7 GHz Core i7 (quad-core) laptop, equipped with 16 GB of RAM.
[^14]: The method AM, implemented with Mosek’s conic optimizer, takes only 2.5s to solve the product bundles variant of the monopolist problem on a $64\times 64$ grid, with a uniform density of consumers on $[0,1]^2$. This contrasts with the figure, 751s, reported in [@Aguilera:2008uq] in the same setting but with a different optimizer.
---
abstract: |
A Gray code for a combinatorial class is a method for listing the objects in the class so that successive objects differ in some prespecified, small way, typically expressed as a bounded Hamming distance. In a previous work, the authors of the present paper showed, among other things, that the $m$-ary Reflected Gray Code Order yields a Gray code for the set of restricted growth functions. Here we further investigate variations of this order relation, and give the first Gray codes and efficient generating algorithms for bounded restricted growth functions.
[**Keywords:**]{} [*[Gray code (order), restricted growth function, generating algorithm]{}*]{}
author:
- Ahmad Sabri
- Vincent Vajnovszki
title: 'More restricted growth functions: Gray codes and exhaustive generations'
---
Introduction
============
In [@SV] the authors showed that both the order relation induced by the generalization of the Binary Reflected Gray Code and one of its suffix partitioned versions yield Gray codes on some sets of restricted integer sequences, and in particular for restricted growth functions. These results are presented in a general framework, where the restrictions are defined by means of statistics on integer sequences.
In the present paper we investigate two prefix partitioning order relations on the set of [*bounded*]{} restricted growth functions: as in [@SV], the original Reflected Gray Code Order on $m$-ary sequences, and a new order relation which is an appropriate modification of the former one. We show that, according to the parity of the imposed bound, one of these order relations gives a Gray code on the set of bounded restricted growth functions. As a byproduct, we obtain a Gray code for restricted growth functions with a specified odd value for the largest entry; the case of an even value of the largest entry remains an open problem. In the final part we present the corresponding exhaustive generating algorithms. A preliminary version of these results was presented at The Japanese Conference on Combinatorics and its Applications in May 2016 in Kyoto [@SV_Jap].
Notation and definitions
========================
A [*restricted growth function*]{} of length $n$ is an integer sequence ${\boldsymbol{s}}=s_1s_2\ldots s_n$ with $s_1=0$ and $0\leq s_{i+1}\leq \max\{s_j\}_{j=1}^i+1$, for all $i$, $1\leq i\leq n-1$. We denote by $R_n$ the set of length $n$ restricted growth functions; its cardinality is given by the $n$th Bell number (sequence [A000110]{} in [@sloa]), whose exponential generating function is $e^{e^x}-1$. Length $n$ restricted growth functions encode the partitions of an $n$-set.
For an integer $b\geq 1$, let $R_n(b)$ denote the set of [*$b$-bounded*]{} sequences in $R_n$, that is, $$R_n(b)=\{s_1s_2\ldots s_n\in R_n\,:\, \max\{s_i\}_{i=1}^n\leq b\},$$ and $$R_n^{*}(b)=\{s_1s_2\ldots s_n\in R_n\,:\, \max\{s_i\}_{i=1}^n= b\}.$$ See Table \[Tb1\] for an example.
---- --------------- ----- ----- --------------- ----- ----- --------------- -----
1.   0 0 0 0 0             15.   0 1 0 0 0       *3*   29.   **0 1 1 1 2**   *1*
2.   0 0 0 0 1       *1*   16.   0 1 0 0 1       *1*   30.   **0 1 1 2 2**   *1*
3.   0 0 0 1 0       *2*   17.   **0 1 0 0 2**   *1*   31.   **0 1 1 2 1**   *1*
4.   0 0 0 1 1       *1*   18.   0 1 0 1 0       *2*   32.   **0 1 1 2 0**   *1*
5.   **0 0 0 1 2**   *1*   19.   0 1 0 1 1       *1*   33.   **0 1 2 2 0**   *1*
6.   0 0 1 0 0       *3*   20.   **0 1 0 1 2**   *1*   34.   **0 1 2 2 1**   *1*
7.   0 0 1 0 1       *1*   21.   **0 1 0 2 2**   *1*   35.   **0 1 2 2 2**   *1*
8.   **0 0 1 0 2**   *1*   22.   **0 1 0 2 1**   *1*   36.   **0 1 2 1 2**   *1*
9.   0 0 1 1 0       *2*   23.   **0 1 0 2 0**   *1*   37.   **0 1 2 1 1**   *1*
10.  0 0 1 1 1       *1*   24.   0 1 1 0 0       *2*   38.   **0 1 2 1 0**   *1*
11.  **0 0 1 1 2**   *1*   25.   0 1 1 0 1       *1*   39.   **0 1 2 0 2**   *2*
12.  **0 0 1 2 2**   *1*   26.   **0 1 1 0 2**   *1*   40.   **0 1 2 0 1**   *1*
13.  **0 0 1 2 1**   *1*   27.   0 1 1 1 0       *2*   41.   **0 1 2 0 0**   *1*
14.  **0 0 1 2 0**   *1*   28.   0 1 1 1 1       *1*
---- --------------- ----- ----- --------------- ----- ----- --------------- -----

: \[Tb1\] The set $R_5(2)$, and in bold-face the set $R^*_5(2)$. Sequences are listed in $\preccdot$ order (see Definition \[de:coRGCorder\]) and in italic is the Hamming distance between consecutive sequences.
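The sets above are small enough to be enumerated directly from the definition; the following sketch (ours, not part of the original exposition) is convenient for reproducing such examples.

    def rgf(n, b=None):
        """Yield the restricted growth functions of length n (the set R_n),
        or only the b-bounded ones (the set R_n(b)) when a bound b is given."""
        def extend(prefix, mx):
            if len(prefix) == n:
                yield tuple(prefix)
                return
            top = mx + 1 if b is None else min(b, mx + 1)
            for v in range(top + 1):
                yield from extend(prefix + [v], max(mx, v))
        if n >= 1:
            yield from extend([0], 0)

For example, `len(list(rgf(5, 2)))` equals $41$, and keeping only the sequences with maximal entry $2$ yields the bold-face set $R^*_5(2)$ of Table \[Tb1\].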
If a list of same-length sequences is such that the Hamming distance between successive sequences (that is, the number of positions in which the sequences differ) is bounded from above by a constant, independent of the sequence length, then the list is said to be a [*Gray code*]{}. When we want to explicitly specify this constant, say $d$, then we refer to such a list as a [*$d$-Gray code*]{}; in addition, if the positions where the successive sequences differ are adjacent, then we say that the list is a [*$d$-adjacent Gray code*]{}.
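A direct transcription of these notions, handy for checking the Gray code property on small instances (again an added illustration), reads:

    def hamming(s, t):
        """Number of positions in which the equal-length sequences s and t differ."""
        return sum(a != b for a, b in zip(s, t))

    def is_d_gray_code(seqs, d):
        """True if consecutive sequences in the list differ in at most d positions."""
        return all(hamming(s, t) <= d for s, t in zip(seqs, seqs[1:]))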
The next two definitions give order relations on the set of $m$-ary integer sequences of length $n$ on which our Gray codes are based.
\[de:RGCorder\] Let $m$ and $n$ be positive integers with $m\geq 2$. The [*Reflected Gray Code Order*]{} $\prec$ on $\{0,1,\ldots,m-1\}^n$ is defined as: ${\boldsymbol{s}}=s_1s_2\ldots s_n$ is less than ${\boldsymbol{t}}=t_1t_2\ldots t_n$, denoted by ${\boldsymbol{s}}\prec{\boldsymbol{t}}$, if
either $\sum_{i=1}^{k-1} s_i$ is even and $s_k<t_k$, or $\sum_{i=1}^{k-1} s_i$ is odd and $s_k>t_k$
for some $k$ with $s_i=t_i$ ($1\leq i\leq k-1$) and $s_k\neq t_k$.
This order relation is the natural extension to $m$-ary sequences of the order induced by the Binary Reflected Gray Code introduced in [@Gray]. See for example [@BBPSV; @SV] where this order relation and its variations are considered in the context of factor avoiding words and of statistic-restricted sequences.
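Definition \[de:RGCorder\] can be implemented verbatim as a comparison function (an illustrative sketch):

    def rgc_less(s, t):
        """True if s strictly precedes t in Reflected Gray Code Order."""
        prefix_sum = 0
        for sk, tk in zip(s, t):
            if sk != tk:
                # the first differing position is compared according to
                # the parity of the sum of the common prefix
                return sk < tk if prefix_sum % 2 == 0 else sk > tk
            prefix_sum += sk
        return False  # the sequences are equal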
\[de:coRGCorder\] Let $m$ and $n$ be positive integers with $m\geq 2$. The [*co-Reflected Gray Code Order*]{}[^1] $\preccdot$ on $\{0,1,\ldots,m-1\}^n$ is defined as: ${\boldsymbol{s}}=s_1s_2\ldots s_n$ is less than ${\boldsymbol{t}}=t_1t_2\ldots t_n$, denoted by ${\boldsymbol{s}}\preccdot\ {\boldsymbol{t}}$, if
either $U_k$ is even and $s_k<t_k$, or $U_k$ is odd and $s_k>t_k$
for some $k$ with $s_i=t_i$ ($1\leq i\leq k-1$) and $s_k\neq t_k$ where $
U_k= | \{i\in \{1,2,\ldots,k-1\}:s_i\neq 0, s_i \mbox{ is even}\}|$.
See Table \[Tb\_co\] for an example.
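Definition \[de:coRGCorder\] admits an analogous transcription (sorting $\{0,1,2\}^3$ with it reproduces the list of Table \[Tb\_co\]):

    def corgc_less(s, t):
        """True if s strictly precedes t in co-Reflected Gray Code Order."""
        u = 0  # number of non-zero even entries in the common prefix
        for sk, tk in zip(s, t):
            if sk != tk:
                return sk < tk if u % 2 == 0 else sk > tk
            if sk != 0 and sk % 2 == 0:
                u += 1
        return False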
For a set $S$ of same-length integer sequences, the [*$\prec$-first*]{} (resp. [*$\prec$-last*]{}) sequence in $S$ is the first (resp. last) sequence when the set is listed in $\prec$ order; [*$\preccdot$-first*]{} and [*$\preccdot$-last*]{} are defined in a similar way. For a sequence ${\boldsymbol{u}}$, ${\boldsymbol{u}}\,|\,S$ denotes the subset of $S$ of sequences having prefix ${\boldsymbol{u}}$.
Both order relations, Reflected and co-Reflected Gray Code Order, produce prefix partitioned lists; that is to say, if a set of sequences is listed according to one of these order relations, then the sequences having a common prefix are consecutive in the list.
---- ------- ----- ------- ----- -------
1. 0 0 0 10. 1 0 0 19. 2 2 0
2. 0 0 1 11. 1 0 1 20. 2 2 1
3. 0 0 2 12. 1 0 2 21. 2 2 2
4. 0 1 0 13. 1 1 0 22. 2 1 2
5. 0 1 1 14. 1 1 1 23. 2 1 1
6. 0 1 2 15. 1 1 2 24. 2 1 0
7. 0 2 2 16. 1 2 2 25. 2 0 2
8. 0 2 1 17. 1 2 1 26. 2 0 1
9. 0 2 0 18. 1 2 0 27. 2 0 0
---- ------- ----- ------- ----- -------
: \[Tb\_co\] The set $\{0,1,2\}^3$ listed in $\preccdot$ order.
The Gray codes
==============
In this section we show that the set $R_n(b)$, with $b$ odd, listed in $\prec$ order is a Gray code. However, $\prec$ does not induce a Gray code when $b$ is even: the Hamming distance between two consecutive sequences can be arbitrarily large for large enough $n$. To overcome this, we consider $\preccdot$ order instead of $\prec$ order when $b$ is even, and we show that the obtained list is a Gray code.
In the proof of Theorem \[k\_odd\] below we need the following propositions which give the forms of the last and first sequence in $R_n(b)$ having a certain fixed prefix, when sequences are listed in $\prec$ order.
\[pro:pro\_Rb\_odd1\] Let $b\geq1$ and odd, $k\leq n-2$ and ${\boldsymbol{s}}=s_1\ldots s_{k}$. If ${\boldsymbol{t}}$ is the $\prec$-last sequence in ${\boldsymbol{s}}\,|\,R_n(b)$, then ${\boldsymbol{t}}$ has one of the following forms:
1. ${\boldsymbol{t}}={\boldsymbol{s}}M0\ldots0$ if $\sum_{i=1}^{k} s_i$ is even and $M$ is odd,
2. ${\boldsymbol{t}}={\boldsymbol{s}}M(M+1)0\ldots0$ if $\sum_{i=1}^{k} s_i$ is even and $M$ is even,
3. ${\boldsymbol{t}}={\boldsymbol{s}}0\ldots0$ if $\sum_{i=1}^{k} s_i$ is odd,
where $M=\min\{b,\max\{s_i\}_{i=1}^k+1\}$.
Let ${\boldsymbol{t}}=s_1\dots s_kt_{k+1}\ldots t_n$ be the $\prec$-last sequence in ${\boldsymbol{s}}\,|\,R_n(b)$.
Referring to the definition of $\prec$ order in Definition \[de:RGCorder\], if $\sum_{i=1}^{k} s_i$ is even, then $t_{k+1}=\min\{b,\max\{s_i\}_{i=1}^k+1\}= M$, and based on the parity of $M$, two cases can occur.
- If $M$ is odd, then we have that $\sum_{i=1}^k s_i+t_{k+1}=\sum_{i=1}^{k} s_i+M$ is odd, thus $t_{k+2}\ldots t_n=0\ldots 0$, and we retrieve the form prescribed by the first point of the proposition.
- If $M$ is even, then $M\neq b$ and $\sum_{i=1}^k s_i+t_{k+1}=\sum_{i=1}^{k} s_i+M$ is even, thus $t_{k+2}=\max\{s_1,\ldots, s_k,t_{k+1}\}+1=M+1$, which is odd. Next, we have $\sum_{i=1}^k s_i+t_{k+1}+t_{k+2}=\sum_{i=1}^k s_i+2M+1$ is odd, and this implies as above that $t_{k+3}\ldots t_n=0\ldots 0$, and we retrieve the second point of the proposition.
For the case when $\sum_{i=1}^{k} s_i$ is odd, in a similar way we have $t_{k+1}\dots t_n=0\dots 0$.
The next proposition is the ‘first’ counterpart of the previous one. Its proof is similar by exchanging the parity of the summation from ‘odd’ to ‘even’ and vice-versa, and it is left to the reader.
\[pro:pro\_Rb\_odd2\] Let $b\geq1$ and odd, $k\leq n-2$ and ${\boldsymbol{s}}=s_1\ldots s_{k}$. If ${\boldsymbol{t}}$ is the $\prec$-first sequence in ${\boldsymbol{s}}\,|\,R_n(b)$, then ${\boldsymbol{t}}$ has one of the following forms:
1. ${\boldsymbol{t}}={\boldsymbol{s}}M0\ldots0$ if $\sum_{i=1}^{k} s_i$ is odd and $M$ is odd,
2. ${\boldsymbol{t}}={\boldsymbol{s}}M(M+1)0\ldots0$ if $\sum_{i=1}^{k} s_i$ is odd and $M$ is even,
3. ${\boldsymbol{t}}={\boldsymbol{s}}0\ldots0$ if $\sum_{i=1}^{k} s_i$ is even,
where $M=\min\{b,\max\{s_i\}_{i=1}^k+1\}$.
Based on Propositions \[pro:pro\_Rb\_odd1\] and \[pro:pro\_Rb\_odd2\], we have the following theorem.
\[k\_odd\] For any $n,b\geq 1$ and $b$ odd, $R_n(b)$ listed in $\prec$ order is a $3$-adjacent Gray code.
Let ${\boldsymbol{s}}=s_1s_2\dots s_n$ and ${\boldsymbol{t}}=t_1t_2\dots t_n$ be two consecutive sequences in the $\prec$ ordered list of the set $R_n(b)$, with ${\boldsymbol{s}}\prec {\boldsymbol{t}}$, and let $k$ be the leftmost position where ${\boldsymbol{s}}$ and ${\boldsymbol{t}}$ differ. If $k\geq n-2$, then obviously ${\boldsymbol{s}}$ and ${\boldsymbol{t}}$ differ in at most three positions, otherwise let ${\boldsymbol{s}}'=s_1\ldots s_k$ and ${\boldsymbol{t}}'=t_1\ldots t_k$. Thus, ${\boldsymbol{s}}$ is the $\prec$-last sequence in ${\boldsymbol{s}}'\,|\,R_n(b)$ and ${\boldsymbol{t}}$ is the $\prec$-first sequence in ${\boldsymbol{t}}'\,|\,R_n(b)$. Combining Propositions \[pro:pro\_Rb\_odd1\] and \[pro:pro\_Rb\_odd2\] we have that, when $k\leq n-3$, $s_{k+3}s_{k+4}\dots s_n=t_{k+3}t_{k+4}\dots t_n=00\dots 0$. Since $s_i=t_i$ for $i=1,\ldots,k-1$, the statement holds.
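The theorem is easy to check empirically on small instances. Reusing the helper functions sketched in the previous section (`rgf`, `rgc_less`, `is_d_gray_code`), one may verify the bound $3$ as follows; only the distance bound, not the adjacency of the modified positions, is tested here.

    from functools import cmp_to_key

    def check_odd(n, b):
        """Sort R_n(b) in Reflected Gray Code Order and verify the 3-Gray property."""
        cmp = lambda s, t: -1 if rgc_less(s, t) else (1 if rgc_less(t, s) else 0)
        seqs = sorted(rgf(n, b), key=cmp_to_key(cmp))
        return is_d_gray_code(seqs, 3)

For every odd $b$ and small $n$ (for example `check_odd(7, 3)`) this should return `True`.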
Theorem \[k\_even\] below shows the Graycodeness of $R_n(b)$, $b\geq 1$ and even, listed in $\preccdot$ order, and as for Theorem \[k\_odd\] we need the next two propositions; in its proof we will make use of the Iverson bracket notation: $[P]$ is $1$ if the statement $P$ is true, and 0 otherwise. Thus, for a sequence $s_1s_2\dots s_n$ and a $k\leq n$, $| \{i\in \{1,2,\ldots,k\}:s_i\neq 0 \mbox{ and } s_i \mbox{ is even}\}|=\sum_{i=1}^k[s_i\neq0 {\rm\ and\ } s_i {\rm\ is\ even}]$.
\[pro:pro\_Rb\_even1\] Let $b\geq2$ and even, $k\leq n-2$ and ${\boldsymbol{s}}=s_1s_2\ldots s_k$. If ${\boldsymbol{t}}$ is the $\preccdot$-last sequence in ${\boldsymbol{s}}\,|\,R_n(b)$, then ${\boldsymbol{t}}$ has one of the following forms:
1. ${\boldsymbol{t}}={\boldsymbol{s}}M0\ldots0$ if $U_{k+1}$ is even and $M$ is even,
2. ${\boldsymbol{t}}={\boldsymbol{s}}M(M+1)0\ldots0$ if $U_{k+1}$ is even and $M$ is odd,
3. ${\boldsymbol{t}}={\boldsymbol{s}}0\ldots0$ if $U_{k+1}$ is odd,
where $M=\min\{b,\max\{s_i\}_{i=1}^k+1\}$ and $U_{k+1}=\sum_{i=1}^k[s_i\neq0 {\rm\ and\ } s_i {\rm\ is\ even}]$.
Let ${\boldsymbol{t}}=s_1\dots s_kt_{k+1}\dots t_n$ be the $\preccdot$-last sequence in ${\boldsymbol{s}}\,|\,R_n(b)$.
Referring to the definition of $\preccdot$ order in Definition \[de:coRGCorder\], if $U_{k+1}$ is even, then $t_{k+1}=\min\{b,\max\{s_i\}_{i=1}^k+1\}=M>0$, and based on the parity of $M$, two cases can occur.
- If $M$ is even, then $U_{k+1}+[t_{k+1}\neq0 {\rm\ and\ } t_{k+1} {\rm\ is\ even}]=U_{k+1}+1$ is odd, thus $t_{k+2}\dots t_n=0\dots 0$, and we retrieve the form prescribed by the first point of the proposition.
- If $M$ is odd, then $M\neq b$ and $U_{k+1}+[t_{k+1}\neq0 {\rm\ and\ } t_{k+1} {\rm\ is\ even}]=U_{k+1}$ is even, thus $t_{k+2}=\max\{s_1,\dots, s_k,t_{k+1}\}+1=M+1$, which is even. Next, we have $U_{k+1}+[t_{k+1}\neq0 {\rm\ and\ } t_{k+1} {\rm\ is\ even}]+ [t_{k+2}\neq0 {\rm\ and\ } t_{k+2} {\rm\ is\ even}]=U_{k+1}+1$ is odd, and this implies as above that $t_{k+3}\dots t_n=0\dots 0$, and we retrieve the second point of the proposition.
For the case when $U_{k+1}$ is odd, in a similar way we have $t_{k+1}\dots t_n=0\dots 0$.
The next proposition is the ‘first’ counterpart of the previous one.
\[pro:pro\_Rb\_even2\] Let $b\geq2$ and even, $k\leq n-2$ and ${\boldsymbol{s}}=s_1s_2\ldots s_k$. If ${\boldsymbol{t}}$ is the $\preccdot$-first sequence in ${\boldsymbol{s}}\,|\,R_n(b)$, then ${\boldsymbol{t}}$ has one of the following forms:
1. ${\boldsymbol{t}}={\boldsymbol{s}}M0\ldots0$ if $U_{k+1}$ is odd and $M$ is even,
2. ${\boldsymbol{t}}={\boldsymbol{s}}M(M+1)0\ldots0$ if $U_{k+1}$ is odd and $M$ is odd,
3. ${\boldsymbol{t}}={\boldsymbol{s}}0\ldots0$ if $U_{k+1}$ is even,
where $M=\min\{b,\max\{s_i\}_{i=1}^k+1\}$ and $U_{k+1}=\sum_{i=1}^k[s_i\neq0 {\rm\ and\ } s_i {\rm\ is\ even}]$.
Based on Propositions \[pro:pro\_Rb\_even1\] and \[pro:pro\_Rb\_even2\] we have the following theorem; its proof is similar to that of Theorem \[k\_odd\].
\[k\_even\] For any $n\geq 1$, $b\geq 2$ and even, $R_n(b)$ listed in $\preccdot$ order is a $3$-adjacent Gray code.
It is worth mentioning that neither $\prec$ for even $b$ nor $\preccdot$ for odd $b$ yields a Gray code on $R_n(b)$. Considering $b\geq n$ in Theorems \[k\_odd\] and \[k\_even\], the bound $b$ does not actually provide any restriction; in this case $R_n(b)=R_n$, and we have the following corollary.
For any $n\geq 1$, $R_n$ listed in either $\prec$ or $\preccdot$ order is a $3$-adjacent Gray code.
\[k\] For any $b\geq 1$ and odd, $n>b$, $R_n^*(b)$ listed in $\prec$ order is a $5$-Gray code.
For two integers $a$ and $b$, $0<a\leq b$, we define $\tau_{a,b}$ as the length $b-a$ increasing sequence $(a+1)(a+2) \ldots (b-1)b$; $\tau_{a,b}$ is the empty sequence if $a=b$. Imposing that a sequence ${\boldsymbol{s}}$ in $R_n(b)$ has its largest element equal to $b$ (so, that it belongs to $R^*_n(b)$) implies that either $b$ occurs in ${\boldsymbol{s}}$ before its last position, or ${\boldsymbol{s}}$ ends with $b$, and in this case the tail of ${\boldsymbol{s}}$ is $\tau_{a,b}$ for an appropriate $a<b$. More precisely, in the latter case, ${\boldsymbol{s}}$ has the form $s_1s_2\ldots s_j\tau_{a,b}$, for some $j$ and $a$, with $a=\max\{s_i\}_{i=1}^j$ and $j=n-(b-a)$.
Now let ${\boldsymbol{s}}=s_1s_2\ldots s_n\prec{\boldsymbol{t}}=t_1t_2\ldots t_n$ be two consecutive sequences in the $\prec$ ordered list for $R^*_n(b)$, and let $k\leq n-3$ be the leftmost position where ${\boldsymbol{s}}$ and ${\boldsymbol{t}}$ differ, thus $s_1s_2\ldots s_{k-1}=t_1t_2\ldots t_{k-1}$. It follows that ${\boldsymbol{s}}$ is the $\prec$-last sequence in $R^*_n(b)$ having the prefix $s_1s_2\ldots s_k$, and using Proposition \[pro:pro\_Rb\_odd1\] and the notations therein, by imposing that $\max\{s_i\}_{i=1}^n$ is equal to $b$, we have:
- if $\sum_{i=1}^ks_i$ is odd, then ${\boldsymbol{s}}$ has the form $s_1s_2\ldots s_k0\ldots0\,\tau_{a,b}$, where $a=\max\{s_i\}_{i=1}^k$,
- if $\sum_{i=1}^ks_i$ is even, then ${\boldsymbol{s}}$ has one of the following forms:
- $s_1s_2\ldots s_k M0\ldots0\,\tau_{M,b}$, or
- $s_1s_2\ldots s_kM(M+1)0\ldots0\,\tau_{M+1,b}$.
When the above $\tau$’s suffixes are empty, we retrieve precisely the three cases in Proposition \[pro:pro\_Rb\_odd1\].
Similarly, ${\boldsymbol{t}}$ is the $\prec$-first sequence in $R^*_n(b)$ having the prefix $t_1t_2\ldots t_{k-1}t_k=s_1s_2\ldots s_{k-1}t_k$. Since by the definition of $\prec$ order we have that $t_k=s_k+1$ or $t_k=s_k-1$, it follows that $\sum_{i=1}^kt_i$ and $\sum_{i=1}^ks_i$ have different parity (that is, $\sum_{i=1}^kt_i$ is odd if and only if $\sum_{i=1}^ks_i$ is even), and by Proposition \[pro:pro\_Rb\_odd2\] and replacing for notational convenience $M$ by $M'$, we have:
- if $\sum_{i=1}^ks_i$ is odd, then ${\boldsymbol{t}}$ has the form $t_1t_2\ldots t_k0\ldots0\,\tau_{a',b}$, where $a'=\max\{t_i\}_{i=1}^k$,
- if $\sum_{i=1}^ks_i$ is even, then ${\boldsymbol{t}}$ has one of the following forms:
- $t_1t_2\ldots t_k M'0\ldots0\,\tau_{M',b}$, or
- $t_1t_2\ldots t_kM'(M'+1)0\ldots0\,\tau_{M'+1,b}$.
With these notations, since $t_k\in \{s_k+1,s_k-1\}$, it follows that
- if $\sum_{i=1}^ks_i$ is odd, then $a'\in\{a-1,a,a+1\}$, and so the length of $\tau_{a,b}$ and that of $\tau_{a',b}$ differ by at most one; and
- if $\sum_{i=1}^ks_i$ is even, then $M'\in\{M-1,M,M+1\}$, and the length of the non-zero tail of ${\boldsymbol{s}}$ and that of ${\boldsymbol{t}}$ (defined by means of $\tau$ sequences) differ by at most two.
Finally, the whole sequences ${\boldsymbol{s}}$ and ${\boldsymbol{t}}$ differ in at most five (not necessarily adjacent) positions, and the statement holds.
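As for Theorem \[k\_odd\], the bound can be checked empirically on small instances with the helpers sketched earlier (an illustration only, not a substitute for the proof):

    from functools import cmp_to_key

    def check_star(n, b):
        """Sort R*_n(b) in Reflected Gray Code Order and verify the 5-Gray property."""
        cmp = lambda s, t: -1 if rgc_less(s, t) else (1 if rgc_less(t, s) else 0)
        seqs = sorted((s for s in rgf(n, b) if max(s) == b), key=cmp_to_key(cmp))
        return is_d_gray_code(seqs, 5)

For odd $b$ and $n>b$ (for example `check_star(7, 3)`) this should return `True`.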
Generating algorithms
=====================
An exhaustive generating algorithm is one generating all sequences in a combinatorial class, with some predefined properties ([*e.g.*]{}, having the same length). Such an algorithm is said to run in [*constant amortized time*]{} if it generates each object in $O(1)$ time, in an amortized sense. In [@Ruskey] the author calls such an algorithm a [*CAT algorithm*]{} and shows that a recursive generating algorithm satisfying the following properties is a CAT algorithm:
- Each recursive call either generates an object or produces at least two recursive calls;
- The amount of computation in each recursive call is proportional to the degree of the call (that is, to the number of subsequent recursive calls produced by current call).
Procedure [Gen1]{} in Fig. \[fig:alg\_Rnb\_odd\] generates all sequences belonging to $R_n(b)$ in Reflected Gray Code Order. In particular, when $b$ is odd the generation induces a 3-adjacent Gray code. The bound $b$ and the generated sequence $s=s_1s_2\ldots s_n$ are global. The $k$ parameter is the position where the value is to be assigned (see lines 8 and 13); the $dir$ parameter represents the direction of sequencing for $s_k$, whether it is up (when $dir$ is even, see line 7) or down (when $dir$ is odd, see line 12); and $m$ is such that $m+1$ is the maximum value that can be assigned to $s_k$, that is, $m=\min\{b-1,\max\{s_i\}_{i=1}^{k-1}\}$ (see line 5).
The algorithm initially sets $s_1=0$, and the recursive calls are triggered by the initial call [Gen1]{}$(2,0,0)$. For the current position $k$, the algorithm assigns a value to $s_k$ (line 8 or 13), followed by a recursive call in line 10 or 15. This scheme guarantees that each recursive call will produce subsequent recursive calls until $k=n+1$ (line 4), that is, until a sequence of length $n$ is generated and printed out by the [Type()]{} procedure. This process eventually generates all sequences in $R_n(b)$. In addition, by construction, algorithm [Gen1]{} satisfies the previous CAT desiderata, and so it is an efficient exhaustive generating algorithm.
    [01] procedure Gen1(k, dir, m: integer)
    [02]   global s, n, b: integer;
    [03]   local i, u: integer;
    [04]   if k = n+1 then Type();
    [05]   else if m = b then m := b-1; endif
    [06]        if dir mod 2 = 0
    [07]        then for i := 0 to m+1 do
    [08]               s_k := i;
    [09]               if m < s_k then u := s_k; else u := m; endif
    [10]               Gen1(k+1, i, u);
    [11]             endfor
    [12]        else for i := m+1 downto 0 do
    [13]               s_k := i;
    [14]               if m < s_k then u := s_k; else u := m; endif
    [15]               Gen1(k+1, i+1, u);
    [16]             endfor
    [17]        endif
    [18]   endif
    [19] end procedure.
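For readers who prefer executable code, here is a sketch of how [Gen1]{} can be transcribed into Python; the transcription (including the generator-based recursion and the 0-based indexing) is ours, but the control flow follows the pseudocode line by line.

    def gen1(n, b):
        """Generate R_n(b) in Reflected Gray Code Order (a transcription of Gen1)."""
        s = [0] * n                    # s[0] = 0 is fixed

        def rec(k, direction, m):
            if k == n:                 # a complete sequence has been built (line 4)
                yield tuple(s)
                return
            if m == b:                 # line 5
                m = b - 1
            up = (direction % 2 == 0)  # line 6
            values = range(m + 2) if up else range(m + 1, -1, -1)
            for i in values:
                s[k] = i               # lines 8 / 13
                u = max(m, i)          # lines 9 / 14
                yield from rec(k + 1, i if up else i + 1, u)   # lines 10 / 15

        yield from rec(1, 0, 0)        # corresponds to the initial call Gen1(2,0,0)

For odd $b$ the produced list should be the 3-adjacent Gray code of Theorem \[k\_odd\].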
Similarly, the call [Gen2]{}$(2,0,0)$ of the algorithm in Fig. \[fig:alg\_Rnb\_even\] generates sequences in $R_n(b)$ in co-Reflected Gray Code Order, and in particular, when $b$ is even, a 3-adjacent Gray code for these sequences. Again it satisfies the CAT desiderata, and so it is an efficient exhaustive generating algorithm.
Finally, algorithm [Gen3]{} in Fig. \[fig:alg\_Pnb\_odd\] generates the set $R^*_n(b)$ in Reflected Gray Code Order and produces a 5-Gray code if $b$ is odd. It mimics algorithm [Gen1]{}, the only differences being an additional parameter $a$ and lines 5, 6, 13 and 19, and its main call is [Gen3]{}$(2,0,0,0)$. Parameter $a$ keeps track of the maximum value in the prefix $s_1s_2\dots s_{k-1}$ of the currently generated sequence, and it is updated in lines 13 and 19. Furthermore, when the current position $k$ belongs to a $\tau$-tail (see the proof of Theorem \[k\]), that is, when condition $k=n+1+a-b$ in line 5 is satisfied, then the imposed value is written in this position, and similarly for the next two positions. Theorem \[k\] ensures that there are no differences between the current sequence and the previously generated one beyond position $k+2$, and thus a new sequence in $R^*_n(b)$ is generated. As previously, [Gen3]{} is a CAT generating algorithm.
    procedure Gen2(k, dir, m: integer)
      global s, n, b: integer;
      local i, u: integer;
      if k = n+1 then Type();
      else if m = b then m := b-1; endif
           if dir mod 2 = 0
           then for i := 0 to m+1 do
                  s_k := i;
                  if m < s_k then u := s_k; else u := m; endif
                  if s_k = 0 then Gen2(k+1, 0, u);
                  else Gen2(k+1, i+1, u); endif
                endfor
           else for i := m+1 downto 0 do
                  s_k := i;
                  if m < s_k then u := s_k; else u := m; endif
                  if s_k = 0 then Gen2(k+1, 1, u);
                  else Gen2(k+1, i, u); endif
                endfor
           endif
      endif
    end procedure.
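A similar sketch for [Gen2]{} (again our transcription, with 0-based positions):

    def gen2(n, b):
        """Generate R_n(b) in co-Reflected Gray Code Order (a transcription of Gen2)."""
        s = [0] * n

        def rec(k, direction, m):
            if k == n:
                yield tuple(s)
                return
            if m == b:
                m = b - 1
            up = (direction % 2 == 0)
            values = range(m + 2) if up else range(m + 1, -1, -1)
            for i in values:
                s[k] = i
                u = max(m, i)
                if i == 0:             # a zero entry never changes the parity of U
                    nxt = 0 if up else 1
                else:
                    nxt = i + 1 if up else i
                yield from rec(k + 1, nxt, u)

        yield from rec(1, 0, 0)

For $n=5$ and $b=2$ the $41$ sequences produced in this order should coincide with Table \[Tb1\], and for even $b$ the list is the 3-adjacent Gray code of Theorem \[k\_even\].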
    [01] procedure Gen3(k, dir, m, a: integer)
    [02]   global s, n, b: integer;
    [03]   local i, u, ℓ: integer;
    [04]   if k = n+1 then Type();
    [05]   else if k = n+1+a-b
    [06]        then for i := 0 to 2 do if k+i <= n then s_{k+i} := a+1+i; endif endfor
    [07]             Type();
    [08]        else if m = b then m := b-1; endif
    [09]             if dir mod 2 = 0
    [10]             then for i := 0 to m+1 do
    [11]                    s_k := i;
    [12]                    if m < s_k then u := s_k; else u := m; endif
    [13]                    if a < s_k then ℓ := s_k; else ℓ := a; endif
    [14]                    Gen3(k+1, i, u, ℓ);
    [15]                  endfor
    [16]             else for i := m+1 downto 0 do
    [17]                    s_k := i;
    [18]                    if m < s_k then u := s_k; else u := m; endif
    [19]                    if a < s_k then ℓ := s_k; else ℓ := a; endif
    [20]                    Gen3(k+1, i+1, u, ℓ);
    [21]                  endfor
    [22]             endif
    [23]        endif
    [24]   endif
    [25] end procedure.
[**Final remarks.**]{} We suspect that the upper bounds 3 in Theorems \[k\_odd\] and \[k\_even\], and 5 in Theorem \[k\] are not tight, and a natural question arises: are there more restrictive Gray codes for $R_n(b)$ and for $R^*_n(b)$ with $b$ odd? Finally, is there a natural order relation inducing a Gray code on $R^*_n(b)$ when $b$ is even?
A. Bernini, S. Bilotta, R. Pinzani, A. Sabri, V. Vajnovszki, Reflected Gray codes for $q$-ary words avoiding a given factor, Acta Informatica, 52(7), 573-592 (2015).
F. Gray, Pulse code communication, U.S. Patent 2632058 (1953).
F. Ruskey, Combinatorial generation, Book in preparation.
A. Sabri, V. Vajnovszki, Reflected Gray code based orders on some restricted growth sequences, The Computer Journal, 58(5), 1099-1111 (2015).
A. Sabri, V. Vajnovszki, Bounded growth functions: Gray codes and exhaustive generation, The Japanese Conference on Combinatorics and its Applications, May 21-25, 2016, Kyoto, Japan.
N.J.A. Sloane, The On-line Encyclopedia of Integer Sequences, available electronically at [http://oeis.org]{}.
[^1]: In [@SV] a similar terminology is used for a slightly different notion
---
abstract: 'Issues that are specific for formulating fermions in light-cone quantization are discussed. Special emphasis is put on the use of parity invariance in the non-perturbative renormalization of light-cone Hamiltonians.'
address: |
Department of Physics,New Mexico State University\
Las Cruces, New Mexico 88003, U.S.A.
author:
- 'M. BURKARDT'
title: 'FERMIONS ON THE LIGHT-FRONT'
---
Light-front (LF) quantization is the most physical approach to calculating parton distributions on the basis of QCD [@adv]. Before one can formulate QCD with quarks, it is necessary to understand how to describe fermions in this framework. This in turn requires that one addresses the following issues:

-   How is spontaneous symmetry breaking (chiral symmetry!) manifested in the LF framework, where the vacuum appears to be trivial?

-   Is it possible to preserve current conservation and parity invariance in this framework?

-   How does one formulate fermions on the transverse lattice, which seems to be a very promising approach to pure glue LFQCD [@bardeen; @dalley]?[^1]
Spontaneous Symmetry Breaking
=============================
The first of the above issues has been addressed very often in the past and we will restrict ourselves here to a brief summary.[^2] In the LF framework, non-trivial vacuum structure can reside only in zero-modes ($k^+=0$ modes). Since these are high-energy modes (actually infinite energy in the continuum) one often does not include them as explicit degrees of freedom but assumes they have been integrated out, leaving behind an effective LF-Hamiltonian. The important points here are the following. If the zero-mode sector involves spontaneous symmetry breaking, this manifests itself as explicit symmetry breaking for the effective Hamiltonian. In general, these effective LF Hamiltonians thus have a much richer operator structure than the canonical Hamiltonian. Therefore, compared to a conventional Hamiltonian framework, the question of the vacuum has been shifted from the states to the operators and it should thus be clear that the issues of renormalization and the vacuum are deeply entangled in the LF framework.
Current Conservation
====================
Despite widespread confusion on this subject, vector current conservation (VCC) is actually not a problem in the LF framework. Many researchers avoid the subject of current conservation because the divergence of the vector current $$q_\mu j^\mu(q) = q^- j^+ + q^+ j^- - {\vec q}_\perp \cdot {\vec j}_\perp$$ involves $j^-$, which is quadratic in the constrained fermion spinor component, $$j^-= \psi^\dagger_{(-)}\psi_{(-)},$$ and thus $j^-$ contains quartic interactions, making it at least as difficult to renormalize as the Hamiltonian.
We know already from renormalizing $P^-$ that the canonical relation between $\psi_{(-)}$ and $\psi_{(+)}$ is in general not preserved in composite operators (such as $\bar{\psi}\psi$). It is therefore clear that a canonical definition for $j^-$ will in general violate current conservation since it does not take into account integrating out zero-modes and other high-energy degrees of freedom.
However, in LF gauge $A^+=0$, $j^-$ does not enter the Hamiltonian and therefore one can address its definition separately from the construction of the Hamiltonian. In fact, it is very easy to find a pragmatic definition which obviously guarantees manifest current conservation, namely $$\begin{aligned}
j^-(q^+) &=& -\frac{q^-\,j^+(q^+)}{q^+} \qquad (1+1) \label{eq:j-def}\\
j^-(q^+,{\vec q}_\perp) &=& -\frac{1}{q^+}\left\{ q^-\,j^+(q^+,{\vec q}_\perp) - {\vec q}_\perp \cdot {\vec j}_\perp(q^+,{\vec q}_\perp)\right\} \qquad (3+1)\end{aligned}$$ in 1+1 and 3+1 dimensions respectively, i.e. $j^-$ is defined as the solution of $q_\mu j^\mu=0$. The corresponding expressions in coordinate space $(x^-,{\vec x}_\perp)$ can be obtained by Fourier transform. In summary,
- Most importantly, VCC is no problem in LF quantization and is manifest at the operator level, provided $j^-$ is [*defined*]{} using Eq. (\[eq:j-def\]).
- Since VCC can easily be made manifest (by using the above construction!), there is no point in [*testing*]{} its validity and it cannot be used as a non-perturbative renormalization condition either.
For a non-interacting theory, Eq. (\[eq:j-def\]) reduces to the canonical definition of $j^-$, but in general this is not the case when $P^-$ contains interactions or even non-canonical terms. Note that (as so often on the LF) $q^+$ appears in the denominator of Eq. (\[eq:j-def\]). Therefore, as usual, one should be very careful while taking the $q^+\rightarrow 0$ limit and while drawing any conclusions about this limit. An example of this kind are the pair creation terms in $j^-$. Naively they do not contribute for $q^+\rightarrow 0$, since the $q\bar{q}$ pair emanating from $j^-$ necessarily carries positive $q^+$. However, since $j^-$ often has very singular matrix elements for $q^+$, such seemingly vanishing terms nevertheless survive the $q^+\rightarrow 0$ limit, which often leads to confusion. For an early example of this kind see Ref. [@mb:1+1]. A more recent discussion can be found in Ref. [@recent].
Parity Invariance
=================
General Remarks
---------------
A parity transformation, $x^0\stackrel{P}{\longrightarrow}
x^0$, ${\vec x}\stackrel{P}{\longrightarrow}-{\vec x}$ leaves the quantization hyperplane ($x^0=0$) in equal time (ET) quantization invariant and therefore the parity operator is a kinematic operator in such a framework. It is thus very easy to ensure that parity is a manifest symmetry in ET quantization by tracking parity at each step in a calculation. The situation is completely different in the LF framework, where the same parity transformation exchanges LF-‘time’ ($x^+\equiv x^0+x^3$) and space ($x^-\equiv x^0-x^3$) directions, i.e. $$x^+ \stackrel{P}{\longleftrightarrow} x^-$$ and therefore the quantization hyperplane $x^+=0$ is [*not*]{} invariant. Hence, the parity operator is a dynamical operator on the LF and, except for a free field theory, it is probably impossible to write down a simple expression for it in terms of quark and gluon field operators. Thus, parity invariance is not a manifest symmetry in this framework. Note that the situation is the other way round for the boost operator (kinematic and manifest on the LF, dynamical and non-manifest in ET). It thus depends on the physics application one is interested in and the symmetries that one considers the most important ones for that particular physics problem, which framework is preferable.
For most applications of LF quantization, the lack of manifest parity actually does not constitute a problem — in fact, one can view it as an opportunity rather than a problem. The important point here is the following: due to the lack of manifest covariance in a Hamiltonian formulation, LF Hamiltonians in general contain more parameters than the corresponding Lagrangian. Parity invariance may be very sensitive to some of these parameters.
In order to illustrate this important point, let us consider the example of a 1+1 dimensional Yukawa model [@parity] $${\cal L}=\bar{\psi}\left( i\not \!\!\partial
-m_F-g\phi \gamma_5 \right)\psi -\frac{1}{2}
\phi\left( \Box +m_B^2\right)\phi .$$ This model actually has a lot in common with the kind of interactions that appear when one formulates QCD (with fermions) on a transverse lattice [@hala].
The main difference between scalar and Dirac fields in the LF formulation is that not all components of the Dirac field are dynamical: multiplying the Dirac equation $$\left( i\not \!\!\partial
-m_F-g\phi \gamma_5\right)\psi =0$$ by $\frac{1}{2}\gamma^+$ yields a constraint equation (i.e. an “equation of motion” without a time derivative) $$i\partial_-\psi_{(-)}=\left(m_F+g\phi\gamma_5\right)\gamma^+\psi_{(+)}
,
\label{eq:constr}$$ where $
\psi_{\pm}\equiv \frac{1}{2}\gamma^\mp \gamma^\pm \psi .
$ For the quantization procedure, it is convenient to eliminate $\psi_{(-)}$, using $$\psi_{(-)} = \frac{1}{i\partial_-}\left(m_F+g\phi\gamma_5\right)\gamma^+\psi_{(+)}$$ from the classical Lagrangian before imposing quantization conditions, yielding $$\begin{aligned}
{\cal L}&=&\sqrt{2}\psi_{(+)}^\dagger i\partial_+ \psi_{(+)}
-\phi\left( \Box +m_B^2\right)\phi
-\psi^\dagger_{(+)}\frac{m_F^2}{\sqrt{2}i\partial_-}
\psi_{(+)}
\label{eq:lelim}
\\
&-&\psi^\dagger_{(+)}\left(
g\phi
\frac{m_F\gamma_5}{\sqrt{2}i\partial_-}
+\frac{m_F\gamma_5}{\sqrt{2}i\partial_-}g\phi\right)
\psi_{(+)}
-\psi^\dagger_{(+)}g\phi\frac{1}{\sqrt{2}i\partial_-}
g\phi\psi_{(+)} .
\nonumber\end{aligned}$$ The rest of the quantization procedure very much resembles the procedure for self-interacting scalar fields.
The above canonical Hamiltonian contains a kinetic term for the fermions, a fermion boson vertex and a fermion 2-boson vertex. While the couplings of these three terms in the canonical Hamiltonian depend only on two independent parameters ($m$ and $g$), it turns out that these terms are renormalized independently from each other once zero-mode and other high-energy degrees of freedom are integrated out. More explicitly this means that one should make an ansatz for the renormalized LF Hamiltonian density of the form $$\begin{aligned}
{\cal P}^-&=&
\frac{m_B^2}{2}\phi^2
+\psi^\dagger_{(+)}\frac{c_2}{\sqrt{2}i\partial_-}
\psi_{(+)}
+c_3\psi^\dagger_{(+)}\left(
\phi
\frac{\gamma_5}{\sqrt{2}i\partial_-}
+\frac{\gamma_5}{\sqrt{2}i\partial_-}\phi\right)
\psi_{(+)} \nonumber\\
&+&c_4\psi^\dagger_{(+)}\phi\frac{1}{\sqrt{2}i\partial_-}
\phi\psi_{(+)} ,
\label{eq:pren}\end{aligned}$$ where the $c_i$ do not necessarily satisfy the canonical relation $c_3^2=c_2c_4$. However, this does not mean that the $c_i$ are completely independent from each other. In fact, Eq.(\[eq:pren\]) will describe the Yukawa model only for specific combinations of $c_i$. It is only that we do not know the relation between the $c_i$. [^3]
Thus the bad news is that the number of parameters in the LF Hamiltonian has increased by one (compared to the Lagrangian). The good news is that a wrong combination of $c_i$ will in general give rise to a parity violating theory: formally this can be seen in the weak coupling limit, where the correct relation ($c_3^2=c_2c_4$) follows from a covariant Lagrangian. Any deviation from this relation can be described on the level of the Lagrangian (for free massive fields, equivalence between LF and covariant formulation is not an issue) by addition of a term of the form $\delta {\cal L} =
\bar{\psi}\frac{\gamma^+}{i\partial^+}\psi$, which is obviously parity violating, since parity transformations result in $A^\pm \stackrel{P}{\rightarrow} A^\mp$ for Lorentz vectors $A^\mu$; i.e. $\delta {\cal L}
\stackrel{P}{\rightarrow} \bar{\psi}\frac{\gamma^-}{i\partial^-}\psi \neq \delta {\cal L}$. This also affects physical observables, as can be seen by considering boson fermion scattering in the weak coupling limit of the Yukawa model. At the tree level, there is an instantaneous contact interaction, which is proportional to $\frac{1}{q^+}$. The (unphysical) singularity at $q^+=0$ is canceled by a term with fermion intermediate states, which contributes (near the pole) with an amplitude $\propto -\frac{m_v^2}{m_{kin}^2}\frac{1}{q^+}$. Obviously, the singularity cancels iff $m_V=m_{kin}$. Since the singularity involves the LF component $q^+$, this singular piece obviously changes under parity. This result is consistent with the fact that there is no zero-mode induced renormalization of $m_{kin}$ and thus $m_V=m_{kin}$ at the tree level. This simple example clearly demonstrates that a ‘false’ combination of $m_V$ and $m_{kin}$ leads to violations of parity for a physical observable, which is why imposing parity invariance as a renormalization condition may help reduce the dimensionality of coupling constant space.
Parity Sensitive Observables that “don’t work”
----------------------------------------------
Of course there are an infinite number of parity sensitive observables, but not all of them are easy to evaluate non-perturbatively in the Hamiltonian LF framework. Furthermore, we will see below that some relations among observables, which seem to be sensitive to parity violations, are actually ‘protected’ by manifest symmetries, such as charge conjugation, or by VCC.
From the brief discussion of the (to-be-canceled) singularity above it seems that the most sensitive observable to look for violations of parity in QCD would be Compton scattering cross sections between quarks and gluons (or the corresponding fermions and bosons in other field theories) because there one could tune the external momenta such that the potential singularity enters with maximum strength. However, this is not a very good choice: first of all non-perturbative scattering amplitudes are somewhat complicated to construct on the LF. Secondly, quarks and gluons are confined particles, which makes $qg$ Compton scattering an unphysical process.
A much better choice is given by matrix elements between bound states. Bound states are non-perturbative and all possible momentum transfers occur in their time evolution. Therefore, any parity violating sub-amplitude would contribute at some point and would therefore affect physical observables. Secondly, since one of the primary goals of LFQCD is to explore the non-perturbative spectra and structure of hadrons, matrix elements in bound states are the kind of observables for which the whole framework has been tailored.
One conceivable set of matrix elements are those of the vector current operator $j^\pm$. For example, consider the vacuum to meson matrix elements $$\begin{aligned}
\langle 0 | j^+ |n,p\rangle &=& p^+ f_n^{(+)}\\
\langle 0 | j^- |n,p\rangle &=& p^- f_n^{(-)}.
\label{eq:fpm}\end{aligned}$$ By boost invariance, the couplings defined in Eq. (\[eq:fpm\]) must be independent of the momenta. Obviously, parity invariance requires $|f_n^{(+)}| =
|f_n^{(-)}|$. However, this relation also follows from current conservation $0=p^-\langle 0 | j^+ |n,p\rangle
+p^+\langle 0 | j^- |n,p\rangle $, which makes it a useless relation for the purpose of parity tests.
Similar statements can be made about elastic form factors but we will omit the details here. The basic upshot is that the same relations between the matrix elements of $j^\pm$ that arise from parity invariance can often also be derived using only VCC, i.e. such relations are in general ‘protected’ by VCC.
One may also consider non-conserved currents, such as the axial vector current. However, there one would have to face the issue of defining the ‘minus’ component before one can test any parity relations. Parity constraints are probably very helpful in this case when constructing the minus components, but then those relations can no longer be used to help constrain the coefficients in the LF-Hamiltonian.
Another class of potentially useful operators consists of the scalar and pseudoscalar densities $\bar{\psi}\psi$ and $\bar{\psi} \gamma_5 \psi$. Obviously, if parity is conserved then at most one of the two couplings $$\begin{aligned}
f_S&=& \langle 0 | \bar{\psi}\psi | n\rangle \\
f_P&=& \langle 0 | \bar{\psi}\gamma_5\psi | n\rangle\end{aligned}$$ can be nonzero at the same time, since a state $|n\rangle$ cannot be both scalar and pseudoscalar. What restricts the usefulness of this criterion is the fact that the same ‘selection rule’ also follows from charge conjugation invariance ($\bar{\psi}\psi$ and $\bar{\psi} \gamma_5 \psi$ have opposite charge parity!) which is a manifest symmetry on the LF. Therefore, only for theories with two or more flavors, where one can consider operators such as $\bar{u}s$ and $\bar{u}\gamma_5 s$, does one obtain parity constraints that are not protected by charge conjugation invariance. But even then one may have to face the issue of how to define these operators. Parity may of course be used in this process, but then it has again been ‘used up’ and one can no longer use these selection rules to test the Hamiltonian.
In summary, many parity relations are probably very useful to determine the LF representation of the operators involved, but may not be very useful to determine the Hamiltonian.
A useful observable to test parity
----------------------------------
We have seen above that relations between $j^+$ and $j^-$ are often protected by vector current conservation and are therefore not very useful as parity tests. A much more useful parity test can actually be obtained by considering the ‘plus’-component only: Let us now consider a vector form factor, $$\langle p^\prime, n| j^\mu | p, m \rangle
\stackrel{!}{=} \varepsilon^{\mu \nu}q_\nu F_{mn}(q^2),
\label{eq:form}$$ where $q=p^\prime -p$, between states of opposite parity. When writing the r.h.s. in terms of one invariant form factor, use was made of both vector current conservation and parity invariance. A term proportional to $p^\mu + {p^\prime}^\mu$ would also satisfy current conservation, but has the wrong parity. A term proportional to $\varepsilon^{\mu \nu}\left(p_\nu + p_\nu^\prime\right)$ has the right parity, but is not conserved and a term proportional to $q^\mu$ is both not conserved and violates parity. Other vectors do not exist for this example. The Lorentz structure in Eq. (\[eq:form\]) has nontrivial implications even if we consider only the “plus” component, yielding $$\frac{1}{q^+}\langle p^\prime, n| j^+ | p, m \rangle
=F_{mn}(q^2).
\label{eq:formplus}$$
(Figure \[fig:parity\]: parity test based on Eq. (\[eq:formplus\]), computed for the DLCQ parameters $K=24$, 32 and 40; see the discussion below.)
That this equation implies nontrivial constraints can be seen as follows: as a function of the longitudinal momentum transfer fraction $x\equiv q^+/p^+$, the invariant momentum transfer reads ($M_m^2$ and $M_n^2$ are the invariant masses of the in and outgoing meson) $$q^2=x\left(M_m^2 -\frac{M_n^2}{1-x}\right).$$ This quadratic equation has in general, for a given value of $q^2$, two solutions for $x$ which physically correspond to hitting the meson from the left and right respectively. The important point is that it is not manifestly true that these two values of $x$ in Eq. (\[eq:formplus\]) will give the same value for the form factor, which makes this an excellent parity test.
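For concreteness, multiplying the relation above by $(1-x)$ gives the quadratic $M_m^2x^2-(M_m^2+q^2-M_n^2)x+q^2=0$. The short Python sketch below (with arbitrary illustrative masses and momentum transfers, not taken from Ref. [@parity]) solves it; typically one root lies in $(0,1)$ while the other is negative, corresponding to a longitudinal momentum transfer in the opposite direction.

```python
import numpy as np

# Solve  Mm2*x^2 - (Mm2 + q2 - Mn2)*x + q2 = 0,  obtained from
# q^2 = x*(Mm2 - Mn2/(1-x)) after multiplying by (1-x).
def x_solutions(q2, Mm2, Mn2):
    a, b, c = Mm2, -(Mm2 + q2 - Mn2), q2
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # this q^2 cannot be reached
    r = np.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

Mm2, Mn2 = 2.3, 3.1                    # illustrative meson masses squared
for q2 in (-0.5, -1.0, -2.0):          # spacelike momentum transfers
    print(q2, x_solutions(q2, Mm2, Mn2))
```

The parity test then consists of checking that Eq. (\[eq:formplus\]) yields the same $F_{mn}(q^2)$ at both values of $x$.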
In Ref. [@parity], the coupling as well as the physical masses of both the fermion and the lightest boson were kept fixed, while the “vertex mass” was tuned (note that this required re-adjusting the bare kinetic masses). Figure \[fig:parity\] shows a typical example, where the calculation of the form factor was repeated for three values of the DLCQ parameter K (24, 32 and 40) in order to make sure that numerical approximations did not introduce parity violating artifacts.
For a given physical mass and boson-fermion coupling, there exists a “magic value” of the vertex mass and only for this value one finds that the parity condition (\[eq:formplus\]) is satisfied over the whole range of $q^2$ considered. This provides a strong self-consistency check, since there is only one free parameter, but the parity condition is not just one condition but a condition for every single value of $q^2$ (i.e. an infinite number of conditions). In other words, keeping the vertex mass independent from the kinetic mass is not only necessary, but also seems sufficient in order to properly renormalize Yukawa$_{1+1}$.
In the above calculation, it was sufficient to work with a vertex mass that was just a constant. However, depending on the interactions and the cutoffs employed, it may be necessary to introduce counter-term functions [@brazil]. But even in cases where counter-term functions need to be introduced, parity constraints may be very helpful in determining the coefficient-functions for those more complex effective LF Hamiltonians non-perturbatively.
References {#references .unnumbered}
==========
[99]{} M. Burkardt, [*Advances Nucl. Phys.*]{} [**23**]{}, 1 (1996). W. A. Bardeen et al., , 1037 (1980). S. Dalley and B. van de Sande, , 1088 (1999); , 065008 (1999). M. Burkardt, , 762 (1989); , 613 (1992). S.J. Brodsky and D.S. Hwang, , 239 (1998); H.-M. Choi and C.R. Ji, , 071901 (1998). M. Burkardt, , 2913 (1996). M. Burkardt and H. El-Khozondar, , 054504 (1999). R.J. Perry, lectures given at NATO Advanced Study Institute on Confinement, Duality and Nonperturbative Aspects of QCD, Cambridge, U.K., 1997, hep-th/9710175.
[^1]: Because of lack of space, this important issue could not be discussed in these notes.
[^2]: See for example Ref. [@adv] for a more extensive discussion on this question.
[^3]: Coupling coherence [@brazil] does not help to determine the finite part (integration constant) for the mass if the original Lagrangian contains a nonzero bare mass!
---
author:
- 'J. Lillo-Box, D. Barrado, A. C. M. Correia'
bibliography:
- 'biblio2.bib'
date: 'Accepted on March 11th, 2016'
subtitle: 'Lack of hot-Jupiters and prevalence of multi-planetary systems'
title: 'Close-in planets around giant stars'
---
[Extrasolar planets abound in almost any possible configuration. However, until five years ago, there was a lack of planets orbiting closer than 0.5 au to giant or subgiant stars. Since then, recent detections have started to populate this regime by confirming 13 planetary systems. We discuss the properties of these systems in terms of their formation and evolution off the main sequence. Interestingly, we find that $70.0\pm6.6$% of the planets in this regime are inner components of multiplanetary systems. This value is 4.2$\sigma$ higher than for main-sequence hosts, which we find to be $42.4\pm0.1$%. [The properties of the known planets seem to indicate that the closest-in planets ($a<0.06$ au) to main-sequence stars are massive (i.e., hot Jupiters) and isolated and that they are subsequently engulfed by their host as it evolves to the red giant branch, leaving only the predominant population of multiplanetary systems in orbits $0.06<a<0.5$ au. We discuss the implications of this emerging observational trend in the context of formation and evolution of hot Jupiters.]{}]{}
Introduction
============
The large crop of extrasolar planets discovered so far offers a new possibility of studying the properties of these systems from a statistical point of view [e.g., @batalha14]. This bounty of planetary systems and their widely diverse properties allow us to start answering the question of how these bodies form and evolve from an observational point of view. However, while hundreds of planets have been found around solar-mass stars, there is still a scarcity of planets orbiting more massive hosts. Some of these hosts have already left the main sequence, now being in the giant or subgiant phase. In consequence, looking for planets around these more evolved stars (with much sharper absorption lines) can help to better constrain the demography of planets around early-type stars and thus to probe the efficiency of planet formation mechanisms for the different mass ranges of the host star. The discovery of planets around K and G giants and subgiants is therefore crucial for planet formation theories.
From the observational point of view, there is a dearth of planets with short periods around stars ascending the red giant branch (RGB, @johnson07). The reason could be twofold. On one hand, it could be explained by a scarcity of close-in planets around early-type main-sequence stars ($M_{\star} > 1.2 M_{\odot}$). This scarcity has been hypothesized to be related to different migration mechanisms for planets around stars of different masses (Udry et al., 2003), owing to the shorter dissipation timescales of the protoplanetary disks of these stars, which prevents the formed planets from migrating to close-in orbits [see, e.g., @burkert07; @currie09]. On the other hand, the paucity of planets around these evolved stars has been considered by theoretical studies as evidence of the planet engulfment or disruption even in the first stages of the evolution off the main sequence of their parents. [@villaver09] calculated how tidal interactions in the subgiant and giant stages can lead to the final engulfment of the close-in planets and how this process is more efficient for more massive planets. Likewise, [@villaver14] have analyzed the effects of the evolution of the planet’s orbital eccentricity, mass-loss rate, and planetary mass on the survivability of planets orbiting massive stars ($M_{\star} > 1.5 M_{\odot}$). They conclude that the planet mass is a key parameter for the engulfment during the subgiant phase, with the more massive planets more likely falling into the stellar envelope during this phase (for the same initial orbital parameters). Also, planets located at 2-3 $R_{\star}$, when the star begins to leave the main sequence, may suffer orbital decay from the influence of stellar tides. Any planet closer than this orbital distance may be engulfed by the star in the subgiant phase. However, these results are based on a limited sample of confirmed exoplanets around RGB stars. Therefore the detection of close-in planets around post main-sequence stars is crucial for constraining theoretical models of planet engulfment. Several long-term projects have therefore focused on finding these planets (e.g., TAPAS: @niedzielski15; EXPRESS: @jones11).
In this paper, we analyze the growing sample of close-in planets around giant stars [from an observational point of view]{}. [We analyze their properties and multiplicity and compare them to planets around main-sequence stars, trying to determine the evolution of these properties. ]{}[We discuss the possible explanations for the observed trend in the context of the formation and evolution of hot-Jupiter planets and the consequences of their inward migration.]{}
Properties of the known close-in planets around giant stars {#sec:properties}
===========================================================
From the sample of confirmed or validated extrasolar planets, we have selected those with close-in orbits around stars in the subgiant phase or ascending the RGB. We consider planets in this regime as those revolving in orbits closer than $a<0.5$ au around host stars with $\log{g}<3.8$. The limit in semi-major axis was chosen as the apparent limit closer than which no planets have been found until recent years. The limit in surface gravity was chosen to select both giant and subgiant stars. In Fig. \[fig:logg\_sma\], we show the location of all known planets in the $[a,\log{g}]$ parameter space. In total, 13 systems have been identified so far in the mentioned regime (including uncertainties): Kepler-56, Kepler-91, Kepler-108, Kepler-278, Kepler-368, Kepler-391, Kepler-432, 8Umi, 70Vir, HD11964, HD38529, HD102956, and HIP67851. In Table \[tab:properties\] we present the main properties of the planets and host stars in these systems. We include all systems that lie inside the above-mentioned boundaries within their $3\sigma$ uncertainties. In total, the sample is composed of four subgiant hosts and nine stars already ascending the RGB.
![Semi-major axis of known planets against their host surface gravity. We include single planets (red plus symbols) and multiplanetary systems (circles) with the inner component being the dark blue and the outer planet(s) in light blue. [For reference, we also plot the stellar radius of the host star at every evolutionary stage (i.e., for the different surface gravities) for two masses that comprise the range of masses of the hosts analyzed in this paper, $1~M_{\odot}$(dotted) and $1.6~M_{\odot}$(dash-dotted).]{} []{data-label="fig:logg_sma"}](sma_logg2.pdf){width="50.00000%"}
Results {#sec:results}
=======
We can see from Fig. \[fig:logg\_sma\] that most of the planets with $\log{g}<3.8$ and $a<0.5$ au are multiplanetary systems with at least the inner component in this regime. In four cases, no additional planets are reported by the discovery papers. These are Kepler-91, HD102956, 8Umi, and 70Vir. In this section we calculate the actual rate (and uncertainty) of multiplanetary systems in this regime and analyze the RV observations of the single systems to put constraints on possibly undetected outer planets.
We retrieved the host masses and radii from the Exoplanet Catalogue[^1] and the NASA Exoplanet Archive[^2]. The surface gravity and its uncertainty are then computed from those values. The semi-major axis was also retrieved from the same source. We divide the $[\log{g},a]$ parameter space into three regimes (see Fig. \[fig:logg\_sma\]): close-in planets around main-sequence stars (MS$_{\rm close}$, $\log{g}>3.8$ and $a<0.5$ au), close-in planets around giant stars (GS$_{\rm close}$, $\log{g}<3.8$ and $a<0.5$ au), and far away planets around giant stars (GS$_{\rm far}$, $\log{g}<3.8$ and $a>0.5$ au). We bootstrap the values of all planets by considering Gaussian probability distributions with standard deviations equal to the parameter uncertainties. In each step, we count the number of planets in each particular regime that are inner components in multiplanetary systems with respect to the total number of systems in that regime ($\zeta$). We ran $10^3$ steps and obtained the distribution of $\zeta$ for each regime (see Fig. \[fig:distributions\]).
The median and standard deviation of the distribution of the bootstrapped populations for each regime are kept. These values represent the ratio of inner planets in multi-planetary systems in each of the three regimes. We find that $\zeta_{\rm MS,close}=42.4\pm0.1$% of the planetary systems hosted by main sequence stars with at least one component closer than 0.5 au are multiplanetary systems. Regarding systems in orbits farther than 0.5 au, we find that $\zeta_{\rm GS,far}=7.1\pm0.1$% of the systems in this regime are multiple systems with their inner component farther than 0.5 au. Finally, we find that in $\zeta_{\rm GS,close}=70.0\pm6.6$% of the systems with the closer planet more interior than 0.5 au, this planet is the inner component of a multi-planetary system. According to this, the ratio of close-in planets that are inner components in multiplanetary systems is significantly greater ($>4.2\sigma$) in the case of giant stars than it is during the main-sequence stage. The implications of this significant difference are discussed in Sect. \[sec:discussion\].
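A schematic version of this bootstrap is sketched below in Python; the arrays are placeholders standing in for the catalogue columns (semi-major axis, surface gravity, their uncertainties, and a multiplicity flag), not the actual sample used in this work.

```python
import numpy as np

rng = np.random.default_rng(42)
# Placeholder catalogue: a (au), logg (cgs), their 1-sigma errors, and a flag
# marking whether the planet is the inner component of a multiplanetary system.
a, a_err = np.array([0.05, 0.30, 0.70, 0.20]), np.array([0.01, 0.05, 0.10, 0.02])
logg, logg_err = np.array([3.5, 3.6, 3.4, 4.3]), np.array([0.1, 0.1, 0.1, 0.1])
multi = np.array([False, True, True, True])

zetas = []
for _ in range(1000):                       # 10^3 bootstrap realisations
    a_b = rng.normal(a, a_err)              # perturb within the Gaussian uncertainties
    g_b = rng.normal(logg, logg_err)
    sel = (a_b < 0.5) & (g_b < 3.8)         # close-in planets around evolved hosts
    if sel.any():
        zetas.append(multi[sel].mean())     # fraction of multis in this realisation
print(np.median(zetas), np.std(zetas))      # zeta and its uncertainty
```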
We must note that we are aware that these numbers may suffer from some observational biases (especially $\zeta_{\rm GS,far}$) since, [for instance, detecting outer components to planets with $a>0.5$ au around giant stars by]{} both radial velocity and transit methods is much more difficult or, at the least, less likely. Also, detecting planets with the transit method is more difficult in the case of giants owing to the unfavorable planet-to-star radius ratio. These biases have not been taken into account in this calculation. However, it is important to point out that both biases would favor an increase of $\zeta_{\rm GS,close}$ over $\zeta_{\rm MS,close}$.
![image](all_percentages.pdf){width="100.00000%"}
![image](99_min_Mp_hd102956.pdf){width="33.00000%"} ![image](99_min_Mp_8umi.pdf){width="33.00000%"} ![image](99_min_Mp_70vir.pdf){width="33.00000%"}
Among the four planetary systems in close-in orbits around giant stars found to host only one planet, we have investigated the mass-period parameter space yet unexplored by the discovery RV data to put constraints on additional undetected planet. We have used the RV data published in the discovery papers of HD102956 (22 epochs in a 1164-days timespan; @johnson10), 8 Ursa Minor (21 measurements in a 1888-day timespan; @lee15), and 70 Vir (@naef03) to check for the limitations of these observations for detecting additional planets in the system. We used these data and the derived parameters in the discovery works to remove the contribution of the known planet and look for a second planet in the residuals.
This was done by following the prescriptions in [@lagrange09], by simulating a planet signal in a circular orbit with the corresponding $[m_p,P_{\rm orb}]$ and observed at the same dates, and by introducing the typical noise for the particular instrument used in the discovery of the inner planet. The results of the simulation are provided in Fig. \[fig:minimumMp\], where we set the properties of the planets that could have been missed by the discovery papers. Regarding Kepler-91 [@lillo-box14; @lillo-box14c], subsequent follow-up observations after its confirmation pointed out the possibility that additional bodies exist by detecting a possible radial velocity drift [@sato15] or by detecting additional dips in the *Kepler* phase-folded light curve.
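A much simplified stand-in for this exercise is sketched below: for each period it converts a planet mass into a circular-orbit radial-velocity semi-amplitude and flags it as detectable when the corresponding signal-to-noise over the available epochs exceeds a threshold. The number of epochs, noise level, stellar mass and threshold are assumed values (roughly HD102956-like), and the sampling and aliasing effects of the real observing dates, which the injection tests of [@lagrange09] do capture, are ignored here.

```python
import numpy as np

G, M_jup, M_sun, day = 6.674e-11, 1.898e27, 1.989e30, 86400.0

def K_circular(mp_jup, P_days, mstar_sun):
    """RV semi-amplitude in m/s for a circular orbit with m_p << M_*."""
    return mp_jup * M_jup * (2 * np.pi * G /
                             (P_days * day * (mstar_sun * M_sun) ** 2)) ** (1.0 / 3.0)

n_epochs, sigma, mstar, thresh = 22, 5.0, 1.7, 3.0   # assumed numbers, not from the paper
periods = np.logspace(1, 3, 50)                      # 10 to 1000 days
# minimum detectable mass (M_jup): fitted-amplitude uncertainty ~ sigma*sqrt(2/n)
min_mass = thresh * sigma * np.sqrt(2.0 / n_epochs) / K_circular(1.0, periods, mstar)
for P, m in list(zip(periods, min_mass))[::10]:
    print(round(P, 1), "d  ->", round(m, 3), "M_jup")
```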
In these four cases, the possibility of an additional outer planet is not discarded among the presented limits, but we count them as single for the purposes of this work.
Discussion and conclusions {#sec:discussion}
==========================
The results presented in the previous section indicate that [(according to observations)]{} the large majority of planetary systems in close-in orbits around giant stars are multiple. We have found that the ratio of these systems against the total number of systems is significantly higher in giant and subgiant stars than in main-sequence hosts. Even though we are still in the regime of low-number statistics, there seems to be an increasing sample of multiplanetary systems around evolved stars with their inner component in an orbit closer than 0.5 au. We thus may wonder about the reasons for this higher rate against single planets [and about how this is related to the evolution of planetary systems.]{}
![image](MS_Radius_single_vs_multi.pdf){width="45.00000%"} ![image](MS_Mass_single_vs_multi.pdf){width="45.00000%"}
It has been theorized that giant planets formed in the outer regions of the disk are capable of destroying or swallowing smaller planets in their inward migration to the inner parts of the system [e.g., @mustill15]. This could explain the high concentration of single planets in orbits closer than 0.06 au around main-sequence stars (see Fig. \[fig:logg\_sma\]). [Indeed, 68.1% of the planets in this regime are found to be single planets, while inner components of multiplanetary systems represent 31.0% of the systems. By contrast, planets at 0.06-0.5 au are mostly multiple systems with at least the inner component in this range. According to the sample of known planets used in this paper, we find that 64.5% of the systems in this regime are inner planets in multiplanetary systems, while just 35.5% are single-planet systems.\
By focusing on the properties of the extremely close-in planets ($a<0.06$ au) around main-sequence stars, we see a clear segregation, with the single planets being Jupiter-like and the inner components of multiplanetary systems being small (rocky) planets. This segregation is clear in Fig. \[fig:closein\], where we show the mass and radius distribution of extremely close-in planets around main-sequence stars for the sample of single planetary systems and inner planets in multiplanetary systems. From this reasoning, it is clear that during the main-sequence stage, planets in these extremely close-in orbits are mainly single Jupiter-like planets (i.e., hot Jupiters). This observational result is in good agreement with the idea that the migration of massive planets from the outer regions of the disk tends to destroy any already formed rocky planets in the inner regions.\
It is important to point out that this destruction of the smaller planets may not always take place as shown in the case of the WASP-47 system, where a Neptune-sized outer planet and a super-Earth inner companion to a hot-Jupiter planet were found by [@becker15]. This discovery highlights that several effects may be taking place during this process.]{}
[The previously summarized conclusions are relevant for the results of this work since we have not found extremely close-in planets around giant or subgiant stars. However, we find planets in the 0.06-0.5 au regime around these evolved stars. And, more importantly, we find that they are mostly inner components in multiplanetary systems in a fraction of $70.0\pm6.6$ %, which is compatible with the 64.5% previously mentioned for the main-sequence systems in the same regime. The observational conclusion that can be extracted from these numbers is that once the star evolves off the main-sequence, the closest planets (usually massive and single) are swallowed by their host, leaving only the population of multiplanetary systems in orbits beyond 0.06 au. Numerical simulations by [@frewen16] have shown that this could indeed explain the deficiency of warm Jupiters in stars larger than two solar radii. They hypothesize that these gaseous planets with periods of 10-100 days may migrate owing to Kozai-Lidov oscillations with a subsequent decay via tidal interactions with the star. In the current sample, the single planets Kepler-91b and HD102956b are indeed single hot Jupiters in a very close-in orbit ($a<0.1$ au), so they seem to have (by now) survived this phase.]{}
[According to [@villaver14], more massive planets are more easily and more rapidly engulfed during the evolution of their host star off the main sequence. It is thus clear from observations that hot Jupiters should practically disappear at these stages. Low-mass planets, however, could survive at least during the first stages of the evolution of the star. But their detection is much more challenging given the small radius ratio relative to the giant planets (which makes it difficult to detect their transits) and the low mass ratios combined with stellar pulsations during this evolutionary stage (which hampers detection by the radial velocity technique). However, small planets should be there, and future instrumentation (e.g., ESPRESSO/VLT) will be capable of detecting them, filling the new gap of extremely close-in planets around evolved stars.]{}
[There are other possibilities for explaining the prevalence of close-in multiplanetary systems around giant stars. For instance, gravitational interactions between the planets can play an important role during the subgiant phase, although they tend to destabilize the system [@voyatzis13; @veras13; @mustill14]. However, mean motion resonances (MMR) can help the inner orbit to migrate outward and thus escape being engulfed by the star, as in the formation of the solar system [e.g., @levison03; @dangelo12]. Among multiplanetary systems with the inner component being closer than 0.5 au, we checked for possible MMR. Only Kepler-56 has a near MMR of 2:1, as pointed out in the discovery paper [@steffen13]. In two other cases, Kepler-108 and Kepler-432, the planets are close to MMR 4:1 and 8:1. The remaining systems do not present any resonant or near-resonant periodicities. Although in a small sample, it is therefore more difficult to explain the major presence of multiplanetary systems than single planets around subgiant and giant stars only in terms of gravitational interactions between the planets in the system.]{}
Finally, we must note that the results published here are purely observational. They can feed theoretical studies of the evolution of planetary systems across the different stellar stages to explain the planet-star interactions once the star leaves the main sequence. However, these studies are beyond the scope of this observational work, which aims to highlight the need for detecting more close-in planets around giant stars and to provide some hints for theoretical analysis.
This research was partially funded by Spanish grant AYA2012-38897-C02-01. This work was co-funded under the Marie Curie Actions of the European Commission (FP7-COFUND). A.C. acknowledges support from CIDMA strategic project UID/MAT/04106/2013.
[^1]: [exoplanet.eu](exoplanet.eu)
[^2]: [exoplanetarchive.ipac.caltech.edu](exoplanetarchive.ipac.caltech.edu)
---
abstract: |
In this paper we find viscosity solutions to an elliptic system governed by two different operators (the Laplacian and the infinity Laplacian) using a probabilistic approach. We analyze a game that combines the Tug-of-War with Random Walks on two different boards. We show that the value functions of this game converge uniformly to a viscosity solution of the elliptic system as the step size goes to zero.
In addition, we show uniqueness for the elliptic system using pure PDE techniques.
address: 'Alfredo Miranda and Julio D. Rossi Departamento de Matem[á]{}tica, FCEyN, Universidad de Buenos Aires, Pabellon I, Ciudad Universitaria (1428), Buenos Aires, Argentina.'
author:
- 'Alfredo Miranda and Julio D. Rossi'
title: '**A game theoretical approach for an elliptic system with two different operators (the Laplacian and the infinity Laplacian)**'
---
Introduction
============
Our goal in this paper is to describe a probabilistic game whose value functions approximate viscosity solutions to the following elliptic system: $$\label{ED1}
\left\lbrace
\begin{array}{ll}
- \displaystyle {{\frac{1}{2}}}\Delta_{\infty}u(x) + u(x) - v(x)=0 \qquad & \ x \in \Omega, \\[10pt]
- \displaystyle \frac{\kappa}{2} \Delta v(x) + v(x) - u(x)=0 \qquad & \ x \in \Omega, \\[10pt]
u(x) = f(x) \qquad & \ x \in \partial \Omega, \\[10pt]
v(x) = g(x) \qquad & \ x \in \partial \Omega,
\end{array}
\right.$$ here $\kappa >0$ is a constant that can be chosen adjusting the parameters of the game. The domain $\Omega \subset {{\mathbb R}}^N$ is assumed to be bounded and satisfy the uniform exterior ball property, that is, there is $\theta > 0$ such that for all $y\in\partial{\Omega}$ there exists a closed ball of radius $\theta$ that only touches ${\overline}{{\Omega}}$ at $y$. This means that, for each $y\in\partial{\Omega}$ there exists a $z_{y}\in {{\mathbb R}}^{N}\backslash{\Omega}$ such that ${\overline}{B_{\theta}(z_{y})}\cap{\overline}{{\Omega}}= \{ y \}$. The boundary data $f$ and $g$ are assumed to be Lipschitz functions.
Notice that this system involves two differential operators, the usual Laplacian $$\Delta \phi = \sum\limits_{i=1}^{N} \partial_{x_{i}x_{i}}\phi$$ and the infinity Laplacian (see [@Cran]) $$\Delta_{\infty}\phi =
\langle D^{2}\phi \frac{\nabla \phi}{|\nabla \phi |},\frac{\nabla \phi}{| \nabla \phi |}\rangle
=\frac{1}{| \nabla \phi |^2}\sum\limits_{i,j=1}^{N} \partial_{x_{i}}\phi \partial_{x_{i}x_{j}}\phi
\partial_{x_{j}}\phi.$$
This system is not variational (there is no associated energy). Therefore, to find solutions one possibility is to use monotonicity methods (Perron’s argument). Here we will look at the system in a different way and to obtain existence of solutions we find an approximation using game theory. This approach not only gives existence of solutions but it also provides us with a description that sheds some light on the behaviour of the solutions. At this point we observe that we will understand solutions to the system in the viscosity sense; this is natural since the infinity Laplacian is not variational (see Section \[sect-prelim\] for the precise definition).
The fundamental works by Doob, Feller, Hunt, Kakutani, Kolmogorov and many others show the deep connection between classical potential theory and probability theory. The main idea that is behind this relation is that harmonic functions and martingales have something in common: the mean value formulas. A well known fact is that $u$ is harmonic, that is $u$ verifies the PDE $\Delta u =0$, if and only if it verifies the mean value property $
u(x) =
\frac{1}{|B_\varepsilon (x) |} \int_{B_\varepsilon (x) } u(y) \, dy.
$ In fact, we can relax this condition by requiring that it holds asymptotically $
u(x) =
\frac{1}{|B_\varepsilon (x) |} \int_{B_\varepsilon (x) } u(y) \, dy + o(\varepsilon^2),
$ as $\varepsilon\to 0$. The connection of the Laplacian with the Brownian motion or with the limit of random walks as the step size goes to zero is also well known, see [@Kac].
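A small numerical check of this asymptotic expansion can be done as follows (the test point, radii and functions below are arbitrary choices of ours): averaging a harmonic and a non-harmonic function over small disks, the discrepancy vanishes for the harmonic one, while for the other it scales like $\varepsilon^2 \Delta w /(2(N+2))$, up to the quadrature error of the Cartesian grid.

```python
import numpy as np

def disk_average(f, cx, cy, eps, n=801):
    """Average of f over the disk of radius eps centred at (cx, cy), by a grid sum."""
    xs = np.linspace(cx - eps, cx + eps, n)
    ys = np.linspace(cy - eps, cy + eps, n)
    X, Y = np.meshgrid(xs, ys)
    mask = (X - cx) ** 2 + (Y - cy) ** 2 < eps ** 2
    return f(X, Y)[mask].mean()

u = lambda X, Y: X ** 2 - Y ** 2     # harmonic: the mean value property is exact
w = lambda X, Y: X ** 2 + Y ** 2     # not harmonic: Delta w = 4
for eps in (0.2, 0.1, 0.05):
    du = disk_average(u, 0.3, 0.2, eps) - u(0.3, 0.2)
    dw = disk_average(w, 0.3, 0.2, eps) - w(0.3, 0.2)
    print(eps, du, dw, eps ** 2 / 2)  # dw should be close to eps^2*Delta w/8 = eps^2/2
```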
u(x) =
\frac{1}{|B_\varepsilon (x) |} \int_{B_\varepsilon (x) } u(y) \, dy.
$ In fact, we can relax this condition by requiring that it holds asymptotically $
u(x) =
\frac{1}{|B_\varepsilon (x) |} \int_{B_\varepsilon (x) } u(y) \, dy + o(\varepsilon^2),
$ as $\varepsilon\to 0$. The connection between the Laplacian and the Bownian motion or with the limit of random walks as the step size goes to zero is also well known, see [@Kac].
The ideas and techniques used for linear equations have been extended to cover nonlinear cases as well. Concerning nonlinear equations, for a mean value property for the $p-$Laplacian (including the infinity Laplacian) we refer to [@I], [@KMP], [@LM] and [@MPR]. For a probabilistic approximation of the infinity Laplacian there is a game (called Tug-of-War game in the literature) that was introduced in [@PSSW] and generalized in several directions to cover other equations, like the $p-$Laplacian, see [@TPSS; @AS; @BR; @ChGAR; @LPS; @MPR; @MPRa; @MPRb; @Mitake; @PS; @R] and the book [@BRLibro].
Now let us describe the game that is associated with . It is a two-player zero-sum game played in two different boards (two different copies of the set $\Omega \subset \mathbb{R}^N$). Fix a parameter, ${{\varepsilon}}>0$, and two final payoff functions $ {\overline}{f}, {\overline}{g} :\mathbb{R}^N \setminus \Omega \mapsto \mathbb{R}$ (one for each board, $ {\overline}{f}$ for the first board and $ {\overline}{g}$ for the second one). These payoff functions $ {\overline}{f}$ and $ {\overline}{g}$ are just two Lipschitz extensions to $\mathbb{R}^N \setminus \Omega$ of the boundary data $f$ and $g$ that appear in . The rules of the game are the following: the game starts with a token at an initial position $x_0 \in \Omega$, in one of the two boards. In the first board, with probability $1-{{\varepsilon}}^2$, the players play Tug-of-War as described in [@PSSW; @MPR] (this game is associated with the infinity Laplacian). Playing Tug-of-War, the players toss a fair coin and the winner chooses a new position of the game with the restriction that $x_1 \in B_{{\varepsilon}}(x_0)$. When the token is in the first board, with probability ${{\varepsilon}}^2$ the token jumps to the other board (at the same position $x_0$). In the second board with probability $1-{{\varepsilon}}^2$ the token is moved at random (uniform probability) to some point $x_1 \in B_{{\varepsilon}}(x_0)$ and with probability ${{\varepsilon}}^2$ the token jumps back to the first board (without changing the position). The game continues until the position of the token leaves the domain and at this point $x_\tau$ the first player gets ${\overline}{f}(x_\tau)$ and the second player $- {\overline}{f}(x_\tau)$ if they are playing in the first board while they obtain ${\overline}{g}(x_\tau)$ and $- {\overline}{g}(x_\tau)$ if they are playing in the second board (we can think that Player ${\textrm{II}}$ pays to Player ${\textrm{I}}$ the amount given by ${\overline}{f}(x_\tau)$ or by ${\overline}{g}(x_\tau)$ according to the board in which the game ends). This game has an expected value (the best outcome of the game that both players expect to obtain playing their best, see Section \[sect-game2\] for a precise definition). In this case the value of the game is given by a pair of functions $(u^{{\varepsilon}}, v^{{\varepsilon}})$, defined in $\Omega$, that depends on the size of the steps, ${{\varepsilon}}$. For $x_0 \in \Omega$, the value of $u^{{\varepsilon}}(x_0)$ is the expected outcome of the game when it starts at $x_0$ in the first board and $v^{{\varepsilon}}(x_0)$ is the expected value starting at $x_0$ in the second board.
Our fist theorem ensures that this game has a well-defined value and that this pair of functions $(u^{{\varepsilon}}, v^{{\varepsilon}})$ verifies a system of equations (called the dynamic programming principle (DPP)) in the literature). Similar results are proved in [@BR; @LPS; @MPRa; @PSSW; @R].
\[teo.dpp2\] The game has value $
(u^{{\varepsilon}}, v^{{\varepsilon}})
$ that verifies $$\label{DPP}
\left\lbrace
\begin{array}{ll}
\displaystyle u^{{{\varepsilon}}}(x)={{\varepsilon}}^{2}v^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)
\Big\} \quad & \ x \in \Omega, \\[10pt]
\displaystyle v^{{{\varepsilon}}}(x)={{\varepsilon}}^{2}u^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy \quad & \ x \in \Omega, \\[10pt]
u^{{{\varepsilon}}}(x) = {\overline}{f}(x) \quad & \ x \in {{\mathbb R}}^{N} \backslash \Omega, \\[10pt]
v^{{{\varepsilon}}}(x) = {\overline}{g}(x) \quad & \ x \in {{\mathbb R}}^{N} \backslash \Omega.
\end{array}
\right.$$ Moreover, there is a unique solution to .
Notice that the DPP can be seen as a sort of mean value property (or a discretization at scale of size ${{\varepsilon}}$) for the system . Let us see intuitively why the DPP holds. Playing in the first board, at each step Player ${\textrm{I}}$ chooses the next position of the game with probability $\frac{1-{{\varepsilon}}^2}{2}$ and aims to obtain $\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)$ (recall this player seeks to minimize the expected payoff); with probability $\frac{1-{{\varepsilon}}^2}{2}$ it is Player ${\textrm{II}}$ who chooses and aims to obtain $\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)$ and finally with probability ${{\varepsilon}}^2$ the board changes (and therefore $v^{{\varepsilon}}(x)$ comes into play). Playing in the second board, with probability $1-{{\varepsilon}}^2$ the point moves at random (but stays in the second board) and hence the term ${\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy $ appears, but with probability ${{\varepsilon}}^2$ the board is changed and hence we have $u^{{\varepsilon}}(x)$ in the second equation. The equations in the (DPP) follow just by considering all the possibilities. Finally, the final payoff at $x \not\in \Omega$ is given by $ {\overline}{f} (x)$ in the first board and by $ {\overline}{g}(x)$ in the second board, giving the exterior conditions $u^{{{\varepsilon}}}(x) = {\overline}{f}(x)$ and $v^{{{\varepsilon}}}(x) = {\overline}{g}(x)$.
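To make the DPP concrete, a minimal one-dimensional sketch is given below: the domain is $\Omega=(0,1)$, the exterior data are simple Lipschitz functions, and the system is solved by iterating the two equations on a grid until the update is small. This plain Picard iteration is only an illustration and can converge slowly; the grid size, data and tolerance are arbitrary choices and are not part of the arguments of this paper.

```python
import numpy as np

eps = 0.05                      # step size of the game
h = eps / 5.0                   # grid spacing, finer than eps
x = np.arange(-eps, 1.0 + eps + h, h)
inside = (x > 0.0) & (x < 1.0)  # grid points in Omega = (0,1)

fbar = lambda s: s              # exterior datum for the Tug-of-War board
gbar = lambda s: 1.0 - s        # exterior datum for the random-walk board

u = fbar(x).astype(float)       # initial guess: exterior data extended inside
v = gbar(x).astype(float)

for it in range(20000):         # Picard iteration of the DPP
    u_new, v_new = u.copy(), v.copy()
    for i in np.where(inside)[0]:
        ball = np.abs(x - x[i]) < eps
        u_new[i] = eps**2 * v[i] + (1 - eps**2) * 0.5 * (u[ball].max() + u[ball].min())
        v_new[i] = eps**2 * u[i] + (1 - eps**2) * v[ball].mean()
    if max(np.abs(u_new - u).max(), np.abs(v_new - v).max()) < 1e-9:
        break
    u, v = u_new, v_new
print(it, u[inside][::20], v[inside][::20])   # approximate values of (u^eps, v^eps)
```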
Our next goal is to look for the limit as ${{\varepsilon}}\to 0$. Our main result in this paper is to show that, under our regularity conditions on the data ($\partial \Omega$ verifies the uniform exterior ball condition, and $f$ and $g$ are Lipschitz), these value functions $u^{{\varepsilon}}, v^{{\varepsilon}}$ converge uniformly in $\overline{\Omega}$ to a pair of continuous limits $u,v$ that are characterized as being the unique viscosity solution to .
\[teo.converge2\] Let $(u^{{\varepsilon}}, v^{{\varepsilon}})$ be the values of the game. Then, there exists a pair of continuous functions in $\overline{\Omega}$, $(u,v)$, such that $$u^{{\varepsilon}}\to u, \quad \mbox{and} \quad v^{{\varepsilon}}\to v, \qquad \mbox{ as } {{\varepsilon}}\to 0,$$ uniformly in $\overline{\Omega}$. Moreover, the limit $(u,v)$ is characterized as the unique viscosity solution to the system (with a constant $\kappa=\frac{1}{|B_{1}(0)|}\int_{B_{1}(0)}z_{j}^{2}\,dz$ that depends only on the dimension).
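The constant $\kappa$ is purely dimensional and can be checked directly: for the uniform measure on the unit ball one has $\kappa=\frac{1}{|B_{1}(0)|}\int_{B_{1}(0)}z_{j}^{2}\,dz=\frac{1}{N+2}$, which the short Monte Carlo sketch below (with arbitrary sample sizes) reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 400000                                         # number of sample points
for N in (2, 3, 5):
    d = rng.normal(size=(M, N))
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # uniform directions on the sphere
    r = rng.random(M) ** (1.0 / N)                 # radii giving uniform points in B_1(0)
    z = d * r[:, None]
    print(N, np.mean(z[:, 0] ** 2), 1.0 / (N + 2)) # Monte Carlo kappa vs 1/(N+2)
```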
[If we impose that the probability of moving random in the second board is $1-K{{\varepsilon}}^2$ (and hence the probability of changing from the second to the first board is $K{{\varepsilon}}^2$) with the same computations we obtain $$v^{{{\varepsilon}}}(x)=K {{\varepsilon}}^{2}u^{{{\varepsilon}}}(x)+(1- K {{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy$$ as the second equation in the DPP (the first equation and the exterior data remain unchanged). Passing to the limit we get $$- \displaystyle \frac{\kappa}{2 K } \Delta v(x) + v(x) - u(x)=0 ,$$ and hence, choosing $K$, we can obtain any positive constant in front of the Laplacian in . ]{}
To prove that the sequences $\{u^{{\varepsilon}}, v^{{\varepsilon}}\}_{{\varepsilon}}$ converge we will apply an Arzelà-Ascoli type lemma. To this end we need to show a sort of asymptotic continuity that is based on estimates for both value functions $(u^{{\varepsilon}}, v^{{\varepsilon}})$ near the boundary (these estimates can be extended to the interior via a coupling probabilistic argument). In fact, to see an asymptotic continuity close to a boundary point, we are able to show that both players have strategies that force the game to end near a point $y\in \partial \Omega$ with high probability if we start close to that point, no matter the strategy chosen by the other player. This allows us to obtain a sort of asymptotic equicontinuity close to the boundary leading to uniform convergence in the whole $\overline{\Omega}$. Note that, in general, the value functions $(u^{{\varepsilon}},v^{{\varepsilon}})$ are discontinuous in $\Omega$ (this is due to the fact that we make discrete steps) and therefore showing uniform convergence to a continuous limit is a difficult task.
Let us see formally why a uniform limit $(u,v)$ is a solution to equation . By subtracting $u^{{\varepsilon}}(x)$ and dividing by ${{\varepsilon}}^2$ on both sides we get $$0=(v^{{{\varepsilon}}}(x)-u^{{{\varepsilon}}}(x))+(1-{{\varepsilon}}^{2})
\left\{ \frac{ {{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)} u^{{{\varepsilon}}}(y)
+ {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y) -u^{{{\varepsilon}}}(x) }{{{\varepsilon}}^{2}} \right\}.$$ which in the limit approximates the first equation in our system (the terms in brackets approximate the second derivative of $u$ in the direction of its gradient). Similarly, the second equation in the DPP can be written as $$0 = (u^{{{\varepsilon}}}(x)-v^{{{\varepsilon}}}(x))+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)} \frac{(v^{{{\varepsilon}}}(y)- v^{{{\varepsilon}}}(x))}{{{\varepsilon}}^2} dy$$ that approximates solutions to the second equation in .
The paper is organized as follows: in Section \[sect-prelim\] we include the precise definition of what we will understand by a viscosity solution for our system and we state a key preliminary result from probability theory (the Optional Stopping Theorem); Section \[sect-game2\] contains a detailed description of the game, also in Section \[sect-game2\] we show that there is a value of the game that satisfies the DPP and prove uniqueness for the DPP (we prove Theorem \[teo.dpp2\]); next, in Section \[sect-convergence\] we analyze the game and show that value functions converge uniformly along subsequences to a pair of continuous functions; in Section \[sect-limiteviscoso\] we prove that the limit is a viscosity solution to our system (up to this point we obtain the first part of Theorem \[teo.converge2\]) and in Section \[sect-uniqueness\] we show uniqueness of viscosity solutions to the system, ending the proof of Theorem \[teo.converge2\]. Finally, in Section \[sect.extensiones\] we collect some comments on possible extensions of our results.
Preliminaries. {#sect-prelim}
==============
In this section we include the precise definition of what we understand as a viscosity solution for the system and we include the precise statement of the Optional Stopping Theorem that will be needed when dealing with the probabilistic part of our arguments.
Viscosity solutions
-------------------
We begin by stating the definition of a viscosity solution to a fully nonlinear second order elliptic PDE. We refer to [@CIL] for general results on viscosity solutions. Fix a function $$P:\Omega\times{{\mathbb R}}\times{{\mathbb R}}^N\times\mathbb{S}^N\to{{\mathbb R}}$$ where $\mathbb{S}^N$ denotes the set of symmetric $N\times N$ matrices. We want to consider the PDE $$\label{eqvissol}
P(x,u (x), Du (x), D^2u (x)) =0, \qquad x \in \Omega.$$
The idea behind Viscosity Solutions is to use the maximum principle in order to “pass derivatives to smooth test functions”. This idea allows us to consider operators in non divergence form. We will assume that $P$ is degenerate elliptic, that is, $P$ satisfies a monotonicity property with respect to the matrix variable, that is, $$X\leq Y \text{ in } \mathbb{S}^N \implies P(x,r,p,X)\geq P(x,r,p,Y)$$ for all $(x,r,p)\in \Omega\times{{\mathbb R}}\times{{\mathbb R}}^N$.
Here we have an equation that involves the $\infty$-laplacian that is not well defined when the gradient vanishes. In order to be able to handle this issue, we need to consider the lower semicontinous, $P_*$, and upper semicontinous, $P^*$, envelopes of $P$. These functions are given by $$\begin{array}{ll}
P^*(x,r,p,X)& \displaystyle =\limsup_{(y,s,w,Y)\to (x,r,p,X)}P(y,s,w,Y),\\
P_*(x,r,p,X)& \displaystyle =\liminf_{(y,s,w,Y)\to (x,r,p,X)}P(y,s,w,Y).
\end{array}$$ These functions coincide with $P$ at every point of continuity of $P$ and are lower and upper semicontinous respectively. With these concepts at hand we are ready to state the definition of a viscosity solution to .
\[def.sol.viscosa.1\] A lower semi-continuous function $ u $ is a viscosity supersolution of if for every $ \phi \in C^2$ such that $ \phi $ touches $u$ at $x \in \Omega$ strictly from below (that is, $u-\phi$ has a strict minimum at $x$ with $u(x) = \phi(x)$), we have $$P^*(x,\phi(x),D \phi(x),D^2\phi(x))\geq 0.$$
An upper semi-continuous function $u$ is a subsolution of if for every $ \psi \in C^2$ such that $\psi$ touches $u$ at $ x \in
\Omega$ strictly from above (that is, $u-\psi$ has a strict maximum at $x$ with $u(x) = \psi(x)$), we have $$P_*(x,\psi(x),D \psi(x),D^2\psi(x))\leq 0.$$
Finally, $u$ is a viscosity solution of if it is both a super- and a subsolution.
In our system we have two equations given by the functions $$F_1 (x,u,p,X) = - \frac12 \langle X\frac{p}{|p|} , \frac{p}{|p|} \rangle + u - v(x)=0$$ and $$F_2 (x,v,q,Y) = - \frac{\kappa}{2} trace (Y) + v - u(x)=0.$$
Then, the definition of a viscosity solution for the system that we will use here is the following.
\[def.sol.viscosa.system\] A pair of continuous functions $ u, v :\overline{\Omega} \mapsto \mathbb{R} $ is a viscosity solution of if $$u|_{\partial \Omega} = f, \qquad v|_{\partial \Omega} = g,$$ $$\mbox{$u$ is a viscosity solution to
$F_1 (x,u,D u, D^2u) = 0$}$$ $$and$$ $$\mbox{$v$ is a viscosity solution to
$
F_2 (x,v, D v, D^2v) = 0$}$$ in the sense of Definition \[def.sol.viscosa.1\].
We remark that, according to our definition, in the equation for $u$, as the other component $v$ is continuous, we have that $F_1$ depends on $x$ via $v(x)$ (and similarly for $F_2$, which depends on $x$ via $u(x)$). That is, we understand a solution to as a pair of functions, continuous up to the boundary, that satisfies the boundary conditions pointwise and such that $u$ is a viscosity solution to the first equation in the system in the viscosity sense (with $v$ as a fixed continuous function of $x$ in $F_1$) and $v$ solves the second equation in the system (regarding $u$ as a fixed function of $x$ in $F_2$).
Also notice that we have that both $u$ and $v$ are assumed to be continuous in $\overline{\Omega}$ and then the boundary data $f$ and $g$ are taken on $\partial \Omega$ with continuity.
Probability. The Optional Stopping Theorem.
-------------------------------------------
We briefly recall (see [@Williams]) that a sequence of random variables $\{M_{k}\}_{k\geq 1}$ is a supermartingale (submartingale) if $${{\mathbb E}}[M_{k+1}\arrowvert M_{0},M_{1},...,M_{k}]\leq M_{k} \ \ (\geq)$$ Then, the Optional Stopping Theorem, which we will call [*(OSTh)*]{} in what follows, says: given $\tau$ a stopping time such that one of the following conditions holds,
- The stopping time $\tau$ is bounded a.s.;
- It holds that ${{\mathbb E}}[\tau]<\infty$ and there exists a constant $c>0$ such that $${{\mathbb E}}[M_{k+1}-M_{k}\arrowvert M_{0},...,M_{k}]\leq c;$$
- There exists a constant $c>0$ such that $|M_{\min \{\tau,k\}}|\leq c$ a.s. for every $k$.
Then $${{\mathbb E}}[M_{\tau}]\leq {{\mathbb E}}[M_{0}] \ \ (\geq)$$ if $\{M_{k}\}_{k\geq 0}$ is a supermartingale (submartingale). For the proof of this classical result we refer to [@Doob; @Williams].
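For a quick sanity check of the statement (in its martingale form, where the inequalities become equalities), one can simulate a symmetric $\pm1$ random walk started at $3$ and stopped upon hitting $0$ or $10$: the stopped process is bounded, so the (OSTh) gives ${{\mathbb E}}[M_{\tau}]={{\mathbb E}}[M_{0}]=3$. The numbers in the sketch below are arbitrary choices.

```python
import random

random.seed(0)
total, runs = 0.0, 200000
for _ in range(runs):
    m = 3
    while 0 < m < 10:                 # stop when the walk hits 0 or 10
        m += random.choice((-1, 1))   # symmetric step: a martingale
    total += m
print(total / runs)                   # close to E[M_0] = 3 (exit at 10 with prob 3/10)
```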
A two-player game {#sect-game2}
=================
In this section, we describe in detail the two-player zero-sum game presented in the introduction. Let $\Omega \subset{{\mathbb R}}^N$ be a bounded smooth domain and fix ${{\varepsilon}}>0$. The game takes place in two boards (that we will call board 1 and board 2), which are two copies of $\mathbb{R}^N$ with the same domain $\Omega$ inside. Fix two Lipschitz functions ${\overline}{f}:{{\mathbb R}}^{N} \backslash {\Omega}\rightarrow {{\mathbb R}}$ and ${\overline}{g}:{{\mathbb R}}^{N} \backslash {\Omega}\rightarrow {{\mathbb R}}$ that are going to give the final payoff of the game when we exit $\Omega$ in board 1 and 2 respectively.
A token is placed at $x_0\in\Omega $ in one of the two boards. When we play in the first board, with probability $1-{{\varepsilon}}^2$ we play *Tug-of-War*, the game introduced in [@PSSW]: a fair coin (with probability $\frac12$ of heads and tails) is tossed and the player who wins the coin toss chooses the next position of the game inside the ball $B_{{\varepsilon}}(x_0)$ in the first board. With probability ${{\varepsilon}}^2$ we jump to the other board: the next position of the token is $x_0$ but now in board 2. If $x_0$ is in the second board, then with probability $1-{{\varepsilon}}^2$ the new position of the game is chosen at random in the ball $B_{{\varepsilon}}(x_0)$ (with uniform probability) and with probability ${{\varepsilon}}^2$ the position jumps to the same $x_0$ but in the first board. The position of the token will be denoted by $(x,j)$ where $x \in \mathbb{R}^N$ and $j=1,2$ ($j$ encodes the board in which the token is at position $x$). Then, after one movement, the players continue playing with the same rules from the new position of the token $x_1$ in its corresponding board, 1 or 2. The game ends when the position of the token leaves the domain $\Omega$. That is, let $\tau$ be the stopping time given by the first time at which $x_{\tau} \in {{\mathbb R}}^{N} \backslash \Omega$. If $x_{\tau}$ is in the first board then Player I gets ${\overline}{f}(x_{\tau})$ (and Player II pays that quantity), while if the token leaves $\Omega$ in the second board Player I gets ${\overline}{g}(x_{\tau})$ (and Player II pays that amount). We have that the game generates a sequence of states $$P=\{ (x_{0},j_{0}),(x_{1},j_{1}),...,(x_{\tau},j_{\tau})\}$$ with $j_{i}\in\{1,2\}$ and $x_{i}$ in the board $j_{i}$. The dependence of the position of the token in one of the boards, $j_i$, will be made explicit only when needed.
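The following sketch (written in Python; the choice of domain, boundary payoffs and, in particular, the two naive pulling strategies are illustrative assumptions and not part of the construction) simulates a single run of the game just described in dimension $N=2$: on board 1 a fair coin decides which player moves the token inside $B_{{\varepsilon}}(x_k)$, on board 2 the new position is uniform in $B_{{\varepsilon}}(x_k)$, and with probability ${{\varepsilon}}^2$ the token changes board without moving.

```python
import math
import random

def inside(x):
    # Omega is taken to be the unit disk, an assumption made only for this sketch
    return x[0] ** 2 + x[1] ** 2 < 1.0

def uniform_in_ball(center, eps):
    # uniform point in B_eps(center) by rejection sampling (dimension 2)
    while True:
        dx, dy = random.uniform(-eps, eps), random.uniform(-eps, eps)
        if dx * dx + dy * dy <= eps * eps:
            return (center[0] + dx, center[1] + dy)

def pull(x, target, eps):
    # move from x a distance (at most) eps towards target: a naive illustrative strategy
    vx, vy = target[0] - x[0], target[1] - x[1]
    d = math.hypot(vx, vy)
    if d == 0.0:
        return x
    step = min(eps, d)
    return (x[0] + step * vx / d, x[1] + step * vy / d)

def play_once(x0, board0, eps, f, g, target_I=(2.0, 0.0), target_II=(-2.0, 0.0)):
    x, board = x0, board0
    while inside(x):
        if random.random() < eps ** 2:
            board = 2 if board == 1 else 1            # change board, same position
        elif board == 1:
            # Tug-of-War step: the winner of a fair coin toss moves the token
            target = target_I if random.random() < 0.5 else target_II
            x = pull(x, target, eps)
        else:
            x = uniform_in_ball(x, eps)               # random walk step on board 2
    return f(x) if board == 1 else g(x)               # final payoff received by Player I

if __name__ == "__main__":
    f = lambda x: x[0]                                # illustrative boundary payoffs
    g = lambda x: x[0] ** 2
    payoffs = [play_once((0.2, 0.1), 1, 0.05, f, g) for _ in range(2000)]
    print(sum(payoffs) / len(payoffs))                # Monte Carlo estimate of the expected payoff
```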
A strategy $S_{\textrm{I}}$ for Player I is a function defined on the partial histories that gives the next position of the game provided Player ${\textrm{I}}$ wins the coin toss (and the token is in the first board and stays there) $$S_{\textrm{I}}{\left((x_0,j_{0}),(x_1,j_{1}),\ldots,(x_n,1)\right)}= (x_{n+1},1) \qquad \mbox{with } x_{n+1} \in B_{{\varepsilon}}(x_n).$$ Analogously, a strategy $S_{\textrm{II}}$ for Player II is a function defined on the partial histories that gives the next position of the game provided Player ${\textrm{II}}$ wins the coin toss (and the token stays in the first board).
When the two players fix their strategies $S_I$ and $S_{II}$ we can compute the expected outcome as follows: Given the sequence $x_0,\ldots,x_n$ with $x_k\in{\Omega}$, if $x_k$ belongs to the first board, the next game position is distributed according to the probability $$\pi_{S_{\textrm{I}},S_{\textrm{II}},1}((x_0,j_0),\ldots,(x_k,1),{A}, B)= \frac{1-{{\varepsilon}}^2}{2} \delta_{S_{\textrm{I}}((x_0,j_0),\ldots,(x_k,1))} (A) +
\frac{1-{{\varepsilon}}^2}{2} \delta_{S_{\textrm{II}}((x_0,j_0),\ldots,(x_k,1))} (A) + {{\varepsilon}}^2 \delta_{x_k} (B).$$ Here $A$ is a subset in the first board while $B$ is a subset in the second board. If $x_k$ belongs to the second board, the next game position is distributed according to the probability $$\pi_{S_{\textrm{I}},S_{\textrm{II}},2}((x_0,j_0),\ldots,(x_k,2),{A}, B)= (1-{{\varepsilon}}^2) U(B_{{\varepsilon}}(x_k)) (B) +
{{\varepsilon}}^2 \delta_{x_k} (A).$$
By using Kolmogorov’s extension theorem and the one step transition probabilities, we can build a probability measure $\mathbb{P}^{x_0}_{S_{\textrm{I}},S_{\textrm{II}}}$ on the game sequences (taking into account the two boards). The expected payoff, when starting from $(x_0,j_0)$ and using the strategies $S_{\textrm{I}},S_{\textrm{II}}$, is $$\label{eq:defi-expectation}
\mathbb{E}_{S_{{\textrm{I}}},S_{\textrm{II}}}^{(x_0,j_0)} [ h (x_\tau) ]=\int_{H^\infty} h (x_\tau) \, d
\mathbb{P}^{x_0}_{S_{\textrm{I}},S_{\textrm{II}}}$$ (here we use $h={\overline}{f}$ if $x_\tau$ is in the first board and $h={\overline}{g}$ if $x_\tau$ is in the second board).
The *value of the game for Player I* is given by $$u^{{\varepsilon}}_{\textrm{I}}(x_0)=\inf_{S_{\textrm{I}}}\sup_{S_{{\textrm{II}}}}\,
\mathbb{E}_{S_{{\textrm{I}}},S_{\textrm{II}}}^{(x_0,1)}\left[h (x_\tau) \right]$$ for $x_0\in\Omega$ in the first board ($j_0 =1$), and by $$v^{{\varepsilon}}_{\textrm{I}}(x_0)=\inf_{S_{\textrm{I}}}\sup_{S_{{\textrm{II}}}}\,
\mathbb{E}_{S_{{\textrm{I}}},S_{\textrm{II}}}^{(x_0,2)}\left[h (x_\tau) \right]$$ for $x_0\in\Omega$ in the second board ($j_0 =2$).
The *value of the game for Player II* is given by the same formulas just reversing the $\inf$–$\sup$, $$u^{{\varepsilon}}_{\textrm{II}}(x_0)=\sup_{S_{{\textrm{II}}}}\inf_{S_{\textrm{I}}}\,
\mathbb{E}_{S_{{\textrm{I}}},S_{\textrm{II}}}^{(x_0,1)}\left[h (x_\tau) \right],$$ for $x_0$ in the first board and $$v^{{\varepsilon}}_{\textrm{II}}(x_0)=\sup_{S_{{\textrm{II}}}}\inf_{S_{\textrm{I}}}\,
\mathbb{E}_{S_{{\textrm{I}}},S_{\textrm{II}}}^{(x_0,2)}\left[h (x_\tau) \right],$$ for $x_0$ in the second board.
Intuitively, the values $u^{{\varepsilon}}_{\textrm{I}}(x_0)$ and $u^{{\varepsilon}}_{\textrm{II}}(x_0)$ are the best expected outcomes each player can guarantee when the game starts at $x_0$ in the first board, while $v^{{\varepsilon}}_{\textrm{I}}(x_0)$ and $v^{{\varepsilon}}_{\textrm{II}}(x_0)$ are the best expected outcomes for each player in the second board.
If $u^{{\varepsilon}}_{\textrm{I}}= u^{{\varepsilon}}_{\textrm{II}}$ and $v^{{\varepsilon}}_{\textrm{I}}= v^{{\varepsilon}}_{\textrm{II}}$, we say that the game has a value.
Before proving that the game has a value, let us observe that the game ends almost surely no matter the strategies used by the players, that is ${{\mathbb P}}(\tau =+\infty) =0$, and therefore the expectation is well defined. This fact is due to the random movements that we make in the second board (which, with positive probability, kick us out of the domain in a finite number of plays without changing boards).
\[prop.juego.termina\] We have that $$\mathbb{P} \Big(\mbox{the game ends in a finite number of plays}\Big)=1.$$
Let us start by showing that the game ends in a finite number of plays if we start with the token in the second board. Let $\xi\in{{\mathbb R}}^{N}$ with $| \xi |=1$ be a fixed direction. Consider the set $$T_{\xi,x_{k}}=\Big\{ y\in{{\mathbb R}}^{N}:y\in B_{{{\varepsilon}}}(x_{k})\wedge \langle y-(x_{k}+\frac{{{\varepsilon}}}{2}\xi),\xi\rangle\geq 0 \Big\}$$ that is, the part of the ball $B_{{{\varepsilon}}}(x_{k})$ consisting of points at distance at least $\frac{{{\varepsilon}}}{2}$ from the center in the direction $\xi$. Then, starting from any point in ${\Omega}$, if in every play we choose a point in $T_{\xi,x_{k}}$ (without changing boards), in at most $K=\lceil \frac{4R}{{{\varepsilon}}}\rceil$ steps we will be out of ${\Omega}$ (here $R=diam({\Omega})$). As the set $T_{\xi,x_{k}}$ has positive measure it holds that $$\mathbb{P}(x_{k+1}\in T_{\xi,x_{k}}\arrowvert x_{k}):=\alpha > 0.$$ Therefore, we have a positive probability of ending the game in less than $K$ plays, $$\mathbb{P}(\mbox{the game ends in $\lceil \frac{4R}{{{\varepsilon}}}\rceil$ plays})\geq [(1-{{\varepsilon}}^{2})\alpha]^{K}=r>0.$$ Hence $$\mathbb{P}(\mbox{the game continues after $\lceil \frac{4R}{{{\varepsilon}}}\rceil$ plays})\leq 1-r,$$ and then $$\mathbb{P}(\mbox{the game does not end in a finite number of plays}) =0.$$
Now, if we start in the first board the probability of not changing the board in $n$ plays is $(1-{{\varepsilon}}^{2})^{n}$. Therefore, we will change to the second board (or end the game) in a finite number of plays with probability one. Hence, with probability one we either end the game or we are in the previous situation.
This implies that the game ends almost surely in a finite number of plays.
To see that the game has a value, we first observe that we have existence of $(u^{{\varepsilon}}, v^{{\varepsilon}})$, a pair of functions that satisfies the DPP. The existence of such a pair can be obtained by Perron’s method. In fact, let us start considering the following set (that is composed of pairs of functions that are subsolutions to our DPP). Let $$\label{kkk}
C=\max \Big\{ \|{\overline}{f}\|_\infty , \|{\overline}{g}\|_\infty
\Big\},$$ and consider the set of functions $$\label{A}
{A} =\displaystyle \Big\{ (z^{{{\varepsilon}}},w^{{{\varepsilon}}}) : \mbox{ are bounded above by $C$ and verify (\textbf{e}) } \Big\},$$ with $$\label{e} \tag{\bf{e}}
\displaystyle \left\lbrace
\begin{array}{ll}
\displaystyle z^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}w^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y)\Big\} \qquad & \ x \in \Omega, \\[10pt]
\displaystyle w^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}z^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}w^{{{\varepsilon}}}(y)dy \qquad & \ x \in \Omega, \\[10pt]
z^{{{\varepsilon}}}(x) \leq {\overline}{f}(x) \qquad & \ x \in {{\mathbb R}}^{N} \backslash \Omega, \\[10pt]
w^{{{\varepsilon}}}(x) \leq {\overline}{g}(x) \qquad & \ x \in {{\mathbb R}}^{N} \backslash \Omega.
\end{array}
\right.$$
[Notice that we need to impose that $(z^{{{\varepsilon}}},w^{{{\varepsilon}}})$ are bounded since $$z^{{{\varepsilon}}} (x) =
\left\{
\begin{array}{ll}
+\infty \qquad & x \in \Omega \\[5pt]
{\overline}{f} \qquad & x \not\in \Omega
\end{array}
\right.
\qquad
\mbox{and}
\qquad
w^{{{\varepsilon}}} (x) =
\left\{
\begin{array}{ll}
+\infty \qquad & x \in \Omega \\[5pt]
{\overline}{g} \qquad & x \not\in \Omega
\end{array}
\right.$$ satisfy **e**.]{}
Observe that ${A} \neq \emptyset$. To see this fact, we just take $z^{{{\varepsilon}}}=-C$ and $ w^{{{\varepsilon}}}=-C$ with $C$ given by . Now we let $$\label{u}
u^{{{\varepsilon}}}(x)=\sup_{(z^{{{\varepsilon}}},w^{{{\varepsilon}}})\in {A}}z^{{{\varepsilon}}}(x)
\qquad \mbox{and} \qquad
v^{{{\varepsilon}}}(x)=\sup_{(z^{{{\varepsilon}}},w^{{{\varepsilon}}})\in {A}}w^{{{\varepsilon}}}(x) .$$ Our goal is to show that in this way we find a solution to the DPP.
\[prop.DPP.tiene.sol\] The pair $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ given by is a solution to the DPP
First, let us see that $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ belongs to the set ${A}$. To this end we first observe that $u^{{{\varepsilon}}}$ and $v^{{{\varepsilon}}}$ are bounded by $C$ and verify $u^{{{\varepsilon}}}(x)\leq {\overline}{f}(x)$ and $v^{{{\varepsilon}}}(x)\leq {\overline}{g}(x)$ for $x\in{{\mathbb R}}^{N}\backslash{\Omega}$. Hence we only need to check the inequalities for $x\in{\Omega}$. Take $(z^{{{\varepsilon}}},w^{{{\varepsilon}}})\in {A}$ and fix $x\in{\Omega}$. Then, $$z^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}w^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y)\Big\}.$$ As $z^{{{\varepsilon}}}\leq u^{{{\varepsilon}}}$ and $w^{{{\varepsilon}}}\leq v^{{{\varepsilon}}}$ we obtain $$z^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)\Big\}.$$ Taking supremum in the left hand side we obtain $$u^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)\Big\}.$$ In an analogous way we obtain $$v^{{{\varepsilon}}}(x)\leq{{\varepsilon}}^{2}u^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy,$$ and we conclude that $(u^{{{\varepsilon}}},v^{{{\varepsilon}}}) \in {A}$.
To end the proof we need to see that $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ verifies the equalities in the equations in condition . We argue by contradiction and assume that there is a point $x_{0}\in{{\mathbb R}}^{N}$ where an inequality in is strict. First, assume that $x_{0}\in{{\mathbb R}}^{N}\backslash{\Omega}$, and that we have $u^{{{\varepsilon}}}(x_{0})<{\overline}{f}(x_{0})$. Then, take $u^{{{\varepsilon}}}_{0}$ defined by $u^{{{\varepsilon}}}_{0}(x)=u^{{{\varepsilon}}}(x)$ for $x\neq x_0$ and $u^{{{\varepsilon}}}_{0}(x_{0})={\overline}{f}(x_{0})$. The pair $(u^{{{\varepsilon}}}_{0},v^{{{\varepsilon}}})$ belongs to ${A}$ but $u^{{{\varepsilon}}}_{0}(x_{0})>u^{{{\varepsilon}}}(x_{0})$, which is a contradiction. We can argue in a similar way if $v^{{{\varepsilon}}}(x_{0})<{\overline}{g}(x_{0})$. Next, we consider a point $x_{0}\in{\Omega}$ with one of the inequalities in **e** strict. Assume that $$u^{{{\varepsilon}}}(x_{0})<{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{0})+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y)\Big\}.$$ Let $$\delta={{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{0})+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y)\Big\}-u^{{{\varepsilon}}}(x_{0})>0,$$ and consider the function $u^{{{\varepsilon}}}_{0}$ given by $$u^{{{\varepsilon}}}_{0} (x) =\left\lbrace
\begin{array}{ll}
u^{{{\varepsilon}}}(x) & \ \ x \neq x_{0}, \\[5pt]
\displaystyle u^{{{\varepsilon}}}(x)+\frac{\delta}{2} & \ \ x =x_{0} . \\
\end{array}
\right.$$ Observe that $$u^{{{\varepsilon}}}_{0}(x_{0})=u^{{{\varepsilon}}}(x_{0})+\frac{\delta}{2}<{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{0})+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}(y)\Big\}$$ and hence $$u^{{{\varepsilon}}}_{0}(x_{0})<{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{0})+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}_{0}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{0})}u^{{{\varepsilon}}}_{0}(y)\Big\}.$$ Then we have that $(u^{{{\varepsilon}}}_{0},v^{{{\varepsilon}}})\in {A}$ but $u^{{{\varepsilon}}}_{0}(x_{0})>u^{{{\varepsilon}}}(x_{0})$ reaching again a contradiction.
In an analogous way we can show that when $$v^{{{\varepsilon}}}(x_{0})<{{\varepsilon}}^{2}u^{{{\varepsilon}}}(x_{0})+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x_{0})}v^{{{\varepsilon}}}(y)dy,$$ we also reach a contradiction.
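Besides the Perron-type construction used above, the pair $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ can also be approximated numerically by iterating the DPP as a fixed point on a grid (a different, but closely related, route). The one-dimensional sketch below is only illustrative: the interval, the boundary data, the grid size and the number of iterations are assumptions made here, and the ${{\varepsilon}}$-ball averages are replaced by discrete window averages.

```python
import numpy as np

def dpp_fixed_point(eps=0.1, h=0.01, L=1.0, f=lambda x: x, g=lambda x: -x,
                    iters=2000):
    # grid on [-L - eps, L + eps]; Omega = (-L, L); the eps-ball at a grid point
    # is approximated by the window of radius r = eps / h grid points around it
    x = np.arange(-L - eps, L + eps + h / 2, h)
    inside = np.abs(x) < L
    r = int(round(eps / h))
    u = np.where(inside, 0.0, f(x))                   # values outside Omega stay fixed
    v = np.where(inside, 0.0, g(x))
    for _ in range(iters):
        u_new, v_new = u.copy(), v.copy()
        for i in np.where(inside)[0]:
            wu = u[i - r:i + r + 1]
            wv = v[i - r:i + r + 1]
            # first equation of the DPP: eps^2 * v + (1 - eps^2) * (max + min) / 2
            u_new[i] = eps ** 2 * v[i] + (1 - eps ** 2) * 0.5 * (wu.max() + wu.min())
            # second equation of the DPP: eps^2 * u + (1 - eps^2) * window average
            v_new[i] = eps ** 2 * u[i] + (1 - eps ** 2) * wv.mean()
        u, v = u_new, v_new
    return x, u, v

if __name__ == "__main__":
    x, u, v = dpp_fixed_point()
    print(u[len(u) // 2], v[len(v) // 2])             # approximate values at the origin
```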
Now, concerning the value functions of our game, we know that $u^{{\varepsilon}}_{\textrm{I}}\geq u^{{\varepsilon}}_{\textrm{II}}$ and $v^{{\varepsilon}}_{\textrm{I}}\geq v^{{\varepsilon}}_{\textrm{II}}$ (this is immediate from the definitions). Hence, to obtain uniqueness of solutions of the DPP and existence of value functions for our game, it is enough to show that $u^{{\varepsilon}}_{\textrm{II}}\geq u^{{\varepsilon}}\geq u^{{\varepsilon}}_{\textrm{I}}$ and $v^{{\varepsilon}}_{\textrm{II}}\geq v^{{\varepsilon}}\geq v^{{\varepsilon}}_{\textrm{I}}$. To show this result we will use the *OSTh* for sub/supermartingales (see Section \[sect-prelim\]).
Given ${{\varepsilon}}>0$, let $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ be a pair of functions that verifies the DPP . Then it holds that $$u^{{{\varepsilon}}}(x_{0})=\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}^{(x_{0},1)}_{S_{I},S_{II}}[h( x_{\tau})]$$ if $x_{0} \in {\Omega}$ is in the first board and $$v^{{{\varepsilon}}}(x_{0})=\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}^{(x_{0},2)}_{S_{I},S_{II}}[h( x_{\tau})]$$ if $x_{0} \in {\Omega}$ is in the second board.
Moreover, we can interchange $\inf$ with $\sup$ in the previous identities, that is, the game has a value. This value can be characterized as the unique solution to the DPP.
Given ${{\varepsilon}}>0$ we have proved the existence of a solution to the DPP $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$. Fix $\delta>0$. Assume that we start with $(x_{0},1)$, that is, the initial position is at board 1. We choose a strategy for Player I as follows: $$x_{k+1}^{I}=S_{I}^{*}(x_{0},...,x_{k})
\qquad
\mbox{is such that} \qquad
\sup_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y) - \frac{\delta}{2^{k}}\leq u^{{{\varepsilon}}}(x_{k+1}^{I}).$$ Given this strategy for Player I and any strategy $S_{II}$ for Player II we consider the sequence of random variables given by $$M_{k}=\left\lbrace
\begin{array}{ll}
\displaystyle u^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k}} & \ \ \mbox{if } \ j_{k}=1, \\[10pt]
\displaystyle v^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k}} & \ \ \mbox{if } \ j_{k} = 2.
\end{array}
\right.$$ Let us see that $(M_{k})_{k\geq 0}$ is a submartingale. To this end we need to estimate $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert M_{k}]={{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},j_{k})].$$ We consider two cases:
**Case 1:** Assume that $j_{k}=1$, then $$\begin{array}{l}
\displaystyle
{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]
\displaystyle =(1-{{\varepsilon}}^{2}){{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)\wedge j_{k+1}=1]+{{\varepsilon}}^{2}{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)\wedge j_{k+1}=2].
\end{array}$$ Here we used that the probability of staying in the same board is $(1-{{\varepsilon}}^{2})$ and the probability of jumping to the other board is ${{\varepsilon}}^{2}$. Now, if $j_{k}=1$ and $j_{k+1}=2$ then $x_{k+1}=x_{k}$ (we just changed boards), while if we stay in the first board the new position is chosen by the winner of the coin toss. Hence, we obtain $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]=(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}u^{{{\varepsilon}}}(x_{k+1}^{I})+{{\frac{1}{2}}}u^{{{\varepsilon}}}(x_{k+1}^{II})-\frac{\delta}{2^{k+1}}\Big\}+{{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k+1}}).$$ Since we are using the strategies $S_{I}^{*}$ and $S_{II}$, it holds that $$\sup_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y) - \frac{\delta}{2^{k}}\leq u^{{{\varepsilon}}}(x_{k+1}^{I})
\qquad
\mbox{and}\qquad
\inf_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y)\leq u^{{{\varepsilon}}}(x_{k+1}^{II}).$$ Therefore, we arrive to $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]\geq(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}(\sup_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y) - \frac{\delta}{2^{k}})+{{\frac{1}{2}}}\inf_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y)\Big\}+{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k+1}},$$ that is, $$\begin{array}{l}
\displaystyle
{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]
\displaystyle \geq(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y)+{{\frac{1}{2}}}\inf_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y)\Big\}+{{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{k})-(1-{{\varepsilon}}^{2}) \frac{\delta}{2^{k+1}}-\frac{\delta}{2^{k+1}}.
\end{array}$$ As $u^{{{\varepsilon}}}$ is a solution to the DPP we obtain $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},1)]\geq u^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k}}=M_{k}$$ as we wanted to show.
**Case 2:** Assume that $j_{k}=2$. With the same ideas used before we get $$\begin{array}{l}
\displaystyle
{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]
\displaystyle =(1-{{\varepsilon}}^{2}){{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=2]+{{\varepsilon}}^{2}{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=1].
\end{array}$$ Remark that when $j_{k}=j_{k+1}=2$ (this means that we play in the second board) with $x_{k}\in{\Omega}$, then $x_{k+1}$ is chosen with uniform probability in the ball $B_{{{\varepsilon}}}(x_{k})$. Hence, $$\begin{array}{l}
\displaystyle {{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=2]\ \displaystyle ={{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[v^{{{\varepsilon}}}(x_{k+1})-\frac{\delta}{2^{k+1}}\arrowvert (x_{k},2)\wedge j_{k+1}=2]
\displaystyle ={\vint}_{B_{{{\varepsilon}}}(x_{k})}v^{{{\varepsilon}}}(y)dy-\frac{\delta}{2^{k+1}}.
\end{array}$$ On the other hand, $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)\wedge j_{k+1}=1]=u^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k+1}}.$$ Collecting these estimates we obtain $$\begin{array}{l}
\displaystyle
{{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]=(1-{{\varepsilon}}^{2})\left({\vint}_{B_{{{\varepsilon}}}(x_{k})}v^{{{\varepsilon}}}(y)dy-\frac{\delta}{2^{k+1}}
\right)+{{\varepsilon}}^{2}(u^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k+1}}) \\[10pt]
\qquad \displaystyle \geq(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x_{k})}v^{{{\varepsilon}}}(y)dy+{{\varepsilon}}^{2}u^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k}},
\end{array}$$ that is, $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{k+1}\arrowvert (x_{k},2)]\geq v^{{{\varepsilon}}}(x_{k})-\frac{\delta}{2^{k}}=M_{k}.$$ Here we used that $v^{{{\varepsilon}}}$ is a solution to the DPP, . This ends the second case.
Therefore $(M_{k})_{k\geq 0}$ is a *submartingale*. Using the *OSTh* (recall that we have proved that $\tau$ is finite a.s. and that we have that $M_k$ is uniformly bounded) we conclude that $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[M_{\tau}]\geq M_{0}$$ where $\tau$ is the first time such that $x_{\tau}\notin{\Omega}$ in any of the two boards. Then, $${{\mathbb E}}^{(x_{0},1)}_{S_{I}^{*},S_{II}}[\mbox{final payoff}]\geq u^{{\varepsilon}}(x_{0})-\delta.$$ We can compute the infimum in $S_{II}$ and then the supremum in $S_{I}$ to obtain $$\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}^{(x_{0},1)}_{S_{I},S_{II}}[\mbox{final payoff}]\geq u^{{{\varepsilon}}}(x_{0})-\delta.$$
We just observe that if we have started in the second board the previous computations show that $$\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}^{(x_{0},2)}_{S_{I},S_{II}}[\mbox{final payoff}]\geq v^{{{\varepsilon}}}(x_{0})-\delta.$$
Now our goal is to prove the reverse inequality (interchanging inf and sup). To this end we define a strategy for Player II with $$x_{k+1}^{II}=S_{II}^{*}(x_{0},...,x_{k})
\qquad
\mbox{is such that}
\qquad
\inf_{y\in B_{{{\varepsilon}}}(x_{k})}u^{{{\varepsilon}}}(y)+\frac{\delta}{2^{k}}\geq u^{{{\varepsilon}}}(x_{k+1}^{II}),$$ and consider the sequence of random variables $$N_{k}=\left\lbrace
\begin{array}{ll}
\displaystyle u^{{{\varepsilon}}}(x_{k})+\frac{\delta}{2^{k}} & \ \ \mbox{if } j_{k}=1 \\[10pt]
\displaystyle v^{{{\varepsilon}}}(x_{k})+\frac{\delta}{2^{k}} & \ \ \mbox{if } j_{k}=2.
\end{array}
\right.$$ Arguing as before we obtain that this sequence is a *supermartingale*. From the *OSTh* we get $${{\mathbb E}}^{(x_{0},1)}_{S_{I},S_{II}^{*}}[N_{\tau}]\leq N_{0}$$ where $\tau$ is the stopping time for the game. Then, $${{\mathbb E}}^{(x_{0},1)}_{S_{I},S_{II}^{*}}[\mbox{final payoff}]\leq u^{{\varepsilon}}(x_{0})+\delta.$$ Taking supremum in $S_{I}$ and then infimum in $S_{II}$ we obtain $$\inf_{S_{II}}\sup_{S_{I}}{{\mathbb E}}^{(x_{0},1)}_{S_{I},S_{II}}[\mbox{final payoff}]\leq u^{{{\varepsilon}}}(x_{0})+\delta.$$ As before, the same ideas starting at $(x_{0},2)$ give us $$\inf_{S_{II}}\sup_{S_{I}}{{\mathbb E}}^{(x_{0},2)}_{S_{I},S_{II}}[\mbox{final payoff}]\leq v^{{{\varepsilon}}}(x_{0})+\delta.$$
To end the proof we just observe that $$\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}_{S_{I},S_{II}}[\mbox{final payoff}]\leq \inf_{S_{II}}\sup_{S_{I}}{{\mathbb E}}_{S_{I},S_{II}}[\mbox{final payoff}].$$ Therefore, $$u^{{{\varepsilon}}}(x_{0})-\delta\leq\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}_{S_{I},S_{II}}^{(x_{0},1)}[\mbox{final payoff}]
\leq \inf_{S_{II}}\sup_{S_{I}}{{\mathbb E}}_{S_{I},S_{II}}^{(x_{0},1)}[\mbox{final payoff}]\leq u^{{{\varepsilon}}}(x_{0})+\delta$$ and $$v^{{{\varepsilon}}}(x_{0})-\delta\leq\sup_{S_{I}}\inf_{S_{II}}{{\mathbb E}}_{S_{I},S_{II}}^{(x_{0},2)}[\mbox{final payoff}]\leq
\inf_{S_{II}}\sup_{S_{I}}{{\mathbb E}}_{S_{I},S_{II}}^{(x_{0},2)}[\mbox{final payoff}]\leq v^{{{\varepsilon}}}(x_{0})+\delta.$$ As $\delta >0$ is arbitrary the proof is finished.
[One can also obtain existence of a solution to the DPP considering $$\displaystyle (\textbf{e*}) : \left\lbrace
\begin{array}{ll}
\displaystyle z^{{{\varepsilon}}}(x)\geq{{\varepsilon}}^{2}w^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}z^{{{\varepsilon}}}(y)\Big\} \qquad & \ x \in \Omega, \\[10pt]
\displaystyle w^{{{\varepsilon}}}(x)\geq{{\varepsilon}}^{2}z^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}w^{{{\varepsilon}}}(y)dy \qquad & \ x \in \Omega, \\[10pt]
z^{{{\varepsilon}}}(x) \geq {\overline}{f}(x) \qquad & \ x \in {{\mathbb R}}^{n} \backslash \Omega, \\[10pt]
w^{{{\varepsilon}}}(x) \geq {\overline}{g}(x) \qquad & \ x \in {{\mathbb R}}^{n} \backslash \Omega.
\end{array}
\right.$$ and the associated set of functions $${B} =\displaystyle \Big\{ (z^{{{\varepsilon}}},w^{{{\varepsilon}}}) : \mbox{ bounded functions that verify } (\textbf{e*}) \Big\}.$$ Now, we take infima, $$\label{u2}
u^{{{\varepsilon}},*}(x)=\inf_{(z^{{{\varepsilon}}},w^{{{\varepsilon}}})\in {B}}z^{{{\varepsilon}}}(x)
\qquad \mbox{ and } \qquad
v^{{{\varepsilon}},*}(x)=\inf_{(z^{{{\varepsilon}}},w^{{{\varepsilon}}})\in {B}}w^{{{\varepsilon}}}(x),$$ that are solutions to the DPP (this fact can be proved as we did for the suprema). Then, by the uniqueness of solutions to the DPP we have $$u^{{{\varepsilon}},*} = u^{{{\varepsilon}}} \qquad \mbox{and} \qquad v^{{{\varepsilon}},*} = v^{{{\varepsilon}}}.$$ ]{}
Uniform Convergence {#sect-convergence}
===================
Now our aim is to pass to the limit in the values of the game $$u^{{\varepsilon}}\to u, \ v^{{\varepsilon}}\to v \qquad \mbox{as } {{\varepsilon}}\to 0$$ and then in the next section to obtain that this limit pair $(u,v)$ is a viscosity solution to our system .
To obtain a convergent subsequence $u^{{\varepsilon}}\to u$ we will use the following Arzela-Ascoli type lemma. For its proof see Lemma 4.2 from [@MPRb].
\[lem.ascoli.arzela\] Let $\{u^{{\varepsilon}}: \overline{\Omega}
\to {{\mathbb R}},\ {{\varepsilon}}>0\}$ be a set of functions such that
1. there exists $C>0$ such that ${\left| u^{{\varepsilon}}(x) \right|}<C$ for every ${{\varepsilon}}>0$ and every $x \in \overline{\Omega}$,
2. \[cond:2\] given $\delta >0$ there are constants $r_0$ and ${{\varepsilon}}_0$ such that for every ${{\varepsilon}}< {{\varepsilon}}_0$ and any $x, y \in \overline{\Omega}$ with $|x - y | < r_0 $ it holds $$|u^{{\varepsilon}}(x) - u^{{\varepsilon}}(y)| < \delta.$$
Then, there exists a uniformly continuous function $u:
\overline{\Omega} \to {{\mathbb R}}$ and a subsequence still denoted by $\{u^{{\varepsilon}}\}$ such that $$\begin{split}
u^{{{\varepsilon}}}\to u \qquad\textrm{ uniformly in }\overline{\Omega},
\mbox{ as ${{\varepsilon}}\to 0$.}
\end{split}$$
So our task now is to show that $u^{{\varepsilon}}$ and $v^{{\varepsilon}}$ both satisfy the hypotheses of the previous lemma. First, we observe that they are uniformly bounded.
\[lem.ascoli.arzela.acot\] There exists a constant $C>0$ independent of ${{\varepsilon}}$ such that $${\left| u^{{\varepsilon}}(x) \right|}\leq C, \qquad {\left| v^{{\varepsilon}}(x) \right|}\leq C,$$ for every ${{\varepsilon}}>0$ and every $x \in \overline{\Omega}$.
It follows from our proof of existence of a solution to the DPP. In fact, we can take $$C = \max \{ \| {\overline}{g}\|_\infty, \| {\overline}{f} \|_\infty \},$$ since the final payoff in any of the boards is bounded by this $C$.
To prove the second hypothesis of Lemma \[lem.ascoli.arzela\] we will need some key estimates according to the board in which we are playing.
Estimates for the Tug-of-War game
---------------------------------
In this case we are going to assume that we are playing in board 1 (with the Tug-of-War game) all the time (without changing boards).
\[lema.estim.ToW\] Given $\eta>0$ and $a>0$, there exist $r_{0}>0$ and ${{\varepsilon}}_{0}>0$ such that, given $y\in\partial{\Omega}$ and $x_{0}\in{\Omega}$ with $| x_{0}-y |<r_{0}$, any of the two players has a strategy $S^{*}$ with which we obtain $$\mathbb{P}\Big( | x_{\tau}-y | < a \Big) \geq 1 - \eta \qquad \mbox{and} \qquad
\mathbb{P}\Big(\tau \geq \frac{a}{{{\varepsilon}}^2}\Big)< \eta$$ for ${{\varepsilon}}<{{\varepsilon}}_{0}$ and $x_{\tau}\in{{\mathbb R}}^{N}\backslash{\Omega}$ the first position outside ${\Omega}$.
This Lemma says that if we start playing close enough to $y\in\partial{\Omega}$ we will finish quickly (in a number of plays smaller than a small constant times ${{\varepsilon}}^{-2}$) and at a final position close to $y\in\partial{\Omega}$, with high probability.
We can assume without loss of generality that $y=0\in\partial{\Omega}$. In this case we define the strategy $S^{*}$ (which can be used by any of the two players) that “points to $y=0$” as follows: $$x_{k+1}=S^{*}(x_{0},x_{1},...,x_{k})=x_{k}+(\frac{{{\varepsilon}}^{3}}{2^{k}}-{{\varepsilon}})\frac{x_{k}}{ |x_{k} |}.$$ Now let us consider the random variables $$N_{k}= | x_{k}|+\frac{{{\varepsilon}}^{3}}{2^{k}}$$ for $k\geq 0$ and play assuming that one of the players uses the $S^{*}$ strategy. The goal is to prove that $\{N_{k}\}_{k\geq 0}$ is a *supermartingale*, i.e., $${{\mathbb E}}[N_{k+1}\arrowvert N_{k}]\leq N_{k}.$$ Note that with probability 1/2 we obtain $$x_{k+1}=x_{k} + (\frac{{{\varepsilon}}^{3}}{2^{k}}-{{\varepsilon}})\frac{x_{k}}{ | x_{k} |};$$ this is the case when the player who uses the $S^{*}$ strategy wins the coin toss. On the other hand, we have $$| x_{k+1}|\leq | x_{k}| + {{\varepsilon}},$$ when the other player wins. Then, we obtain $${{\mathbb E}}\Big[| x_{k+1}| \arrowvert x_{k} \Big]\leq {{\frac{1}{2}}}\Big(| x_{k}| +(\frac{{{\varepsilon}}^{3}}{2^{k}}-{{\varepsilon}})\Big) + {{\frac{1}{2}}}(| x_{k} | + {{\varepsilon}})= | x_{k} |
+\frac{{{\varepsilon}}^{3}}{2^{k+1}}.$$ Hence, we get $${{\mathbb E}}\Big[N_{k+1}\arrowvert N_{k}\Big]={{\mathbb E}}\Big[ | x_{k+1}| + \frac{{{\varepsilon}}^{3}}{2^{k+1}}\arrowvert | x_{k} | + \frac{{{\varepsilon}}^{3}}{2^{k}}\Big]
\leq |x_{k}| + \frac{{{\varepsilon}}^{3}}{2^{k+1}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}}=N_{k}.$$ We just proved that $\{N_{k}\}_{k\geq 0}$ is a *supermartingale*. Now, let us consider the random variables $$(N_{k+1}-N_{k})^{2},$$ and the event $$\label{Fk}
F_{k}=\{ \mbox{the player who points to } 0\in\partial{\Omega}\ \mbox{wins the coin toss} \}.$$ Then we have the following $$\begin{array}{l}
\displaystyle {{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}]
\displaystyle ={{\frac{1}{2}}}{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}]+{{\frac{1}{2}}}{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}^{c}]
\\[10pt]
\qquad \displaystyle \geq
\displaystyle {{\frac{1}{2}}}{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}].
\end{array}$$ Let us observe that $$\begin{array}{l}
\displaystyle
{{\frac{1}{2}}}{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k} \wedge F_{k}]
\displaystyle ={{\frac{1}{2}}}{{\mathbb E}}\Big[( |x_{k}| -{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}}- | x_{k}| -\frac{{{\varepsilon}}^{3}}{2^{k}})^{2} \Big]
\displaystyle ={{\frac{1}{2}}}{{\mathbb E}}\Big[(-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}})^{2} \Big]\geq\frac{{{\varepsilon}}^{2}}{3}
\end{array}$$ if ${{\varepsilon}}<{{\varepsilon}}_{0}$ for ${{\varepsilon}}_{0}$ small enough. With this estimate in mind we obtain $$\label{e/3b}
{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}] \geq \frac{{{\varepsilon}}^{2}}{3}.$$ Now we will analyze $N_{k}^{2}-N_{k+1}^{2}$. We have $$\label{M2b}
N_{k}^{2}-N_{k+1}^{2}=(N_{k+1}-N_{k})^{2}+2N_{k+1}(N_{k}-N_{k+1}).$$ Let us prove that ${{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]\geq 0$ using the set $F_{k}$ defined by . It holds that $$\begin{array}{l}
\displaystyle
{{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]
\\[10pt]
\qquad \displaystyle ={{\frac{1}{2}}}{{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}\wedge F_{k}]+{{\frac{1}{2}}}{{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}\wedge F_{k}^{c}]
\\[10pt]
\qquad \displaystyle =
{{\frac{1}{2}}}\Big[(|x_{k}|-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}})(|x_{k}|+\frac{{{\varepsilon}}^{3}}{2^{k}}-|x_{k}|+{{\varepsilon}}-\frac{{{\varepsilon}}^{3}}{2^{k}}-\frac{{{\varepsilon}}^{3}}{2^{k+1}})\Big] \\[10pt]
\qquad \displaystyle \qquad +
{{\frac{1}{2}}}\Big[(|x_{k+1}|+\frac{{{\varepsilon}}^{3}}{2^{k+1}})(|x_{k}|+\frac{{{\varepsilon}}^{3}}{2^{k}}-|x_{k+1}|-\frac{{{\varepsilon}}^{3}}{2^{k+1}})\Big]
\\[10pt]
\qquad \displaystyle
\geq
{{\frac{1}{2}}}\Big(|x_{k}|-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}}\Big)\Big({{\varepsilon}}-\frac{{{\varepsilon}}^{3}}{2^{k+1}}\Big)
\\[10pt]
\qquad \displaystyle \qquad +{{\frac{1}{2}}}\Big[(|x_{k}|-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}})(|x_{k}|+\frac{{{\varepsilon}}^{3}}{2^{k}}-
|x_{k}|-{{\varepsilon}}-\frac{{{\varepsilon}}^{3}}{2^{k+1}})\Big]
\end{array}$$ here we used that $|x_{k}|-{{\varepsilon}}\leq|x_{k+1}|\leq|x_{k}|+{{\varepsilon}}$. Thus $${{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]\geq{{\frac{1}{2}}}(|x_{k}|-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}}+\frac{{{\varepsilon}}^{3}}{2^{k}})({{\varepsilon}}-\frac{{{\varepsilon}}^{3}}{2^{k+1}})+{{\frac{1}{2}}}(|x_{k}|-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}})(-{{\varepsilon}}+\frac{{{\varepsilon}}^{3}}{2^{k+1}}),$$ and then $${{\mathbb E}}[N_{k+1}(N_{k}-N_{k+1})\arrowvert N_{k}]\geq{{\frac{1}{2}}}\Big[\frac{{{\varepsilon}}^{3}}{2^{k}}({{\varepsilon}}-\frac{{{\varepsilon}}^{3}}{2^{k+1}})\Big]\geq 0.$$ If we go back to and use and the result we have just obtained we arrive to $${{\mathbb E}}[N_{k}^{2}-N_{k+1}^{2}\arrowvert N_{k}]\geq{{\mathbb E}}[(N_{k+1}-N_{k})^{2}\arrowvert N_{k}]\geq \frac{{{\varepsilon}}^{2}}{3}.$$ Therefore, for the sequence of random variables $${{\mathbb W}}_{k}=N^{2}_{k}+\frac{k{{\varepsilon}}^{2}}{3}$$ we have $${{\mathbb E}}[{{\mathbb W}}_{k}-{{\mathbb W}}_{k+1}\arrowvert {{\mathbb W}}_{k}]={{\mathbb E}}[N_{k}^{2}-N_{k+1}^{2}-\frac{{{\varepsilon}}^{2}}{3}\arrowvert {{\mathbb W}}_{k}]\geq 0.$$ As ${{\mathbb E}}[{{\mathbb W}}_{k}\arrowvert{{\mathbb W}}_{k}]={{\mathbb W}}_{k}$ then $${{\mathbb E}}[{{\mathbb W}}_{k+1}\arrowvert {{\mathbb W}}_{k}]\leq{{\mathbb W}}_{k},$$ that is, the sequence $\{{{\mathbb W}}_{k}\}_{k\geq 1}$ is a *supermartingale*. In order to use the *OSTh*, given a fixed integer $m\in{{\mathbb N}}$ we define the stopping time $$\tau_{m}=\tau \wedge m := \min\{\tau,m\}$$ Now this new stopping time verifies $ \tau_{m}\leq m $ which is the first hypothesis of the *OSTh*. Then, using the *OSTh* we obtain $${{\mathbb E}}[{{\mathbb W}}_{\tau_{m}}]\leq {{\mathbb W}}_{0}.$$ Observe that $\lim\limits_{m\rightarrow\infty}\tau\wedge m = \tau $ almost surely. Then, using *Fatou’s Lemma*, we arrive to $${{\mathbb E}}[{{\mathbb W}}_{\tau}]={{\mathbb E}}[\liminf_{m} {{\mathbb W}}_{\tau\wedge m}]\underbrace{\leq}_{Fatou} \liminf_{m} {{\mathbb E}}[{{\mathbb W}}_{\tau\wedge m}]\underbrace{\leq}_{OSTh} {{\mathbb W}}_{0}$$ Thus, we obtain $ {{\mathbb E}}[{{\mathbb W}}_{\tau}]\leq {{\mathbb W}}_{0}$, i.e., $$\label{OST2}
{{\mathbb E}}[N^{2}_{\tau}+\frac{\tau{{\varepsilon}}^{2}}{3}]\leq N_{0}^{2}.$$ Then, $${{\mathbb E}}[\tau]\leq 3(|x_{0}|+{{\varepsilon}}^{3})^{2}{{\varepsilon}}^{-2}\leq 4|x_{0} |^{2}{{\varepsilon}}^{-2}$$ if ${{\varepsilon}}$ is small enough. On the other hand, if we go back to we have $${{\mathbb E}}[N_{\tau}^{2}]\leq N_{0}^{2},$$ i.e. $${{\mathbb E}}[|x_{\tau}|^{2}]\leq{{\mathbb E}}[(|x_{\tau}|+\frac{{{\varepsilon}}^{3}}{2^{\tau}})^{2}]\leq (|x_{0}|+{{\varepsilon}}^{3})^{2}\leq 2 |x_{0}|^{2}.$$ What we have so far is that $$\label{etau}
{{\mathbb E}}[\tau]\leq 4|x_{0}|^{2}{{\varepsilon}}^{-2}
\qquad \mbox{and} \qquad {{\mathbb E}}[|x_{\tau}|^{2}]\leq 2 |x_{0}|^{2}.$$ We will use these two estimates to prove $${{\mathbb P}}\Big(\tau \geq \frac{a}{{{\varepsilon}}^2}\Big)< \eta
\qquad \mbox{and} \qquad {{\mathbb P}}\Big(|x_{\tau} |\geq a\Big)< \eta.$$ Given $\eta > 0$ and $a > 0$, we take $x_{0}\in{\Omega}$ such that $|x_{0} |< r_{0}$ with $r_{0}$ to be chosen later (depending on $\eta$ and $a$). We have $$C r_{0}^{2}{{\varepsilon}}^{-2}\geq C |x_{0}-y |^{2}{{\varepsilon}}^{-2} \geq {{\mathbb E}}^{x_{0}}[\tau]\geq {{\mathbb P}}\Big(\tau \geq \frac{a}{{{\varepsilon}}^{2}}\Big)\frac{a}{{{\varepsilon}}^{2}}.$$ Thus $${{\mathbb P}}(\tau \geq \frac{a}{{{\varepsilon}}^{2}})\leq C \frac{r_{0}^{2}}{a}< \eta$$ which holds true if $r_{0}<\sqrt{\frac{\eta a}{C}}$.
Also we have $$C r_{0}^{2} \geq C |x_{0} |^{2}\geq {{\mathbb E}}^{x_{0}}[|x_{\tau} |^{2}]\geq a^{2}{{\mathbb P}}(|x_{\tau} |^{2}\geq a^{2}).$$ Then $${{\mathbb P}}(|x_{\tau} |\geq a)\leq C\frac{r_{0}^{2}}{a^{2}}< \eta$$ which holds true if $r_{0}< \sqrt{\frac{\eta a^{2}}{C}}$. Observe that if we take $a<1$ we have $\sqrt{\frac{\eta a^{2}}{C}}<\sqrt{\frac{\eta a}{C}}$, then if we choose $r_{0}<\sqrt{\frac{\eta a^{2}}{C}}$ both conditions are fulfilled at the same time.
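A small Monte Carlo experiment (purely illustrative; the domain, the adversary's behaviour and all numerical choices are assumptions of this sketch and play no role in the proof) can be used to visualize the two estimates of Lemma \[lema.estim.ToW\]: starting close to the boundary point $y=0$ and pulling towards it with the strategy $S^{*}$, the exit position is close to $y$ and the number of plays is of order ${{\varepsilon}}^{-2}$, with high probability.

```python
import math
import random

def pull_to_zero_run(x0=(0.05, 0.0), eps=0.01, center=(1.0, 0.0)):
    # Omega is the unit ball centered at `center`, so that y = 0 lies on its boundary
    # (an assumption made only for this experiment); the opponent here moves at random.
    x, k = list(x0), 0
    while (x[0] - center[0]) ** 2 + (x[1] - center[1]) ** 2 < 1.0:
        if random.random() < 0.5:
            # the player using S* wins: move towards 0 along x_k / |x_k|
            norm = math.hypot(x[0], x[1])
            step = eps ** 3 * 0.5 ** k - eps          # negative: points towards the origin
            if norm > 0.0:
                x = [x[0] + step * x[0] / norm, x[1] + step * x[1] / norm]
        else:
            # the opponent wins: move eps in a uniformly random direction
            ang = random.uniform(0.0, 2.0 * math.pi)
            x = [x[0] + eps * math.cos(ang), x[1] + eps * math.sin(ang)]
        k += 1
    return math.hypot(x[0], x[1]), k                  # |x_tau - y| and the number of plays

if __name__ == "__main__":
    eps = 0.01
    runs = [pull_to_zero_run(eps=eps) for _ in range(500)]
    dists, steps = zip(*runs)
    print(sum(d < 0.2 for d in dists) / len(dists))   # fraction of runs exiting close to y
    print(sum(steps) / len(steps) * eps ** 2)         # average of tau * eps^2 (small)
```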
Estimates for the Random Walk game
----------------------------------
In this case we are going to assume that we are permanently playing on board 2, with the random walk game. The estimates for this game follow the same ideas as before, and are even simpler since there are no strategies of the players involved in this case. We include the details for completeness.
Given $\eta>0$ and $a>0$, there exist $r_{0}>0$ and ${{\varepsilon}}_{0}>0$ such that, given $y\in\partial{\Omega}$ and $x_{0}\in{\Omega}$ with $|x_{0}-y|<r_{0}$, if we play at random we obtain $$\mathbb{P} \Big(|x_{\tau}-y|< a \Big) \geq 1 - \eta
\qquad \mbox{and} \qquad\mathbb{P} \Big(\tau \geq \frac{a}{{{\varepsilon}}^2} \Big)< \eta$$ for ${{\varepsilon}}<{{\varepsilon}}_{0}$ and $x_{\tau}\in{{\mathbb R}}^{N}\backslash{\Omega}$ the first position outside ${\Omega}$.
Recall that we assumed that $ {\Omega}$ satisfies the uniform exterior ball property for a certain $ \theta_ {0}> 0 $.
For $ N \geq 3 $, given $ \theta <\theta_{0} $ and $ y \in \partial{\Omega}$, we are going to assume that the center of the exterior ball at $y$ is chosen to be $ z_{y} = 0 $, so that we have $ {\overline}{B_{\theta}(0)} \cap {\overline}{{\Omega}} = \{ y \}$. We define the set $${\Omega}_{{{\varepsilon}}}=\{x\in{{\mathbb R}}^{N}:d(x,{\Omega})<{{\varepsilon}}\}$$ for $ {{\varepsilon}}$ small enough. Now, we consider the function $\mu:{\Omega}_{{{\varepsilon}}}\rightarrow{{\mathbb R}}$ given by $$\label{mu}
\mu(x)=\frac{1}{\theta^{N-2}}-\frac{1}{|x |^{N-2}}.$$ This function is positive in $ {\overline}{{\Omega}}\backslash \{ y \} $, radially increasing and harmonic in $ {\Omega}$. Also it holds that $ \mu (y) = 0 $. For $N=2$ we take $ \mu(x)=\ln (|x |)-\ln (\theta )$ and we leave the details to the reader.
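For completeness, let us recall the elementary computation behind the harmonicity of $\mu$ (a standard fact, included here only for the reader's convenience): for a radial function $\phi(x)=\psi(|x|)$ one has $\Delta \phi(x)=\psi''(r)+\frac{N-1}{r}\psi'(r)$ with $r=|x|$, and for $\psi(r)=-r^{2-N}$ we get $$\psi'(r)=(N-2)r^{1-N} \qquad \mbox{and} \qquad \psi''(r)=(N-2)(1-N)r^{-N},$$ so that $$\Delta\Big(-\frac{1}{|x|^{N-2}}\Big)=(N-2)(1-N)r^{-N}+(N-1)(N-2)r^{-N}=0 \qquad \mbox{for } x\neq 0.$$ Since $0\notin{\overline}{{\Omega}}$ (the origin is the center of the exterior ball), $\mu$ is indeed harmonic in a neighbourhood of ${\overline}{{\Omega}}$, and the same computation with $\psi(r)=\ln(r)$ covers the case $N=2$.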
We will take the first position of the game, $ x_{0} \in {\Omega}$, such that $ |x_{0} -y |<r_{0} $ with $ r_{0} $ to be chosen later. Let $ (x_{k})_{k \geq 0} $ be the sequence of positions of the game playing random walks. Consider the sequence of random variables $$N_{k}=\mu(x_{k})$$ for $ k \geq 0 $. Let us prove that $ N_{k} $ is a *martingale*. Indeed $${{\mathbb E}}[N_{k + 1} \arrowvert N_{k}] = {\vint}_{B _{{{\varepsilon}}} (x_{k})} \mu(w) dw = \mu (x_{k}) = N_{k}.$$ Here we have used that $ \mu $ is harmonic. Since $ \mu $ is bounded in $\Omega$, the third hypothesis of [*OSTh*]{} is fulfilled, hence we obtain $$\label{muxo}
{{\mathbb E}}[\mu(x_{\tau})] = \mu(x_ {0}).$$ Let us estimate the value $\mu(x_{0})$ $$\label{acotmu}
\mu(x_{0})=\frac{1}{\theta^{N-2}}-\frac{1}{|x_{0} |^{N-2}}=\frac{|x_{0}|^{N-2}-\theta^{N-2}}{\theta^{N-2}|x_{0}|^{N-2}}=\frac{(|x_{0}|-\theta)}{\theta^{N-2}|x_{0}|^{N-2}}\Big(\sum\limits_{j=1}^{N-2}|x_{0}|^{N-2-j}\theta^{j-1}\Big).$$ The first term can be bounded as $$(|x_{0}|-\theta)=(|x_{0}|-|y |)\leq |x_{0}-y|<r_{0}.$$ To deal with the second term we will ask $\theta<1$ and use that $|x_{0}|\leq R$ where $R=\max_{x\in{\Omega}}\{|x |\}$ (suppose $R>1$). Then, we obtain $$\sum\limits_{j=1}^{N-2}|x_{0}|^{N-2-j}\theta^{j-1}\leq R^{N-2}(N-2).$$ Finally, we will use that $|x_{0}|>\theta$. Plugging all these estimates in we obtain $$\mu(x_{0})\leq r_{0}(\frac{R^{N-2}(N-2)}{\theta^{2(N-2)}}).$$ If we call $c({\Omega},\theta)=\frac{R^{N-2}(N-2)}{\theta^{2(N-2)}}$ and come back to we get $$\label{espmu}
{{\mathbb E}}[\mu(x_{\tau})]<c({\Omega},\theta)r_{0}.$$ We need to establish a relation between $ \mu (x_{\tau}) $ and $ |x_{\tau} -y |$. To this end, we take the function $ b: [\theta, + \infty) \rightarrow {{\mathbb R}}$ given by $$\label{funb}
b({\overline}{a})=\frac{1}{\theta^{N-2}}-\frac{1}{{\overline}{a}^{N-2}}.$$ Note that this function is the radial version of $ \mu $. It is positive and increasing, then, it has an inverse (also increasing) that is given by the formula $${\overline}{a}(b)=\frac{\theta}{(1-\theta^{N-2}b)^{\frac{1}{N-2}}}.$$ This function is positive, increasing and convex, since $ {\overline}{a}''> 0 $. Then for $ b <1 $ we obtain $$\label{bmay}
{\overline}{a}(b)\leq \theta + ({\overline}{a}(1)-\theta)b.$$ Let us call $K(\theta)=({\overline}{a}(1)-\theta)>0$ (this constant depends only on $\theta$). Using the relationship between ${\overline}{a}$ and $b$ we obtained the following: given $ {\overline}{a}> \theta $ there is $ b> 0 $ such that $$\mbox{if } \mu(x_{\tau})<b \mbox{ then } |x_{\tau} |<{\overline}{a}.$$ Here we are using that the function $ b ({\overline}{a}) $ is increasing. Now one can check that, for all $ a> 0 $ , there are $ {\overline}{a}> \theta $ and $ {{\varepsilon}}_{0}> 0 $ such that, if $$|x_{\tau}|< {\overline}{a} \qquad \mbox{and} \qquad d(x_{\tau},{\Omega})<{{\varepsilon}}_{0},$$ then $$|x_{\tau}-y|<a.$$
Putting everything together we obtain that, given $a>0$, there exist ${\overline}{a}>\theta$, $b>0$ and ${{\varepsilon}}_{0}>0$ such that $$\mbox{if } \mu(x_{\tau})<b \mbox{ and } d(x_{\tau},{\Omega})<{{\varepsilon}}_{0}, \mbox{ then } |x_{\tau}-y|<a.$$ We ask for $
0<b<a
$, a condition that we will use later. Then, since the exit position always satisfies $d(x_{\tau},{\Omega})<{{\varepsilon}}\leq{{\varepsilon}}_{0}$, we have $${{\mathbb P}}(\mu(x_{\tau})\geq b)\geq {{\mathbb P}}(|x_{\tau}-y|\geq a).$$ Coming back to we get $$\label{desb}
c({\Omega},\theta)r_{0}>{{\mathbb E}}[\mu(x_{\tau})]\geq {{\mathbb P}}(\mu(x_{\tau})\geq b)b\geq {{\mathbb P}}(|x_{\tau}-y|\geq a)b$$ Using that ${\overline}{a}-\theta \leq K(\theta)b$ we obtain $$c({\Omega},\theta)r_{0}>{{\mathbb P}}(|x_{\tau}-y|\geq a)\frac{{\overline}{a}-\theta}{K(\theta)}$$ Then $$\label{desnorma}
{{\mathbb P}}(|x_{\tau}-y|\geq a)<\frac{c({\Omega},\theta)r_{0}K(\theta)}{{\overline}{a}-\theta}<\eta$$ which holds true if $$r_{0}<\frac{\eta({\overline}{a}-\theta)}{c({\Omega},\theta)K(\theta)}.$$ This is one of the inequalities we wanted to prove.
Now let us compute $$\label{ENK}
{{\mathbb E}}[N_{k+1}^{2}-N_{k}^{2}\arrowvert N_{k}]={\vint}_{B_{{{\varepsilon}}}(x_{k})}(\mu^{2}(w)-\mu^{2}(x_{k}))dw.$$ Let us call $\varphi=\mu^{2}$. If we make a Taylor expansion of order two we obtain $$\varphi(w)=\varphi(x_{k})+\langle\nabla\varphi(x_{k}),(w-x_{k})\rangle+{{\frac{1}{2}}}\langle D^{2}\varphi(x_{k})(w-x_{k}),(w-x_{k})\rangle+O(|w-x_{k}|^{3}).$$ Then $$\begin{array}{l}
\displaystyle
{\vint}_{B_{{{\varepsilon}}}(x_{k})}(\varphi(w)-\varphi(x_{k}))dw
\displaystyle ={\vint}_{B_{{{\varepsilon}}}(x_{k})}\langle\nabla\varphi(x_{k}),(w-x_{k})\rangle dw
\\[10pt]
\qquad \qquad \displaystyle +{{\frac{1}{2}}}{\vint}_{B_{{{\varepsilon}}}(x_{k})}\langle D^{2}\varphi(x_{k})(w-x_{k}),(w-x_{k})\rangle dw
\displaystyle +{\vint}_{B_{{{\varepsilon}}}(x_{k})}O(|w-x_{k}|^{3})dw.
\end{array}$$ Let us analyze these integrals $${\vint}_{B_{{{\varepsilon}}}(x_{k})}\langle\nabla\varphi(x_{k}),(w-x_{k})\rangle dw=0.$$ On the other hand, for $\langle D^{2}\varphi(x_{k})(w-x_{k}),(w-x_{k})\rangle$, changing variables as $w=x_{k}+{{\varepsilon}}z$, it holds that $${\vint}_{B_{{{\varepsilon}}}(x_{k})}\langle D^{2}\varphi(x_{k})(w-x_{k}),(w-x_{k})\rangle dw
=\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\varphi(x_{k}){{\varepsilon}}^{2}{\vint}_{B_{1}(0)}z_{j}^{2}dz=\kappa{{\varepsilon}}^{2}\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\varphi(x_{k}).$$ Here we find the constant $\kappa $ that appears in the second equation in . Let us compute the second derivatives of $\varphi$. As $\varphi=\mu^{2}$, $$\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\varphi(x_{k})=2\sum\limits_{j=1}^{N}(\partial_{x_{j}}\mu(x_{k}))^{2}+2\mu(x_{k})\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\mu(x_{k}).$$ The second term is zero because $\mu$ is harmonic in ${\Omega}$. Hence, we arrive at $$\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\varphi(x_{k})=2\sum\limits_{j=1}^{N}(\partial_{x_{j}}\mu(x_{k}))^{2}.$$ Using the definition of $\mu$ (so that $|\nabla\mu(x_{k})|^{2}=(N-2)^{2}|x_{k}|^{-2(N-1)}$) we get $$\sum\limits_{j=1}^{N}\partial_{x_{j}x_{j}}^{2}\varphi(x_{k})=\frac{2(N-2)^{2}}{|x_{k} |^{2(N-1)}}.$$ Putting everything together $$\begin{array}{l}
\displaystyle
{\vint}_{B_{{{\varepsilon}}}(x_{k})}(\varphi(w)-\varphi(x_{k}))dw
={{\frac{1}{2}}}\kappa{{\varepsilon}}^{2}\frac{2(N-2)^{2}}{|x_{k} |^{2(N-1)}}+O(|w-x_{k}|^{3})
\geq {{\varepsilon}}^{2}\frac{\kappa(N-2)^{2}}{R^{2(N-1)}}-\gamma{{\varepsilon}}^{3}\displaystyle \geq {{\varepsilon}}^{2}\frac{\kappa(N-2)^{2}}{2R^{2(N-1)}},
\end{array}$$ if ${{\varepsilon}}$ is small enough (here $R=\max_{x\in{\Omega}} \{ |x |\}$). Let us call $$\sigma({\Omega})=\frac{\kappa(N-2)^{2}}{2R^{2(N-1)}}.$$ Then, if we go back to we get $${{\mathbb E}}[N_{k+1}^{2}-N_{k}^{2}\arrowvert N_{k}]\geq \sigma({\Omega}){{\varepsilon}}^{2}.$$ Let us consider the sequence of random variables $({{\mathbb W}}_{k})_{k\geq 0}$ given by $${{\mathbb W}}_{k}=-N_{k}^{2}+\sigma({\Omega})k{{\varepsilon}}^{2}.$$ Then $${{\mathbb E}}[{{\mathbb W}}_{k+1}-{{\mathbb W}}_{k}\arrowvert {{\mathbb W}}_{k}]={{\mathbb E}}[-(N_{k+1}^{2}-N_{k}^{2})+\sigma{{\varepsilon}}^{2}\arrowvert N_{k}]\leq 0.$$ That is, ${{\mathbb W}}_{k}$ is a *supermartingale*. Using the *OSTh* in the same way as before we get $${{\mathbb E}}[-\mu^{2}(x_{\tau})+\sigma\tau{{\varepsilon}}^{2}]\leq -\mu^{2}(x_{0}).$$ Therefore, $$\label{paratau}
{{\mathbb E}}[\sigma\tau{{\varepsilon}}^{2}]\leq -\mu^{2}(x_{0})+{{\mathbb E}}[\mu^{2}(x_{\tau})]\leq {{\mathbb E}}[\mu^{2}(x_{\tau})].$$ Hence, we need a bound for ${{\mathbb E}}[\mu^{2}(x_{\tau})]$. We have $${{\mathbb E}}[\mu^{2}(x_{\tau})]={{\mathbb E}}[\mu^{2}(x_{\tau})\arrowvert\mu(x_{\tau})<b]{{\mathbb P}}(\mu(x_{\tau})<b)+{{\mathbb E}}[\mu^{2}(x_{\tau})\arrowvert\mu(x_{\tau})\geq b]{{\mathbb P}}(\mu(x_{\tau})\geq b).$$ It holds that ${{\mathbb E}}[\mu^{2}(x_{\tau})\arrowvert\mu(x_{\tau})<b]\leq b^{2}$ and ${{\mathbb P}}(\mu(x_{\tau})<b)\leq 1$. If we call $M({{\varepsilon}}_{0})=\max_{x\in{\Omega}_{{{\varepsilon}}_{0}}}\arrowvert\mu(x)\arrowvert$ it holds ${{\mathbb E}}[\mu^{2}(x_{\tau})\arrowvert\mu(x_{\tau})\geq b]\leq M({{\varepsilon}}_{0})^{2}$. Finally, using we obtain ${{\mathbb P}}(\mu(x_{\tau})\geq b)\leq \frac{c({\Omega},\theta)r_{0}}{b}$. Thus $${{\mathbb E}}[\mu^{2}(x_{\tau})]\leq b^{2}+M({{\varepsilon}}_{0})^{2}\frac{c({\Omega},\theta)r_{0}}{b}.$$ Recall that we imposed $0<b<a$. Then $$\label{bmasb}
{{\mathbb E}}[\mu^{2}(x_{\tau})]\leq a^{2}+M({{\varepsilon}}_{0})^{2}\frac{c({\Omega},\theta)r_{0}}{b}.$$ On the other hand, we have $$\sigma{{\mathbb E}}[\tau{{\varepsilon}}^{2}]\geq{{\mathbb P}}(\tau{{\varepsilon}}^{2}\geq a)a\sigma.$$ Using and we get $${{\mathbb P}}\Big(\tau\geq\frac{a}{{{\varepsilon}}^{2}}\Big)\leq \frac{a}{\sigma}+M({{\varepsilon}}_{0})^{2}\frac{c({\Omega},\theta)r_{0}}{b\sigma a}.$$ If we ask $$\frac{a}{\sigma} < \frac{\eta}{2}$$ we arrive at $${{\mathbb P}}\Big(\tau\geq\frac{a}{{{\varepsilon}}^{2}}\Big) \leq \frac{\eta}{2}+M({{\varepsilon}}_{0})^{2}\frac{c({\Omega},\theta)r_{0}}{b a\sigma}<\eta$$ which is true if we impose that $$r_{0}< \frac{b\eta a \sigma}{2M({{\varepsilon}}_{0})^{2}c({\Omega},\theta)}.$$ Thus we achieve the second inequality of the lemma, and the proof is finished.
Now we are ready to prove the second condition in the Arzela-Ascoli type lemma.
\[lem.ascoli.arzela.asymp\] Given $\delta>0$ there are constants $r_0$ and ${{\varepsilon}}_0$ such that for every ${{\varepsilon}}< {{\varepsilon}}_0$ and any $x, y \in \overline{\Omega}$ with $|x - y | < r_0 $ it holds $$|u^{{\varepsilon}}(x) - u^{{\varepsilon}}(y)| < \delta \qquad \mbox{and} \qquad |v^{{\varepsilon}}(x) - v^{{\varepsilon}}(y)| < \delta.$$
We deal with the estimate for $u^{{\varepsilon}}$. Recall that $u^{{{\varepsilon}}}$ is the value of the game playing in the first board (where we play Tug-of-War). The computations for $v^{{\varepsilon}}$ are similar.
First, we start with two close points $x$ and $y$ with $y\not\in \Omega$ and $x\in \Omega$. We have that $u^{{{\varepsilon}}}(y)={\overline}{f}(y)$ for $y\in\partial{\Omega}$. Given $\eta >0$ we take $a$, $r_{0}$, ${{\varepsilon}}_{0}$ and $S^{*}_{I}$ the strategy as in Lemma \[lema.estim.ToW\]. Let $$A=\Big\{\mbox{the position does not change board in the first } \ \lceil \frac{a}{{{\varepsilon}}^{2}}\rceil \mbox{ plays and } \tau < \lceil \frac{a}{{{\varepsilon}}^{2}}\rceil \Big\}.$$
We consider two cases.
**1st case:** We are going to show that $u^{{{\varepsilon}}}(x_{0})-{\overline}{f}(y) \geq - A(a,\eta)$ with $A(a,\eta)\searrow 0$ if $a\rightarrow 0$ and $\eta\rightarrow 0$. We have $$u^{{{\varepsilon}}}(x_{0})\geq \inf_{S_{II}}{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h(x_{\tau})].$$ Now $$\begin{array}{l}
\displaystyle
{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[ h (x_{\tau})] = {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau})\arrowvert A]{{\mathbb P}}(A)
+ {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau})\arrowvert A^{c}]{{\mathbb P}}(A^{c})
\\[10pt]
\qquad \displaystyle \geq {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]{{\mathbb P}}(A)-\max\{\lvert{\overline}{f}|,\lvert{\overline}{g}|\}{{\mathbb P}}(A^{c}).
\end{array}$$ Now we estimate ${{\mathbb P}}(A)$ and ${{\mathbb P}}(A^{c})$. We have that $${{\mathbb P}}(A^{c})\leq {{\mathbb P}}\Big(\mbox{the game changes board before }\lceil \frac{a}{{{\varepsilon}}^{2}}\rceil \mbox{ plays}\Big)
+{{\mathbb P}}(\tau\geq\lceil \frac{a}{{{\varepsilon}}^{2}}\rceil).$$ Hence we are left with two bounds. First, we have $$\label{Ac1}
{{\mathbb P}}\Big(\mbox{the game changes board before }\lceil \frac{a}{{{\varepsilon}}^{2}}\rceil \mbox{ plays} \Big)=1-(1-{{\varepsilon}}^{2})^{\frac{a}{{{\varepsilon}}^{2}}} \leq (1-e^{-a})+\eta$$ for ${{\varepsilon}}$ small enough. Here we are using that $(1-{{\varepsilon}}^{2})^{\frac{a}{{{\varepsilon}}^{2}}}\nearrow e^{-a}$.
Now, we observe that using Lemma \[lema.estim.ToW\] we get $$\label{Ac2}
{{\mathbb P}}\Big(\tau \geq \frac{a}{{{\varepsilon}}^{2}}\Big) \leq {{\mathbb P}}\Big(\tau \geq \frac{a}{{{\varepsilon}}_{0}^{2}}\Big)\leq \eta,$$ for ${{\varepsilon}}< {{\varepsilon}}_{0}$. From and we obtain $${{\mathbb P}}(A^{c})\leq (1-e^{-a})+\eta +\eta= (1-e^{-a})+2\eta$$ and hence $${{\mathbb P}}(A) =1-{{\mathbb P}}(A^{c}) \geq 1-[(1-e^{-a})+2\eta] .$$ Then we obtain $$\label{arriba}
\begin{array}{l}
\displaystyle {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h(x_{\tau})]
\displaystyle \geq {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A] (1-[(1-e^{-a})+2\eta])-\max\{\lvert{\overline}{f}|,\lvert{\overline}{g}|\}[(1-e^{-a})+2\eta] .
\end{array}$$ Let us analyze the expected value ${{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]$. Again we need to consider two events, $$A_{1}=A\cap \{ |x_{\tau}-y|< a \} \qquad \mbox{and} \qquad A_{2}=A\cap \{ |x_{\tau}-y|\geq a\}.$$ We have that $ A=A_{1}\cup A_{2}$. Then $$\label{retomo}
{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]={{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{1}]{{\mathbb P}}(A_{1})+{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{2}]{{\mathbb P}}(A_{2}).$$ Now we observe that $$\label{a2}
{{\mathbb P}}(A_{2})\leq {{\mathbb P}}( |x_{\tau}-y|\geq a)\leq \eta .$$ To get a bound for the other case we observe that $ A_{1}^{c}=A^{c}\cup \{ |x_{\tau}-y|\geq a\}$. Therefore $${{\mathbb P}}(A_{1})=1-{{\mathbb P}}(A_{1}^{c})\geq 1-[{{\mathbb P}}(A^{c})+{{\mathbb P}}(|x_{\tau}-y|\geq a)],$$ and we arrive to $$\label{a1}
{{\mathbb P}}(A_{1})\geq 1-[(1-e^{-a})+2\eta+\eta]=1-[(1-e^{-a})+3\eta].$$ If we go back to and use and we get $$\label{retomo2}
{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]\geq{{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{1}](1-[(1-e^{-a})+3\eta])-\max \{\lvert{\overline}{f}|\}\eta .$$ Using that ${\overline}{f}$ is Lipschitz we obtain $${\overline}{f}(x_{\tau})\geq {\overline}{f}(y)-L|x_{\tau}-y|\geq {\overline}{f}(y)-La ,$$ and then we obtain (using that $({\overline}{f}(y)-La)$ does not depend on the strategies) $${{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]\geq({\overline}{f}(y)-La)(1-[(1-e^{-a})+3\eta])-\max \{\lvert{\overline}{f}|\}\eta.$$ Recalling we obtain $$\begin{array}{l}
\displaystyle {{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h ( x_{\tau})] \displaystyle \\[10pt]
\quad \geq (({\overline}{f}(y)-La)(1-[(1-e^{-a})+3\eta]) -\max \{\lvert{\overline}{f}|\}\eta ) (1-[(1-e^{-a})+2\eta])
\displaystyle -\max\{\lvert{\overline}{f}|,\lvert{\overline}{g}|\}[(1-e^{-a})+2\eta].
\end{array}$$ Notice that when $\eta \rightarrow 0$ and $a\rightarrow 0$ the right hand side goes to ${\overline}{f}(y)$, hence we have obtained $${{\mathbb E}}^{x_{0}}_{S^{*}_{I},S_{II}}[h( x_{\tau})]\geq {\overline}{f}(y)- A(a,\eta)$$ with $A(a,\eta)\to 0$. Taking the infimum over all possible strategies $S_{II}$ we get $$u^{{{\varepsilon}}}(x_{0})\geq {\overline}{f}(y)- A(a,\eta)$$ with $A(a,\eta)\to 0$ as $\eta\rightarrow 0$ and $a\rightarrow 0$ as we wanted to show.
**2nd case:** Now we want to show that $u^{{{\varepsilon}}}(x_{0})-{\overline}{f}(y)\leq B(a,\eta)$ with $B(a,\eta)\searrow 0$ as $\eta\rightarrow 0$ and $a\rightarrow 0$. In this case we just use the strategy $S^*$ from Lemma \[lema.estim.ToW\] as the strategy for the second player $S^{*}_{II}$ and we obtain $$u^{{{\varepsilon}}}(x_{0})\leq \sup_{S_{I}}{{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[h(x_{\tau})].$$ Using again the set $A$ that we considered in the previous case we obtain $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[ h( x_{\tau})]= {{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}( x_{\tau})\arrowvert A]{{\mathbb P}}(A)+
{{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau})\arrowvert A^{c}]{{\mathbb P}}(A^{c}).$$ We have that ${{\mathbb P}}(A) \leq 1$ and ${{\mathbb P}}(A^{c})\leq (1-e^{-a})+2\eta $. Hence we get $$\label{retomo3}
{{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau})]\leq {{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}( x_{\tau})\arrowvert A]+\max\{\lvert{\overline}{f}|,\lvert{\overline}{g}|\}[(1-e^{-a})+2\eta].$$ To bound ${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]$ we will use again the sets $A_{1}$ and $A_{2}$ as in the previous case. We have $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]={{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{1}]{{\mathbb P}}(A_{1})+{{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{2}]{{\mathbb P}}(A_{2}).$$ Now we use that ${{\mathbb P}}(A_{1}) \leq 1$ and ${{\mathbb P}}(A_{2})\leq c\eta$ to obtain $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]\leq {{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{1}]+\max\{\lvert{\overline}{f}|\}\eta .$$ Now for $ {{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A_{1}]$ we use that ${\overline}{f}$ is Lipschitz to obtain $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]\leq {{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(y)+La\arrowvert A_{1}]+\max\{\lvert{\overline}{f}|\}\eta .$$ As $({\overline}{f}(y)+La)$ does not depend on the strategies we have $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[{\overline}{f}(x_{\tau})\arrowvert A]\leq({\overline}{f}(y)+La)+\max\{\lvert{\overline}{f}|\}\eta ,$$ and therefore we conclude that $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[h( x_{\tau} )]\leq {\overline}{f}(y)+La+\max\{\lvert{\overline}{f}|\}\eta + \max\{\lvert{\overline}{f}|,\lvert{\overline}{g}|\}[(1-e^{-a})+2\eta].$$ We have proved that $${{\mathbb E}}^{x_{0}}_{S_{I},S^{*}_{II}}[ h( x_{\tau})]\leq {\overline}{f}(y) + B(a,\eta)$$ with $B(a,\eta)\to 0$. Taking supremum over the strategies for Player I we obtain $$u^{{{\varepsilon}}}(x_{0})\leq {\overline}{f}(y)+B(a,\eta)$$ with $B(a,\eta)\rightarrow 0$ as $\eta\rightarrow 0$ and $a\rightarrow 0$.
Therefore, we conclude that $$|u^{{{\varepsilon}}}(x_{0})-{\overline}{f}(y)|< \max\{A(a,\eta),B(a,\eta)\},$$ that holds when $y \not \in\Omega$ and $x_0$ is close to $y$.
An analogous estimate holds for $v^{{\varepsilon}}$.
Now, given two points $x_0$ and $z_0$ inside $\Omega$ with $|x_0-z_0|<r_0$ we couple the game starting at $x_0$ with the game starting at $z_0$ making the same movements and also changing boards simultaneously. This coupling generates two sequences of positions $(x_i,j_i)$ and $(z_i,k_i)$ such that $|x_i - z_i|<r_0$ and $j_i=k_i$ (since they change boards at the same time both games are at the same board at every turn). This continues until one of the games exits the domain (say at $x_\tau \not\in \Omega$). At this point for the game starting at $z_0$ we have that its position $z_\tau$ is close to the exterior point $x_\tau \not\in \Omega$ (since we have $|x_\tau - z_\tau|<r_0$) and hence we can use our previous estimates for points close to the boundary to conclude that $$|u^{{{\varepsilon}}}(x_{0})- u^{{\varepsilon}}(z_0)|< \delta, \qquad \mbox{ and }
\qquad |v^{{{\varepsilon}}}(x_{0})- v^{{\varepsilon}}(z_0)|< \delta.$$ This ends the proof.
As a consequence, we have convergence of $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ as ${{\varepsilon}}\to 0$ along subsequences.
\[teo.conv.unif\] Let $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ be solutions to the DPP, then there exists a subsequence ${{\varepsilon}}_k \to 0$ and a pair of functions $(u,v)$ continuous in $\overline{\Omega}$ such that $$u^{{{\varepsilon}}_k} \to u, \qquad \mbox{ and } \qquad v^{{{\varepsilon}}_k} \to v,$$ uniformly in $\overline{\Omega}$.
Lemma \[lem.ascoli.arzela.acot\] and Lemma \[lem.ascoli.arzela.asymp\] show that the hypotheses of the Arzela-Ascoli type lemma, Lemma \[lem.ascoli.arzela\], are satisfied, and the result follows.
Existence of viscosity solutions {#sect-limiteviscoso}
================================
Now, we prove that any possible uniform limit of $(u^{{\varepsilon}},v^{{\varepsilon}})$ is a viscosity solution to the limit PDE problem (\[ED1.th\]).
\[teo.converge.222\] Any uniform limit of the values of the game $(u^{{\varepsilon}},v^{{\varepsilon}})$, $(u,v)$, is a viscosity solution to $$\label{ED1.th}
\left\lbrace
\begin{array}{ll}
- \displaystyle {{\frac{1}{2}}}\Delta_{\infty}u(x) + u(x) - v(x)=0 \qquad & \ x \in \Omega, \\[10pt]
- \displaystyle \frac{\kappa}{2} \Delta v(x) + v(x) - u(x)=0 \qquad & \ x \in \Omega, \\[10pt]
u(x) = f(x) \qquad & \ x \in \partial \Omega, \\[10pt]
v(x) = g(x) \qquad & \ x \in \partial \Omega.
\end{array}
\right.$$
Since $u^{{\varepsilon}}={\overline}{f}$ and $v^{{\varepsilon}}={\overline}{g}$ in ${{\mathbb R}}^N \setminus \Omega$ we have that $u = f$ and $v= g$ on $\partial \Omega$.
[*Infinity Laplacian.*]{} Let us start by showing that $u$ is a viscosity subsolution to $$- {{\frac{1}{2}}}\Delta_{\infty}u(x)+u(x)-v(x)=0.$$ Let $x_{0} \in {\Omega}$ and $\phi \in {C}^{2}({\Omega})$ such that $u(x_{0})-\phi(x_{0})=0$ and $u-\phi$ has an absolute maximum at $x_{0}$. Then, there exists a sequence $(x_{{{\varepsilon}}})_{{{\varepsilon}}>0}$ with $ x_{{{\varepsilon}}} \rightarrow x_{0}$ as ${{\varepsilon}}\rightarrow 0$ verifying $$u^{{{\varepsilon}}}(y)-\phi(y)\leq u^{{{\varepsilon}}}(x_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})+{{\varepsilon}}^{3}.$$ Then we obtain $$\label{ast}
u^{{{\varepsilon}}}(y)-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}) \leq \phi(y)-\phi(x_{{{\varepsilon}}})+{{\varepsilon}}^{3}$$
Now, using the DPP, we get $$u^{{{\varepsilon}}}(x_{{{\varepsilon}}})={{\varepsilon}}^{2}v^{{{\varepsilon}}}(x_{{{\varepsilon}}})+(1-{{\varepsilon}}^{2})
\left\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}u^{{{\varepsilon}}}(y)\right\}$$ and hence $$0={{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}))+(1-{{\varepsilon}}^{2})
\left\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(u^{{{\varepsilon}}}(y)-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}))
+ {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(u^{{{\varepsilon}}}(y)-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}))\right\}.$$ Using (\[ast\]) and that $\phi$ is smooth we obtain $$\label{grad}
0 \leq {{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}))+(1-{{\varepsilon}}^{2})\left\{{{\frac{1}{2}}}\max_{y \in {\overline}{B}_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(\phi(y)-\phi(x_{{{\varepsilon}}})) + {{\frac{1}{2}}}\min_{y \in {\overline}{B}_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(\phi(y)-\phi(x_{{{\varepsilon}}}))\right\}+{{\varepsilon}}^{3}.$$
Now, assume that $\nabla \phi (x_0) \neq 0$. Then, by continuity $\nabla \phi \neq 0$ in a ball $B_{r}(x_{0})$ for $r$ small. In particular, we have $\nabla \phi(x_{{{\varepsilon}}}) \neq 0$. Call $w_{{{\varepsilon}}}= \frac{\nabla \phi(x_{{{\varepsilon}}})}{|\nabla \phi(x_{{{\varepsilon}}}) |}$ and let $z_{{{\varepsilon}}}$ with $| z_{{{\varepsilon}}}|=1$ be such that $$\max_{y\in\partial B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}\phi(y)=\phi(x_{{{\varepsilon}}}+{{\varepsilon}}z_{{{\varepsilon}}}).$$ We have $$\begin{array}{l}
\displaystyle
\phi(x_{{{\varepsilon}}}+{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}}) ={{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),z_{{{\varepsilon}}}\rangle+o({{\varepsilon}})
\displaystyle
\leq {{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),w_{{{\varepsilon}}}\rangle+o({{\varepsilon}})
=\phi(x_{{{\varepsilon}}}+{{\varepsilon}}w_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})+o({{\varepsilon}}).
\end{array}$$ On the other hand $$\phi(x_{{{\varepsilon}}}+{{\varepsilon}}w_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})={{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),w_{{{\varepsilon}}}\rangle+o({{\varepsilon}})\leq \phi(x_{{{\varepsilon}}}+{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}}).$$ Therefore, we get $${{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),w_{{{\varepsilon}}}\rangle+o({{\varepsilon}})\leq {{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),z_{{{\varepsilon}}}\rangle+o({{\varepsilon}})\leq {{\varepsilon}}\langle\nabla\phi(x_{{{\varepsilon}}}),w_{{{\varepsilon}}}\rangle+o({{\varepsilon}}).$$ Multiplying by ${{\varepsilon}}^{-1}$ and taking the limit we arrive at $$\langle\nabla\phi(x_{0}),w_{0}\rangle=\langle\nabla\phi(x_{0}),z_{0}\rangle$$ with $w_{0}=\frac{\nabla \phi(x_{0})}{|\nabla \phi(x_{0}) |}$ and we conclude that $$z_{0}=w_{0}=\frac{\nabla \phi(x_{0})}{|\nabla \phi(x_{0}) |}.$$
Going back to (\[grad\]) we obtain $$\label{zeps}
0 \leq {{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}))+(1-{{\varepsilon}}^{2})
\left\{{{\frac{1}{2}}}(\phi(x_{{{\varepsilon}}}+{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})) + {{\frac{1}{2}}}(\phi(x_{{{\varepsilon}}}-{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}}))\right\} +{{\varepsilon}}^{3}.$$ Making Taylor expansions we get $$\left\{{{\frac{1}{2}}}(\phi(x_{{{\varepsilon}}}+{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})) + {{\frac{1}{2}}}(\phi(x_{{{\varepsilon}}}-{{\varepsilon}}z_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}}))\right\}= {{\frac{1}{2}}}{{\varepsilon}}^{2} \langle D^{2}\phi(x_{{{\varepsilon}}})z_{{{\varepsilon}}},z_{{{\varepsilon}}}\rangle+ \textit{o}({{\varepsilon}}^{2}).$$ Then, from (\[zeps\]), $$0 \leq v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}})+(1-{{\varepsilon}}^{2}) {{\frac{1}{2}}}\langle D^{2}\phi(x_{{{\varepsilon}}})z_{{{\varepsilon}}},z_{{{\varepsilon}}}\rangle+ \frac{\textit{o}({{\varepsilon}}^{2})}{{{\varepsilon}}^{2}},$$ and taking the limit as ${{\varepsilon}}\rightarrow 0$ we get $$0 \leq v(x_{0})-u(x_{0})+ {{\frac{1}{2}}}\langle D^{2}\phi(x_{0})w_{0},w_{0}\rangle,$$ that is, $$-{{\frac{1}{2}}}\Delta_{\infty}\phi(x_{0}) + u(x_{0})-v(x_{0}) \leq 0.$$
Now, if $\nabla \phi(x_{0}) =0$ we have to use the upper and lower semicontinuous envelopes of the equation (notice that $\Delta_\infty u$ is not well defined when $\nabla u=0$). For a symmetric matrix $M \in {{\mathbb R}}^{N\times N}$ and $\xi \in {{\mathbb R}}^{N}$, we define $$F_1 (\xi, M) =
\left\{
\begin{array}{ll}
\displaystyle -\langle M \frac{\xi}{|\xi |} ; \frac{\xi}{|\xi |} \rangle \qquad & \xi \neq 0 \\[5pt]
0 \qquad & \xi = 0
\end{array}
\right.$$ The semicontinuous envelopes of $F_1$ are defined as $$F_1^{\ast}(\xi, M) =
\left\{
\begin{array}{ll}
\displaystyle -\langle M \frac{\xi}{|\xi |} ; \frac{\xi}{|\xi |} \rangle \qquad & \xi \neq 0 \\[5pt]
\displaystyle \max \Big\{ \limsup_{\eta \rightarrow 0}-\langle M \frac{\eta}{|\eta |} ; \frac{\eta}{|\eta |} \rangle; 0 \Big\} \qquad & \xi = 0.
\end{array}
\right.$$ and $$F_{1,\ast}(\xi, M) =
\left\{
\begin{array}{ll}
\displaystyle -\langle M \frac{\xi}{ | \xi |} ; \frac{\xi}{|\xi |} \rangle \qquad & \xi \neq 0 \\[5pt]
\displaystyle \min \Big\{ \liminf_{\eta \rightarrow 0}-\langle M \frac{\eta}{|\eta |} ; \frac{\eta}{|\eta |} \rangle; 0 \Big\} \qquad & \xi = 0.
\end{array}
\right.$$ Now, we just remark that $$-\max_{1 \leq i \leq N} \{ \lambda_{i} \} \leq - \langle M\frac{\xi}{|\xi |},\frac{\xi}{|\xi |}\rangle \leq -\min_{1\leq i \leq N} \{ \lambda_{i} \}$$ and hence we obtain $$F_1^{\ast}(\xi, M) =
\left\{
\begin{array}{ll}
\displaystyle -\langle M \frac{\xi}{ |\xi |} ; \frac{\xi}{ |\xi |} \rangle \qquad & \xi \neq 0 \\[5pt]
\displaystyle \max \Big\{ - \min_{1\leq i\leq N}\{\lambda_{i}\} ; 0 \Big\} \qquad & \xi = 0.
\end{array}
\right.$$ and $$F_{1,\ast}(\xi, M) =
\left\{
\begin{array}{ll}
\displaystyle -\langle M \frac{\xi}{|\xi |} ; \frac{\xi}{|\xi |} \rangle \qquad & \xi \neq 0 \\[5pt]
\displaystyle \min \Big\{ -\max_{1\leq i\leq N}\{\lambda_{i}\}; 0 \Big\} \qquad & \xi = 0.
\end{array}
\right.$$
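To fix ideas, consider a concrete (purely illustrative) example: for $N=2$ and the diagonal matrix $M={\rm diag}(1,-2)$, so that $\lambda_{1}=1$ and $\lambda_{2}=-2$, the formulas above give $$F_1^{\ast}(0, M) = \max \Big\{ -\min\{1,-2\} ; 0 \Big\}=2, \qquad F_{1,\ast}(0, M) = \min \Big\{ -\max\{1,-2\} ; 0 \Big\}=-1,$$ while for any $\xi \neq 0$ both envelopes coincide with $F_1(\xi,M)$; for instance, $F_1(e_1,M)=-1$.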
Now, let us go back to the proof and show that $${{\frac{1}{2}}}F_{1,\ast}(0,D^{2}\phi(x_{0}))+u(x_{0})-v(x_{0}) \leq 0.$$ As before, we have a sequence $(x_{{{\varepsilon}}})_{{{\varepsilon}}>0}$ such that $x_{{{\varepsilon}}} \rightarrow x_{0}$ and $$\label{ast22}
u^{{{\varepsilon}}}(y)-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}) \leq \phi(y)-\phi(x_{{{\varepsilon}}})+{{\varepsilon}}^{3}.$$ Using the DPP, that $\phi$ is smooth, and (\[ast22\]) we obtain $$\label{**}
0 \leq (1-{{\varepsilon}}^{2}) \Big\{ {{\frac{1}{2}}}\max_{{\overline}{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}}(\phi(y)-\phi(x_{{{\varepsilon}}}))+{{\frac{1}{2}}}\min_{{\overline}{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}}(\phi(y)-\phi(x_{{{\varepsilon}}}))
\Big\}+{{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}})) +{{\varepsilon}}^{3} .$$ Let $ w_{{{\varepsilon}}} \in {\overline}{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}$ be such that $$\phi(w_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}})=\max_{{\overline}{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}}(\phi(y)-\phi(x_{{{\varepsilon}}})) .$$ Let ${\overline}{w_{{{\varepsilon}}}}$ be the symmetric point to $w_{{{\varepsilon}}}$ in the ball $B_{{{\varepsilon}}}(x_{{{\varepsilon}}})$. Then we obtain $$0 \leq (1-{{\varepsilon}}^{2}) \Big\{ {{\frac{1}{2}}}(\phi(w_{{{\varepsilon}}})-\phi(x_{{{\varepsilon}}}))+{{\frac{1}{2}}}(\phi({\overline}{w_{{{\varepsilon}}}})-\phi(x_{{{\varepsilon}}})) \Big\} +{{\varepsilon}}^{2}(v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}})) .$$ Using again Taylor’s expansions $$0 \leq (1-{{\varepsilon}}^{2}) {{\frac{1}{2}}}\langle D^{2}\phi(x_{{{\varepsilon}}})\frac{(w_{{{\varepsilon}}}-x_{{{\varepsilon}}})}{{{\varepsilon}}},\frac{(w_{{{\varepsilon}}}-x_{{{\varepsilon}}})}{{{\varepsilon}}}\rangle+v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}) + o(1) .$$ If for a sequence ${{\varepsilon}}\to 0$ we have $$\left | \frac{(w_{{{\varepsilon}}}- x_{{{\varepsilon}}})}{{{\varepsilon}}} \right | =1,$$ then, extracting a subsequence if necessary, we have $z\in{{\mathbb R}}^{n}$ with $\Vert z \Vert =1 $ such that $$\frac{(w_{{{\varepsilon}}}- x_{{{\varepsilon}}})}{{{\varepsilon}}} \to z.$$ Passing to the limit we get $$0 \leq {{\frac{1}{2}}}\langle D^2 \phi (x_{0}) z ,
z \rangle+v(x_{0})-u(x_{0}) .$$ Then $$-{{\frac{1}{2}}}\max_{1\leq i\leq n}\{\lambda_{i}\}+u(x_{0})-v(x_{0}) \leq 0,$$ that is, ${{\frac{1}{2}}}F_{1,\ast}(0,D^{2}\phi(x_{0}))+u(x_{0})-v(x_{0})\leq 0$.
Now, if we have $$\left | \frac{(w_{{{\varepsilon}}}- x_{{{\varepsilon}}})}{{{\varepsilon}}} \right | <1$$ for ${{\varepsilon}}$ small, we just observe that at those points we have that $D^2 \phi (w_{{{\varepsilon}}})$ is negative semidefinite. Hence, passing to the limit we obtain that $D^2 \phi (x_0)$ is also negative semidefinite and then every eigenvalue of $D^2 \phi (x_0)$ is less than or equal to $0$. We conclude that $$F_{1,\ast}(0,D^2 \phi (x_{0}))=\min \{ -\max_{1\leq i \leq n}\{ \lambda_{i}\};0\}=0.$$ Moreover, for ${{\varepsilon}}$ small we have that $\langle D^{2}\phi(x_{{{\varepsilon}}})\frac{(w_{{{\varepsilon}}}-x_{{{\varepsilon}}})}{{{\varepsilon}}},\frac{(w_{{{\varepsilon}}}-x_{{{\varepsilon}}})}{{{\varepsilon}}}\rangle \leq 0$. Then, $$0\leq v^{{{\varepsilon}}}(x_{{{\varepsilon}}})-u^{{{\varepsilon}}}(x_{{{\varepsilon}}}) + o(1).$$ Taking the limit as ${{\varepsilon}}\rightarrow 0$ we obtain $$u(x_{0})-v(x_{0})\leq 0.$$ Therefore we arrive at $${{\frac{1}{2}}}F_{1,\ast}(0,D^{2}\phi(x_{0}))+u(x_{0})-v(x_{0})\leq 0,$$ which is what we wanted to show.
The fact that $u$ is a supersolution can be proved in an analogous way. In this case we need to show that $${{\frac{1}{2}}}F_1^{\ast}(\nabla\phi(x_{0}),D^{2}\phi(x_{0}))+u(x_{0})-v(x_{0})\geq 0,$$ for $x_{0} \in {\Omega}$ and $\phi \in {C}^{2}({\Omega})$ such that $u(x_{0})-\phi(x_{0})=0$ and $u-\phi$ has a strict minimum at $x_{0}$.
[*Laplacian.*]{} Now, let us show that $v$ is a viscosity solution to $$-\frac{\kappa}{2}\Delta v(x)+v(x)-u(x)=0.$$
Let us start by showing that $v$ is a subsolution. Let $\psi \in C^{2}({\Omega})$ be such that $v(x_{0})-\psi(x_{0})=0$ and $v-\psi$ has a maximum at $x_{0} \in {\Omega}$. As before, we have the existence of a sequence $(x_{{{\varepsilon}}})_{{{\varepsilon}}>0}$ such that $x_{{{\varepsilon}}} \rightarrow x_{0}$ and $$\label{ast44}
v^{{{\varepsilon}}}(y)-v^{{{\varepsilon}}}(x_{{{\varepsilon}}}) \leq \psi(y)-\psi(x_{{{\varepsilon}}})+{{\varepsilon}}^{3}.$$ Therefore, from the DPP, we obtain $$0\leq (u^{{{\varepsilon}}}(x_{{{\varepsilon}}})-v^{{{\varepsilon}}}(x_{{{\varepsilon}}}))+(1-{{\varepsilon}}^{2})\frac{1}{{{\varepsilon}}^{2}}{\vint}_{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(\psi(y)-\psi(x_{{{\varepsilon}}}))dy.$$ From Taylor’s expansions we obtain $$\frac{1}{{{\varepsilon}}^{2}}{\vint}_{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(\psi(y)-\psi(x_{{{\varepsilon}}}))dy=\frac{\kappa}{2} \sum\limits_{j=1}^{N} \partial_{x_{j}x_{j}}\psi(x_{{{\varepsilon}}})+o(1)=\frac{\kappa}{2}\Delta \psi(x_{{{\varepsilon}}})+o(1),$$ with $\kappa = \frac{1}{|B_{1}(0)|}\int_{B_{1}(0)}z_{j}^{2}\,dz. $ Taking limits as ${{\varepsilon}}\rightarrow 0$ we get $$-\frac{\kappa}{2}\Delta \psi(x_{0})+v(x_{0})-u(x_{0}) \leq 0.$$
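For the reader's convenience, the Taylor computation behind the last identity is elementary: writing $y=x_{{{\varepsilon}}}+{{\varepsilon}}z$ with $z\in B_{1}(0)$, the first-order term and the mixed second-order terms integrate to zero over the ball by symmetry, so that $${\vint}_{B_{{{\varepsilon}}}(x_{{{\varepsilon}}})}(\psi(y)-\psi(x_{{{\varepsilon}}}))dy = \frac{{{\varepsilon}}^{2}}{2} \sum\limits_{j=1}^{N} \partial_{x_{j}x_{j}}\psi(x_{{{\varepsilon}}}) {\vint}_{B_{1}(0)}z_{j}^{2}\,dz + o({{\varepsilon}}^{2}) = \frac{\kappa}{2}\,{{\varepsilon}}^{2}\,\Delta \psi(x_{{{\varepsilon}}})+o({{\varepsilon}}^{2}).$$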
The fact that $v$ is a supersolution is similar.
Uniqueness for viscosity solutions {#sect-uniqueness}
==================================
Our goal is to show uniqueness for viscosity solutions to our system (\[ED1.th\]). To this end we follow ideas from [@BB; @Mitake] (see also [@Jen] for uniqueness results concerning the infinity Laplacian). This uniqueness result implies that the whole sequence $(u^{{\varepsilon}},v^{{\varepsilon}})$ converges as ${{\varepsilon}}\to 0$. The main idea behind the proof (as in [@BB; @Mitake]) is to make a change of variable $U =\psi (u)$, $V = \psi (v)$ which transforms our system into a system in which both equations are coercive in their respective variables $U$ and $V$ when $DU \neq 0$ and $DV\neq 0$. Next we use the fact that one can take $\psi$ as close to the identity as we want.
First, we state the Hopf Lemma. We only state the result for supersolutions (the result for subsolutions is the same with the obvious changes).
\[Hopf-lemma\] Let $V$ be an open set with $\overline{V} \subset \Omega$. Let $(u, v)$ be a viscosity supersolution of (\[ED1.th\]) and assume that there exists $x_0 \in \partial V$ such that $$u (x_0) = \min \Big\{ \min_\Omega u(x); \min_\Omega v(x) \Big\}
\qquad
\mbox{and} \qquad
u (x_0) < \min_{x \in V} \{u(x),v(x)\}.$$ Assume further that $V$ satisfies the interior ball condition at $x_0$, namely, there exists an open ball $B_R \subset V$ with $x_0 \in \partial B_R$. Then, $$\liminf_{s\to 0} \frac{ u(x_0 - s \nu (x_0)) - u(x_0)}{s} > 0,$$ where $\nu (x_0)$ is the outward normal vector to $\partial B_R$ at $x_0$.
[An analogous statement holds for the second component of the system, $v$. If we have that $$v (x_0) = \min \Big\{ \min_\Omega u(x); \min_\Omega v(x) \Big\}
\qquad
\mbox{and} \qquad
v (x_0) < \min_{x \in V} \{u(x),v(x)\}.$$ Then we have $$\liminf_{s\to 0} \frac{ v(x_0 - s \nu (x_0)) - v(x_0)}{s} > 0.$$ ]{}
See the Appendix in [@Mitake]. In fact one can take $$w(x):=e^{-\alpha |x|^2} - e^{-\alpha R^2}$$ and show that $w$ is a strict subsolution of any of the two equations in (\[ED1.th\]) in the annulus $\{x : R/2<|x|<R\}$.
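Let us also record, for completeness, the derivatives of this barrier that enter that verification: for $x\neq 0$, $$\nabla w(x)=-2\alpha x\, e^{-\alpha |x|^2}, \qquad \Delta w(x)=\big(4\alpha^{2}|x|^{2}-2\alpha N\big)e^{-\alpha |x|^2}, \qquad \Delta_{\infty} w(x)=\big(4\alpha^{2}|x|^{2}-2\alpha\big)e^{-\alpha |x|^2},$$ so that in the annulus $\{x : R/2<|x|<R\}$ both $\Delta w$ and $\Delta_{\infty} w$ are strictly positive once $\alpha$ is chosen large enough; this is the positivity that the verification in [@Mitake] exploits.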
The Strong Maximum Principle follows from the Hopf Lemma.
\[SMP-theo\] Let $(u, v)$ be a viscosity supersolution of (\[ED1.th\]). Assume that $\min_\Omega \min\{ u, v\}$ is attained at an interior point of $\Omega$. Then $u = v = C$ for some constant $C$ in the whole $\Omega$.
Again we refer to the Appendix in [@Mitake].
Now we can proceed with the proof of the Comparison Principle.
\[teo-compar\] Assume that $(u_1, v_1)$ and $(u_2, v_2)$ are a bounded viscosity subsolution and a bounded viscosity supersolution of (\[ED1.th\]), respectively, and also assume that $u_1 \leq u_2$ and $v_1 \leq v_2$ on $\partial \Omega$. Then $$u_1 \leq u_2 \qquad \mbox{and} \qquad v_1 \leq v_2,$$ in $\Omega$.
This comparison result implies the desired uniqueness for (\[ED1.th\]).
\[corl-unicidad\] There exists a unique viscosity solution to (\[ED1.th\]).
We argue by contradiction and assume that $$c := \max \Big\{ \max_\Omega (u_1(x) -u_2(x)) ; \max_\Omega (v_1(x) -v_2(x)) \Big\} > 0.$$ We replace $(u_1, v_1)$ by $(u_1 - c/2, v_1- c/2)$. We may assume further that $u_1$ and $v_1$ are semi-convex and $u_2$ and $v_2$ are semi-concave by using sup and inf convolutions and restricting the problem to a slightly smaller domain if necessary (see [@Mitake] for extra details). We now perturb $u_1$ and $v_1$ as follows. For $\alpha > 0$, take $\Omega_\alpha := \{x \in \Omega :
dist(x,\partial \Omega) > \alpha\}$ and for $|h|$ sufficiently small, define $$M(h):=\max \Big\{ \max_{x \in \Omega} (u_1(x+h)- u_2(x)) ; \max_{x \in \Omega} (v_1(x+h)- v_2(x))\Big\}
= w_1(x_h+h)- w_2(x_h)$$ for $w=u \mbox{ or } v$ (we will call $w$ the component at which the maximum is achieved) and some $x_h \in \Omega_{|h|}$. Since $M(0) > 0$, for $|h|$ small enough, we have $M(h)>0$ and the above maximum is the same if we take it over $\Omega_\alpha$ for any $\alpha >0$ sufficiently small and fixed. Note that from the equations we get that at $x_h$ we have $$u_1(x_h + h) - u_2(x_h) = v_1(x_h + h) - v_2(x_h).$$ Now, we claim that there exists a sequence $h_n \to 0$ such that at any maximum point $y \in \Omega_{|h_n|}$ of $$\max \Big\{ \max_{x\in \Omega_{|h_n|}} (u_1(x + h_n) - u_2(x)) ; \max_{x\in \Omega_{|h_n|}} (v_1(x + h_n) - v_2(x))\Big\},$$ we have $$Dw_1(y + h_n) =
Dw_2(y) \neq 0$$ for $ n \in \mathbb{N}$. To prove this claim we argue again by contradiction and assume that there exists, for each $h$ with $|h|$ small, $x_h$ which is a maximum point so that $Dw_1 (x_h + h) = Dw_2 (x_h) = 0$. As $u_1 - u_2$ and $v_1-v_2$ are semi-convex, $M(h)$ is semi-convex for $h$ small. Now for any $k$ close to $h$, one has that, thanks to the fact that $Dw_1 (x_h + h) = 0$, $$M(k) \geq w_1 (x_h+k)- w_2(x_h) \geq w_1(x_h+h)-C|h-k|^2-w_2(x_h) = M(h)-C|h-k|^2.$$ Thus, $0 \in \partial M(h)$ for every $|h|$ small. This implies that $M(h) = M(0)$ for $|h|$ small. Now take $x_0 \in \Omega$ a maximum point of $\max \{ \max_{x\in\Omega} (u_1(x) - u_2(x)); \max_{x\in\Omega} (v_1(x) - v_2(x))\}$. For $|h|$ sufficiently small we have that $x_0 \in \Omega_{|h|}$, and $w_1(x_0) - w_2(x_0) = M(0) = M(h) \geq w_1(x_0 + h) - w_2(x_0)$. Hence, $x_0$ is a local maximum of $u_1$, $v_1$. Now, the strong maximum principle, Theorem \[SMP-theo\], implies that $u_1$, $v_1$ are constant in $\Omega$, which gives the desired contradiction and proves the claim.
Now we recall that for a semi-convex function $a$ and a semi-concave function $b$ we have that both $a$ and $b$ are differentiable at any local maximum points of $b-a$ and if the function $a$ (or $b$) is differentiable at $x_0$ and $\{x_n\}$ is a sequence of differentiable points such that $x_n \to x_0$, then $Da(x_n)\to
Da(x_0)$ (or $Db(x_n)\to
Db(x_0)$). Then, thanks to these properties and the previous claim, we have the existence of a positive constant $\delta (n) > 0$ so that $|Dw_1(y+h_n)| = |Dw_2(y)| > \delta (n)$ for all $y$ such that the maximum in the claim is attained.
Now we consider, as in [@BB], the functions $\varphi_{{\varepsilon}}$ defined by $$\varphi_{{\varepsilon}}' (t) = \exp \left( \int_0^t \exp \Big( - \frac{1}{{{\varepsilon}}} (s- \frac{1}{{{\varepsilon}}}) \Big) ds\right).$$ These functions $\varphi_{{\varepsilon}}$ are close to the identity, $\varphi_{{\varepsilon}}' > 0$, $\varphi_{{\varepsilon}}'$ converge to $1$ as ${{\varepsilon}}\to 0$ and $\varphi_{{\varepsilon}}''$ converge to $0$ as ${{\varepsilon}}\to 0$ with $(\varphi_{{\varepsilon}}''(s))^2 > \varphi_{{\varepsilon}}''' (s) \varphi_{{\varepsilon}}' (s)$, see [@BB].
With $\psi_{{\varepsilon}}= \varphi_{{\varepsilon}}^{-1}$ we perform the changes of variables $$U_{i}^{{\varepsilon}}=\psi_{{\varepsilon}}(u_i), \qquad V_i^{{\varepsilon}}= \psi_{{\varepsilon}}(v_i), \qquad i=1,2.$$ It is easy to see that $U_1$, $V_1$ are semi-convex and $U_2$, $V_2$ are semi-concave. We have that $\max \{ \max_x (U_1^{{\varepsilon}}(x+h_n)- U_2^{{\varepsilon}}(x)) ; \max_x (V_1^{{\varepsilon}}(x+h_n)- V_2^{{\varepsilon}}(x)) \}$ is achieved at some point $x_{{\varepsilon}}$ and by passing to a subsequence if necessary, $x_{{\varepsilon}}\to x_{h_n}$ as ${{\varepsilon}}\to 0$. Since we have $|Dw_1(x_{h_n} + h_n)| = |Dw_2 (x_{h_n})| > \delta(n)$, we deduce that for ${{\varepsilon}}$ sufficiently small, it holds that $|DW_1^{{\varepsilon}}(x_{{\varepsilon}}+ h_n)| = |DW_2^{{\varepsilon}}(x_{{\varepsilon}})| \geq \delta(n)/2$.
Now, omitting the dependence on ${{\varepsilon}}$ in what follows, we observe that, after the change of variables $$u_1 = \varphi (U_1), \qquad v_1 = \varphi (V_1)$$ the pair of new unknowns $(U_1,V_1)$ verifies the equations (in the viscosity sense) $$\begin{array}{rl}
\displaystyle 0 & \displaystyle =- \displaystyle {{\frac{1}{2}}}\Delta_{\infty}u_1(x) + u_1(x) - v_1(x) \\[10pt]
& \displaystyle = - \frac12 \varphi ' (U_1)\Delta_\infty U_1(x) - \frac12 \varphi''(U_1)|DU_1|^2 (x) + \varphi(U_1(x)) - \varphi(V_1 (x))\\[10pt]
& \displaystyle = \varphi ' (U_1) \Big( - \frac12 \Delta_\infty U_1(x) - \frac12 \frac{\varphi''(U_1)}{\varphi ' (U_1)}|DU_1|^2 (x) +
\frac{\varphi(U_1(x)) - \varphi(V_1 (x))}{\varphi ' (U_1)}\Big),
\end{array}$$ and $$\begin{array}{rl}
\displaystyle 0 & \displaystyle = - \displaystyle \frac{\kappa}{2} \Delta v(x) + v(x) - u(x) \\[10pt]
& = \displaystyle - \frac{\kappa}{2} \Big( \varphi ' (V_1)\Delta V_1(x) + \varphi''(V_1)|DV_1|^2 (x) \Big) + \varphi(V_1 (x)) - \varphi(U_1 (x) ) \\[10pt]
& = \displaystyle \varphi ' (V_1) \Big(- \frac{\kappa}{2} \Delta V_1(x) - \frac{\kappa}{2} \frac{\varphi''(V_1)}{\varphi ' (V_1)}
|DV_1|^2 (x) + \frac{\varphi(V_1 (x)) - \varphi(U_1 (x) )}{ \varphi ' (V_1)} \Big),
\end{array}$$ and similar equations also hold for $(U_2,V_2)$.
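The chain-rule computations used in the two displays above are straightforward: since $u_1=\varphi(U_1)$ with $\varphi'>0$, we have $Du_1=\varphi'(U_1)DU_1$ and $D^{2}u_1=\varphi'(U_1)D^{2}U_1+\varphi''(U_1)\, DU_1\otimes DU_1$, so that $Du_1$ and $DU_1$ are parallel and, at points where $DU_1\neq 0$, $$\Delta_{\infty}u_1=\varphi'(U_1)\,\Delta_{\infty}U_1+\varphi''(U_1)\,|DU_1|^{2}, \qquad \Delta u_1=\varphi'(U_1)\,\Delta U_1+\varphi''(U_1)\,|DU_1|^{2},$$ and analogously for $v_1=\varphi(V_1)$.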
Since $|DW_1^{{\varepsilon}}(x_{{\varepsilon}}+ h_n)| = |DW_2^{{\varepsilon}}(x_{{\varepsilon}})| \geq \delta(n)/2$ this system is strictly monotone (the first equation is monotone in $U_1$ and the second in $V_1$). Here we use that $(\varphi_{{\varepsilon}}''(s))^2 > \varphi_{{\varepsilon}}''' (s) \varphi_{{\varepsilon}}' (s)$, which implies that $\varphi_{{\varepsilon}}'' /\varphi_{{\varepsilon}}'$ is decreasing, i.e. $-(\varphi_{{\varepsilon}}'' /\varphi_{{\varepsilon}}')' >0$. Thus, from the strict monotonicity, we get the desired contradiction. See the proof of [@BB], Lemma 3.1, for a more detailed discussion.
Possible extensions of our results {#sect.extensiones}
==================================
In this section we gather some comments on more general systems that can be studied using the same techniques.
Coefficients with spatial dependence
------------------------------------
We can look at the case in which the probability of jumping from one board to the other depends on the spatial location, that is, we can take the probability of jumping from board 1 to 2 as $a(x) {{\varepsilon}}^2$ and from 2 to 1 as $b(x) {{\varepsilon}}^2$, for two given nonnegative functions $a(x)$, $b(x)$. In this case the DPP is given by $$\left\lbrace
\begin{array}{ll}
\displaystyle u^{{{\varepsilon}}}(x)= a(x) {{\varepsilon}}^{2}v^{{{\varepsilon}}}(x)+(1- a(x) {{\varepsilon}}^{2})\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y) + {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)
\Big\} \qquad & \ x \in \Omega, \\[10pt]
\displaystyle v^{{{\varepsilon}}}(x)= b(x) {{\varepsilon}}^{2}u^{{{\varepsilon}}}(x)+(1- b(x) {{\varepsilon}}^{2}){\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy \qquad & \ x \in \Omega, \\[10pt]
u^{{{\varepsilon}}}(x) = {\overline}{f}(x) \qquad & \ x \in {{\mathbb R}}^{n} \backslash \Omega, \\[10pt]
v^{{{\varepsilon}}}(x) = {\overline}{g}(x) \qquad & \ x \in {{\mathbb R}}^{n} \backslash \Omega.
\end{array}
\right.$$ and the limit system is $$\left\lbrace
\begin{array}{ll}
- \displaystyle {{\frac{1}{2}}}\Delta_{\infty}u(x) + a(x) u(x) - a(x) v(x)=0 \qquad & \ x \in \Omega, \\[10pt]
- \displaystyle \frac{\kappa}{2} \Delta v(x) + b(x) v(x) - b(x) u(x)=0 \qquad & \ x \in \Omega, \\[10pt]
u(x) = f(x) \qquad & \ x \in \partial \Omega, \\[10pt]
v(x) = g(x) \qquad & \ x \in \partial \Omega,
\end{array}
\right.$$
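Although we do not pursue numerics here, the DPP above can be iterated directly on a grid. The following minimal sketch (a hypothetical one-dimensional illustration with made-up data $a$, $b$, $f$, $g$; it is not part of the arguments of this paper) approximates $(u^{{{\varepsilon}}},v^{{{\varepsilon}}})$ by fixed-point iteration of the DPP with spatially dependent jump rates.

```python
import numpy as np

# Hypothetical 1D illustration: fixed-point iteration of the two-board DPP
# with spatially dependent jump rates a(x), b(x).  Omega = (0, 1); outside,
# the token is stopped and pays the (extended) data f or g.
eps, h = 0.05, 0.01                       # ball radius and grid step (h < eps)
x = np.arange(-eps, 1 + eps + h / 2, h)   # grid including a strip outside Omega
inside = (x > 1e-12) & (x < 1 - 1e-12)
r = int(round(eps / h))                   # number of grid neighbors in B_eps

a = 1.0 + 0.5 * np.sin(2 * np.pi * x)     # made-up jump rate, board 1 -> 2
b = 1.0 + 0.5 * np.cos(2 * np.pi * x)     # made-up jump rate, board 2 -> 1
u = x ** 2                                # initialized with made-up data f
v = 1.0 - x                               # initialized with made-up data g

for _ in range(20000):
    u_new, v_new = u.copy(), v.copy()
    for i in np.where(inside)[0]:
        ball_u = u[i - r:i + r + 1]       # values of u on the ball B_eps(x_i)
        ball_v = v[i - r:i + r + 1]
        tug = 0.5 * ball_u.max() + 0.5 * ball_u.min()   # tug-of-war step
        walk = ball_v.mean()                            # random-walk step
        u_new[i] = a[i] * eps**2 * v[i] + (1 - a[i] * eps**2) * tug
        v_new[i] = b[i] * eps**2 * u[i] + (1 - b[i] * eps**2) * walk
    if max(abs(u_new - u).max(), abs(v_new - v).max()) < 1e-10:
        break
    u, v = u_new, v_new
```

For smaller ${{\varepsilon}}$ the resulting profiles are expected to approximate the solution of the limit system above.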
$n\times n$ systems
-------------------
We can deal with a system of $n$ equations and $n$ unknowns, $u_1,\dots,u_n$, of the form $$\left\lbrace
\begin{array}{ll}
- \displaystyle L_i u_i(x) + b_i u_i(x) - \sum_{j\neq i} a_{ij} u_j(x)=0 \qquad & \ x \in \Omega, \\[10pt]
u_i(x) = f_i(x) \qquad & \ x \in \partial \Omega.
\end{array}
\right.$$ Here $L_i$ is $\Delta_\infty$ or $\Delta$, and the coefficients $b_i$, $a_{ij}$ are nonnegative and verify $$b_i = \sum_{j\neq i} a_{ij}.$$
To handle this case we have to play on $n$ different boards and take the probability of jumping from board $i$ to board $j$ as $a_{ij} {{\varepsilon}}^2$ (notice that then the probability of continuing to play on the same board $i$ is $1- \sum_{j\neq i} a_{ij} {{\varepsilon}}^2$). The associated DPP is given by $$\left\lbrace
\begin{array}{l}
\displaystyle u_i^{{{\varepsilon}}}(x)= {{\varepsilon}}^{2} \sum_{j\neq i} a_{ij} u_j^{{{\varepsilon}}}(x)+(1- b_i {{\varepsilon}}^{2})
\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u_i^{{{\varepsilon}}}(y)
+ {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u_i^{{{\varepsilon}}}(y)
\Big\} \\
\mbox{ or } \\
\displaystyle u_i^{{{\varepsilon}}}(x)= {{\varepsilon}}^{2} \sum_{j\neq i} a_{ij} u_j^{{{\varepsilon}}}(x)+(1- b_i {{\varepsilon}}^{2})
{\vint}_{B_{{{\varepsilon}}}(x)}u_i^{{{\varepsilon}}}(y)dy \qquad\qquad\qquad \qquad \ x \in \Omega, \\[10pt]
u_i^{{{\varepsilon}}}(x) = {\overline}{f_i}(x) \qquad \qquad \ x \in {{\mathbb R}}^{N} \backslash \Omega.
\end{array}
\right.$$
Systems with normalized $p-$Laplacians
--------------------------------------
The normalized $p-$Laplacian is given by $$\Delta_p^N u (x) = \alpha \Delta_\infty u (x) + \beta \Delta u(x),$$ with $\alpha (p)$, $\beta (p)$ verifying $
\alpha + \beta =1$ (see [@MPRb]). Notice that this operator is $1-$homogeneous. With the same ideas used here we can also handle the system $$\left\lbrace
\begin{array}{ll}
- \displaystyle \Delta_{p}^N u(x) + u(x) - v(x)=0 \qquad & \ x \in \Omega, \\[10pt]
- \displaystyle \Delta_{q}^N v(x) + v(x) - u(x)=0 \qquad & \ x \in \Omega, \\[10pt]
u(x) = f(x) \qquad & \ x \in \partial \Omega, \\[10pt]
v(x) = g(x) \qquad & \ x \in \partial \Omega.
\end{array}
\right.$$
The associated game runs as follows: on the first board, when the token does not jump, a biased coin is tossed (with probability $\alpha(p)$ of heads and $\beta(p)$ of tails); if we get heads then we play Tug-of-War and if we get tails then we move at random. On the second board the rules are the same but we use a biased coin with different probabilities $\alpha(q)$ and $\beta(q)$; see [@MPRa], [@MPRb], [@PS] and [@R] for a similar game for a scalar equation (playing on only one board). The corresponding DPP is: $$\left\lbrace
\begin{array}{l}
\displaystyle u^{{{\varepsilon}}}(x)={{\varepsilon}}^{2}v^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}) \left[ \alpha (p)
\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)
+ {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)
\Big\} + \beta(p) {\vint}_{B_{{{\varepsilon}}}(x)}u^{{{\varepsilon}}}(y)dy \right] \qquad \ x \in \Omega, \\[10pt]
\displaystyle v^{{{\varepsilon}}}(x)={{\varepsilon}}^{2}u^{{{\varepsilon}}}(x)+(1-{{\varepsilon}}^{2}) \left[ \alpha (q)
\Big\{{{\frac{1}{2}}}\sup_{y \in B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)
+ {{\frac{1}{2}}}\inf_{y \in B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)
\Big\} + \beta(q) {\vint}_{B_{{{\varepsilon}}}(x)}v^{{{\varepsilon}}}(y)dy \right] \qquad \ x \in \Omega, \\[10pt]
u^{{{\varepsilon}}}(x) = {\overline}{f}(x) \qquad \ x \in {{\mathbb R}}^{N} \backslash \Omega, \\[10pt]
v^{{{\varepsilon}}}(x) = {\overline}{g}(x) \qquad \ x \in {{\mathbb R}}^{N} \backslash \Omega.
\end{array}
\right.$$
[**Acknowledgements.**]{} Partially supported by CONICET grant PIP GI No 11220150100036CO (Argentina), by UBACyT grant 20020160100155BA (Argentina) and by MINECO MTM2015-70227-P (Spain).
[BH]{}
T. Antunovic, Y. Peres, S. Sheffield and S. Somersille. [*Tug-of-war and infinity Laplace equation with vanishing Neumann boundary condition*]{}. Comm. Partial Differential Equations, 37(10), 2012, 1839–1869.
A. Arroyo and J. G. Llorente. [*On the asymptotic mean value property for planar p-harmonic functions*]{}. Proc. Amer. Math. Soc. 144 (2016), no. 9, 3859–3868.
S. N. Armstrong and C. K. Smart. [*An easy proof of Jensen’s theorem on the uniqueness of infinity harmonic functions.*]{} Calc. Var. Partial Differential Equations 37(3-4) (2010), 381–384.
G. Barles and J. Busca. [*Existence and comparison results for fully nonlinear degenerate elliptic equations without zeroth-order term.*]{} Comm. Partial Differential Equations 26 (2001), no. 11-12, 2323–2337.
P. Blanc and J. D. Rossi. [*Games for eigenvalues of the Hessian and concave/convex envelopes.*]{} J. Math. Pures et Appliquees. 127, (2019), 192–215.
P. Blanc and J. D. Rossi. [Game Theory and Partial Differential Equations.]{} De Gruyter Series in Nonlinear Analysis and Applications Vol. 31. 2019. ISBN 978-3-11-061925-6. ISBN 978-3-11-062179-2 (eBook).
F. Charro, J. Garcia Azorero and J. D. Rossi. [*A mixed problem for the infinity laplacian via Tug-of-War games.*]{} Calc. Var. Partial Differential Equations, 34(3), (2009), 307–320.
M.G. Crandall. [*A visit with the $\infty$-Laplace equation*]{}, Calculus of variations and nonlinear partial differential equations, Lecture Notes in Math., vol. 1927, Springer, Berlin, 2008, pp. 75–122.
M.G. Crandall, H. Ishii and P.L. Lions. [*User’s guide to viscosity solutions of second order partial differential equations*]{}. Bull. Amer. Math. Soc. 27 (1992), 1–67.
J.L. Doob, [*What is a martingale ?*]{}, Amer. Math. Monthly, 78 (1971), no. 5, 451–463.
M. Ishiwata, R. Magnanini and H. Wadade. [*A natural approach to the asymptotic mean value property for the p-Laplacian*]{}. Calc. Var. Partial Differential Equations, 56 (2017), no. 4, Art. 97, 22 pp.
R. Jensen. [*Uniqueness of Lipschitz extensions: minimizing the sup norm of the gradient.*]{} Arch. Rational Mech. Anal. 123 (1993), no. 1, 51–74.
M. Kac. [*Random Walk and the Theory of Brownian Motion*]{}. Amer. Math. Monthly, 54, No. 7, (1947), 369–391.
B. Kawohl, J.J. Manfredi and M. Parviainen. [*Solutions of nonlinear PDEs in the sense of averages*]{}. J. Math. Pures Appl. 97(3), (2012), 173–188.
P. Lindqvist and J. J. Manfredi. [*On the mean value property for the $p-$Laplace equation in the plane*]{}. Proc. Amer. Math. Soc. 144 (2016), no. 1, 143–149.
Q. Liu and A. Schikorra. *General existence of solutions to dynamic programming principle.* Commun. Pure Appl. Anal. 14 (2015), no. 1, 167–184.
H. Luiro, M. Parviainen, and E. Saksman. [*Harnack’s inequality for p-harmonic functions via stochastic games.*]{} Comm. Partial Differential Equations, 38(11), (2013), 1985–2003
J. J. Manfredi, M. Parviainen and J. D. Rossi. [*An asymptotic mean value characterization for p-harmonic functions*]{}. Proc. Amer. Math. Soc. 138 (2010), no. 3, 881–889.
J. J. Manfredi, M. Parviainen and J. D. Rossi. *Dynamic programming principle for tug-of-war games with noise.* ESAIM, Control, Opt. Calc. Var., 18, (2012), 81–90.
J. J. Manfredi, M. Parviainen and J. D. Rossi. *On the definition and properties of p-harmonious functions.* Ann. Scuola Nor. Sup. Pisa, 11, (2012), 215–241.
H. Mitake and H. V. Tran, [ *Weakly coupled systems of the infinity Laplace equations*]{}, Trans. Amer. Math. Soc. 369 (2017), 1773–1795.
Y. Peres, O. Schramm, S. Sheffield and D. Wilson, [*Tug-of-war and the infinity Laplacian.*]{} J. Amer. Math. Soc., 22, (2009), 167–210.
Y. Peres and S. Sheffield, [*Tug-of-war with noise: a game theoretic view of the $p$-Laplacian*]{}, Duke Math. J., 145(1), (2008), 91–120.
J. D. Rossi. [*Tug-of-war games and PDEs.*]{} Proc. Royal Soc. Edim. 141A, (2011), 319–369.
D. Williams, Probability with martingales, Cambridge University Press, Cambridge, 1991.
---
abstract: |
With ultracold $^{88}$Sr in a 1D magic wavelength optical lattice, we performed narrow line photoassociation spectroscopy near the $^1$S$_0 - ^3$P$_1$ intercombination transition. Nine least-bound vibrational molecular levels associated with the long-range $0_u$ and $1_u$ potential energy surfaces were measured and identified. A simple theoretical model accurately describes the level positions and treats the effects of the lattice confinement on the line shapes. The measured resonance strengths show that optical tuning of the ground state scattering length should be possible without significant atom loss.
PACS numbers: 34.80.Qb, 32.80-t, 32.80.Cy, 32.80.Pj
author:
- 'T. Zelevinsky'
- 'M. M. Boyd'
- 'A. D. Ludlow'
- 'T. Ido'
- 'J. Ye'
- 'R. Ciury[ł]{}o'
- 'P. Naidon'
- 'P. S. Julienne'
title: Narrow Line Photoassociation in an Optical Lattice
---
Photoassociation (PA) spectroscopy [@BohnPA99; @JulienneReviewInPress] is a valuable tool for studies of atomic collisions and long-range molecules [@PhillipsNaPA02; @TannoudjiHePA03; @TiemannCaPA03; @TakahashiYbPA04; @KillianPA05; @KatoriScatLengths; @KillianScatLengths05]. In contrast to conventional molecular spectroscopy that probes the most deeply bound vibrational levels of different electronic states, PA spectroscopy excites the vibrational levels close to the dissociation limit, where the molecular properties depend mostly on the long-range dipole-dipole interatomic interactions. Molecular potentials are much simpler at long range than at short range, and are directly related to the basic atomic properties such as the excited state lifetime and the $s$-wave scattering length.
In this Letter, we report the use of PA spectroscopy to resolve nine least-bound vibrational levels of the $^{88}$Sr$_2$ dimer near the $^1$S$_0 - ^3$P$_1$ intercombination line. Our identification of these levels is required for tuning of the ground state scattering length with low-loss optical Feshbach resonances [@JulienneOptFesh05]. This is of great interest for Sr, since magnetic Feshbach resonances are absent for the ground state, and the background scattering length is too small to allow evaporative cooling [@KillianScatLengths05]. In contrast to prior PA work that utilizes strongly allowed transitions with typical line widths in the MHz range, here the spin-forbidden $^1$S$_0 - ^3$P$_1$ line has a natural width of 7.5 kHz. This narrow width allows us to measure the least-bound vibrational levels that would otherwise be obscured by a broad atomic line, and to observe characteristic thermal line shapes even at $\mu$K atom temperatures. It also permits examination of the unique crossover regime between the van der Waals and dipole-dipole interactions, that occurs in Sr near the $^1$S$_0 - ^3$P$_1$ dissociation limit. This access to the van der Waals interactions ensures large bound-bound Franck-Condon factors, and may lead to more efficient creation of cold ground state Sr$_2$ molecules with two-color PA than what is possible using broad transitions. Bosonic $^{88}$Sr is ideal for narrow line PA because of its hyperfine-free structure and a high isotopic abundance (83%). In addition, Sr is a candidate atom for optical clocks [@Ido88SpectrPRL05; @TakamotoNature05; @LudlowPRL06], and PA spectroscopy could help determine density shifts for various clock transitions.
To enable long interrogation times needed for narrow line photoassociation, we trap Sr atoms in a 1D optical lattice. This technique also suppresses recoil shifts and Doppler broadening and thus simplifies the PA line shapes. The specially chosen lattice wavelength $(\lambda_{\rm{magic}})$ induces state-insensitive AC Stark shifts that minimize the perturbing effects of the lattice [@IdoFirstLatticePRL03].
![Schematic diagram of the long-range Sr$_2$ molecular potentials (not to scale). The ground and excited molecular states coincide with two ground state atoms, and with one ground and one excited state atom, respectively, at large internuclear separations. The ground state has $gerade$ symmetry and its energy is given by the potential $V_g$, while the excited state $ungerade$ potentials that support dipole transitions to the ground state are $V_{0u}$ and $V_{1u}$, the latter with a small repulsive barrier. All vibrational states of $0_u$ and $1_u$ (dashed and dotted lines, respectively) are separated by more than the natural line width, permitting high resolution PA spectroscopy very close to the dissociation limit when the atoms are sufficiently cold.[]{data-label="fig:PASchematic"}](PASchematic.eps){width="2.6in"}
Figure \[fig:PASchematic\] illustrates the relevant potential energy curves for the Sr$_2$ dimer as a function of interatomic separation $R$. The photoassociation laser at 689 nm induces allowed transitions from the separated $^1$S$_0$ atom continuum at a temperature $T$ to the vibrational bound levels of the excited $ungerade$ potentials $V_{0u}$ and $V_{1u}$, corresponding to the total atomic angular momentum projections onto the internuclear axis of 0 and 1, respectively. Although our calculations, based on the coupled channels method in Ref. [@JulienneNarrowLinePA04], account for the Coriolis mixing between the $0_u$ and $1_u$ states, this mixing is small enough that $0_u$ and $1_u$ are good symmetry labels for classifying the states. Including the Coriolis mixing is necessary in order to fit the observed $0_u$ and $1_u$ bound state energies. Only $s$-waves (total ground state molecular angular momentum $J=0$) contribute to the absorption (excited state $J=1$) at our low collision energies. The long-range potentials are given by the $C_6/R^6$ van der Waals and $C_3/R^3$ dipole-dipole interactions, plus a rotational term $\propto 1/R^2$, $$\begin{aligned}
\label{eq:V0u} V_{0u} &= -C_{6,0u}/R^6-2C_3/R^3+h^2 A_{0u}/(8\pi^2\mu R^2),\\
\label{eq:V1u} V_{1u} &= -C_{6,1u}/R^6+C_3/R^3+h^2 A_{1u}/(8\pi^2\mu R^2),\end{aligned}$$ where $h$ is the Planck constant, $\mu$ is the reduced mass, $A_{0u}=J(J+1) +2$, and $A_{1u}=J(J+1)$. The values of $C_3$, $C_{6,0u}$, and $C_{6,1u}$ are adjusted in our theoretical model so that bound states exist at the experimentally determined resonance energies. Since our modeling of the PA energy levels due to long-range interactions is not sensitive to the details of short-range interactions, the latter are represented by simple 6-12 Lennard-Jones potentials whose depths are chosen to approximately match those in Ref. [@ShortRangeSrCPL03]. The coefficient $C_3$ can be expressed in terms of the atomic lifetime $\tau$ as $C_3 = 3\hbar c^3/(4\tau\omega^3)$, where $\hbar\omega$ is the atomic transition energy and $c$ is the speed of light. Since the $C_3$ and $C_6$ interactions have comparable magnitudes in the region of $R$ relevant to the energy levels we observe, the standard semiclassical analysis based on a single interaction $\propto R^{-n}$ [@LeRoy70] is not applicable.
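As a quick numerical sanity check of this last relation (illustrative only, using standard SI constants and the 689 nm transition wavelength), one can verify that the lifetime $\tau\simeq 21.4$ $\mu$s quoted below indeed reproduces a $C_3$ of about $0.0076$ au:

```python
import math

# Illustrative check of C_3 = 3*hbar*c^3/(4*tau*omega^3) in atomic units.
hbar, c = 1.054571817e-34, 2.99792458e8          # SI
E_h, a_0 = 4.3597447e-18, 5.29177211e-11         # Hartree energy, Bohr radius
lam, tau = 689e-9, 21.4e-6                       # transition wavelength, lifetime

omega = 2 * math.pi * c / lam                    # atomic transition frequency
C3_SI = 3 * hbar * c**3 / (4 * tau * omega**3)   # J m^3
print(C3_SI / (E_h * a_0**3))                    # ~7.6e-3 au
```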
![Experimental configuration. The 2 $\mu$K atoms are confined in a 1D optical lattice in the Lamb-Dicke regime that allows Doppler- and recoil-free photoassociation spectroscopy. The lattice laser intensity is large enough to confine most atoms in the ground motional state. The 689 nm weak PA laser co-propagates with the lattice. The 914 nm lattice wavelength ensures that $^1$S$_0$ and the $m_J = \pm 1$ sublevels of $^3$P$_1$ are Stark-shifted by equal amounts.[]{data-label="fig:ExptSetup"}](ExptSetup1.eps){width="3.0in"}
The experiment is performed with $\sim 2$ $\mu$K $^{88}$Sr atoms that are trapped and cooled in a two-stage magneto-optical trap (MOT) [@LoftusPRL04; @LoftusPRA04]. As the atoms are cooled in the 689 nm narrow line $^1$S$_0 - ^3$P$_1$ MOT, they are loaded into a 1D far-detuned optical lattice [@LudlowPRL06], which is a 300 mW standing wave of 914 nm light with a 70 $\mu$m beam waist at the MOT. This results in $10^5$ atoms with the average density of about $3\times 10^{12}$/cm$^3$, after the MOT is switched off. The axial trapping frequency in the lattice is $\nu_z\sim 50$ kHz, much larger than the 5 kHz atom recoil frequency and the 7.5 kHz line width, resulting in Doppler- and recoil-free spectroscopy if the PA laser is collinear with the lattice beam. Figure \[fig:ExptSetup\] illustrates the lattice configuration. The polarizations of the lattice and PA lasers are linear and orthogonal in order to satisfy the $\lambda_{\rm{magic}}$ condition at 914 nm [@IdoFirstLatticePRL03]. The choice of $\lambda_{\rm{magic}}$ ensures that $^1$S$_0$ and the magnetic sublevels of $^3$P$_1$ that are coupled to $^1$S$_0$ by the PA laser are Stark shifted by equal amounts, resulting in the zero net Stark shift for the transition and consequently eliminating inhomogeneous Stark broadening and shifts of the PA lines. A 689 nm diode laser used for PA is offset-locked to a cavity-stabilized master laser with a sub-kHz spectral width [@Ido88SpectrPRL05; @LudlowPRL06].
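As an aside, the separation of scales invoked here is easy to verify with a one-line estimate (illustrative only):

```python
# Recoil frequency nu_rec = h/(2 m lambda^2) for 88Sr at 689 nm, to be
# compared with the axial trap frequency nu_z ~ 50 kHz quoted above.
h, amu = 6.62607015e-34, 1.66053907e-27
m, lam = 88 * amu, 689e-9
print(h / (2 * m * lam**2))   # ~4.8e3 Hz, consistent with the ~5 kHz recoil frequency
```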
![The spectra of the long-range Sr$_2$ molecule near the $^1$S$_0 - ^3$P$_1$ dissociation limit. The horizontal scale is marked on the rightmost panel; different PA laser intensities were used for each spectrum due to largely varying line strengths. The top labels indicate the interatomic separations that correspond to the classical outer turning points of each resonance.[]{data-label="fig:PASpectrum"}](PASpectrum.eps){width="3.0in"}
To trace out the molecular spectra, the PA laser frequency is stepped, and after 320 ms of photoassociation at a fixed frequency the atoms are released from the lattice and illuminated with a 461 nm light pulse (resonant with $^1$S$_0 - ^1$P$_1$) for atom counting. At a PA resonance, the atom number drops as excited molecules form and subsequently decay to ground state molecules in high vibrational states or hot atoms that cannot remain trapped. Figure \[fig:PASpectrum\] shows the nine observed PA line spectra near the dissociation limit.
The line fits for the free-bound transitions are based on a convolution of a Lorentzian profile with a thermal distribution of initial kinetic energies [@JonesLineShape99], as schematically shown in Fig. \[fig:Temp\] (a). In the 1D optical lattice, we work in the quasi-2D scattering regime, since $T\sim 2$ $\mu$K $\sim h\nu_z/k_B$ ($k_B$ is the Boltzmann constant), and the axial motion is quantized. For weak PA laser intensities used here, the total trap loss rate is a superposition of loss rates $K_{\varepsilon}$ for atoms with thermal energies $\varepsilon\equiv h^2k^2/(8\pi^2\mu)$, $$\label{eq:Kepsilon}
K_{\varepsilon}(\nu) = \frac{2h}{\mu}\,\frac{l_{\rm{opt}}\,\gamma^{2}/2}{(\nu-\nu_{0}+\varepsilon/h)^{2}+[(\gamma+\gamma_{1})/2]^{2}},$$ where $\nu$ and $\nu_0$ are the laser and molecular resonance frequencies expressed as detunings from the atomic $^1$S$_0 - ^3$P$_1$ line, $\gamma\simeq 15$ kHz is the line width of the excited molecular state, $\gamma_1\simeq 25$ kHz is a phenomenological parameter that accounts for the observed line broadening, $l_{\rm{opt}}$ is the optical length [@JulienneOptFesh05] discussed below, and only $s$-wave collisions are assumed to take place. While the total 3D PA loss rate is $$\label{eq:KT_3D}
K_{3D}(T, \nu) = \frac{\int_{0}^{\infty}K_{\varepsilon}(\nu)\,e^{-\varepsilon/k_BT}\sqrt{\varepsilon}\,d\varepsilon}{\int_{0}^{\infty}e^{-\varepsilon/k_BT}\sqrt{\varepsilon}\,d\varepsilon},$$ the 2D loss rate is $$\label{eq:KT_2D}
K_{2D}(T, \nu) = \frac{\int_{0}^{\infty}K_{\varepsilon+h\nu_z/2}(\nu)\,e^{-\varepsilon/k_BT}\,d\varepsilon}{\int_{0}^{\infty}e^{-\varepsilon/k_BT}\,d\varepsilon}.$$ Dimensional effects appear in Eq. (\[eq:KT\_2D\]) as a larger density of states at small thermal energies ($i.e.$ no $\sqrt{\varepsilon}$ factor), and as a red shift by the zero-point confinement frequency $\nu_z/2$. Another property of a 2D system (not observed here due to small probe intensities) is a lower bound on power broadening near $T\sim 0$, fixed by the zero-point momentum [@Petrov2D01]. Equation (\[eq:KT\_2D\]) assumes only collisions in the motional ground state of the lattice potential. Although 50% of our atoms are in excited motional states, using a more complete expression yielded essentially the same line shapes. If any local atom density variations associated with PA are neglected and the temperature is assumed to be constant during probing, then the atom density evolution is given by $$\label{eq:DensityDiffEq}
\frac{dn}{dt} = -2Kn^2-\frac{n}{\tau_l},$$ where $\tau_l \sim 1$ s is the lattice lifetime, from which we calculate the number of atoms remaining in the lattice.
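Since Eq. (\[eq:DensityDiffEq\]) is a Bernoulli equation, the remaining density can in fact be written in closed form under these simplifying assumptions; with $n(0)=n_{0}$ and a constant loss rate $K$, $$n(t)=\left[\left(\frac{1}{n_{0}}+2K\tau_{l}\right)e^{t/\tau_{l}}-2K\tau_{l}\right]^{-1},$$ which reduces to the familiar two-body result $1/n(t)=1/n_{0}+2Kt$ as $\tau_{l}\rightarrow\infty$ and to pure exponential lattice decay as $K\rightarrow 0$.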
The measured PA resonances $\nu_0$ were reproduced theoretically by adjusting the $C_3$ and $C_6$ parameters as well as the short-range interatomic potentials. The measured and calculated $\nu_0$ are compared in Table \[table:Table1\].
---------------- ------------------ ---------------- ------------------
$0_u$ Measured $0_u$ Calculated $1_u$ Measured $1_u$ Calculated
(MHz) (MHz) (MHz) (MHz)
-0.435(37) -0.418 -353.236(35) -353.232
-23.932(33) -23.927 -2683.722(32) -2683.729
-222.161(35) -222.152 -8200.163(39) -8112.987
-1084.093(33) -1084.091
-3463.280(33) -3463.296
-8429.650(42) -8420.124
---------------- ------------------ ---------------- ------------------
: The comparison of measured and calculated molecular resonance frequencies, for both $0_u$ and $1_u$ potentials.[]{data-label="table:Table1"}
The positions of the measured lines are found by fitting Eqs. (\[eq:Kepsilon\]-\[eq:DensityDiffEq\]) to the data (see Fig. \[fig:Temp\]). Both 2D and 3D line shape models were applied, without an appreciable difference in the obtained optical lengths. However, the 2D model yields $\nu_0$ corrections of $+15(10)$ kHz that are included in Table \[table:Table1\]. The measurement errors in the Table are a combination of a 30 kHz uncertainty of the atomic reference for the PA laser, and of the systematic and statistical reproducibility. The calculated $\nu_0$ results are based on the coupled channels model [@JulienneNarrowLinePA04] and were obtained using $C_3 = 0.007576$ au ($\tau = 21.35$ $\mu$s), $C_{6,0u} = 3550$ au, and $C_{6,1u} = 3814$ au (1 au is $E_h a_0^3$ for $C_3$ and $E_h a_0^6$ for $C_6$, where $E_h = 4.36\times 10^{-18}$ J and $a_0 = 0.0529$ nm). However, the model limitations, such as insufficient knowledge of the short-range potentials, require the following uncertainties in the long-range parameters: $C_3 = 0.0076(1)$ au ($\tau = 21.4(2)$ $\mu$s), $C_{6,0u} = 3600(200)$ au, and $C_{6,1u} = 3800(200)$ au. The theoretical results for the two PA lines at detunings over 8 GHz disagree with experiment by 0.1-1% due to a high sensitivity to the short-range potentials, while all the other lines fit well within the specified uncertainties of as low as $10^{-5}$.
![(a) A line is schematically shown as a sum of Lorentzians with positions and amplitudes given by a continuous thermal distribution. (b) The measured line shapes of the $-222$ MHz transition at 1.7 $\mu$K (open circles) and 3.5 $\mu$K (filled circles), clearly showing the effect of temperature on the red side of the line. The bottom trace shows the residuals of the 1.7 $\mu$K curve fit.[]{data-label="fig:Temp"}](PATemp.eps){width="3.0in"}
Since the width of the Sr intercombination line is in the kHz range, even ultracold temperatures of a few $\mu$K contribute significant thermal broadening. Figure \[fig:Temp\] (b) shows two spectra of the $-222$ MHz PA line taken at $T=1.7$ $\mu$K and $T=3.5$ $\mu$K ($T$ was measured from the time-of-flight atom cloud expansion). The spectrum of the hotter sample manifests thermal broadening as a tail on the red-detuned side.
A photoassociation line depth is given by the optical length [@JulienneOptFesh05], $l_{\rm{opt}}$, a parameter proportional to the free-bound Franck-Condon factor and the PA laser intensity. It is useful for specifying the laser-induced changes in both elastic and inelastic collision rates that lead to optical control of the atomic scattering length and to atom loss, respectively. The optical lengths were obtained from the data and calculated from the Franck-Condon factors under the assumption of a zero ground state background scattering length [@KillianScatLengths05]. For the $-0.4$ MHz line, the experimental $l_{\rm{opt}}/I\simeq 4.5\times 10^5$ $a_0/$(W/cm$^2$), while the theoretical value is $9.0\times 10^5$ $a_0/$(W/cm$^2$), where $I$ is the PA laser intensity. Significant experimental uncertainties may arise from the calibration of the atomic density and the focused PA laser intensity. In addition, we observe line broadening $\gamma_1$, most likely a result of the residual magnetic field, and any variation in this extra width can lead to $l_{\rm{opt}}$ error. The $-0.4$ MHz resonance is special due to its very large classical turning point, and its effective optical length increases with decreasing $\varepsilon$, while $l_{\rm{opt}}$ is energy-independent for the other lines (due to the Wigner threshold law [@Wigner48]). Therefore, the reported effective $l_{\rm{opt}}$ values for the $-0.4$ MHz line are based on the 2 $\mu$K sample temperature with the 1 $\mu$K zero-point energy. In addition, the fitted $l_{\rm{opt}}$ for this resonance was multiplied by 2.5 to obtain the value quoted above, since our calculations show that 60% of the photoassociated and decayed atoms have insufficient kinetic energies to leave the $\sim 10$ $\mu$K deep trap. The measured and calculated $l_{\rm{opt}}$ for the other PA lines can differ by up to an order of magnitude, but both decrease rapidly as the magnitude of the detuning increases and are of the order of $l_{\rm{opt}}/I=1$ $a_0/$(W/cm$^2$) for the most deeply bound levels. The theoretical optical length of the $-222$ MHz line was found to have the largest dependence on the assumed ground state scattering length due to its sensitivity to the nodal structure of the ground state wave function.
Intercombination transitions of alkaline earths such as Sr are particularly good candidates for optical control of the ground state scattering length, $a_{\rm{opt}}$, because there is a possibility of large gains in $a_{\rm{opt}}$ with small atom losses. In fact, using the $-0.4$ MHz PA line with the measured $l_{\rm{opt}}\sim 5\times 10^5$ $a_0$ cm$^2$/W will allow tuning the ground state scattering length by [@JulienneOptFesh05] $a_{\rm{opt}}=l_{\rm{opt}}(\gamma/\delta)f\simeq \pm 300$ $a_0$, where the PA laser with $I=10$ W/cm$^2$ is far-detuned by $\delta = \pm 160$ MHz from the molecular resonance, and the factor $f=(1+(\gamma/2\delta)^2(1+2kl_{\rm{opt}})^2)^{-1}\simeq 0.8$ accounts for the power broadening for 3 $\mu$K collisions. In contrast, optical tuning of the scattering length in alkali $^{87}$Rb [@GrimmOptFeshRb04] achieved $a_{\rm{opt}} = \pm 90$ $a_0$ at much larger PA laser intensities of 500 W/cm$^2$. In addition, the Sr system at the given parameter values will have a loss rate of [@JulienneOptFesh05] $K=(2h/\mu)a_{\rm{opt}}\gamma/(2\delta)\simeq 2\times10^{-14}$ cm$^3$/s, while the loss rate in the $^{87}$Rb experiment was $2\times10^{-10}$ cm$^3$/s. The overall optical length gain of over 5 orders of magnitude is possible for the Sr system because the narrow intercombination transition allows access to the least-bound molecular state, and the PA line strength increases with decreasing detuning from the atomic resonance (see Fig. 1 in Ref. [@JulienneOptFesh05]). The above $a_{\rm{opt}}$ and $K$ values accessible with the $-0.4$ MHz resonance result in the elastic and inelastic collision rates of $\Gamma_{\rm{el}}\sim\sqrt{(2\varepsilon/\mu)}8\pi a_{\rm{opt}}^2 n\sim 600$/s and $\Gamma_{\rm{inel}}\sim 2Kn\sim 0.1$/s, respectively. The favorable $\Gamma_{\rm{el}}/\Gamma_{\rm{inel}}$ ratio may enable evaporative cooling. A stringent constraint on the use of narrow-line optical Feshbach resonances is the proximity of the atomic transition. In the above example, the scattering rate of the PA laser photons due to the $^1$S$_0 - ^3$P$_1$ atomic line is $\Gamma_s\sim 40$/s $\sim\Gamma_{\rm{el}}/15$. Excessive one-atom photon scattering can cause heating and loss of atoms from the lattice.
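The order-of-magnitude estimates above are easily reproduced; the following short script (an illustrative check only, using the atom density and temperature quoted earlier) recovers the quoted orders of magnitude of $a_{\rm{opt}}$, $K$, $\Gamma_{\rm{el}}$ and $\Gamma_{\rm{inel}}$:

```python
import math

# Illustrative order-of-magnitude check for the -0.4 MHz resonance.
h, kB = 6.62607015e-34, 1.380649e-23
amu, a0 = 1.66053907e-27, 5.29177211e-11
mu = 88 * amu / 2                       # reduced mass of two 88Sr atoms
l_opt = 5e5 * 10 * a0                   # l_opt/I ~ 5e5 a0/(W/cm^2) at I = 10 W/cm^2
gamma, delta, f = 15e3, 160e6, 0.8      # Hz, Hz, correction factor f

a_opt = l_opt * (gamma / delta) * f     # ~375 a0, i.e. the quoted ~ +/-300 a0
a_nom = 300 * a0                        # rounded value used in the text
K = (2 * h / mu) * a_nom * gamma / (2 * delta)   # ~1.4e-20 m^3/s ~ 2e-14 cm^3/s
n = 3e12 * 1e6                                   # atom density in m^-3
eps = kB * 3e-6                                  # 3 uK collision energy
G_el = math.sqrt(2 * eps / mu) * 8 * math.pi * a_nom**2 * n   # ~6e2 /s
G_inel = 2 * K * n                                            # ~0.1 /s
print(a_opt / a0, K * 1e6, G_el, G_inel)
```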
Another application of narrow line PA is potentially efficient production of cold molecules in the ground state. Our bound-bound Franck-Condon factor calculations show that over 50% of the molecules electronically excited to the $-222$ MHz level decay to the last ground state vibrational level (distributed between $J = 0,2$).
In summary, we have measured a series of nine vibrational levels of the $^{88}$Sr$_2$ dimer near the $^1$S$_0 - ^3$P$_1$ intercombination transition. The 20 $\mu$s long lifetime of $^3$P$_1$, combined with the magic wavelength optical lattice technique, allowed direct probing of the least-bound state, and observation of thermal photoassociation line shapes even at ultracold $\mu$K temperatures, with explicit dimensional effects. We have characterized the strengths of the molecular resonances, and shown that the least-bound state should allow a unique combination of extensive, yet low-loss optical control of the $^{88}$Sr ground state scattering length that may enable efficient evaporative cooling.
We thank N. Andersen for stimulating discussions, and acknowledge financial support from NIST, NRC, NSF, ONR, and the National Laboratory FAMO in Toruń.
[10]{} J. L. Bohn and P. S. Julienne, Phys. Rev. A [**60**]{}, 414 (1999).
K. M. Jones [*et al.*]{}, Rev. Mod. Phys., in press.
C. McKenzie [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 120403 (2002).
J. Léonard [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 073203 (2003).
C. Degenhardt [*et al.*]{}, Phys. Rev. A [**67**]{}, 043408 (2003).
Y. Takasu [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 123202 (2004).
S. B. Nagel [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 083004 (2005).
M. Yasuda [*et al.*]{}, Phys. Rev. A [**73**]{}, 011403(R) (2006).
P. G. Mickelson [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 223002 (2005).
R. Ciury[ł]{}o [*et al.*]{}, Phys. Rev. A [**71**]{}, 030701(R) (2005).
T. Ido [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 153001 (2005).
M. Takamoto [*et al.*]{}, Nature [**435**]{}, 321 (2005).
A. D. Ludlow [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 033003 (2006).
T. Ido and H. Katori, Phys. Rev. Lett. [**91**]{}, 053001 (2003).
R. Ciury[ł]{}o [*et al.*]{}, Phys. Rev. A [**70**]{}, 062710 (2004).
E. Czuchaj [*et al.*]{}, Chem. Phys. Lett. [**371**]{}, 401 (2003).
R. J. LeRoy and R. B. Bernstein, J. Chem. Phys. [**52**]{}, 3869 (1970).
T. H. Loftus [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 073003 (2004).
T. H. Loftus [*et al.*]{}, Phys. Rev. A [**70**]{}, 063413 (2004).
K. M. Jones [*et al.*]{}, Phys. Rev. A [**61**]{}, 012501 (1999).
D. S. Petrov and G. V. Shlyapnikov, Phys. Rev. A [**64**]{}, 012706 (2001).
E. P. Wigner, Phys. Rev. [**73**]{}, 1002 (1948).
M. Theis [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 123001 (2004).
---
abstract: 'We apply Neeman’s method of forcing with side conditions to show that PFA does not imply the precipitousness of the nonstationary ideal on $\omega_1$.'
address: 'Equipe de Logique Mathématique, Institut de Mathématiques de Jussieu, Université Paris Diderot, Paris, France'
author:
- 'B. Veličković'
bibliography:
- 'pfa.bib'
title: PFA and precipitousness of the nonstationary ideal
---
Introduction {#introduction .unnumbered}
============
One of the main consequences of Martin’s Maximum (MM) is that the nonstationary ideal on $\omega_1$ (${\rm NS}_{\omega_1}$) is saturated and hence also precipitous. This was already shown by Foreman, Magidor and Shelah in [@FMS], where the principle MM was introduced. It is natural to ask if the weaker Proper Forcing Axiom (PFA) is sufficient to imply the same conclusion. In the 1990s the author adapted the argument of Shelah in [@Sh_P Chapter XVII], where PFA is shown to be consistent with the existence of a function $f:\omega_1 \to \omega_1$ dominating all the canonical functions below $\omega_2$, to show that PFA does not imply the precipitousness of ${\rm NS}_{\omega_1}$. This result was also obtained independently by Shelah and perhaps several other people, but since it was never published it was considered a folklore result in the subject.
Some twenty years later Neeman [@Neeman] introduced a method for iterating proper forcing by using conditions which consist of two components: the *working part*, which is a function of finite support, and the *side condition*, which is a finite $\in$-chain of models of one of two types. The interplay between the working parts and the side conditions allows us to show that the iteration of proper forcing notions is proper. Neeman used this new iteration technique to give another proof of the consistency of PFA as well as several other interesting applications.
In this paper we adapt Neeman’s iteration technique to give another proof of the consistency of PFA together with ${\rm NS}_{\omega_1}$ being non precipitous. Our modification consists of two parts. First, we consider a [*decorated*]{} version of the side condition poset. This version is already present in [@Neeman]. Its principal virtue is that it guarantees that the generic sequence of models added by the side condition part of the forcing is continuous. The second modification is more subtle. To each condition $p$ we attach the [*height function*]{} ${\rm ht}_p$ which is defined on certain pairs of ordinals. In order for a condition $q$ to extend $p$ we require that ${\rm ht}_q$ extends ${\rm ht}_p$. Now, if $G$ is a generic filter, we can define the derived height function ${\rm ht}_G$ from which we can read off the functions $h_\alpha$, for $\alpha <\theta$, where $\theta$ is the length of the iteration. Each of these functions is defined on a club in $\omega_1$ and takes values in $\omega_1$. If ${\mathcal U}$ is a generic ultrafilter over ${\mathcal P}(\omega_1)/{\rm NS}_{\omega_1}$ we can consider $[h_\alpha]_{\mathcal U}$, the equivalence class of $h_\alpha$ modulo $\mathcal U$, as an element of ${\rm Ult}(V,{\mathcal U})$. The point is that the family $\{ [h_\alpha]_{\mathcal U}: \alpha <\theta\}$ cannot be well ordered by $\leq_{\mathcal U}$ and hence ${\rm Ult}(V,{\mathcal U})$ will not be well-founded. Now, our requirement on the height functions introduces some complications in the proof of Neeman’s lemmas required to show the properness of the iteration. The main change concerns the pure side condition part of the forcing. Our side conditions consist of pairs $({\mathcal{M}}_p,d_p)$ where ${\mathcal{M}}_p$ is the $\in$-chain of models and $d_p$ is the decoration. If a model $M$ occurs in ${\mathcal{M}}_p$ we can form the restriction $p|M$, which is simply $({\mathcal{M}}_p\cap M,d_p\restriction M)$. Then $p|M$ is itself a side condition and belongs to $M$. The problem is that we do not know that $p$ is stronger than $p|M$, simply because ${\rm ht}_p$ may not extend ${\rm ht}_{p|M}$. However, for many models $M$ we will be able to find a [*reflection*]{} $q$ of $p$ inside $M$ such that ${\rm ht}_q$ does extend ${\rm ht}_p \restriction M$ and we will then use $q$ instead of $p |M$. This requires reworking some of the lemmas of [@Neeman]. Since our main changes involve the side condition part of the forcing, we present a detailed proof that after forcing with the pure side condition poset the nonstationary ideal on $\omega_1$ is not precipitous. When we add the working parts we need to rework some of Neeman’s iteration lemmas, but the modifications are mostly straightforward, so we only sketch the arguments and refer the reader to [@Neeman]. Finally, let us mention that precipitousness of ideals in forcing extensions was studied by Laver [@Laver], and while we do not directly use results from that paper, some of our ideas were inspired by [@Laver].
The paper is organized as follows. In §1 we recall some preliminaries about canonical functions and precipitous ideals. In §2 we introduce a modification of the pure side condition forcing with models of two types from [@Neeman]. In §3 we prove a factoring lemma for our modified pure side condition poset and use it to show that after forcing with this poset the nonstationary ideal is non precipitous. In §4 we introduce the working parts and show how to complete the proof of the main theorem. In order to read this paper, a fairly good understanding of [@Neeman] is necessary and the reader will be referred to it quite often. Our notation is fairly standard and can be found in [@Sh_P] and [@Jech], to which we refer the reader for background information on precipitous ideals, proper forcing, and all other undefined concepts. Let us just mention that a family $\mathcal F$ of subsets of a set $K$ is called [*stationary*]{} in $K$ if for every function $f:K^{<\omega}\to K$ there is $M\in \mathcal F$ which is closed under $f$, i.e. such that $f[M^{<\omega}]\subseteq M$.
Preliminaries
=============
We start by recalling the relevant notions concerning precipitous ideals from [@JMMP]. Suppose $\mathcal I$ is a $\kappa$-complete ideal on a cardinal $\kappa$ which contains all singletons. Let ${\mathcal I}^+$ be the collection of all $\mathcal I$-positive subsets of $\kappa$, i.e. ${\mathcal I}^+={\mathcal P}(\kappa)\setminus \mathcal I$. We consider ${\mathcal I}^+$ as a forcing notion under inclusion. If $G_{\mathcal I}$ is a $V$-generic over ${\mathcal I}^+$ then $G_{\mathcal I}$ is an ultrafilter on ${\mathcal P}^V(\kappa)$ which extends the dual filter of $\mathcal I$. We can then form the generic ultrapower ${\rm Ult}(V,G_{\mathcal I})$ of $V$ by $G_{\mathcal I}$ in the usual way, i.e. it is simply $(V^\kappa \cap V)/G_{\mathcal I}$. Recall that $\mathcal I$ is called *precipitous* if the maximal condition forces that this ultrapower is well founded. There is a convenient reformulation of this property in terms of games.
\[game\] Let $\mathcal I$ be a $\kappa$-complete ideal on a cardinal $\kappa$ which contains all singletons. The game ${\mathcal G}_{\mathcal I}$ is played between two players ${\rm I}$ and ${\rm II}$ as follows.
$$\begin{array}[c]{ccccccccc}
{\rm I} : & E_0 \phantom{E_1} & E_2 \phantom{E_3}& \ \ \cdots \ \ & E_{2n} \phantom{E_{2n+1}}& \ \ \cdots \ \ \\
\hline
{\rm II} : & \phantom{E_0} E_1 & \phantom{E_2} E_3 & \ \ \cdots \ \ & \phantom{E_{2n}} E_{2n+1} & \ \ \cdots \ \ \\
\end{array}$$
We require that $E_n\in {\mathcal I}^+$ and $E_{n+1}\subseteq E_n$, for all $n$. The first player who violates these rules loses. If both players respect the rules, we say that ${\rm I}$ wins the game if $\bigcap _n E_n=\emptyset$. Otherwise, ${\rm II}$ wins.
\[game char\] The ideal $\mathcal I$ is precipitous if and only if player ${\rm I}$ does not have a winning strategy in ${\mathcal G}_{\mathcal I}$.
We will also need the notion of *canonical functions* relative to the nonstationary ideal, ${\rm NS}_{\omega_1}$. We recall the relevant definitions from [@GaHa]. Given $f,g: \omega_1 \rightarrow {\rm ORD}$ we let $f<_{{\rm NS}_{\omega_1}}g$ if $\{ \alpha : f(\alpha)<g(\alpha)\}$ contains a club. Since ${\rm NS}_{\omega_1}$ is countably complete, the quasi order $<_{{\rm NS}_{\omega_1}}$ is well founded. For a function $f\in {\rm ORD}^{\omega_1}$, let $||f||$ denote the rank of $f$ in this ordering. It is also known as the *Galvin-Hajnal norm* of $f$. By induction on $\alpha$, the $\alpha$-th *canonical function* $f_\alpha$ is defined (if it exists) as the $<_{{\rm NS}_{\omega_1}}$-least ordinal valued function greater than the $f_\xi$, for all $\xi <\alpha$. Clearly, if the $\alpha$-th canonical function exists then it is unique up to the equivalence $=_{{\rm NS}_{\omega_1}}$. One can show in [ZFC]{} that the $\alpha$-th canonical function $f_\alpha$ exists, for all $\alpha <\omega_2$. One way to define $f_\alpha$ is to fix an increasing continuous sequence $(x_\xi)_{\xi < \omega_1}$ of countable sets with $\bigcup_{\xi< \omega_1}x_\xi = \alpha$ and let $f_\alpha(\xi)= {\rm o.t.}(x_\xi)$, for all $\xi$. The point is that if we wish to witness the non well foundedness of the generic ultrapower we have to work with functions that are above the first $\omega_2$ canonical functions. Our forcing is designed to introduce $\theta$-many such functions, where $\theta$ is the length of the iteration. From these functions we define a winning strategy for ${\rm I}$ in $\mathcal G_{{\rm NS}_{\omega_1}}$, which implies that ${\rm NS}_{\omega_1}$ is not precipitous in the final model.
The Side Condition Poset
========================
We start by reviewing Neeman’s side condition poset from [@Neeman]. We fix a transitive model $\mathcal K=(K,\in,\ldots)$ of a sufficient fragment of ZFC, possibly with some additional functions or predicates. Let ${\mathcal{S}}$ denote a collection of countable elementary submodels of $\mathcal K$ and let ${\mathcal{T}}$ be a collection of transitive $W\prec \mathcal K$ such that $W\in K$. We say that the pair $({\mathcal{S}},{\mathcal{T}})$ is [*appropriate*]{} if $M\cap W\in {\mathcal{S}}\cap W$, for every $M\in {\mathcal{S}}$ and $W\in {\mathcal{T}}$. We are primarily interested in the case when $K$ is equal to $V_\theta$, for some inaccessible cardinal $\theta$, ${\mathcal{S}}$ consists of all countable elementary submodels of $V_\theta$, and ${\mathcal{T}}$ consists of all the $V_\alpha$ such that $V_\alpha \prec V_\theta$ and $\alpha$ has uncountable cofinality. We present the more general version since it will be needed in the analysis of the factor posets of the side condition forcing.
Let us fix a transitive model $\mathcal K$ of a sufficient fragment of set theory and an appropriate pair $({\mathcal{S}},{\mathcal{T}})$. The *side condition poset* ${\mathbb{M}}_{{\mathcal{S}}, {\mathcal{T}}}$ consists of finite $\in$-chains ${\mathcal{M}}= \{M_0 , \ldots , M_{n-1} \}$ of elements of ${\mathcal{S}}\cup {\mathcal{T}}$, closed under intersection. So for each $k < n$, $M_k \in M_{k+1}$, and if $M,N \in {\mathcal{M}}$, then also $M \cap N \in {\mathcal{M}}$. We will refer to elements of ${\mathcal{M}}\cap {\mathcal{S}}$ as *small* or *countable* *nodes* of ${\mathcal{M}}$, and to the elements of ${\mathcal{M}}\cap {\mathcal{T}}$ as *transitive* nodes of ${\mathcal{M}}$. We will write $\pi_{{\mathcal{S}}}({\mathcal{M}})$ for ${\mathcal{M}}\cap {\mathcal{S}}$ and $\pi_{{\mathcal{T}}}({\mathcal{M}})$ for ${\mathcal{M}}\cap {\mathcal{T}}$. Notice that ${\mathcal{M}}$ is totally ordered by the ranks of its nodes, so it makes sense to say, for example, that $M$ is above or below $N$, when $M$ and $N$ are nodes of ${\mathcal{M}}$. The order on ${\mathbb{M}}_{{\mathcal{S}}, {\mathcal{T}}}$ is reverse inclusion, i.e. ${\mathcal{M}}\leq {\mathcal{N}}$ iff ${\mathcal{N}}\subseteq {\mathcal{M}}$.
The *decorated* side condition poset ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ consists of pairs $p$ of the form $({\mathcal{M}}_p , d_p)$, where ${\mathcal{M}}_p \in {\mathbb{M}}_{{\mathcal{S}}, {\mathcal{T}}}$ and $d_p:{\mathcal{M}}_p \to K$ is such that $d_p(M)$ is a finite set which belongs to the successor of $M$ in ${\mathcal{M}}_p$, if this successor exists, and if $M$ is the largest node of ${\mathcal{M}}_p$ then $d_p(M)\in K$. Sometimes our $d_p$ will be only a partial function on ${\mathcal{M}}_p$. In this case, we identify it with the total function which assigns the empty set to all nodes on which $d_p$ is not defined. The order on ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ is given by letting $q \leq p$ iff ${\mathcal{M}}_p \subseteq {\mathcal{M}}_q$ and $d_p(M) \subseteq d_q(M)$, for every $M \in {\mathcal{M}}_p$. Suppose $p \in {\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ and $Q \in {\mathcal{M}}_p$. Let $p \vert Q$ be the condition $({\mathcal{M}}_p \cap Q , d_p {\upharpoonright}Q)$. One can check that $p \vert Q$ is indeed a condition in ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$.
We first observe the following simple fact.
\[M\_p\] Suppose $p$ is a condition in ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ and $M$ is a model in ${\mathcal{S}}\cup {\mathcal{T}}$ such that $p\in M$. Then there is a condition $p^M$ extending $p$ such that $M$ is the top model of ${\mathcal{M}}_{p^M}$.
We let ${\mathcal{M}}_{p^M}$ be the closure of ${\mathcal{M}}_p \cup \{ M\}$ under intersection. Note that if $M\in {\mathcal{T}}$ then all the nodes of ${\mathcal{M}}_p$ are subsets of $M$ and hence ${\mathcal{M}}_{p^M}$ is simply ${\mathcal{M}}_p \cup \{ M\}$. On the other hand, if $M$ is countable we need to add nodes of the form $M\cap W$, where $W$ is a transitive node in ${\mathcal{M}}_p$. We define $d_{p^M}$ by letting $d_{p^M}(N)=d_p(N)$, if $N\in {\mathcal{M}}_p$, and $d_{p^M}(N)=\emptyset$, if $N$ is one of the new nodes. It is straightforward to check that $p^M$ is as desired.
The main technical results about the (decorated) side condition poset are Corollaries 2.31 and 2.32 together with Claim 2.38 in [@Neeman]. We combine them here as one lemma.
\[scl\] Let $p$ be a condition in ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$, and let $Q$ be a node in ${\mathcal{M}}_p$. Suppose that $q$ is a condition in ${\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ which belongs to $Q$ and strengthens the condition $p \vert Q$. Then there is $r \in {\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}$ with $r \leq p,q$ such that:
1. ${\mathcal{M}}_r$ is the closure under intersection of ${\mathcal{M}}_p \cup {\mathcal{M}}_q$,
2. ${\mathcal{M}}_r \cap Q = {\mathcal{M}}_q$,
3. The small nodes of ${\mathcal{M}}_r$ outside $Q$ are of the form $M$ or $M \cap W$, where $M$ is a small node of ${\mathcal{M}}_p$ and $W$ is a transitive node of ${\mathcal{M}}_q$.
We now discuss our modification of Neeman’s posets. Let $\theta =K\cap {{\mathrm}{ORD}}$. We will choose ${\mathcal{S}}$ and ${\mathcal{T}}$ to be stationary families of subsets of $K$. The stationarity of ${\mathcal{S}}$ guarantees that $\omega_1$ is preserved in the generic extension and the stationarity of ${\mathcal{T}}$ guarantees that $\theta$ is preserved. All the cardinals in between will be collapsed to $\omega_1$, thus $\theta$ becomes $\omega_2$ in the final model. We plan to simultaneously add $\theta$-many partial functions from $\omega_1$ to $\omega_1$. Each of the partial functions will be defined on a club in $\omega_1$ and will be forced to dominate the first $\theta$ canonical functions in the generic extension. The decorated version of the pure side condition forcing gives us a natural way to represent the canonical function $f_\alpha$, for cofinally many $\alpha <\theta$.
Before we introduce our version of the side condition poset let us make a definition.
\[g\] Let ${\mathcal{M}}$ be a member of ${\mathbb{M}}_{{\mathcal{S}},{\mathcal{T}}}$. We define the partial function $h_{{\mathcal{M}}}$ from $\theta \times \omega_1$ to $\omega_1$ as follows. The domain of $h_{{\mathcal{M}}}$ is the set of pairs $(\alpha,\xi)$ such that there is a countable node $M\in {\mathcal{M}}$ with $M\cap \omega_1=\xi$ and $\alpha \in M$. If $(\alpha,\xi)\in {{\rm dom}}(h_{{\mathcal{M}}})$ we let $$h_{{\mathcal{M}}}(\alpha,\xi) = \max \{ \operatorname{{\rm o.t.}}(M\cap \theta) : M\in \pi_{{\mathcal{S}}}({\mathcal{M}}), \alpha \in M \mbox{ and } M\cap \omega_1 =\xi\}.$$
If $p\in {\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ we let $h_p$ denote $h_{{{\mathcal{M}}}_p}$. We are now ready to define our modified side condition poset ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$.
\[refined side cond defn\] The poset ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ consists of all conditions $p \in {\mathbb{M}}^{\rm dec}_{{\mathcal{S}},{\mathcal{T}}}$ such that
1. ($\ast$) for every $M,N\in \pi_{{\mathcal{S}}}({\mathcal{M}}_p)$, if $N \cap \omega_1 \in M$, then $\operatorname{{\rm o.t.}}{(N \cap \theta)} \in M$.
The ordering is defined by letting $q \leq p$ iff $({\mathcal{M}}_q , d_q) \leq_{{\mathbb{M}}^{\rm dec}_{{\mathcal{S}}, {\mathcal{T}}}} ({\mathcal{M}}_p , d_p)$ and $h_p\subseteq h_q$.
Let us first observe that we have an analog of Lemma \[M\_p\].
\[M\_p\*\] Suppose $p$ belongs to ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ and $M$ is a model in ${\mathcal{S}}\cup {\mathcal{T}}$ such that $p\in M$. Then there is a condition $p^M$ extending $p$ such that $M$ is the top node of ${\mathcal{M}}_{p^M}$.
We now establish some elementary properties of conditions in ${\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$.
\[restriction\] Suppose $p$ is a condition in ${\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ and $M\in {\mathcal{M}}_p$. Then $h_p\restriction M \in M$.
If $M$ is a transitive node then this is immediate. Suppose $M$ is countable. We will use the following.
\[small-restriction\] Suppose $N$ is a countable node in ${\mathcal{M}}_p$ and $N\cap \omega_1\in M$. Then $N\cap M \in M$.
Since ${\mathcal{M}}_p$ is closed under intersection we have that $N\cap M\in {\mathcal{M}}_p$. Moreover, since $N\cap \omega_1 \in M$ we have that $N\cap M$ is below $M$. If there is no transitive node between $N\cap M$ and $M$ then $N\cap M\in M$. Otherwise, let $W$ be the least transitive node above $N\cap M$. By closure under intersection again, $M\cap W \in {\mathcal{M}}_p$. Moreover, $N\cap M \subseteq W\cap M$ and the inclusion is proper. Therefore, $M\cap W$ is a countable node above $N\cap M$ and there is no transitive node between them. Therefore, $N\cap M \in M\cap W$ and so $N\cap M \in M$.
Let us say that a node $N$ of ${\mathcal{M}}_p$ is an *end node* of ${\mathcal{M}}_p$ if there is no node in ${\mathcal{M}}_p$ which is an end extension of $N$. The domain of the function $h_p$ is the union of all the sets of the form $(N \cap \theta) \times \{ \xi\}$, where $N$ is a countable end node of ${\mathcal{M}}_p$ and $\xi =N\cap \omega_1$. Moreover, on $(N \cap \theta)\times \{ \xi\}$ the function $h_p$ is constant and equal to $\operatorname{{\rm o.t.}}(N\cap \theta)$. Now, if $\xi \in M$ then by Claim \[small-restriction\] $N\cap M \in M$. Moreover, since $p\in {\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ we have that $\operatorname{{\rm o.t.}}(N \cap \theta)\in M$. It follows that $h_p \restriction M\in M$.
We wish to have an analog of Lemma \[scl\]. If $p\in {\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ and $Q\in {\mathcal{M}}_p$ we can let $p\vert Q$ be $({\mathcal{M}}_p \cap Q , d_p {\upharpoonright}Q)$. It is easy to check that $p \vert Q$ is a condition. However, we do not know that $p$ extends $p\vert Q$ since $h_p$ may not be an extension of $h_{p\vert Q}$. We must refine the notion of restriction in order to arrange this. In order to do this, let us enrich our initial structure $\mathcal K$ by adding predicates for ${\mathcal{S}}$ and ${\mathcal{T}}$. Let $\mathcal K^*$ denote the structure $(K,\in,{\mathcal{S}},{\mathcal{T}},\ldots)$. Note that our poset ${\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ is definable in $\mathcal K^*$. Let ${\mathcal{S}}^*$ be the collection of all $M\in {\mathcal{S}}$ that are elementary in $\mathcal K^*$ and let ${\mathcal{T}}^*$ be the set of all $W\in {\mathcal{T}}$ that are elementary in $\mathcal K^*$. Note that ${\mathcal{S}}^*$ (respectively ${\mathcal{T}}^*$) is a relative club in ${\mathcal{S}}$ (respectively ${\mathcal{T}}$), hence if ${\mathcal{S}}$ (respectively ${\mathcal{T}}$) is stationary then so is ${\mathcal{S}}^*$ (respectively ${\mathcal{T}}^*$).
Assume $p$ is a condition and $Q$ a node in $p$ which belongs to ${\mathcal{S}}^*\cup {\mathcal{T}}^*$. Now, we know that $p\vert Q$ and $h_p\restriction Q$ belong to $Q$. Moreover, $Q$ is elementary in $\mathcal K^*$ and ${\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ is definable in this structure. Therefore, there is a condition $q \in Q$ such that ${\mathcal{M}}_p \cap Q \subseteq {\mathcal{M}}_q$, $d_p(R)\subseteq d_q(R)$, for all $R\in {\mathcal{M}}_q\cap Q$, and $h_q$ extends $h_p\restriction Q$. We will call such $q$ a *reflection* of $p$ inside $Q$. Note that if $q$ is a reflection of $p$ inside $Q$ then any condition $r \in Q$ which is stronger than $q$ is also a reflection of $p$ inside $Q$. Let us say that $p$ [*reflects*]{} to $Q$ if $p| Q$ is already a reflection of $p$ to $Q$. Finally, let us say that $p$ is [*reflecting*]{} if $p$ reflects to $W$, for all transitive nodes $W$ in ${\mathcal{M}}_p$.
We now have a version of Lemma \[scl\] for our poset.
\[scl2\] Let $p$ be a condition in ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$, and let $Q$ be a node in ${\mathcal{M}}_p$ which belongs to ${\mathcal{S}}^* \cup {\mathcal{T}}^*$. Suppose that $q \in {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ is a reflection of $p$ inside $Q$. Then there is $r \in {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ with $r \leq p,q$ such that:
1. ${\mathcal{M}}_r$ is the closure under intersection of ${\mathcal{M}}_p \cup {\mathcal{M}}_q$,
2. ${\mathcal{M}}_r \cap Q = {\mathcal{M}}_q$,
3. The small nodes of ${\mathcal{M}}_r$ outside $Q$ are of the form $M$ or $M \cap W$, where $M$ is a small node of ${\mathcal{M}}_p$ and $W$ is a transitive node of ${\mathcal{M}}_q$.
Let $r$ be the condition given by Lemma \[scl\]. We need to check that ${\mathcal{M}}_r$ satisfies ($\ast$) and $h_r$ extends $h_p$ and $h_q$. Suppose $N,M$ are countable nodes in ${\mathcal{M}}_r$ and $N\cap \omega_1 \in M$. We need to check that $\operatorname{{\rm o.t.}}(N)\in M$. If $N$ and $M$ are both in ${\mathcal{M}}_p$ or ${\mathcal{M}}_q$ this follows from the fact that ${\mathcal{M}}_p$ and ${\mathcal{M}}_q$ satisfy ($\ast$). Now, suppose $N\in {\mathcal{M}}_q$ and $M\in {\mathcal{M}}_r \setminus {\mathcal{M}}_q$. Then $M$ is of the form $M' \cap W$, for some transitive node $W$ of ${\mathcal{M}}_q$. Since $N\in Q$ it follows that $\operatorname{{\rm o.t.}}(N) \in Q$, so if $M\cap \omega_1 \geq Q\cap \omega_1$, then we have $\operatorname{{\rm o.t.}}(N)\in M$. If $M\cap \omega_1 < Q\cap \omega_1$, then by Fact \[small-restriction\], $M'\cap Q \in Q$ and hence $M'\cap Q \in {\mathcal{M}}_q$, therefore our conclusion follows from the fact that ${\mathcal{M}}_q$ satisfies ($\ast$) and $M\cap \omega_1 =M'\cap \omega_1$. Now, suppose $M\in {\mathcal{M}}_q$ and $N\in {\mathcal{M}}_r \setminus {\mathcal{M}}_q$. Then $N$ is of the form $N'\cap W$, for some countable node $N'\in {\mathcal{M}}_p$ and a transitive node $W\in {\mathcal{M}}_q$. Since $N\cap \omega_1 \in M$ and $M\in Q$ it follows that $N'\cap \omega_1 \in Q$. By our assumption $q$ is a reflection of $p$ inside $Q$, so there is a node $N''\in {\mathcal{M}}_q$ such that $N''\cap \omega_1 = N'\cap \omega_1$ and $\operatorname{{\rm o.t.}}(N'')=\operatorname{{\rm o.t.}}(N')$. Now, $N''$ and $M$ both belong to ${\mathcal{M}}_q$ which satisfies ($\ast$), so $\operatorname{{\rm o.t.}}(N'')\in M$. Therefore, in all cases $\operatorname{{\rm o.t.}}(N)\in M$.
Now, we check that $h_r$ extends $h_p$ and $h_q$. Since every node in ${\mathcal{M}}_r$ is either in ${\mathcal{M}}_q$ or is of the form $M\cap W$, for some countable node $M$ of ${\mathcal{M}}_r$ and transitive $W\in {\mathcal{M}}_q$, and $q$ is a reflection of $p$ inside $Q$ it follows that the set of the end nodes of ${\mathcal{M}}_r$ is precisely the union of the end nodes of ${\mathcal{M}}_p$ and the end nodes of ${\mathcal{M}}_q$. This implies that $h_r$ extends $h_p$ and $h_q$. This completes the proof of the lemma.
We have a couple of immediate corollaries.
\[fully reflect\] For every condition $p\in {\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$ there is a reflecting condition $q\leq p$ which has the same top model as $p$.
\[sp\] ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ is ${\mathcal{S}}^* \cup {\mathcal{T}}^*$-strongly proper. In particular, if ${\mathcal{S}}$ is stationary then ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ preserves $\omega_1$, and if ${\mathcal{T}}$ is stationary then ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ preserves $\theta$.
From now on, we assume that ${\mathcal{S}}$ and ${\mathcal{T}}$ are stationary families of subsets of $K$. Suppose that $G$ is a $V$-generic filter over ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$. Let ${\mathcal{M}}_G$ denote $\bigcup \{ {\mathcal{M}}_p : p \in G\}$. Then ${\mathcal{M}}_G$ is an $\in$-chain of models in ${\mathcal{S}}\cup {\mathcal{T}}$. Hence, ${\mathcal{M}}_G$ is totally ordered by $\in^*$, where $\in^*$ denotes the transitive closure of $\in$. If $M,N$ are members of ${\mathcal{M}}_G$ with $M\in^* N$ let $(M,N)_G$ denote the interval consisting of all $P \in {\mathcal{M}}_G$ such that $M\in^*P \in^*N$. The following lemma is the main reason we are working with the decorated version of the side condition poset.
\[continuity\] Suppose $W$ and $W'$ are two consecutive elements of ${\mathcal{M}}_G \cap {\mathcal{T}}$. Then the $\in$-chain $(W,W')_G$ is continuous.
Note that $(W,W')_G$ consists entirely of countable models and therefore the membership relation is transitive on $(W,W')_G$. Suppose $M$ is a limit member of $(W,W')_G$. We need to show that $M$ is the union of the $\in$-chain $(W,M)_G$. Let $p\in G$ be a condition such that $M\in {\mathcal{M}}_p$ and $p$ forces that $M$ is a limit member of $(W,W')_G$. Given any $x\in M$ and a condition $q\leq p$ we show that there is $r\leq q$ and a countable node $R\in {\mathcal{M}}_r\cap M$ such that $x\in R$. We may assume that there is a countable model in ${\mathcal{M}}_q$ between $W$ and $M$. Let $Q$ be the $\in^*$-largest such model. By increasing $d_q(Q)$ if necessary, we may assume that $x\in d_q(Q)$. Since $p$ forces that $M$ is a limit member of ${\mathcal{M}}_G$, so does $q$. Therefore, there exists $r \leq q$ such that ${\mathcal{M}}_r$ contains a countable node between $Q$ and $M$. Let $R$ be the $\in^*$-least such node. By the definition of the order relation on ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ we must have that $d_q(Q)\in R$ and hence $x\in R$, as desired.
Note that Lemma \[continuity\] implies in particular that if $W'$ is a successor element of ${\mathcal{M}}_G \cap {\mathcal{T}}$ then $W'$ has cardinality $\omega_1$ in $V[G]$. Therefore, if $\beta =W'\cap {{\mathrm}{ORD}}$, one way to represent the $\beta$-th canonical function $f_\beta$ in $V[G]$ is the following. Let $W$ be the predecessor of $W'$ in ${\mathcal{M}}_G\cap {\mathcal{T}}$. Since ${\mathcal{S}}$ is stationary in $K$, so is ${\mathcal{S}}\cap W'$ in $W'$. Therefore, $(W,W')_G$ will be an $\in$-chain of length $\omega_1$. Let $\{M_\xi : \xi <\omega_1\}$ be the increasing enumeration of this chain. Then we can let $f_\beta(\xi)=\operatorname{{\rm o.t.}}(M_\xi \cap \beta)$, for all $\xi$. Note that $M_\xi \cap \omega_1 =\xi$, for club many $\xi$.
Now, let $h_G$ denote $\bigcup \{ h_p: p\in G\}$. Then $h_G$ is a partial function from $\theta \times \omega_1$ to $\omega_1$. Let $h_{G,{\alpha}}$ be a partial function from $\omega_1$ to $\omega_1$ defined by letting $h_{G,{\alpha}}(\xi)=h_G(\alpha,\xi)$, for every $\xi$ such that $(\alpha,\xi)\in {{\rm dom}}(h_G)$. By Lemma \[continuity\] and the above remarks we have the following.
\[dominating\] For every $\alpha <\theta$ the function $h_{G,\alpha}$ is defined on a club in $\omega_1$. Moreover, $h_{G,\alpha}$ dominates under $<_{{\rm NS}_{\omega_1}}$ all the canonical functions $f_\beta$, for $\beta <\theta$.
Factoring the Side Condition Poset
==================================
We now let $\mathcal K =(V_\theta,\in,\ldots)$, for some inaccessible cardinal $\theta$. Let $T$ be the set of all $\alpha <\theta$ of uncountable cofinality such that $V_\alpha \prec \mathcal K$ and let ${\mathcal{T}}=\{ V_\delta : \delta \in T\}$. Finally, let ${\mathcal{S}}$ be the set of all countable elementary submodels of $\mathcal K$. Clearly, the pair $({\mathcal{S}},{\mathcal{T}})$ is appropriate. Let ${\mathcal{S}}^*$ and ${\mathcal{T}}^*$ be defined as before and let $T^*=\{ \alpha : V_\alpha \in {\mathcal{T}}^*\}$. We start by analyzing the factor posets of ${\mathbb{M}}^*_{{\mathcal{S}},{\mathcal{T}}}$. Suppose $\delta \in T^*$ and let $p_\delta=(\{ V_\delta\} ,\emptyset)$. Then, by Lemma \[scl2\], the map $i_\delta: {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta \to {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\restriction p_\delta$ given by $i_\delta (p) = ({\mathcal{M}}_p\cup \{ V_\delta \},d_p)$ is a complete embedding. Fix a $V$-generic filter $G_\delta$ over ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta$. Let ${\mathcal{M}}_{G_\delta}$ denote $\bigcup \{ {\mathcal{M}}_p: p\in G_\delta\}$ and let $h_{G_\delta}$ be the derived height function, i.e. $h_{G_\delta}=\bigcup \{ h_p: p\in G_\delta\}$. Let ${\mathbb{Q}}_\delta$ denote the factor forcing ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\! \! \restriction \! p_\delta/i_\delta[G_\delta]$. We can identify ${\mathbb{Q}}_\delta$ with the set of all conditions $p \in {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}$ such that $V_\delta\in {\mathcal{M}}_p$, $p$ reflects to $V_\delta$ and $p | V_\delta \in G_\delta$.
We make the following definition in $V[G_\delta]$.
\[S-delta\] Let ${\mathcal{S}}_\delta$ be the collection of all $M\in {\mathcal{S}}$ such that $M\nsubseteq V_\delta$, $M\cap V_\delta \in {\mathcal{M}}_{G_\delta}$ and ${\rm o.t.}(M\cap \theta) \leq h_{G_\delta}(\alpha,M\cap \omega_1)$, for all $\alpha \in M\cap \delta$.
We also let $T_\delta=T\setminus (\delta+1)$ and ${\mathcal{T}}_\delta=\{ V_\gamma: \gamma \in T_\delta\}$. We define ${\mathcal{S}}_\delta^*$ and ${\mathcal{T}}_\delta^*$ as before. Clearly, the pair $({\mathcal{S}}_\delta,{\mathcal{T}}_\delta)$ is appropriate. We show that ${\mathbb{Q}}_\delta$ is very close to ${\mathbb{M}}^*_{{\mathcal{S}}_\delta,{\mathcal{T}}_\delta}$. More precisely, let ${\mathbb{M}}^*_\delta$ consist of all pairs $p$ of the form $({\mathcal{M}}_p,d_p)$ such that ${\mathcal{M}}_p \in {\mathbb{M}}_{{\mathcal{S}}_\delta,{\mathcal{T}}_\delta}$, $d_p: {\mathcal{M}}_p \cup \{ V_\delta\}\to V_\theta$, $({\mathcal{M}}_p,d_p\restriction {\mathcal{M}}_p)\in {\mathbb{M}}^*_{{\mathcal{S}}_\delta,{\mathcal{T}}_\delta}$, and $V_\delta$ and $d_p(V_\delta)$ belong to the least model of ${\mathcal{M}}_p$. So, formally we do not put $V_\delta$ as the least node of conditions $p$ in ${\mathbb{M}}^*_\delta$, but we require the function $d_p$ to be defined on ${\mathcal{M}}_p \cup \{ V_\delta\}$. This puts a restriction on the nodes we are allowed to add below the least node of ${\mathcal{M}}_p$.
${\mathbb{Q}}_\delta$ and ${\mathbb{M}}_\delta^*$ are equivalent forcing notions.
Given a condition $p\in {\mathbb{Q}}_\delta$, let $\varphi(p)=({\mathcal{M}}_p \setminus V_{\delta+1},d_p \restriction ({\mathcal{M}}_p\setminus V_{\delta}))$. Clearly, the function $\varphi$ is order preserving. To see that $\varphi$ is onto, let $s\in {\mathbb{M}}_\delta^*$. Then $M\cap V_\delta \in {\mathcal{M}}_{G_\delta}$, for all small nodes $M\in {\mathcal{M}}_s$. Fix a condition $p\in G_\delta$ such that $M\cap V_\delta \in {\mathcal{M}}_p$, for every such $M$. Define a condition $q$ by letting ${\mathcal{M}}_q= {\mathcal{M}}_p \cup \{ V_\delta\} \cup {\mathcal{M}}_s$ and $d_q=d_p \cup d_s$. Since every small node of ${\mathcal{M}}_s$ is in ${\mathcal{S}}_\delta$, it follows that $h_q\restriction (\delta \times \omega_1) = h_p$. Therefore, $q \in {\mathbb{Q}}_\delta$ and $\varphi(q)=s$. Finally, note that if $p,q\in {\mathbb{Q}}_\delta$ then $p$ and $q$ are compatible in ${\mathbb{Q}}_\delta$ iff $\varphi(p)$ and $\varphi(q)$ are compatible in ${\mathbb{M}}_\delta^*$. This implies that ${\mathbb{Q}}_\delta$ and ${\mathbb{M}}_\delta^*$ are equivalent forcing notions.
\[delta-strong-properness\] ${\mathbb{Q}}_\delta$ is ${\mathcal{S}}_\delta^*\cup {\mathcal{T}}_\delta^*$-strongly proper.
\[stationary\] ${\mathcal{S}}_\delta$ is a stationary family of countable subsets of $V_\theta$.
We argue in $V$ via a density argument. Let $\dot{f}$ be a ${\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta$-name for a function from $V_\theta^{<\omega}$ to $V_\theta$ and let $p\in {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta$. We find a condition $q \leq p$ and $M\in {\mathcal{S}}$ such that $q$ forces that $M$ belongs to $\dot{{\mathcal{S}}}_\delta$ and is closed under $\dot{f}$. For this purpose, fix a cardinal $\theta^*>\theta$ such that $V_{\theta^*}$ satisfies a sufficient fragment of ZFC. Let $M^*$ be a countable elementary submodel of $V_{\theta^*}$ containing all the relevant parameters. It follows that $M\in {\mathcal{S}}^*$, where $M=M^* \cap V_\theta$. Let $p^M$ be the condition given by Lemma \[M\_p\*\]. Since $\delta \in T^*$ we can find a reflection $q$ of $p^M$ inside $V_\delta$. We claim that $q$ and $M$ are as required. To see this, note that, since $M\cap V_\delta \in {\mathcal{M}}_q \cap {\mathcal{S}}^*$, by Lemma \[scl2\], $q$ is $(M\cap V_\delta,{\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta)$-strongly generic and hence also $(M^*, {\mathbb{M}}^*_{{\mathcal{S}}, {\mathcal{T}}}\cap V_\delta)$-generic. It follows that $q$ forces that $M^*[\dot{G}_\delta]\cap V_\theta=M$, and hence that $M$ is closed under $\dot{f}$. On the other hand, $h_{p^M}(\alpha,M\cap \omega_1)={\rm o.t.}(M\cap \theta)$, for all $\alpha \in M\cap \theta$. Since $q$ is a reflection of $p^M$, we have $h_q(\alpha,M\cap \omega_1)=h_{p^M}(\alpha,M\cap \omega_1)$, for all $\alpha \in M\cap \delta$. Therefore, $q$ forces $M$ to belong to $\dot{{\mathcal{S}}}_\delta$. This completes the argument.
We need to understand which stationary subsets of $\omega_1$ in $V[G_\delta]$ will remain stationary in the final model. So, suppose $E$ is a subset of $\omega_1$ in $V[G_\delta]$. Let $${\mathcal{S}}_\delta(E) = \{ M \in {\mathcal{S}}_\delta : M \cap \omega_1 \in E \}.$$
For $\rho \in T_\delta$ let ${\mathcal{S}}_\delta^\rho(E)= {\mathcal{S}}_\delta(E)\cap V_\rho$. Note that if $M\in {\mathcal{S}}_\delta(E)$ and $\rho \in T_\delta$ then $M\cap V_\rho \in {\mathcal{S}}_\delta^\rho(E)$. Therefore, if $\rho <\sigma$ and ${\mathcal{S}}_\delta^{\sigma}(E)$ is stationary in $V_{\sigma}$ then ${\mathcal{S}}_\delta^\rho(E)$ is stationary in $V_{\rho}$. Since $\theta$ is inaccessible, it follows that ${\mathcal{S}}_\delta(E)$ is stationary in $V_\theta$ iff ${\mathcal{S}}_\delta^{\rho}(E)$ is stationary in $V_\rho$, for all $\rho \in T_\delta$.
\[characterization\] The maximal condition in ${\mathbb{Q}}_\delta$ decides if $E$ remains stationary in $\omega_1$. Namely, if ${\mathcal{S}}_\delta(E)$ is stationary in $V_\theta$ then ${\Vdash}_{{\mathbb{Q}}_\delta} \check{E} \textrm{ is stationary}$, and if ${\mathcal{S}}_\delta(E)$ is nonstationary then ${\Vdash}_{{\mathbb{Q}}_\delta} \check{E} \textrm{ is nonstationary}$.
The first implication follows from Corollary \[delta-strong-properness\] and the fact that ${\mathcal{S}}_\delta^*$ is a relative club in ${\mathcal{S}}_\delta$. For the second implication, suppose ${\mathcal{S}}_\delta(E)$ is nonstationary and fix a successor element of $T_\delta$, say $\sigma$, such that ${\mathcal{S}}_\delta^{\sigma}(E)$ is nonstationary in $V_\sigma$. Let $\rho$ be the predecessor of $\sigma$ in $T_\delta$ and fix a condition $p\in {\mathbb{Q}}_\delta$ such that $V_\rho,V_\sigma \in {\mathcal{M}}_p$. Pick an arbitrary $V[G_\delta]$-generic filter $G$ over ${\mathbb{Q}}_\delta$ containing $p$. Then we can identify $G$ with a $V$-generic filter $\bar{G}$ over $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$ which extends $G_\delta$ and such that $V_\delta \in {\mathcal{M}}_{\bar{G}}$. Since $p\in \bar{G}$, we have that $V_\rho$ and $V_\sigma$ are consecutive elements of ${\mathcal{M}}_{\bar{G}}\cap {\mathcal{T}}$. By Lemma \[continuity\] we know that, in $V[\bar{G}]$, ${\mathcal{S}}_\delta \cap V_\sigma$ contains a club of countable subsets of $V_\sigma$. On the other hand, by our assumption, ${\mathcal{S}}_\delta^\sigma(E)$ is nonstationary. It follows that $E$ is a nonstationary subset of $\omega_1$ in $V[\bar{G}]$. Since $G$ was an arbitrary generic filter containing $p$, it follows that $p {\Vdash}_{{\mathbb{Q}}_\delta} \check{E} \mbox{ is nonstationary in } \omega_1$.
One can show that if $\delta$ is inaccessible in $V$ then ${\mathbb{Q}}_\delta$ actually preserves stationary subsets of $\omega_1$. To see this note that, under this assumption, for every subset $E$ of $\omega_1$ in $V[G_\delta]$ there is $\delta^*< \delta$ with $V_{\delta^*}\in {\mathcal{M}}_{G_\delta}$ such that $E\in V[G_{\delta^*}]$, where $G_{\delta^*}= G_\delta \cap V_{\delta^*}$. If, in the model $V[G_{\delta^*}]$, ${\mathcal{S}}_{\delta^*}(E)$ is nonstationary there is $\rho <\theta$ such that ${\mathcal{S}}_{\delta^*}^\rho(E)$ is nonstationary. By elementarity of $V_\delta$ in $V_\theta$ there is such $\rho <\delta$. But then, as in the proof of Lemma \[characterization\], we would have that $E$ is nonstationary already in the model $V[G_\delta]$.
Suppose $E\in V[G_\delta]$ is a subset $\omega_1$ and $\gamma <\delta$. Let $${\mathcal{S}}_\delta(E,\gamma) =
\{ M \in {\mathcal{S}}_\delta(E): \gamma,\delta \in M \mbox{ and } {\rm o.t.}(M\cap \theta) < h_{G_\delta}(\gamma,M\cap \omega_1)\}.$$
Recall that if $M\in {\mathcal{S}}_\delta$ then $M\cap V_\delta \in G_\delta$. Hence, if $\gamma \in M$ then $(\gamma,M\cap \omega_1) \in {{\rm dom}}(h_{G_\delta})$.
\[gamma\] Suppose that, in $V[G_\delta]$, $E$ is a subset of $\omega_1$ such that ${\mathcal{S}}_\delta(E)$ is stationary. Then ${\mathcal{S}}_\delta(E,\gamma)$ is stationary, for all $\gamma <\delta$.
Work in $V[G_\delta]$ and let $\gamma <\delta$ and $f: V_\theta^{<\omega} \rightarrow V_\theta$ be given. We need to find a member of ${\mathcal{S}}_\delta(E,\gamma)$ which is closed under $f$. Since $\theta$ is inaccessible, we can first find $\sigma \in T_\delta$ such that $V_\sigma$ is closed under $f$. We know that ${\mathcal{S}}_\delta(E)$ is stationary, hence we can find $M\in {\mathcal{S}}_\delta(E)$ which is closed under $f$ and such that $\gamma,\delta,\sigma \in M$. It follows that $M\cap V_\sigma$ is also closed under $f$. Since $\sigma \in M$ we have that ${\rm o.t.}(M\cap \sigma) < {\rm o.t.}(M\cap \theta)$. Since $M\in {\mathcal{S}}_\delta(E)$ and $\gamma \in M$ we have that ${\rm o.t.}(M\cap \theta)\leq h_{G_\delta}(\gamma,M\cap \omega_1)$. Finally, $(M\cap V_\sigma)\cap \omega_1 = M\cap \omega_1$. It follows that $M\cap V_\sigma \in {\mathcal{S}}_\delta(E,\gamma)$, as desired.
We now consider what happens in the final model $V[G]$, where $G$ is $V$-generic over $\mathbb M_{{\mathcal{S}},{\mathcal{T}}}^*$. For an ordinal $\gamma <\theta$ let $D_\gamma$ denote the domain of $h_{G,\gamma}$. Recall that, by Corollary \[dominating\], $D_\gamma$ contains a club, for all $\gamma$. Given a subset $E$ of $\omega_1$ and $\gamma,\delta <\theta$ let $$\varphi (E,\gamma,\delta) =\{ \xi \in E\cap D_\gamma \cap D_\delta : h_{G,\delta}(\xi) < h_{G,\gamma}(\xi)\}.$$
\[shrinking\] Let $G$ be $V$-generic over $\mathbb M_{{\mathcal{S}},{\mathcal{T}}}^*$. Suppose, in $V[G]$, that $E$ is a stationary subset of $\omega_1$ and $\gamma <\theta$. Then there is $\delta <\theta$ such that $\varphi(E,\gamma,\delta)$ is stationary.
Since $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$ is ${\mathcal{T}}^*$-proper, we can find $\delta \in T^*\setminus (\gamma +1)$ such that $V_\delta \in {\mathcal{M}}_G$ and $E\in V[G_\delta]$, where $G_\delta = G\cap V_\delta$. Since $E$ remains stationary in $V[G]$, it follows that, in $V[G_\delta]$, ${\mathcal{S}}_\delta(E)$ is stationary. Work for a while in $V[G_\delta]$. We claim that the maximal condition in ${\mathbb{Q}}_\delta$ forces that $\dot{\varphi}(E,\gamma,\delta)$ is stationary, where $\dot{\varphi}(E,\gamma,\delta)$ is the canonical name for $\varphi(E,\gamma,\delta)$. To see this fix a ${\mathbb{Q}}_\delta$-name $\dot{C}$ for a club in $\omega_1$ and a condition $p\in {\mathbb{Q}}_\delta$. Let $\theta^*>\theta$ be such that $(V_{\theta^*},\in)$ satisfies a sufficient fragment of ${{\mathrm}{ZFC}}$. We know, by Lemma \[gamma\], that ${\mathcal{S}}_\delta(E,\gamma)$ is stationary, so we can find a countable elementary submodel $M^*$ of $V_{\theta^*}$ containing all the relevant objects such that $M\in {\mathcal{S}}_\delta(E,\gamma)$, where $M=M^* \cap V_\theta$. Let $q$ be the condition $p^M$ as in Lemma \[M\_p\*\] (or rather its version for ${\mathbb{Q}}_\delta$). Since $\dot{C}\in M^*$ and $q$ is $(M^*,{\mathbb{Q}}_\delta)$-generic, it follows that $q$ forces that $M\cap \omega_1$ belongs to $\dot{C}$. Also, note that the top model of ${\mathcal{M}}_q$ is $M$. Hence $h_q(\delta,M\cap\omega_1) = {\rm o.t.}(M\cap \theta)$. Since $M\in {\mathcal{S}}_\delta(E,\gamma)$ we have that ${\rm o.t.}(M\cap \theta) <h_{G_\delta}(\gamma,M\cap \omega_1)$. It follows that $q$ forces that $M\cap \omega_1$ belongs to the intersection of $\dot{\varphi}(E,\gamma,\delta)$ and $\dot{C}$, as required.
We now have the following conclusion.
\[pure-precipitous\] Let $G$ be $V$-generic over $\mathbb M^*_{{\mathcal{S}}, {\mathcal{T}}}$. Then, in $V[G]$, $\theta =\omega_2$ and ${\rm NS}_{\omega_1}$ is not precipitous.
We already know that $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$ is ${\mathcal{S}}^* \cup {\mathcal{T}}^*$-strongly proper. This implies that $\omega_1$ and $\theta$ are preserved. Moreover, by Lemma \[continuity\], we know that all cardinals between $\omega_1$ and $\theta$ are collapsed to $\aleph_1$. Therefore, $\theta$ becomes $\omega_2$ in $V[G]$. In order to show that ${\rm NS}_{\omega_1}$ is not precipitous we describe a winning strategy $\tau$ for Player I in $\mathcal G_{{\rm NS}_{\omega_1}}$. On the side, Player I will pick a sequence $(\gamma_n)_n$ of ordinals $<\theta$. So, Player I starts by playing $E_0=\omega_1$ and letting $\gamma_0=0$. Suppose, in the $n$-th inning, Player II has played a stationary set $E_{2n+1}$. Player I applies Lemma \[shrinking\] to find $\delta <\theta$ such that $\varphi(E_{2n+1},\gamma_n,\delta)$ is stationary. He then lets $\gamma_{n+1} =\delta$ and plays $E_{2n+2}= \varphi(E_{2n+1},\gamma_n,\gamma_{n+1})$. Suppose the game continues for $\omega$ moves and II respects the rules. We need to show that $\bigcap_n E_n$ is empty. Indeed, if $\xi \in \bigcap_n E_n$ then $\xi \in D_{\gamma_n}$, for all $n$, and $h_{G,\gamma_0}(\xi) > h_{G,\gamma_1}(\xi) > \ldots$ is an infinite decreasing sequence of ordinals, a contradiction.
The Working Parts
=================
In this section we show how to add the working part to the side condition poset described in §2, which allows us to define a Neeman-style iteration. As in [@Neeman], if at each stage we choose a proper forcing, the resulting forcing notion will be proper as well. By a standard argument, if we use the Laver function to guide our choices, we obtain ${{\mathrm}{PFA}}$ in the final model. The point is that the relevant lemmas from §2 and §3 go through almost verbatim and hence we obtain, as before, that ${\rm NS}_{\omega_1}$ will be non precipitous in the final model.
Let us now recall the iteration technique from [@Neeman]. We fix an inaccessible cardinal $\theta$ and a function $F:\theta \to V_\theta$. Let $\mathcal K$ be the structure $(V_\theta,\in,F)$. Let ${\mathcal{S}}$ be the set of all countable elementary submodels of $\mathcal K$ and $T$ the set of all $\alpha <\theta$ of uncountable cofinality such that $V_\alpha$ is an elementary submodel of $\mathcal K$. Let ${\mathcal{T}}=\{ V_\alpha : \alpha \in T\}$. Define ${\mathcal{S}}^*$, $T^*$ and ${\mathcal{T}}^*$ as before. Note that if $\alpha \in T^*$ then $T^*\cap \alpha$ is definable in $\mathcal K$ from parameter $\alpha$. Hence, if $M\in {\mathcal{S}}$ and $\alpha \in T^*$ then $M\cap V_\alpha \in {\mathcal{S}}^*$. We will define, by induction on $\alpha \in T^*\cup \{ \theta\}$, a forcing notion $\mathbb P_\alpha$. In general, $\mathbb P_\alpha$ consists of triples $p$ of the form $({\mathcal{M}}_p,d_p,w_p)$ such that $({\mathcal{M}}_p,d_p)$ is a reflecting condition in $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$ and $w_p$ is a finite partial function from $T^*\cap \alpha$ to $V_\alpha$ with some properties. If $\alpha <\beta$ are in $T^*\cup \{ \theta\}$ and $p \in \mathbb P_\beta$ we let $p \restriction \alpha$ denote $({\mathcal{M}}_p\cap V_\alpha, d_p \restriction ({\mathcal{M}}_p \cap V_\alpha), w_p\restriction V_\alpha)$. It will be immediate from the definition that $p \restriction \alpha \in \mathbb P_\alpha$. Moreover, since $({\mathcal{M}}_p,d_p)$ is reflecting, it will be an extension of $({\mathcal{M}}_p \cap V_\alpha, d_p \restriction ({\mathcal{M}}_p\cap V_\alpha))$. For $\alpha \in T^*$ we will also be interested in the partial order $\mathbb P_\alpha \cap V_\alpha$. We let $\dot{G_\alpha}$ denote the canonical $\mathbb P_\alpha \cap V_\alpha$-name for the generic filter. If $M \in {\mathcal{S}}\cup {\mathcal{T}}$ and $\alpha \in M$ we let $M[\dot{G}_\alpha]$ be the canonical $\mathbb P_\alpha \cap V_\alpha$-name for the model $M[G_\alpha]$, where $G_\alpha$ is the generic filter. If $F(\alpha)$ is a $\mathbb P_\alpha \cap V_\alpha$-name which is forced by the maximal condition to be a proper forcing notion we let $\dot{\mathbb F}_\alpha$ denote $F(\alpha)$; otherwise let $\dot{\mathbb F}_\alpha$ denote the $\mathbb P_\alpha \cap V_\alpha$-name for the trivial forcing.
We are now ready for the main definition.
Suppose $\alpha \in T^*\cup \{ \theta\}$. Conditions in $\mathbb P_\alpha$ are triples $p$ of the form $({\mathcal{M}}_p,d_p,w_p)$ such that:
1. $({\mathcal{M}}_p,d_p)$ is a reflecting condition in $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$,
2. $w_p$ is a finite function with domain contained in the set $\{ \gamma \in T^* \cap \alpha : V_\gamma \in {\mathcal{M}}_p\}$,
3. if $\gamma \in {{\rm dom}}(w_p)$ then:
1. $w_p(\gamma)$ is a canonical $\mathbb P_\gamma \cap V_\gamma$-name for an element of $\dot{\mathbb F}_\gamma$,
2. if $M\in {\mathcal{S}}\cap {\mathcal{M}}_p$ and $\gamma \in M$ then $$p \restriction \gamma {\Vdash}_{{\mathbb P}_\gamma \cap V_\gamma} w_p(\gamma) \mbox{ is }
(M[\dot{G}_\gamma],\dot{\mathbb F}_\gamma)\mbox{-generic}.$$
We let $q\leq p$ if $({\mathcal{M}}_q,d_q)$ extends $({\mathcal{M}}_p,d_p)$ in $\mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$, ${{\rm dom}}(w_p)\subseteq {{\rm dom}}(w_q)$ and, for all $\gamma \in {{\rm dom}}(w_p)$, $$q\restriction \gamma {\Vdash}_{{\mathbb P}_\gamma \cap V_\gamma} w_q(\gamma)\leq_{\mathbb F_\gamma}w_p(\gamma).$$
Our posets $\mathbb P_\alpha$ is almost identical as the posets $\mathbb A_\alpha$ from [@Neeman]. The difference is that we have a requirement that the height function ${\rm ht}_p$ of a condition $p$ is preserved when going to a stronger condition and we also added the decoration $d_p$. We are restricting ourselves to reflecting conditions $p$ since we then know that $p$ is an extension of $p \restriction \alpha$, for any $\alpha\in T^*$ such that $V_\alpha$ is a node in ${\mathcal{M}}_p$. Of course, the working part $w_p$ is defined only for such $\alpha$. These modifications do not affect the relevant arguments from [@Neeman]. We state the main properties of our posets and refer to [@Neeman] for the proofs.
\[strongly-proper\] Suppose $\beta$ belongs to $T^*\cup \{ \theta\}$.
1. Let $p\in \mathbb P_\beta$ and let $V_\alpha \in {\mathcal{M}}_p \cap {\mathcal{T}}^*$. Then $p$ is $(V_\alpha,\mathbb P_\beta)$-strongly generic.
2. Let $p\in \mathbb P_\beta$, let $V_\alpha \in {\mathcal{T}}$ and suppose $p \in V_\alpha$. Then $({\mathcal{M}}_p \cup \{ V_\alpha\},d_p,w_p)$ is a condition in $\mathbb P_\beta$.
3. $\mathbb P_\beta$ is ${\mathcal{T}}^*$-strongly proper.
This is essentially the same as Lemma 6.7 from [@Neeman].
\[top-model\] Suppose $\beta \in T^*\cup \{ \theta\}$ and $p \in \mathbb P_\beta$. Let $M\in {\mathcal{S}}$ be such that $p\in M$. Then there is a condition $q \in \mathbb P_\beta$ extending $p$ such that $M$ is the top model of $q$.
First, let ${\mathcal{M}}$ be the closure of ${\mathcal{M}}_p \cup \{ M\}$ under intersection and let $d$ be the extension of $d_p$ to ${\mathcal{M}}$ defined by letting $d(N)=\emptyset$, for all $N\in {\mathcal{M}}\setminus {\mathcal{M}}_p$. Then $({\mathcal{M}},d)\in \mathbb M^*_{{\mathcal{S}},{\mathcal{T}}}$. By Lemma \[fully reflect\] we can find a reflecting condition $({\mathcal{M}}_q,d_q) \leq ({\mathcal{M}},d)$ such that the top model of ${\mathcal{M}}_q$ is $M$. Now, we need to define $w_q$. If $\alpha \in {{\rm dom}}(w_p)$ then $\mathbb P_\alpha \cap V_\alpha, \dot{\mathbb F}_\alpha \in M$. Since $\dot{\mathbb F}_\alpha$ is forced by the maximal condition in $\mathbb P_\alpha \cap V_\alpha$ to be proper and $w_p(\alpha)\in M$ is a canonical name for a member of $\dot{\mathbb F}_\alpha$, we can fix a canonical $\mathbb P_\alpha \cap V_\alpha$-name $w_q(\alpha)$ for a member of $\dot{\mathbb F}_\alpha$ such that $p \restriction \alpha$ forces in $\mathbb P_\alpha \cap V_\alpha$ that $w_q(\alpha)$ extends $w_p(\alpha)$ and is $(M[\dot{G_\alpha}],\dot{\mathbb F}_\alpha)$-generic. Then the condition $q=({\mathcal{M}}_q,d_q,w_q)$ is as required.
\[generic-condition\] Suppose $\beta \in T^*\cup \{ \theta\}$ and $p \in \mathbb P_\beta$. Let $\theta^*>\theta$ be such that $(V_{\theta^*},\in)$ satisfies a sufficient fragment of ${{\mathrm}{ZFC}}$. Let $M^*$ be a countable elementary submodel of $V_{\theta^*}$ containing all the relevant parameters. Let $M=M^* \cap V_\theta$ and suppose $M\in {\mathcal{M}}_p$. Then $p$ is $(M^*,\mathbb P_\beta)$-generic.
This is essentially the same as Lemma 6.11 from [@Neeman].
Then, as in [@Neeman], we have the following.
\[prop-PFA\] Suppose that $\theta$ is supercompact and $F$ is a Laver function on $\theta$. Let $G_\theta$ be a $V$-generic filter over $\mathbb P_\theta$. Then $V[G_\theta]$ satisfies ${{\mathrm}{PFA}}$.
Now, if $\delta \in T^*$ and $G_\delta$ is a $V$-generic filter over $\mathbb P_\delta \cap V_\delta$, we can define the function $h_{G_\delta}$ and the factor forcing $\mathbb Q_\delta$ as in §3. Further, we define the set ${\mathcal{S}}_\delta$ in an analogous way to Definition \[S-delta\] and show that it is stationary as in Lemma \[stationary\]. We show, as in Lemma \[strongly-proper\] that ${\mathbb{Q}}_\delta$ is ${\mathcal{T}}^*_\delta$-strongly proper. By Lemma \[top-model\] and Lemma \[generic-condition\] we also get that, in $V[G_\delta]$, $\mathbb Q_\delta$ is ${\mathcal{S}}^*_\delta$-proper. For every subset $E$ of $\omega_1$ which belongs to $V[G_\delta]$, we define the set ${\mathcal{S}}_\delta(E)$ as in §3 and prove a version of Lemma \[characterization\]. Then, proceeding in the same way, for every $\gamma <\delta$ we define ${\mathcal{S}}_\delta(E,\gamma)$ and prove an analog of Lemma \[gamma\]. Then, turning to the final model $V[G_\theta]$, we prove an analog of Lemma \[shrinking\]. Finally, combining the arguments of Theorem \[pure-precipitous\] and Proposition \[prop-PFA\] we get the conclusion.
\[main-thm\] Suppose $\theta$ is supercompact and $F$ is a Laver function on $\theta$. Let $G_\theta$ be $V$-generic over $\mathbb P_\theta$. Then, in $V[G_\theta]$, ${{\mathrm}{PFA}}$ holds and ${\rm NS}_{\omega_1}$ is not precipitous.
---
abstract: |
Nowadays, many strategies to solve polynomial systems use the computation of a Gröbner basis for the graded reverse lexicographical ordering, followed by a change of ordering algorithm to obtain a Gröbner basis for the lexicographical ordering. The change of ordering algorithm is crucial for these strategies. We study the $p$-adic stability of the main change of ordering algorithm, FGLM.
We show that FGLM is stable and give an explicit upper bound on the loss of precision occurring in its execution. The variant of FGLM designed to pass from the grevlex ordering to a Gröbner basis in shape position is also stable.
Our study relies on the application of Smith Normal Form computations for linear algebra.
author:
- |
Guénaël Renault\
Tristan Vaccon\
\
bibliography:
- 'biblio.bib'
title: 'On the p-adic stability of the FGLM algorithm'
---
Introduction
============
The advent of arithmetic geometry has seen the emergence of questions that are purely local (*i.e.* where a prime $p$ is fixed at the very beginning and one cannot vary it). As an example, one can cite the work of Caruso and Lubicz [@Caruso:2014] who gave an algorithm to compute lattices in some $p$-adic Galois representations. A related question is the study of $p$-adic deformation spaces of Galois representations. Since the work of Taylor and Wiles [@Taylor:1995], one knows that these spaces play a crucial role in many questions in number theory. Being able to compute such spaces then appears as an interesting question of experimental mathematics and requires the use of purely $p$-adic Gröbner bases and, more generally, $p$-adic polynomial system solving.
Since [@Vaccon:2014], it is possible to compute a Gröbner basis, under some genericness assumptions, for a monomial ordering $\omega$ of an ideal generated by a polynomial sequence $F=(f_1,\dots,f_s) \subset \mathbb{Q}_p [X_1, \dots, X_n]$ if the coefficients of the $f_i$’s are given with enough initial precision. Unfortunately, one of the genericness assumptions (namely, the sequence $(f_1,\dots,f_i)$ has to be weakly-$\omega$) is at most generic for the graded reverse lexicographical (denoted grevlex in the sequel) ordering (conjecture of Moreno-Socias). Moreover, in the case of the lexicographical ordering (denoted lex in the sequel), this statement is proved to be generically not satisfied for some choices of degrees of the input polynomials. In the context of polynomial system solving, where the lex ordering plays an important role, this fact becomes a challenging problem that is essential to overcome.
Thus, in this paper, we focus on the fundamental problem of change of ordering for a given $p$-adic Gröbner basis. In particular, we provide precise results in the case where the input basis is a Gröbner basis for the grevlex ordering and one wants to compute the corresponding lex basis. We will use the following notations.
Notations
---------
Throughout this paper, $K$ is a field with a discrete valuation $val$ such that $K$ is complete with respect to the norm defined by $val$. We denote by $R=O_K$ its ring of integers, $m_K$ its maximal ideal and $k=O_K/m_K$ its residue field. Such a field is called a CDVF (complete discrete-valuation field). We refer to Serre’s Local Fields [@Serre:1979] for an introduction to such fields. Let $\pi \in R$ be a uniformizer for $K$ and let $S_K \subset R$ be a system of representatives of $k=O_K/m_K.$ Every element of $K$ can be written uniquely in its $\pi$-adic power series expansion: $\sum_{i \geq l} a_i \pi^i$ for some $l \in \mathbb{Z}$, $a_i \in S_K$.
The case that we are interested in is when $K$ might not be an effective field, but $k$ is (*i.e.* there are constructive procedures for performing rational operations in $k$ and for deciding whether or not two elements in $k$ are equal). Symbolic computation can then be performed on truncations of $\pi$-adic power series expansions. We call such a field a finite-precision CDVF, and its ring of integers a finite-precision CDVR. Classical examples of such CDVFs are $K = \mathbb{Q}_p$, with $p$-adic valuation, and $\mathbb{Q}[[X]]$ or $\mathbb{F}_q[[X]]$ with $X$-adic valuation. We assume that $K$ is such a finite-precision CDVF.
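For instance, in the special case $K=\mathbb{Q}_p$ and $R=\mathbb{Z}_p$, computing with truncated $\pi$-adic expansions amounts to integer arithmetic modulo $p^N$. The following short Python fragment is an illustration only (not tied to any particular computer algebra system); it recovers the first $N$ digits of the $7$-adic expansion of $1/(1-7)=1+7+7^2+\cdots$.

```python
p, N = 7, 10                  # work in Z_7 at absolute precision O(7^10)
modulus = p ** N

# An element of Z_p known at precision O(p^N) is just a residue mod p^N;
# ring operations are ordinary modular arithmetic.
# (pow with exponent -1 and a modulus needs Python >= 3.8.)
x = pow(1 - p, -1, modulus)   # truncation of 1/(1-7) = 1 + 7 + 7^2 + ...

digits, t = [], x
for _ in range(N):
    digits.append(t % p)      # read off the pi-adic digits a_i
    t //= p

print(digits)                 # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```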
The polynomial ring $K[X_1,\dots, X_n]$ will be denoted by $A$, and for $u=(u_1,\dots,u_n) \in \mathbb{Z}_{\geq 0}^n$ we write $x^u$ for $X_1^{u_1} \dots X_n^{u_n}.$
Main results
-------------
In the context of $p$-adic algorithmics, one of the most important behaviors to study is the stability of computation: how the quality of the result, in terms of $p$-adic precision, evolves from the input. To quantify such a quality, it is usual to use an invariant, called a condition number, related to the computation under study. Thus, we define such an invariant for the change of ordering.
\[defn cond FGLM\] Let $I \subset A$ be a zero-dimensional ideal. Let $\leq_1$ and $\leq_2$ be two monomial orderings on $A$. Let $B_{\leq_1}$ and $B_{\leq_2}$ be the canonical bases of $A/I$ for $\leq_1 $ and $\leq_2$. Let $M$ be the matrix whose columns are the $NF_{\leq_1} (x^\beta)$ for $x^\beta \in B_{\leq_2}$, written in the basis $B_{\leq_1}$. We define the condition number of $I$ for $\leq_1$ to $\leq_2$, denoted $cond_{\leq_1, \leq_2}(I)$ (or $cond_{\leq_1, \leq_2}$ when there is no ambiguity), as the largest valuation of an invariant factor of $M$.
We can now state our main result on change of ordering of $p$-adic Gröbner basis.
\[thm stabilite fglm\] Let $\leq_1$ and $\leq_2$ be two monomial orderings. Let $G=(g_1,\dots,g_t) \in K[X_1,\dots, X_n]^t$ be an approximate reduced Gröbner basis for $\leq_1$ of the ideal $I$ it generates, with $\dim I = 0$ and $\deg I = \delta$, and with coefficients known up to precision $O(\pi^N)$. Let $\beta$ be the smallest valuation of a coefficient in $G.$ Then, if $N>cond_{\leq_1, \leq_2}(I)$, the stabilized FGLM Algorithm, Algorithm \[FGLM stabilise\], computes a Gröbner basis $G_2$ of $I$ for $\leq_2$. The coefficients of the polynomials of $G_2$ are known up to precision $N+n^2(\delta+1)^2 \beta-2cond_{\leq_1, \leq_2}$. The time-complexity is in $O(n \delta^3)$.
In the case of a change of ordering from grevlex to lex, we provide a more precise complexity result:
\[thm:FGLMp:grevlex:lex\] With the same notations and hypotheses as in Theorem \[thm stabilite fglm\], if $\leq_1$ and $\leq_2$ are respectively instantiated to grevlex and lex, and if the ideal $I$ is in shape position, then the adapted FGLM algorithm for shape position, Algorithm \[FGLM stabilise avec shape depuis grevlex\], computes a Gröbner basis $G_2$ of $I$ for lex, in shape position. The coefficients of the polynomials of $G_2$ are known up to precision $N+\beta \delta-2cond_{\leq_1, \leq_2}$. The time-complexity is in $O(n\delta^2)+O(\delta^3)$.
In order to obtain these results, one has to tackle technical problems related to the core of the FGLM algorithm. Thus, we first present a summary of some important facts on this algorithm. Then we present more precisely the underlying problems in the $p$-adic situation.
The FGLM algorithm
------------------
For a given zero-dimensional ideal $I$ in a polynomial ring ${A}$, the FGLM algorithm [@Faugere:1993] is mainly based on computational linear algebra in ${A}/I$. It makes it possible to compute a Gröbner basis $G_2$ of $I$ for a monomial ordering $\leq_2$ starting from a Gröbner basis $G_1$ of $I$ for a first monomial ordering $\leq_1$. To solve polynomial systems, one possible method is the computation of a Gröbner basis for lex. However, computing a Gröbner basis for lex by a direct approach is usually very time-consuming. The main application of the FGLM algorithm is to allow the computation of a Gröbner basis for lex by first computing a Gröbner basis for grevlex and then applying a change of ordering to lex. The superiority of this approach is mainly due to the fact that the degrees of the intermediate objects are well controlled during the computation of the grevlex Gröbner basis. The second step of this general method for polynomial system solving is what we call the FGLM algorithm. Many variants and improvements (in special cases) of the FGLM algorithm have been published; *e.g.* Faugère and Mou in [@Faugere-Mou:2011; @Faugere-Mou:2013; @Mou:2013] and Faugère, Gaudry, Huot and Renault [@Faugere-Huot:2013; @Faugere:2014; @Huot:13] take advantage of sparse linear algebra and fast linear-algebra algorithms to obtain efficient algorithms. In this paper, as a first study of the problem of loss of precision in a change of ordering algorithm, we follow the original algorithm. This study already brings to light some problems regarding the loss of precision and proposes solutions to overcome them. Thus, the FGLM algorithm we consider can be sketched as follows:
1. Order the images in ${A}/I$ of the monomials of ${A}$ according to $\leq_2$.
2. \[intro:step:independence\] Starting from the first monomial, test the linear independence in ${A}/I$ of a monomial $x^\alpha$ with the $x^\beta$ smaller than it for $\leq_2$.
3. In case of independence, $x^\alpha$ is added to the canonical (for $\leq_2$) basis of ${A}/I$ under construction.
4. \[intro:step:relation\] Otherwise, $x^\alpha \in LM(I)$ and the linear relation with the $x^\beta$ smaller than it for $\leq_2$ gives rise to a polynomial in $I$ whose leading term is $x^\alpha.$
Precision problems arise in steps \[intro:step:independence\] and \[intro:step:relation\]. The first one is the issue of testing the independence of a vector from a linear subspace. While it is possible to prove independence when there is enough precision, it is usually not possible to prove dependence directly. It is, however, possible to prove some dependence when there are more vectors in a vector space than the dimension of this vector space. It is indeed a dimension argument that allows us to prove the stability of FGLM. We show (see Section \[section: stabilite FGLM\]) that it is enough to treat approximate linear dependence (up to some precision) in the same way as in the non-approximate case and to check at the end of the execution of the algorithm that the number of independent monomials found is the same as the degree of the ideal. The second issue corresponds to the computation of an approximate relation. We show that the same idea of treating approximate linear dependence as exact and checking at the end of the computation is enough.
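To make the role of the exact independence test concrete, here is a minimal sketch in Python/SymPy of the test performed at each step of the loop, assuming the normal forms are already available as exact column vectors (exact rationals standing in for infinite precision); the function name `try_extend` and the toy vectors are ours, not the paper's. At finite $p$-adic precision, this exact rank test is the one that gets replaced by the SNF-based test of Section \[sec:SNF\].

```python
from sympy import Matrix

def try_extend(basis_vectors, v):
    """Return (True, None) if v is independent from basis_vectors, and
    otherwise (False, c) with v = sum_i c[i] * basis_vectors[i]."""
    if not basis_vectors:
        return v != Matrix.zeros(v.rows, 1), None
    A = Matrix.hstack(*basis_vectors)
    if Matrix.hstack(A, v).rank() > A.rank():
        return True, None           # step 3: x^alpha enters the basis under construction
    c, _ = A.gauss_jordan_solve(v)  # step 4: the relation giving a new element of I
    return False, c

e1, e2 = Matrix([1, 0, 0]), Matrix([0, 1, 0])
print(try_extend([e1, e2], Matrix([3, -2, 0])))    # (False, Matrix([[3], [-2]]))
print(try_extend([e1, e2], Matrix([0, 0, 1]))[0])  # True
```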
Linear algebra and Smith Normal Form
------------------------------------
As we have seen, the FGLM algorithm relies mainly on computational linear algebra: testing linear independence and solving linear systems.
The framework of differential precision of [@Caruso:2014:2] has been applied to linear algebra in [@Caruso:2015], with some optimal results on the behaviour of the precision for basic operations in linear algebra (matrix multiplication, LU factorization). From this analysis it seems clear, and this idea is well accepted by the community of computation over $p$-adics, that using the Smith Normal Form (SNF) to compute the inverse of a matrix or to solve a well-posed linear system is highly efficient and easy to handle. Moreover, it always achieves a better behaviour than classical Gaussian elimination, even allowing a gain in precision in some cases. Its optimality remains to be proved, but in comparison with classical Gaussian elimination, the loss of precision is far smaller. [^1]
This is the reason why we use the SNF in the $p$-adic version of FGLM we propose in this paper. In Section \[sec:SNF\] we briefly recall some properties of the SNF and its computation. We also provide a dedicated version of the SNF computation for the FGLM algorithm. More precisely, to apply the SNF computation to iterative tests of linear independence (as in step \[intro:step:independence\]), we adapt it into an iterative SNF in Algorithm \[Update FGLM stabilise\]. This allows us to preserve an overall complexity in $O(n \delta^3).$
SNF and linear systems {#sec:SNF}
======================
SNF and approximate SNF
-----------------------
We begin by presenting our main tool, the SNF of a matrix in $M_{n,m}(K)$:
Let $M \in M_{n,m}(K)$. There exist some $P \in GL_n (O_K)$, $val(\det P)=0 $, $Q \in GL_m (O_K)$, $\det Q = \pm 1$ and $ \Delta \in M_{n,m}(K)$ such that $M = P \Delta Q$ and $\Delta$ is diagonal, with diagonal coefficients being $\pi^{a_1},\dots,\pi^{a_s},0,\dots,0$ with $a_1 \leq \cdots \leq a_s$ in $\mathbb{Z}.$ $\Delta$ is unique and called the Smith Normal Form of $M$, and we say that $P,\Delta,Q$ realize the SNF of $M$. The $a_i$ are called the invariant factors of $M.$
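As a concrete illustration (a minimal sketch, assuming an integer input matrix, so that the $p$-adic invariant factors are just the $p$-parts of the integer ones), the valuations $a_i$, and hence the condition number used throughout this paper, can be read off an exact Smith normal form; the helper `padic_valuation` and the example matrix are ours.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def padic_valuation(n, p):
    """Exponent of p in the nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 2
M = Matrix([[2, 4, 4], [-6, 6, 12], [10, -4, -16]])
D = smith_normal_form(M, domain=ZZ)   # diag(2, 6, 12) over the integers
a = [padic_valuation(D[i, i], p) for i in range(min(D.shape)) if D[i, i] != 0]
print("valuations of the p-adic invariant factors:", sorted(a))  # [1, 1, 2]
print("largest one, i.e. cond(M):", max(a))                      # 2
```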
In a finite-precision context, we introduce the following variant of the notion of SNF:
\[FNS approchee\] Let $M \in M_{n,m}(K)$, known up to precision $O(\pi^l)$. We define an **approximate SNF** of $M$ as a factorization $$M = P \Delta Q$$ with $P \in M_n(R)$, $val(\det P)=0,$ $Q \in M_m(R)$ with $\det Q=\pm 1$ known up to precision $O(\pi^l)$ and $ \Delta \in M_{n,m}(K)$ such that $\Delta = \Delta_0+O(\pi^l)$, where $ \Delta_0 \in M_{n,m}(K)$ is a diagonal matrix, with diagonal coefficients of the form $\Delta_0 [1,1]= \pi^{\alpha_1}, \dots, \Delta_0 [\min (n,m),\min (n,m)] = \pi^{\alpha_{\min (n,m)}}$ with $\alpha_1 \leq \dots \leq \alpha_{\min (n,m)}$. $\alpha_i = + \infty$ is allowed. $(P, \Delta, Q)$ are said to realize an approximate SNF of $M.$
To compute an approximate SNF, we use Algorithm \[algo approx snf\].
Algorithm \[algo approx snf\] (approximate SNF):

1.  Find $i,j$ such that the coefficient $M_{i,j}$ realizes $\min_{k,l} val (M_{k,l})$.
2.  Track the following operations to obtain $P$ and $Q$.
3.  Swap rows $1$ and $i$ and columns $1$ and $j$.
4.  Normalize $M_{1,1}$ to the form $\pi^{a_1}+O(\pi^l)$.
5.  By pivoting, reduce the coefficients $M_{i,1}$ ($i>1$) and $M_{1,j}$ ($j>1$) to $O(\pi^l).$
6.  **Recursively**, proceed with $M_{i \geq 2, j \geq 2}$.
7.  Return $P,M,Q$.
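Below is a minimal Python sketch of this procedure for $K=\mathbb{Q}_p$, assuming all entries lie in $\mathbb{Z}_p$ and are stored as integers modulo $p^N$ (absolute precision $O(p^N)$). Only the diagonal of $\Delta$, *i.e.* the valuations $a_1 \leq a_2 \leq \dots$, is returned; tracking $P$ and $Q$ would record the same row and column operations. All names are ours, and entries that are numerically zero are reported with valuation $N$ (standing for $\alpha_i = +\infty$ at this precision).

```python
def val(x, p, N):
    """p-adic valuation of x known modulo p^N (returns N if x is O(p^N))."""
    x %= p**N
    if x == 0:
        return N
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def approx_snf_valuations(M, p, N):
    M = [[x % p**N for x in row] for row in M]
    vals = []
    while M and M[0]:
        # pivot: an entry of minimal valuation
        i0, j0 = min(((i, j) for i in range(len(M)) for j in range(len(M[0]))),
                     key=lambda t: val(M[t[0]][t[1]], p, N))
        a = val(M[i0][j0], p, N)
        if a >= N:                       # every remaining entry is O(p^N)
            vals += [N] * min(len(M), len(M[0]))
            break
        vals.append(a)
        # swap the pivot to position (0, 0)
        M[0], M[i0] = M[i0], M[0]
        for row in M:
            row[0], row[j0] = row[j0], row[0]
        # normalize the pivot to p^a: divide row 0 by its unit part
        u_inv = pow(M[0][0] // p**a, -1, p**N)
        M[0] = [(x * u_inv) % p**N for x in M[0]]
        # clear the first column below the pivot (row operations over Z_p)
        for i in range(1, len(M)):
            c = M[i][0] // p**a
            M[i] = [(M[i][j] - c * M[0][j]) % p**N for j in range(len(M[i]))]
        # the corresponding column operations only affect the first row,
        # so we may simply recurse on the lower-right block
        M = [row[1:] for row in M[1:]]
    return sorted(vals)

# same matrix as above, at p = 2 and precision O(2^10)
print(approx_snf_valuations([[2, 4, 4], [-6, 6, 12], [10, -4, -16]], 2, 10))
# [1, 1, 2], matching the exact invariant factors 2, 6, 12
```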
Behaviour of Algorithm \[algo approx snf\] is given by the following proposition:
\[prop snf approchee\] Given an input matrix $M$, of size $n \times m$, with precision $O(\pi^l)$ on its coefficients, Algorithm \[algo approx snf\] terminates and returns $U,\Delta, V$ realizing an approximate SNF of $M$. The coefficients of $U, \Delta $ and $V$ are known up to precision $O(\pi^l)$. The time complexity is in $O( \min(n,m) \max (n,m)^2)$ operations in $K$ at precision $O(\pi^l)$.
Now, it is possible to compute the SNF of $M$, along with an approximation of a realization, from some approximate SNF of $M$ with Algorithm \[algo snf precisee\].
Algorithm \[algo snf precisee\] (SNF from an approximate SNF):

1.  $\Delta_0 \leftarrow \Delta$.
2.  Track the following operations to obtain $P$ and $Q$.
3.  $t := \min(n,m)$.
4.  Return $\Delta_0,P,Q$.
Given an input matrix $M$, of size $n \times m$, with precision $O(\pi^l)$ on its coefficients ($l>cond(M)$), and $(U, \Delta,V)$, known at precision $O(\pi^l)$, realizing an approximate SNF of $M$, Algorithm \[algo snf precisee\] computes the SNF of $M$, with the transformation matrices $U'$ and $V'$ known up to precision $O(\pi^{l-cond(M)})$. The time-complexity is in $O( \max (n,m)^2).$
We refer to [@Vaccon:2014; @Vaccon-these] for more details on how to prove this result. We can then conclude on the computation of the SNF:
Given an input matrix $M$, of size $n \times m$, with precision $O(\pi^l)$ on its coefficients ($l>cond(M)$), then by applying Algorithms \[algo approx snf\] and \[algo snf precisee\], we compute $P,Q,\Delta$ with $M=P \Delta Q$ and $\Delta$ the SNF of $M$. Coefficients of $P$ and $Q$ are known at precision $O(\pi^{l-cond(M)})$. Time complexity is in $O(\max(n,m)^2 \min(n,m))$ operations at precision $O(\pi^l)$.
Solving linear systems
----------------------
Computation of $P$ and $Q$ in the previous algorithms can be slightly modified to obtain (approximation of) $P^{-1}$ and $Q^{-1},$ and thus $M^{-1}.$
\[prop snf inverses\] Using the same context as the previous theorem, by modifying Algorithms \[algo approx snf\] and \[algo snf precisee\] using the inverse operations of the one to compute $P$ and $Q$, we can obtain $P^{-1}$ and $Q^{-1}$ with precision $O(\pi^{l-cond(M)})$. When $M
\in GL_n(K),$ using $M^{-1}=Q^{-1} \Delta^{-1} P^{-1},$ we get $M^{-1}$ with precision $O(\pi^{l-2cond(M)})$. Time complexity is in $O(n^3)$ operations at precision $O(\pi^{l})$.
We can then estimate the loss in precision in solving a linear system:
\[thm sys lin\] Let $M \in GL_n(K)$ be a matrix with coefficients known up to precision $O(\pi^l)$ with $l>2cond(M).$ Let $Y \in K^n$ be known up to precision $O(\pi^l)$. Then one can solve $Y=MX$ in $O(n^3)$ operations at precision $O(\pi^l).$ $X$ is known at precision $O(\pi^{l-2cond(M)})$.
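As a quick sanity check of the role played by $cond(M)$ (a sketch only, using exact rational arithmetic as a stand-in for very high $p$-adic precision; it reuses the example matrix from the sketches above, and the names are ours): perturbing the right-hand side $Y$ by $O(p^N)$ moves the exact solution of $Y=MX$ by $O(p^{N-cond(M)})$ at worst, while the theorem guarantees a loss of at most $2\,cond(M)$ digits for the finite-precision SNF-based resolution.

```python
from sympy import Matrix

def vp(x, p):
    """p-adic valuation of a nonzero rational number x."""
    num, den = [int(t) for t in x.as_numer_denom()]
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p, N = 2, 20
M = Matrix([[2, 4, 4], [-6, 6, 12], [10, -4, -16]])   # cond(M) = 2 for p = 2
Y = Matrix([1, 0, 3])
X = M.solve(Y)
X_pert = M.solve(Y + Matrix([p**N, 0, 0]))            # O(p^N) perturbation of Y
loss = [N - vp(d, p) for d in (X_pert - X) if d != 0]
print("digits lost on X:", max(loss))   # 2 here, i.e. cond(M); the theorem
                                        # guarantees at most 2*cond(M) = 4
```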
When the system is not square but we can ensure that $Y \in Im(M)$, then we have the following variant:
\[prop sys lin pas carre\] Let $M \in M_{n,m}(K)$ be a full rank matrix, with coefficients known at precision $O(\pi^l),$ with $l> 2cond(M)$. Let $Y \in K^n$, known at precision $O(\pi^l)$, be such that $Y \in Im(M)$. Then, we can compute $X$ such that $Y=MX$, with precision $O(\pi^{l-2cond(M)})$, in $O(nm \max (n,m))$ operations in $K$ at precision $O(\pi^l).$
Algorithm \[FGLM stabilise\]:

-   Compute the multiplication matrices $T_1,\dots,T_n$ for $I$ and $\leq$ with Algorithm \[calcul des matrices de multiplication\].
-   $B_2 := \{ 1 \}$ ; $\mathbf{v}= [{}^t (1,\dots,0)]$ ; $G_2 := \emptyset$.
-   $L := \{ (1,n),(1,n-1),\dots,(1,1) \}$.
-   $Q_1,Q_2,P_1,P_2,\Delta:=I_1,I_1,I_\delta,I_\delta,\mathbf{v}$.
Stability of FGLM {#section: stabilite FGLM}
=================
A stabilized algorithm
----------------------
This section is devoted to the study of the FGLM algorithm at finite precision over $K$. More precisely, we provide a stable adaptation of this algorithm. The main difference with the classical FGLM algorithm consists in the replacement of the row-echelon form computations by SNF computation, as in Section \[sec:SNF\]. This way, we are able to take advantage of the smaller loss in precision of the SNF, and the nicer estimation on the behaviour of the precision it yields.
FGLM is made of Algorithms \[calcul des matrices de multiplication\], \[FGLM stabilise\] and \[Update FGLM stabilise\], with Algorithm \[FGLM stabilise\] being the main algorithm.
For the linear system solving in Algorithm \[FGLM stabilise\], we use the computation of a SNF from an approximate SNF thanks to Algorithm \[algo snf precisee\], and then solve the system as in Proposition \[prop sys lin pas carre\].
The remainder of this section is devoted to the proof of our main theorem, Theorem \[thm stabilite fglm\].
Proof of the algorithm
----------------------
To prove Theorem \[thm stabilite fglm\] regarding the stability of Algorithm \[FGLM stabilise\], we first prove a lemma controlling the behaviour of the condition number of $\mathbf{v}$ during the execution of the algorithm, and then apply it to establish each component of the proof in turn.
A preliminary remark can be given: at infinite precision, correctness and termination of Algorithm \[FGLM stabilise\] are already known. Indeed, the only difference from the classical FGLM algorithm in that case is that the independence testing and linear system solving are done using (iterated) SNF instead of reduced row-echelon form computations.
### Growth of the condition in iterated SNF
In order to control the condition number of $\mathbf{v}$ during the execution of the algorithm, and thus control the precision, we use the following lemma:
\[lem snf iteree\] Let $M \in M_{\delta, s}(K)$ be a matrix, with $s < \delta$ being integers. Let $v \in K^\delta$ be a vector and $M' \in M_{\delta, s+1}(K)$ the matrix obtained by adjoining the vector $v$ as an $(s+1)$-th column to $M.$ Let $c=cond(M)$, and $c'=cond(M').$ We assume $c,c' \neq + \infty$ (*i.e.*, the matrices are of full rank). Then $c \leq c'$.
We use the following classical fact : let $d_{s}'$ be the smallest valuation achieved by an $s \times s$ minor of $M'$, and $d_{s+1}'$ the smallest valuation achieved by an $(s+1) \times (s+1)$ minor of $M'$, then[^2] $c'=d_{s+1}'-d_{s}'$.
In our case, let $P,Q,\Delta$ be such that $\Delta$ is the SNF of $M,$ $P \in GL_{\delta}(R),$ $Q \in GL_s (R)$ and $PMQ=\Delta$. Then, by augmenting trivially $Q$ to get $Q'$ with $Q'_{s+1,s+1}=1$, we can write: $$PM'Q'= \begin{pmatrix}
\pi^{a_1} & & 0 & w_1 \\
 & \ddots & & \vdots \\
0 & & \pi^{a_s} & w_s \\
 & 0 & & \vdots \\
 & & & w_{\delta}
\end{pmatrix},$$ where the left $\delta \times s$ block is zero outside of its diagonal $\pi^{a_1},\dots,\pi^{a_s}$ and the last column is $(w_1,\dots,w_{\delta})$. In this setting, $c = a_s$.
Moreover, we can deduce from this equality that $d'_{s+1}$ is of the form $a_1+\dots+a_s+val(w_k)$ for some $k>s$. Indeed, the non-zero $(s+1) \times (s+1)$ minors of $PM'Q'$ are all of the following form: they correspond to the choice of $(s+1)$ linearly independent rows, and all the rows of index at least $s+1$ lie in the same one-dimensional subspace. With such a choice of rows, the corresponding minor is the determinant of a triangular matrix, whose diagonal coefficients are $\pi^{a_1},\dots,\pi^{a_s},w_k$.
On the other hand, $a_1+\dots+a_{s-1}+val(w_k)$ is the valuation of an $s \times s$ minor of $PM'Q'$. By definition, we then have $d'_s \leq a_1+\dots+a_{s-1}+val(w_k)$. Since $d_{s+1}'=a_1+\dots+a_s+val(w_k)$ and $c'=d_{s+1}'-d_{s}'$, we deduce that $c' \geq a_s= c,$ *q.e.d.*.
We introduce the following notation:
Let $E$ be an $R$-module and $X \subset E$ a finite subset. We write $Vect_R(X)$ for the $R$-module generated by the vectors of $X$.
The previous lemma has then the following consequence:
\[lem ecriture avec snf dans fglm\] Let $I,G_1,\leq,\leq_2,B_\leq,B_{\leq_2}$ be as in Theorem \[thm stabilite fglm\]. Let $x^\beta \in \mathscr{B}_{\leq_2}(I)$. Let $V=Vect_R ( \{ NF_\leq (x^\alpha) \vert x^\alpha \in B_{\leq_2}, \: x^\alpha < x^\beta \} )$. Then $NF_\leq (x^\beta) \in \pi^{-cond_{\leq, \leq_2}(I)} V$.
The proof of the correction of the classical FGLM algorithm shows that, if $\mathbf{v}$ is a matrix whose columns are the $NF_\leq (x^\alpha)$ with $x^\alpha \in B_{\leq_2}$ and $ x^\alpha < x^\beta$ (written in the basis $B_\leq$), then $NF_\leq (x^\beta) \in Im(\mathbf{v}).$
By applying the proof of the Proposition \[prop sys lin pas carre\], we obtain that $NF_\leq (x^\beta) \in \pi^{-cond(\mathbf{v})} Vect_R ( \{ NF_\leq (x^\alpha) \vert x^\alpha \in B_{\leq_2}, \: x^\alpha < x^\beta \} )$.
Finally, Lemma \[lem snf iteree\] implies that $cond(\mathbf{v}) \leq cond_{\leq, \leq_2}(I).$ The result is then clear.
### Correction and termination
We can now prove the correctness and termination of Algorithm \[FGLM stabilise\] under the assumption that the initial precision is sufficient. Exactly which precision is sufficient is addressed in the following subsubsection.
\[prop correction et terminaison pour FGLM stabilise\] Let $G_1,\leq,\leq_2,B_\leq,B_{\leq_2}, I$ be as in Theorem \[thm stabilite fglm\]. Then, assuming that the coefficients of the polynomials of $G_1$ are all known up to a precision $O(\pi^N)$ for some $N \in \mathbb{Z}_{>0}$ big enough, the stabilized FGLM algorithm \[FGLM stabilise\] terminates and returns a Gröbner basis $G_2$ of $I$ for $\leq_2$.
The computation of the multiplication matrices only involves multiplications and additions, and the operations performed do not depend on the precision. The same holds for the computation of the $NF_{\leq}(x^\alpha)$ processed in the algorithm and obtained as products of $T_i$’s and $\mathbf{1}$. We may assume that all those $NF_{\leq}(x^\alpha)$ are obtained up to some precision $O(\pi^N)$ for some $N \in \mathbb{Z}_{>0}$ big enough. Subsubsection \[subsubsec:analysis-loss-fglm\] gives a precise estimation of such an $N$ and of when it is big enough.
Let $M$ be the matrix whose columns are the $NF_\leq (x^\beta)$ for $x^\beta \in B_{\leq_2}$. Let $cond_{\leq, \leq_2}$ be as in Definition \[defn cond FGLM\].
To show the result, we use the following loop invariant: at the beginning of each pass through the **while** loop of Algorithm \[FGLM stabilise\], we have *(i)* $B_2 \subset B_{\leq_2}$ and *(ii)* if $x^\beta = B_2[j] x_i$ (where $(j,i)=m$, $m$ taken at the beginning of the loop), then every monomial $x^\alpha <_2 x^\beta$ satisfies $x^\alpha \in B_{\leq_2}$ or $NF_{\leq}(x^\alpha) \in \pi^{-cond_{\leq, \leq_2}} Vect_R(NF_\leq (B_{\leq_2}))+O(\pi^{N-cond_{\leq, \leq_2}}).$ Here, $O(\pi^{N-cond_{\leq, \leq_2}})$ denotes the $R$-module generated by the $\pi^{N-cond_{\leq, \leq_2}} \epsilon$’s for $\epsilon \in B_\leq.$
We begin by first proving that this proposition is a loop invariant. It is indeed true when entering the first loop since $ 1 \in B_{\leq_2},$ for $I$ is zero-dimensional.
We then show that this proposition is stable when passing through a loop. Let $x^\beta = B_2[j] x_i$ with $(j,i)=m$. By the way we defined it, $x^\beta $ is in the border of $B_2$ (*i.e.* a non-trivial multiple of a monomial of $B_2$). Since $B_2 \subset B_{\leq_2}$, we deduce that $x^\beta $ is either in $B_{\leq_2}$, or in the border of $B_{\leq_2}$, also denoted by $\mathscr{B}_{\leq_2}(I)$.
We begin with the second case. We then have, thanks to Lemma \[lem ecriture avec snf dans fglm\], $NF_\leq (x^\beta) \in \pi^{-cond_{\leq, \leq_2}} Vect_R ( \{ NF_\leq (x^\alpha) \vert x^\alpha \in B_{\leq_2}, \: x^\alpha < x^\beta \} )$. Precision being finite, this tells us that $\lambda = P_1 v=P_1 NF_\leq (x^\beta)$ has coefficients of the form $O(\pi^{l'})$ on its rows of index $i>s$. This corresponds to being in the image of $\Delta$.
Hence, the **if** test succeeds, and $x^\beta$ is not added to $B_2$. Points *(i)* and *(ii)* are still satisfied.
We now consider the first case, where $x^\beta \in B_{\leq_2}$. Once again, two cases are possible. The first one is the following: we have enough precision so that, when computing $\lambda = P_1 v$ where $v=NF_{\leq}(x^\beta)$, we can prove that $v$ is not in $Vect(NF_\leq (B_{\leq_2}))$. In other words, we are in the *else* case, and $x^\beta$ is rightfully added to $B_2$. Points *(i)* and *(ii)* remain satisfied. In the other case, we do not have enough precision so that, when computing $\lambda = P_1 v$ with $v=NF_{\leq}(x^\beta)$, we can prove that $v$ is not in $Vect(NF_\leq (B_{\leq_2}))$. In other words, numerically, we get $NF_\leq (x^\beta) \in \pi^{-cond(\mathbf{v})} Vect_R ( \{ NF_\leq (x^\alpha) \vert x^\alpha \in B_{\leq_2}, \: x^\alpha < x^\beta \} )+O(\pi^{N-cond(\mathbf{v})})$. In that case, the **if** condition is successfully passed and, since $cond(\mathbf{v}) \leq cond_{\leq, \leq_2},$ points *(i)* and *(ii)* remain satisfied.
This loop invariant is now enough to conclude the proof. Indeed, since $B_2 \subset B_{\leq_2}$ is always satisfied, we can deduce that $L$ is always included in $B_{\leq_2} \cup \mathscr{B}_{\leq_2}(I)$, and since a monomial cannot be considered more than once inside the **while** loop, there are at most $n \delta$ passes through the loop. Hence the termination.
Regarding correctness, if the **if** test with $card(B_2)=\delta = card (B_{\leq_2})$ is passed, then, because of the inclusion we have proved, we have $B_2 = B_{\leq_2}$. In that case, the leading monomials which passed the **if** are necessarily inside the border $\mathscr{B}_{\leq_2}(I)$, and can indeed be written in the quotient $A/I$ in terms of the monomials of $B_2$ smaller than them. In other words, the linear system solving with the assumption of membership indeed builds a polynomial in $I$. *In fine*, $G_2$ is indeed a Gröbner basis of $I$ for $\leq_2$.
In the second case, where the **if** test fails with $card(B_2) \neq \delta$, the precision was not sufficient.
Algorithm \[Update FGLM stabilise\]:

-   Augment trivially the matrices $Q_1,Q_2$ into square invertible matrices with one more row and one more column.
-   Compute $U_1,V_1$ and $\Delta'$ realizing an approximate SNF of $P_1 \mathbf{v} Q_1$, as well as $U_2,V_2$ the inverses of $U_1,V_1$, following Algorithm \[algo approx snf\].
-   $P_1 := U_1 \times P_1$ ; $Q_1 := Q_1 \times V_1$ ; $P_2 := P_2 \times U_2$ ; $Q_2 := V_2 \times Q_2$ ; $\Delta := \Delta'$.
Algorithm \[calcul des matrices de multiplication\]:

-   $L:= [x_i \epsilon_k \vert i \in \llbracket 1,n \rrbracket \text{ and } \epsilon_k \in B_\leq ],$ ordered increasingly for $\leq$ with no repetition.
-   Return $T_1,\dots,T_n$.
### Analysis of the loss in precision {#subsubsec:analysis-loss-fglm}
We can now analyse the behaviour of the loss in precision during the execution of the stabilized FGLM algorithm \[FGLM stabilise\], and thus estimate what initial precision is big enough for the execution to proceed without error. To that end, we analyse the precision of the computation of the multiplication matrices, and we use the condition number of Definition \[defn cond FGLM\] to control the behaviour of the precision during the execution of the stabilized FGLM algorithm \[FGLM stabilise\]. This is shown in the following propositions.
Let $I,G_1,\leq,B_\leq, \delta, \beta$ be as defined when announcing Theorem \[thm stabilite fglm\]. Then the coefficients of the multiplication matrices for $I$ are of valuation at least $n \delta \beta.$ \[lem:val\_of\_Ti\]
$G_1$ is a reduced Gröbner basis of a zero-dimensional ideal. Hence, it is possible to build a Macaulay matrix $\text{Mac}$ with columns indexed by the monomials of $\text{mon} :=\{ X_i \times \epsilon, i \in \llbracket 1, n\rrbracket, \epsilon \in B_\leq \}$, in decreasing order for $\leq$, and rows of the form $x^\alpha g$, with $x^\alpha$ a monomial and $g \in G_1$, such that this matrix is in row-echelon form, (left-)injective, and all monomials in $\text{mon} \cap LM_{\leq}(I)$ are the leading monomial of exactly one row of $\text{Mac}$. Since $G_1$ is a reduced Gröbner basis, the first non-zero coefficient of each row is $1$ and all other coefficients are of valuation at least $\beta.$ $\text{Mac}$ has at most $n \delta$ columns and rows.
The computation of the reduced row-echelon form of $\text{Mac}$ yields a matrix whose coefficients are of valuation at least $n \delta \beta$, except the first non-zero coefficient of each row which is equal to $1.$
$NF_\leq (x^\alpha)$ for $x^\alpha \in \text{mon} \setminus B_\leq$ can then be read on the row of $\text{Mac}$ of leading monomial $x^\alpha$. It proves that the coefficients of such a $NF_\leq (x^\alpha)$ are of valuation at least $n \delta \beta.$ The result is then clear.
Let $I,G_1,\leq,\leq_2,B_\leq,B_{\leq_2}$ be as defined when announcing Theorem \[thm stabilite fglm\]. Let $M$ be the matrix whose columns are the $NF_\leq (x^\beta)$ for $x^\beta \in B_{\leq_2}$. Then, if the coefficients of the polynomials of $G_1$ are all known up to some precision $O(\pi^N)$ with $N \in \mathbb{Z}_{>0}$, $N>cond_{\leq, \leq_2}(M)+n^2(\delta+1)^2 \beta$, the stabilized FGLM algorithm \[FGLM stabilise\] terminates and returns an approximate Gröbner basis $G_2$ of $I$ for $\leq_2$. The coefficients of the polynomials of $G_2$ are known up to precision $N-n^2(\delta+1)^2 \beta-2cond_{\leq, \leq_2}(M)$.
We first analyse the behaviour of the precision for the computation of the multiplication matrices. There are at most $n\delta$ matrix-vector multiplications in the execution of Algorithm \[calcul des matrices de multiplication\]. The coefficients involved in those multiplications are of valuation at least $n \delta \beta$ thanks to Lemma \[lem:val\_of\_Ti\]. Hence, the coefficients of the $T_i$ are known up to precision $O(\pi^{N-(n \delta)^2 \beta}).$
We now analyse the execution of Algorithm \[FGLM stabilise\]. The computation of $v$ involves the multiplication of $\deg (v)$ of the $T_i$’s with $\mathbf{1}.$ Hence, $v$ is known up to precision $O(\pi^{N-(n \delta)^2 \beta-\deg (v) n \delta \beta}),$ which can be lower-bounded by $O(\pi^{N-(\delta+1)^2 n^2 \beta}).$
As a consequence, all coefficients of $M$ are known up to precision $O(\pi^{N-(\delta+1)^2 n^2 \beta})$ and this is the same for its approximate SNF.
Now, we can address the loss in precision for the linear system solving. Thanks to Proposition \[prop sys lin pas carre\], and with the membership assumption of $v$ in $Im(\mathbf{v})$, a precision $O(\pi^N)$ with $N$ strictly bigger than $(\delta+1)^2 n^2 \beta$ plus the largest valuation $c$ of an invariant factor of $\mathbf{v}$ is enough to solve the linear system $\mathbf{v}W=v,$ and the coefficients of $W$ are determined up to precision $O(\pi^{N-n^2(\delta+1)^2 \beta-2c})$. Lemma \[lem snf iteree\] then allows us to conclude that at any time, $c \leq cond_{\leq, \leq_2}(M)$, hence the result.
### Complexity
To conclude the proof of Theorem \[thm stabilite fglm\], what remains is to give an estimation of the complexity of Algorithm \[FGLM stabilise\]. Regarding the computation of the multiplication matrices, there is no modification concerning complexity, so what we have to study is only the complexity of the iterated SNF computation. This is done in the following lemma:
\[lem iteraton snf\] Let $1 \leq s \leq \delta$ and $prec$ be integers, $k \in \llbracket 1, s \rrbracket$ and $M,C^{(k)}$ be two matrices in $M_{\delta \times s}(K).$ We assume that the coefficients of $M$ satisfy $M_{i,j}=m_{i,j} \delta_{i,j}+O(\pi^{prec})$ for some $m_{i,j} \in K$ and the coefficients of $C^{(k)}$ satisfy $C_{i,j}^{(k)}=c_{i,j} \delta_{j,k}+O(\pi^{prec})$ for some $c_{i,j} \in K$. Let $C_{SNF}(M+C^{(k)})$ be the number of operations in $K$ (at precision $O(\pi^{prec})$) applied on rows and columns to compute an approximate SNF for $M+C^{(k)}$ at precision $O(\pi^{prec}).$ Then $C_{SNF}(M+C^{(k)})\leq s \delta.$
We show this result by induction on $s$. For $s=1$, for any $\delta, prec,k,M$ and $C^{(k)}$, the result is clear.
Let us assume that for some $s \in \mathbb{Z}_{>0}$, we have for any $\delta, prec, k$, $M$ and $C^{(k)} \in M_{\delta \times (s-1)}(K)$ as in the lemma, $C_{SNF}(M+C^{(k)})\leq (s-1) \delta.$
Then, let us take some $\delta \geq s,$ $k \in \llbracket 1, s \rrbracket$ and $prec \in \mathbb{Z}_{\geq 0}.$ Let $M,C^{(k)}$ be two matrices in $M_{\delta \times s}(K)$ such that their coefficients satisfy $M_{i,j}=m_{i,j} \delta_{i,j}+O(\pi^{prec}),$ for some $m_{i,j} \in K,$ and $C_{i,j}^{(k)}=c_{i,j} \delta_{j,k}+O(\pi^{prec})$ for some $c_{i,j} \in K$. Let $N=M+C^{(k)}.$
We apply Algorithm \[algo approx snf\] until the recursive call. Let us assume that the coefficient used as pivot, that is, one $N_{i,j}$ which attains the minimum of the $val(N_{i,j})$’s, is $N_{1,1}$. Then $1$ operation on the columns is done when going through the two consecutive **for** loops in Algorithm \[algo approx snf\]. The only other case is that of the pivot being some $N_{i,k}$ for some $i$. Then $\delta-1$ operations on the rows and $1$ operation on the columns are done.
The matrix $N'=\widetilde{N}_{i \geq 2, j \geq 2}$ can be written $N'=M'+C^{'(k)}$ with $M'$ and $C^{'(k)}$ in $M_{(\delta-1) \times (s-1)}(K)$ of the desired form, for $k=s-1$ if the pivot $N_{i,j}$ is $N_{1,1}$ and $k=i$ if it is $N_{i,s}$. By applying the induction hypothesis on $N'$, we obtain that $C_{SNF}(M+C^{(k)})\leq \delta+(\delta-1)\times (s-1) \leq \delta s.$
The result is then proved by induction.
We then have the following result regarding the complexity of Algorithm \[FGLM stabilise\] :
\[prop complexite fglm stabilise\] Let $G_1$ be an approximate reduced Gröbner basis, for some monomial ordering $\leq,$ of some zero-dimensional $I \subset A$ of degree $\delta$, and let $\leq_2$ be some monomial ordering. We assume that the coefficients of $G_1$ are known up to precision $O(\pi^N)$ for some $N>cond_{\leq,\leq_2}$. Then, the complexity of the execution of Algorithm \[FGLM stabilise\] is in $O(n \delta^3)$ operations in $K$ at absolute precision $O(\pi^N)$.
Firstly, we remark that the computation of the multiplication matrices is in $O(n \delta^3)$ operations at precision $O(\pi^N)$. Now, we consider what happens inside the **while** loop in Algorithm \[FGLM stabilise\]. The computations of approximate SNFs through Algorithm \[Update FGLM stabilise\] are in $O(\delta^2)$ operations at precision $O(\pi^N)$ thanks to Lemma \[lem iteraton snf\]. The solving of the linear systems thanks to Proposition \[prop sys lin pas carre\] is also in $O(\delta^2)$ operations at precision $O(\pi^N)$. There are at most $n \delta$ passes through this loop thanks to the proof of termination in Proposition \[prop correction et terminaison pour FGLM stabilise\]. The result is then proved.
We can recall that the complexity of the classical FGLM algorithm is also in $O(n \delta^3)$ operations over the base field.
Shape position {#section: stabilite shape}
==============
In this Section, we analyse the special variant of FGLM to compute a shape position Gröbner basis. We show that the gain in complexity observed in the classical case is still satisfied in our setting. We can combine this result with that of [@Vaccon:2014] to express the loss in precision to compute a shape position Gröbner basis starting from a regular sequence.
Grevlex to shape
----------------
To speed up the computation of the multiplication matrices, we use the following notion.
$I$ is said to be semi-stable for $x_n$ if for all $x^\alpha$ such that $x^\alpha \in LM(I)$ and $x_n \mid x^\alpha$ we have for all $k \in \llbracket 1, n-1 \rrbracket$ $\frac{x_k}{x_n} x^\alpha \in LM(I).$
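For illustration, here is a small Python check of this condition, assuming $LM(I)$ is described by the exponent vectors of its minimal generators (monomial membership then reduces to a divisibility test, and a short divisibility argument reduces the general condition to the generators). The function names and the two toy staircases are ours.

```python
def in_LM(u, gens):
    """x^u lies in LM(I) iff some generator exponent divides u."""
    return any(all(ui >= gi for ui, gi in zip(u, g)) for g in gens)

def is_semi_stable_for_xn(gens, n):
    for u in gens:
        if u[n - 1] == 0:              # x_n does not divide x^u
            continue
        for k in range(n - 1):         # multiply by x_k / x_n
            v = list(u)
            v[n - 1] -= 1
            v[k] += 1
            if not in_LM(tuple(v), gens):
                return False
    return True

# LM(I) = <x1^2, x1 x2, x2^2, x1 x3, x2 x3, x3^2> is semi-stable for x3 ...
print(is_semi_stable_for_xn([(2,0,0), (1,1,0), (0,2,0),
                             (1,0,1), (0,1,1), (0,0,2)], 3))   # True
# ... while LM(J) = <x1^2, x2^2> (n = 2) is not: x1 x2 is missing
print(is_semi_stable_for_xn([(2, 0), (0, 2)], 2))              # False
```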
The use of semi-stability is explained in Proposition 4.15, Theorem 4.16 and Corollary 4.19 of [@Huot:13] (see also [@Faugere-Huot:2013]), which we recall here:
\[prop:Huot\] Applying FGLM for a zero-dimensional ideal $I$ starting from a Gröbner basis $G$ of $I$ for grevlex:

1.  $T_i 1$ ($i<n$) can be read from $G$ and requires no arithmetic operation;
2.  If $I$ is semi-stable for $x_n,$ $T_n$ can be read from $G$ and requires no arithmetic operation;
3.  After a generic change of variables, $I$ is semi-stable for $x_n.$
The FGLM algorithm can then be adapted to this setting in the special case of the computation of a Gröbner basis of an ideal in shape position, with Algorithm \[FGLM stabilise avec shape depuis grevlex\].
If the ideal $I$ is weakly grevlex (or the initial polynomials satisfy the more restrictive **H2** of [@Vaccon:2014]), then $I$ is semi-stable for $x_n.$
The remaining of this Section is then devoted to the proof of Theorem \[thm:FGLMp:grevlex:lex\].
Correctness, termination and precision
-------------------------------------
We begin by proving correctness and termination of this algorithm.
We assume that the coefficients of the polynomials of the reduced Gröbner basis $G_1$ for grevlex are known up to a big enough precision, and that the ideal $I=\left\langle G_1 \right\rangle$ is in general position and semi-stable for $x_n$. Then Algorithm \[FGLM stabilise avec shape depuis grevlex\] terminates and returns a Gröbner basis for lex of $I$, yielding a univariate representation. The time complexity is in $O(\delta^3) + O(n \delta^2)$.
As soon as one can certify that the rank of $M$ is $\delta$, the dimension of $A/I$, we can certify that $I$ possesses a univariate representation. Correctness and termination are then clear. Computing $T_n$ and the $T_i 1$ is free, computing the SNF is in $O(\delta^3)$ and solving the linear systems is in $O(n \delta^2),$ hence the complexity is clear.
What remains to be analysed is the loss in precision. To that end, we again use the condition number of $I$ (from grevlex to lex) and the smallest valuation of a coefficient of $G_1.$
\[prop: prec of shape from grevlex\] Let $G_1$ be the reduced Gröbner basis for grevlex of some zero-dimensional ideal $I \subset A$ of degree $\delta$. We assume that the coefficients of the polynomials of $G_1$ are known up to precision $O(\pi^N)$ for some $N \in \mathbb{Z}_{>0},$ except the leading coefficients, which are exactly equal to $1$. Let $\beta$ be the smallest valuation of a coefficient of $G_1.$ Let $m=cond_{grevlex, lex}(I).$ We assume that $m-\delta \beta <N$, that $I$ is in shape position and semi-stable for $x_n$. Then Algorithm \[FGLM stabilise avec shape depuis grevlex\] computes a Gröbner basis $(x_1-h_1,\dots,x_{n-1}-h_{n-1}, h_n)$ of $I$ for lex which is in shape position. Its coefficients are known up to precision $O(\pi^{N-2m+\delta \beta})$. The valuation of the coefficients of $h_n$ is at least $\beta \delta-m,$ and those of the $h_i$’s is at least $\beta-m.$
There is no loss in precision for the computation concerning the multiplication matrices, since it only involves reading coefficients off $G_1$. Their coefficients are of valuation at least $\beta.$ The columns of $M:=Mat_{B_{grevlex}}(NF_\leq(1),\dots,$ $NF_\leq (x_n^{\delta-1}))$ are obtained using $T_n.$ Their coefficients are known up to precision $O(\pi^{N+(\delta-1) \beta})$ and are of valuation at least $(\delta-1) \beta.$ For $\mathbf{z}[\delta]$, it is $O(\pi^{N+\delta \beta})$ and $\delta \beta.$ The only remaining step to analyse is then the solving of the linear systems, which is clear thanks to Theorem \[thm sys lin\].
Summary on shape position
-------------------------
Thanks to the results of [@Vaccon:2014] and [@Vaccon-these], we can express the loss in precision to compute a Gröbner basis in shape position under some genericity assumptions. Let $F=(f_1,\dots,f_n) \in R[X_1,\dots, X_n]$ be a sequence of polynomials satisfying the hypotheses **H1** and **H2** of [@Vaccon:2014] for grevlex. Let $D$ be the Macaulay bound of $F$ and $I=\left\langle F \right\rangle.$ We assume that $I$ is strongly stable for $x_n.$ Let $\delta = \deg (I).$ Let $\beta=-prec_{MF5}(F, D, grevlex)$ be the bound on loss in precision to compute an approximate grevlex Gröbner basis of [@Vaccon:2014]. Let $\gamma=-\delta \beta+2 cond_{grevlex,lex}(I)$.
If the coefficients of the $f_i$’s are known up to precision $N>\gamma$, then one can compute a shape position Gröbner basis for $I$ with precision $N-\gamma$ on its coefficients.
An approximate reduced Gröbner basis of $I$ for grevlex is determined up to precision $N+2\beta$ and its coefficients are of valuation at least $\beta.$ Thanks to Proposition \[prop: prec of shape from grevlex\], the lexicographical Gröbner basis of $I$ is of the form $x_1-h_1(x_n),\dots,$ $x_{n-1}-h_{n-1}(x_n),h_n(x_n).$ Moreover, the coefficients of $h_n$ are of valuation at least $\delta \beta-cond_{grevlex,lex}(I)$ and known at precision $N-\delta \beta-2cond_{grevlex,lex}(I).$ For the other $h_i$’s, the coefficients are of valuation at least $\beta-cond_{grevlex,lex}(I)$ and precision $N-\gamma.$
As a corollary, if $x_n \in R$ is a root of $h_n$ such that $val(h_n'(x_n))=0$, then $x_n$ lifts to a point $x \in V(I),$ known at precision $N-2\gamma.$
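The lifting step behind this corollary is a plain Newton/Hensel iteration on the univariate polynomial $h_n$. The following sketch, on a toy polynomial of ours (not taken from the paper), assumes coefficients in $\mathbb{Z}_p$ stored modulo $p^N$ and a starting value at which the derivative has valuation $0$.

```python
def newton_lift(h, dh, x0, p, N):
    """Lift a root x0 of h modulo p to a root modulo p^N (needs val(dh(x0)) = 0)."""
    x, k = x0 % p, 1
    while k < N:
        k = min(2 * k, N)                       # quadratic convergence
        m = p**k
        x = (x - h(x) * pow(dh(x) % m, -1, m)) % m
    return x

p, N = 7, 30
h = lambda x: x**2 - 2                          # 2 is a square in Q_7
dh = lambda x: 2 * x
r = newton_lift(h, dh, 3, p, N)                 # 3^2 = 9 = 2 (mod 7)
print(r, (r * r - 2) % p**N)                    # the second value is 0
```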
Algorithm \[FGLM stabilise avec shape depuis grevlex\]:

-   Read the multiplication matrix $T_n$ for $I$ and grevlex using $G$.
-   $G_2 := \emptyset$.
-   Read the $\mathbf{y}[i]:= T_i 1$’s from $G$ ($1 \leq i <n$).
-   $\mathbf{z}[0]:=1$.
-   $M:= Mat_{B_\leq}(\mathbf{z}[0],\dots,\mathbf{z}[\delta-1])$.
-   Compute $\Delta$ the SNF of $M$ with $\Delta = P M Q$.
Experimental Results
====================
An implementation in Sage [@sage] of the previous algorithms is available at <http://www2.rikkyo.ac.jp/web/vaccon/fglm.sage>. Since the main goal of this implementation is the study of precision, it has not been optimized with regard to time-complexity. We have applied the main Matrix-F5 algorithm of [@Vaccon:2014] to homogeneous polynomials of given degrees, with coefficients taken randomly in $\mathbb{Z}_p$ (using the natural Haar measure): $f_1,\dots, f_s,$ of degrees $d_1, \dots, d_s$ in $\mathbb{Z}_p[X_1,\dots, X_s],$ known at precision $O(p^{150}),$ for grevlex, using the Macaulay bound $D$. We also used the extension to the affine case of [@Vaccon:2014] to handle affine polynomials in the same setting (we specify this property in the column aff.). We have then applied our $p$-adic variant of the FGLM algorithm, specialized for grevlex to lex or not, on the obtained Gröbner bases to get Gröbner bases for the lex order.
| $d =$ | $nb_{test}$ | aff. | fast | $D$ | $p$   | max | mean | fail   |
|-------|-------------|------|------|-----|-------|-----|------|--------|
|       | 20          | no   | no   | 7   | 2     | 21  | 3    | (0,0)  |
|       | 20          | no   | no   | 8   | 2     | 21  | 3    | (0,0)  |
|       | 20          | no   | no   | 10  | 2     | 28  | 5.2  | (0,0)  |
|       | 20          | yes  | no   | 7   | 2     | 150 | 78   | (0,0)  |
|       | 20          | yes  | no   | 8   | 2     | 149 | 92   | (0,5)  |
|       | 20          | yes  | no   | 10  | 2     | 150 | 118  | (0,11) |
|       | 20          | yes  | yes  | 7   | 2     | 145 | 65   | (0,1)  |
|       | 20          | yes  | yes  | 8   | 2     | 150 | 89   | (0,7)  |
|       | 20          | yes  | yes  | 10  | 2     | 156 | 124  | (0,15) |
|       | 20          | no   | no   | 7   | 65519 | 0   | 0    | (0,0)  |
|       | 20          | no   | no   | 10  | 65519 | 0   | 0    | (0,0)  |
|       | 20          | yes  | no   | 7   | 65519 | 0   | 0    | (0,0)  |
|       | 20          | yes  | no   | 10  | 65519 | 0   | 0    | (0,0)  |
|       | 20          | yes  | yes  | 7   | 65519 | 0   | 0    | (0,0)  |
|       | 20          | yes  | yes  | 10  | 65519 | 0   | 0    | (0,0)  |
This experiment has been realized $nb_{test}$ times for each given choice of parameters. We have reported in the table above the maximal (column max), resp. mean (column mean), loss in precision (in successful computations), and the number of failures. This last quantity is given as a couple: the first part is the number of failures for the Matrix-F5 part and the second for the FGLM part.
We remark that these results suggest a difference of an order of magnitude in the loss in precision between the affine and the homogeneous cases. Qualitatively, we remark that, for some given initial degrees, more computations (particularly computations involving loss in precision) are done in the affine case, because of the inter-reduction step. Also, it seems clear that the loss in precision decreases when $p$ increases; in particular, on small instances like here, losses in precision when $p=65519$ are very unlikely.
Future works
============
Following this work, it would be interesting to investigate whether the sub-cubic algorithms of [@Faugere-Mou:2011; @Faugere-Mou:2013; @Mou:2013; @Faugere-Huot:2013; @Faugere:2014; @Huot:13] could be adapted to the $p$-adic setting with a reasonable loss in precision. Another possibility of interest for $p$-adic computation would be the extension of FGLM to tropical Gröbner bases.
[^1]: See Chapter 1 of [@Vaccon-these] for more details on the comparison between these two strategies.
[^2]: This is a direct consequence of the fact that for an ideal $\mathscr{I}$ in the ring of integers of a discrete valuation field, any element $x \in \mathscr{I}$ such that $val(x)=\min val(\mathscr{I})$ generates $\mathscr{I}$, with the converse being true.
---
abstract: 'Faraday Rotation (FR) of CMB polarization, as measured through mode-coupling correlations of E and B modes, can be a promising probe of a stochastic primordial magnetic field (PMF). While the existence of a PMF is still hypothetical, there will certainly be a contribution to CMB FR from the magnetic field of the Milky Way. We use existing estimates of the Milky Way rotation measure (RM) to forecast its detectability with upcoming and future CMB experiments. We find that the galactic RM will not be seen in polarization measurements by Planck, but that it will need to be accounted for by CMB experiments capable of detecting the weak lensing contribution to the B-mode. We then discuss prospects for constraining the PMF in the presence of FR due to the galaxy under various assumptions that include partial de-lensing and partial subtraction of the galactic FR. We find that a realistic future sub-orbital experiment, covering a patch of the sky near the galactic poles, can detect a scale-invariant PMF of 0.1 nano-Gauss at better than 95% confidence level, while a dedicated space-based experiment can detect even smaller fields.'
author:
- 'Soma De$^{1}$, Levon Pogosian$^{2,3}$, Tanmay Vachaspati$^{4}$'
title: CMB Faraday rotation as seen through the Milky Way
---
Introduction
============
The discovery of a primordial magnetic field[^1] (PMF) would have a profound impact on our understanding of the early universe [@Grasso:2000wj] and would help explain the origin of the observed magnetic fields in galaxies and clusters [@Widrow:2002ud]. Several observational probes are currently being investigated and the cosmic microwave background (CMB) is a promising tool to discover and study the PMF on the largest cosmic scales.
A PMF can affect the thermal distribution and the polarization of the CMB. The most relevant CMB signature depends on the form of the magnetic field spectrum and also on the level of instrumental noise for different observations. For instance, current CMB bounds of a few nG [@Ade:2013lta; @Paoletti:2008ck; @Paoletti:2012bb] on a scale-invariant PMF [@Turner:1987bw; @Ratra:1991bn] derive from temperature anisotropies sourced by the magnetic stress-energy. Comparable bounds were recently obtained in [@Kahniashvili:2012dy] using the Lyman-$\alpha$ forest spectra [@Croft:2000hs]. As the stress-energy is quadratic in the magnetic field strength, improving this bound by a factor of $2$ would require a $16$ fold improvement in the accuracy of the spectra. On the other hand, Faraday Rotation (FR) of CMB polarization, being linear in the magnetic field strength, offers an alternative way to improve CMB bounds on a scale-invariant PMF by an order of magnitude [@Yadav:2012uz], and possibly more, with the next generation CMB experiments. Even current CMB data is close to providing competitive bounds on scale-invariant PMF through their FR signature [@Kahniashvili:2008hx; @Kahniashvili:2012dy].
In contrast, a PMF produced causally in an aftermath of the electroweak or QCD phase transition [@Vachaspati:1991nm] would have a blue spectrum [@Durrer:2003ja; @Jedamzik:2010cy] with most of the power concentrated near a small cutoff scale set by the plasma conductivity at recombination. It has been suggested [@Jedamzik:2011cu] that such small-scale fields can appreciably alter the recombination history via enhancement of small scale baryonic inhomogeneities. Consequently, the strongest CMB constraint would come from an overall shift in the distance to last scattering. According to [@Jedamzik:2011cu], upcoming CMB experiments can rule out causally produced fields with a comoving strength larger than $10^{-11}~{\rm G}$.
Given the promise of FR to significantly improve the bounds on scale-invariant PMF [@Yadav:2012uz], it is important to assess the strength of the rotation induced by the magnetic fields in our own galaxy. A rotation measure (RM) map of the Milky Way has recently been assembled by Oppermann et al [@Oppermann:2011td] based on an extensive catalog of FR of compact extragalactic polarized radio sources. They also presented the angular power spectrum of RM and find that it is very close to a scale-invariant spectrum at $\ell \lesssim 200$. Fig. 1 of [@Oppermann:2011td] shows that the root-mean-square RM away from the galactic plane is about $20$-$30~{\rm rad/m}^2$. Additional insight into galactic RM on smaller scales can be gained from Haverkorn et al [@Haverkorn:2003ad] who studied Stokes parameters generated by diffuse polarized sources residing inside the Milky Way. Their power spectra generally agree with the scale-invariance of the large scale tail of the power spectrum of [@Oppermann:2011td], but also show evidence of a break in the spectrum, indicating a power law suppression of the RM power on small scales. Such a break in the RM spectrum is also present in the model of Minter and Spangler [@MinterSpangler1996] who, by fitting to a small set of RM data from extragalactic sources, argued that the spectrum should be set by Kolmogorov turbulence on small scales.
The approximately scale-invariant shape of the galactic RM spectrum can obscure FR constraints on scale-invariant PMF. Based on calculations in [@Pogosian:2011qv; @Yadav:2012uz], one can estimate that a typical galactic RM of $30~{\rm rad/m}^2$ corresponds to an [*effective*]{}, or “energy equivalent” (see Eq. (\[Beff\]) for the definition) PMF strength of $0.6$ nG. This is well below the bounds from Planck [@Ade:2013lta] obtained using non-FR diagnostics and, as we confirm in this paper, below the levels detectable by Planck via the mode-coupling statistics induced by FR. However, as shown in [@Yadav:2012uz], future CMB experiments capable of detecting the weak lensing B-mode will be able to constrain a scale-invariant PMF of $0.1$ nG strength at $100$ GHz. Operating at lower frequencies and combining FR information from several channels may further improve the constraints. Clearly, with such high sensitivities the contribution of the galactic RM to the total FR signal will become important.
In this paper, we investigate the imprint of the galactic RM on CMB observables, and its impact on detectability of the PMF via the EB and TB mode-coupling correlations. We start by introducing the necessary concepts and reviewing the known galactic RM measurements in Sec. \[FRCMB\]. In Sec. \[sec:detection\], we estimate detectability of the galactic RM by upcoming and future CMB experiments and forecast future bounds on the scale-invariant PMF under various assumptions. We conclude with a summary in Sec. \[sec:summary\].
Faraday Rotation of CMB polarization {#FRCMB}
====================================
Basics of Faraday Rotation {#basics}
--------------------------
A CMB experiment measures Stokes parameters in different directions on the sky, with parameters $Q$ and $U$ quantifying linear polarization. If CMB photons pass through ionized regions permeated by magnetic fields, the direction of linear polarization is rotated by an angle [@Kosowsky:1996yc; @Harari:1996ac] $$\alpha(\hat{\bf n}) = \lambda_0^2 \ RM(\hat{\bf n})= \frac{3}{{16 \pi^2 e}} \lambda_0^2
\int \dot{\tau} \ {\bf B} \cdot d{\bf l} \ ,
\label{alpha-FR}$$ where $\hat{\bf n}$ is the direction along the line of sight, $\dot{\tau}$ is the differential optical depth, $\lambda_0$ is the observed wavelength of the radiation, ${\bf B}$ is the “comoving” magnetic field strength (the physical field strength scales with the expansion as ${\bf B}^{\rm phys}={\bf B}/a^2$) and $d{\bf l}$ is the comoving length element along the photon trajectory. The rotation measure, $RM$, is a frequency independent quantity used to describe the strength of FR. Under the rotation of the polarization vector, the two Stokes parameters transform as $$Q(\nu) + iU(\nu) =(Q^{(0)} + iU^{(0)}) \exp(2i\alpha(\nu)) \ ,
\label{qu-rotation}$$
where $Q^{(0)}$ and $U^{(0)}$ are the Stokes parameters at last scattering. As an approximation, $Q^{(0)}$ and $U^{(0)}$ can be taken to be the observed Stokes parameters at a very high frequency, since the FR falls off as $1/\nu^2$. A PMF contributes to FR primarily at the time of last scattering, just after the polarization was generated, while the mean ionized fraction was still high and the field strength was strongest. Subsequently, additional FR is produced in ionized regions along the line of sight that contain magnetic fields, such as clusters of galaxies and our own galaxy. Eq. (\[alpha-FR\]) implies that a significant FR angle can be produced by a small magnetic field over a very large distance, which is the case at recombination, or by a larger magnetic field over a smaller path, which is the case for the Milky Way. We will not discuss FR from clusters, which are likely to have a white noise spectrum and contribute to CMB polarization on small scales. We are more concerned with the contribution from the Milky Way, which can look very similar to that of a scale-invariant PMF.
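As a quick order-of-magnitude illustration of Eq. (\[alpha-FR\]) (a back-of-the-envelope sketch; the numbers below simply evaluate $\alpha = \lambda_0^2\, RM$ for a typical galactic rotation measure of $30~{\rm rad/m^2}$ at two of the frequencies discussed in this paper):

```python
import math

c = 299792458.0                            # speed of light in m/s

def rotation_angle_deg(RM, freq_GHz):
    lam = c / (freq_GHz * 1e9)             # observed wavelength in meters
    return math.degrees(RM * lam**2)       # alpha = lambda_0^2 * RM

for nu in (30.0, 100.0):
    print(f"RM = 30 rad/m^2 at {nu:.0f} GHz -> "
          f"alpha = {rotation_angle_deg(30.0, nu):.3f} deg")
# about 0.17 deg at 30 GHz, falling as 1/nu^2 to about 0.015 deg at 100 GHz
```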
In theory, it is possible to extract a map of the FR angle by taking maps of $Q$ and $U$ at different frequencies and using Eq. (\[qu-rotation\]) to solve for the rotation in each pixel. Each additional frequency channel provides a separate measurement of $\alpha(\nu)$, thus reducing the error bar on the measurement of $RM({\hat{\bf n}})$. Such a direct measurement of FR may be challenging when the $Q$ and $U$ signal in each pixel is dominated by noise.
Another way to extract the rotation field is from correlations between E and B modes induced by FR using quadratic estimators [@Kamionkowski:2008fp; @Yadav:2009eb; @Gluscevic:2009mm; @Gluscevic:2012me; @Yadav:2012tn], which is analogous to the method introduced in [@Hu:2001kj] for isolating the weak lensing contribution to CMB anisotropy. Unlike a direct extraction of FR from Eq. (\[qu-rotation\]), the quadratic estimator method does not utilize frequency dependence and is statistical in nature. It formally involves summing over all pixels of $Q$ and $U$ in order to reconstruct $\alpha$ in a given direction on the sky.
For small rotation angles, the relation between the spherical expansion coefficients of the E, B and $\alpha$ fields can be written as [@Gluscevic:2009mm] $$B_{lm}=2\sum_{LM}\sum_{l' m'}\alpha_{LM} E_{l' m'}
\xi_{lml'm'}^{LM}H_{ll'}^L \ ,
\label{eq:blm}$$ where $\xi_{lml'm'}^{LM}$ and $H_{ll'}^L$ are defined in terms of Wigner $3$-$j$ symbols as [@Gluscevic:2009mm] $$\begin{aligned}
\xi_{lml'm'}^{LM} &\equiv (-1)^m \sqrt{ (2l+1)(2L+1)(2l'+1) \over 4\pi}
\left(
\begin{array}{ccc}
l & L & l' \\
-m & M & m'
\end{array}
\right) \\
H_{ll'}^L &\equiv
\left(
\begin{array}{ccc}
l & L & l' \\
2 & 0 & -2
\end{array}
\right) \ ,\end{aligned}$$ and the summation is restricted to even $L+l'+l$. Eq. (\[eq:blm\]) implies correlations between multipoles of E and B modes that are caused by the FR. Since the primordial T and E are correlated, FR also correlates T and B.
![The CMB B-mode spectrum from Faraday rotation sourced at $30$ GHz by a scale-invariant primordial magnetic field of 1 nG strength (red solid), by the full sky galactic magnetic field (blue dot), and by the galactic field with Planck's sky mask ($f_{\rm sky}=0.6$). The black short-dash line is the input E-mode spectrum, the black dash-dot line is the contribution from inflationary gravitational waves with $r=0.1$, while the black long-dash line is the expected contribution from gravitational lensing by large scale structure.[]{data-label="fig:cl"}](clbb){height="0.46\textwidth"}
The quadratic estimator method is based on the assumption of statistical isotropy of primordial perturbations. This implies statistical independence of primordial $E_{l m}$ and $B_{l m}$ for different $lm$ pairs, [*e.g.*]{} $\langle E_{l m}^* E_{l'm'} \rangle = \delta_{ll'} \delta_{mm'} C_l^{EE}$ for the primordial E-mode. Furthermore, if primordial fields are Gaussian, all of their correlation functions can be expressed in terms of the power spectra. FR introduces correlations between unequal $lm$ and generates connected four-point correlations. The corresponding non-Gaussian signal can be used to extract the rotation angle. Namely, given a CMB polarization map, one constructs quantities such as [@Kamionkowski:2008fp; @Gluscevic:2009mm]
$${\hat D}_{ll'}^{LM,{\rm map}}={4\pi \over (2l+1)(2l'+1)} \sum_{mm'} B_{lm}^{\rm map}E_{l'm'}^{\rm map*} \xi_{lml'm'}^{LM}
\label{eq:dll}$$ which is the minimum variance unbiased estimator for $D_{ll'}^{LM} = 2 \alpha_{LM}C_l^{EE}H_{ll'}^L$, as shown in [@Pullen:2007tu]. Then, each pair of $l$ and $l'$ provides an estimate of $\alpha_{LM}$ via $$[{\hat \alpha}_{LM}]_{ll'} = {{\hat D}_{ll'}^{LM,{\rm map}} \over 2C_l^{EE}H_{ll'}^L} \ .
\label{alphallpr}$$
The minimum variance estimator ${\hat \alpha}_{LM}$ is obtained from an appropriately weighted sum over the estimators for each $ll'$ pair. If all such pairs were statistically independent, the weighting would be given by the inverse variance of each estimator. However, one has to account for the correlation between the $ll'$ and $l'l$ pairs, and a detailed derivation of the appropriate weighting can be found in [@Kamionkowski:2008fp]. Quantities such as ${\hat D}_{ll'}^{LM,{\rm map}}$ can also be constructed from products of T and B or, more generally, all possible quadratic combinations, [*i.e.*]{} \{EB, BE, TB, BT, TE, ET, EE\}. One can construct an estimator ${\hat \alpha}_{LM}$ that utilizes all these combinations while accounting for the covariance between them [@Gluscevic:2009mm]. In this paper, we opt to consider EB/BE and TB/BT separately, since one of them is typically much more informative than the other combinations. For further details of the method the reader is referred to [@Kamionkowski:2008fp; @Gluscevic:2009mm].

We will assume a “stochastic” primordial magnetic field, such that its value at any given location is described by a probability distribution function. This implies that one can only predict statistical properties of FR, such as the rotation power spectrum, $C_L^{\alpha \alpha}$. The estimator for the power spectrum directly follows from the estimator of $\alpha_{LM}$ and derives from four-point correlations, such as EBEB. In the next section, we will examine detectability of the rotation spectrum caused by the galactic and primordial magnetic fields.

We note that the form of the quadratic estimator allows for contributions from the monopole ($L=0$) and dipole ($L=1$) of the FR field which, in principle, should not be ignored. However, the monopole is generally not expected for FR, since it would imply a non-zero magnetic charge enclosed by the CMB surface, while the dipole, corresponding to a uniform magnetic field, is strongly constrained for the PMF and is relatively unimportant for the galactic RM.
In addition to mode-coupling correlations of EB and TB type, FR also contributes to the B-mode polarization spectrum, $C_l^{BB}$. In Fig. \[fig:cl\] we show the B-mode spectrum at $30$ GHz due to FR by a PMF with a scale-invariant spectrum and effective field strength of $1~{\rm nG}$. The B-mode spectrum due to FR by the galactic field is also shown for two choices of sky cuts. For reference, we show the $E$ mode auto-correlation spectrum which acts as a source for the FR B modes, the B modes from inflationary gravitational waves with $r=0.1$, and the expected contribution from weak lensing (WL). As can be seen from Fig. \[fig:cl\], the shape of the FR induced B mode spectrum largely mimics that of the E-mode, and the galactic contribution has a very similar shape to that of the scale-invariant PMF. A more detailed discussion of the FR induced B-mode spectrum can be found in [@Pogosian:2011qv]. As shown in Sec. \[sec:detection\], the FR signatures of the galactic or scale-invariant PMF are always less visible in the B-mode spectrum compared to the EB quadratic estimator, with the latter promising to provide the tightest constraints. We note that for causally generated PMF, which have blue spectra with most of the power concentrated at the dissipation scale, the signal-to-noise ratio (SNR) for detecting FR in BB is higher than that in the quadratic estimators [@Yadav:2012uz], although the resulting constraints are not competitive with other probes of causal fields. In what follows, we will focus on FR signatures of scale-invariant PMF only.

A scale-invariant primordial magnetic field {#pmf}
-------------------------------------------
Assuming statistical homogeneity and isotropy, the
magnetic field correlation function in Fourier space is~\cite{1975mit..bookR....M}
\begin{eqnarray}
\langle {b}^* _i ({\bm k}) {b} _j ({\bm k}') \rangle
&=&
\left[ \left(\delta _{ij} - {k_i k_j \over k^2} \right)
S (k) + i \epsilon_{ijl} {k_l \over k} A(k) \right]
\nonumber \\
&& \hskip 2 cm \times (2 \pi)^3 \delta^3 ({\bm k} -{\bm k}')
\end{eqnarray}
where repeated indices are summed over, $b_i({\bm k})$ denotes the Fourier transform of the magnetic field at
wave vector ${\bm k}$, $k=|{\bm k}|$, and $S(k)$ and $A(k)$ are the symmetric and
antisymmetric (helical) parts of the magnetic field power spectrum.
The helical part of the power spectrum does not play a role in Faraday Rotation;
only the symmetric part of the spectrum is relevant for us. On scales larger than
the inertial scale, {\it i.e.} $k < k_I$, $S(k)$ will fall off as a power law. On scales
smaller than a dissipative scale, {\it i.e.} $k > k_d$, the spectrum will get sharply
cut-off. A second power-law behavior is possible for $k_I < k < k_d$.
However, we shall assume that $k_I \approx k_d$, so that the magnetic field
spectrum is given by a single power law behavior at all scales larger than the
dissipation scale. These features can be summarized by \cite{Pogosian:2011qv}
\begin{eqnarray}
S(k) = \begin{cases}
\Omega_{B\gamma} {\rho}_\gamma
\frac{32\pi^3 n}{k_I^3}
\left ( \frac{k}{k_I} \right ) ^{2n-3}, &\mbox{$0<k<k_{\rm diss}$}\\
0, & \mbox{$k_{\rm diss} < k$}
\end{cases}
\label{eq:PB}
\end{eqnarray}
where $\Omega_{B\gamma}$ is the ratio of cosmological magnetic and photon
energy densities, and $\rho_\gamma$ is the photon energy density.
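Written as code, Eq. (\[eq:PB\]) is just a truncated power law; a minimal sketch in Python (identifying the cut-off $k_{\rm diss}$ with $k_I$, as assumed above, and leaving the units of $k$ and $\rho_\gamma$ to the caller):

```python
import numpy as np

def magnetic_power_spectrum(k, k_I, n, Omega_Bgamma, rho_gamma):
    """Symmetric part S(k) of the PMF power spectrum, Eq. (eq:PB),
    with a sharp cut-off above the dissipation scale (taken here to be k_I)."""
    k = np.asarray(k, dtype=float)
    amplitude = Omega_Bgamma * rho_gamma * 32.0 * np.pi**3 * n / k_I**3
    S = amplitude * (k / k_I) ** (2.0 * n - 3.0)
    return np.where(k < k_I, S, 0.0)
```

For a nearly scale-invariant field one uses a small but non-zero $n$, in line with the discussion of scale invariance below.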
For convenience, we define an ``effective'', or ``energy equivalent'',
magnetic field strength
\begin{equation}
B_{\rm eff} \equiv ( 8\pi \Omega_{B\gamma} \rho_\gamma )^{1/2}
= 3.25\times 10^{-6}\sqrt{\Omega_{B\gamma}} ~{\rm Gauss}.
\label{Beff}
\end{equation}
A uniform magnetic field of strength $B_{\rm eff}$ will have the same energy density as a stochastic magnetic field with the spectrum in Eq. (\[eq:PB\]).
Scale invariance for $k < k_d$ corresponds to $n=0$, in which case the expressions in Eq. (\[eq:PB\]) are not well defined. One reason is that the energy density diverges logarithmically for a strictly scale invariant field. We get around this issue by only considering approximately scale invariant magnetic fields for which $n$ is small. Note that, for scale invariant fields, $B_{\rm eff}$ is equal to the field strength smoothed over a scale $\lambda$ (if it is larger than the dissipation scale), or $B_\lambda$, frequently used in the literature ([*e.g.*]{} in [@Ade:2013lta; @Paoletti:2008ck; @Paoletti:2012bb]).
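The conversion between $\Omega_{B\gamma}$ and $B_{\rm eff}$ in Eq. (\[Beff\]) is a one-liner; a minimal sketch:

```python
import numpy as np

def B_eff_from_Omega(Omega_Bgamma):
    """Effective field strength in Gauss, Eq. (Beff): 3.25e-6 sqrt(Omega_Bgamma) G."""
    return 3.25e-6 * np.sqrt(Omega_Bgamma)

def Omega_from_B_eff(B_eff_gauss):
    """Inverse of Eq. (Beff)."""
    return (B_eff_gauss / 3.25e-6) ** 2

# For example, Omega_Bgamma = 1e-4 corresponds to B_eff ~ 3.25e-8 G, i.e. ~33 nG.
```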
As we have already noted, the galactic RM spectrum is approximately scale invariant. The RM induced by a scale invariant PMF is also scale invariant [@Yadav:2012uz]. Hence the amplitudes of the galactic and PMF rotation measure spectra can be related directly for a scale invariant PMF.
For a scale-invariant angular spectrum $C_\ell$, the quantity $\ell(\ell+1)C_\ell/2\pi$ is constant. Hence, one can define an amplitude $$A_{\rm RM} = \sqrt{\frac{L(L+1)C_L^{\rm RM} }{2\pi}}$$ for the range of $L$ over which the RM spectrum is approximately scale-invariant. One can see from Fig. \[fig:rmcl\] that this holds for the galactic RM for $L\lesssim 100$. Similarly, one can define an amplitude $$A_\alpha = \sqrt{\frac{L(L+1)C_L^{\alpha \alpha} }{2\pi}} \ .$$ The relation between the Faraday rotation angle in radians and the RM is given by $\alpha = c^2 \nu^{-2} {\rm RM} = 10^{-4} \nu^{-2}_{30} {\rm RM} ~{\rm m^2}$, where $\nu_{30}\equiv \nu/30~{\rm GHz}$. For example, a characteristic galactic RM of $30~{\rm rad/m^2}$ at $30~{\rm GHz}$ gives a rotation angle of $3 \times 10^{-3}$ rad, or $0.17^\circ$. The amplitudes $A_\alpha$ in radians and $A_{\rm RM}$ in ${\rm rad/m^2}$ are therefore also related via $$A_\alpha = 10^{-4} \nu^{-2}_{30} A_{\rm RM} ~{\rm m^2} \ . \label{alphaRM}$$
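The frequency scaling in Eq. (\[alphaRM\]) is easy to misplace by a factor of $\nu^2$, so a small helper is useful; a sketch:

```python
def rotation_angle_from_RM(RM_rad_per_m2, nu_GHz):
    """Faraday rotation angle in radians: alpha = 1e-4 (nu/30 GHz)^-2 RM[rad/m^2] m^2."""
    nu30 = nu_GHz / 30.0
    return 1e-4 * RM_rad_per_m2 / nu30**2

# A characteristic galactic RM of 30 rad/m^2 observed at 30 GHz:
# rotation_angle_from_RM(30.0, 30.0) -> 3e-3 rad, i.e. about 0.17 deg.
# The same factor converts A_RM into A_alpha, Eq. (alphaRM).
```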
To gain some intuition into the relative importance of the galactic RM prior to the more detailed forecasts in Sec \[detect-pmf\], let us estimate the $B_{\rm eff}$ of a scale-invariant PMF that gives an RM comparable to that of our galaxy. For this, we can compare the amplitude $A_\alpha$ from the PMF to the amplitude of the galactic spectra at the same frequency. The relation between the PMF and $A_\alpha$ involves an integration along the line of sight (Eq. (\[alpha-FR\])). Rather than trying to find an approximate analytical solution, we use the exact spectra found numerically in [@Pogosian:2011qv; @Yadav:2012uz]. From Fig. 1 of Ref. [@Yadav:2012uz], one can infer that for a scale-invariant PMF with $\Omega_{B\gamma}=10^{-4}$, $(A^{\rm PMF}_\alpha)^2 \approx 2.43 \times 10^{-2}$ rad$^2$ at $30~{\rm GHz}$. We can then use Eqs. (\[alphaRM\]) and (\[Beff\]) to relate $B_{\rm eff}$ to $A_{\rm RM}$: $$B_{\rm eff} \approx 0.021\, A_{\rm RM} ~{\rm nG~m^2/rad} \ . \label{eq:BeffRM}$$ From Fig. \[fig:rmcl\], we can see that $A_{\rm RM} \approx 30~{\rm rad/m^2}$ for the galactic RM spectrum obtained without any sky cuts. It follows from Eq. (\[eq:BeffRM\]) that this corresponds to a scale-invariant PMF with $B_{\rm eff} \approx 0.6~{\rm nG}$. Such a PMF is well below the levels detectable by Planck through its FR signal, but can be detected at high significance by future sub-orbital and space-based experiments [@Yadav:2012uz].
We can also estimate the peak value of the B-mode spectrum, $P_B \equiv \ell_{\rm peak}(\ell_{\rm peak}+1)C_{\ell_{\rm peak}}^{BB}/2\pi$, in ($\mu$K)$^2$, for a given $A_{\rm RM}$ and frequency. From Fig. \[fig:cl\], we see that $P_B \approx 0.03~(\mu$K)$^2 $ for a scale-invariant PMF of $1$ nG strength. Using this fact along with Eq. (\[eq:BeffRM\]), we can deduce $$P_B \approx 0.03 \left( {B_{\rm eff} \over 1~{\rm nG}} \right)^2 \nu_{30}^{-4} ~(\mu{\rm K})^2 \approx 1.3 \times 10^{-5} \left( {A_{\rm RM} \over 1~{\rm rad/m^2}} \right)^2 \nu_{30}^{-4}~(\mu{\rm K})^2 \ .$$ Thus, for a typical galactic $A_{\rm RM}$ of $30$ rad/m$^2$, at $30$ GHz, we should expect a peak in the B-mode spectrum of about $10^{-2}$ ($\mu$K)$^2$, which agrees with the blue dotted line in Fig. \[fig:cl\] corresponding to the unmasked galactic RM. In practice, the galactic plane is masked in all CMB experiments; [*e.g.*]{}, the mask applied to Planck’s $30$ GHz map only keeps $0.6$ of the sky. This significantly reduces the amplitude of the galactic RM spectrum and, as a consequence, of the B-mode spectrum. This is shown as a blue dashed line in Fig. \[fig:cl\]. One can see that the galactic B-mode is at least one, and more likely two, orders of magnitude below the weak lensing signal, so there is almost no hope of seeing it even at a frequency as low as $30$ GHz. Going a factor of $\sim 3$ higher in frequency, which is more typical in CMB studies, results in another factor of $\sim 100$ decrease in the B-mode spectrum.
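These order-of-magnitude relations can be chained together; a sketch using only the numbers quoted above (the $\nu^{-4}$ scaling of the FR-induced B-mode power follows from $\alpha\propto\nu^{-2}$):

```python
def B_eff_from_A_RM(A_RM_rad_per_m2):
    """Effective PMF strength in nG matching a RM amplitude A_RM, Eq. (eq:BeffRM)."""
    return 0.021 * A_RM_rad_per_m2

def peak_BB_from_A_RM(A_RM_rad_per_m2, nu_GHz):
    """Rough peak of the FR-induced B-mode spectrum in (uK)^2, anchored to
    P_B ~ 0.03 (uK)^2 for a 1 nG scale-invariant PMF at 30 GHz."""
    nu30 = nu_GHz / 30.0
    return 0.03 * B_eff_from_A_RM(A_RM_rad_per_m2) ** 2 / nu30**4

# B_eff_from_A_RM(30.)        -> ~0.6 nG
# peak_BB_from_A_RM(30., 30.) -> ~1.2e-2 (uK)^2, as quoted above
```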
One of the conclusions of this section is that the galactic RM is practically invisible in the BB auto-correlation. Hence, we should look at the mode-coupling estimators, as they are known to be much more sensitive to FR [@Yadav:2012uz]. As shown in [@Yadav:2012uz], future experiments can easily probe $0.1~{\rm nG}$ strength PMF fields with the mode-coupling estimators.
The galactic rotation measure {#galaxy}
-----------------------------
![Maps of RM from Oppermann et al [@Oppermann:2011td]. The top panel shows the full sky map, the middle panel shows the associated error, while the bottom panel shows the map after applying the Planck $30~{\rm GHz}$ sky cut. The bottom panel also shows lines of symmetric cuts corresponding to $f_{\rm sky}=0.4$ and $f_{\rm sky}=0.1$. []{data-label="fig:rmmap"}](maps){height="85.00000%"}
![ The RM angular spectra, $L(L+1)C_L^{\rm RM}/2\pi$, obtained from the RM map of Oppermann et al [@Oppermann:2011td] with different cuts. Shown are the RM spectra corresponding to, from top to bottom, a scale-invariant PMF of $1$ nG, galaxy with no sky cut, with a mask used by Planck for their $70$ GHz map, a Planck mask for the $30$ GHz map, and symmetric cuts corresponding to $f_{\rm sky}=0.4$ and $f_{\rm sky}=0.1$. []{data-label="fig:rmcl"}](clrm-gal){height="40.00000%"}
A rotation measure (RM) map of the Milky Way has been put together by Oppermann et al [@Oppermann:2011td] based on an extensive catalog of Faraday rotation data of compact extragalactic polarized radio sources. The RM is strongest along the galactic plane, but remains significant at the poles. However, the latitude dependence of the rms RM disappears at latitudes higher than about $80^\circ$, where it flattens out at an rms value of about $10$ rad/m$^2$. Oppermann et al [@Oppermann:2011td] also presented the angular power spectrum of the RM, calculated after factoring out the latitude dependence. They find that $L(L+1)C^{\rm RM}_L \propto L^{-0.17}$ at $L \lesssim 200$, which is very close to a scale-invariant spectrum.
Some insight into galactic RM on smaller scales is provided by Haverkorn et al [@Haverkorn:2003ad]. They studied Stokes parameters, generated by diffuse polarized sources residing inside the galaxy, at several frequencies around 350 MHz. They focused on two separate patches away from the Galactic plane, each covering about $50$ sq. deg. The rotation measures in both were less than $10$ rad/m$^2$. However, since they are looking at sources inside the galaxy, they only see a fraction of the total RM along each line of sight. Thus, their numbers cannot be directly compared to the RM of [@Oppermann:2011td]. Still, their power spectrum gives a sense of the scale dependence of the total RM, and generally agrees with the flat nature of the large-scale tail of the power spectrum of [@Oppermann:2011td]. At a scale of about $0.3^\circ$ ($L \sim 600$), they see evidence of a break in the spectrum, indicating a power law suppression of the RM power on small scales. Such a break in the spectrum is also present in the model of Minter and Spangler [@MinterSpangler1996], whose analysis is based on a small set of RM data from extragalactic sources. They argued that on small scales the spectrum should be due to Kolmogorov turbulence. The corresponding scaling of $L(L+1)C_L^{\rm RM}$ would be $L^{-2/3}$. To summarize, based on the evidence in [@Oppermann:2011td; @Haverkorn:2003ad; @MinterSpangler1996], if we were to measure RM in a few-degree patches near the galactic poles, we would expect an RM spectrum that is roughly scale invariant at $L<600$, turning into an $L^{-2/3}$ tail on smaller scales. As we will show in the next section, the quadratic estimator of the FR angle is largely uninformative for $L>300$. Hence, we do not need an RM map with a resolution higher than that of Oppermann et al [@Oppermann:2011td].
Note that the CMB polarization even in a small patch around the poles is affected by very long wavelength modes of variations of the RM. In terms of Eq. (\[eq:blm\]), the lowest multipoles of $\alpha_{LM}$ contribute significantly to the much higher multipoles of $B_{l m}$. Thus, to predict the observability of the FR in the CMB in a small patch, one cannot simply use the high $\ell$ portion of the RM spectrum. We should use the full RM spectrum obtained from the RM map after applying a mask normally used to block the galactic plane.
In Fig. \[fig:rmmap\] we show the RM map for our galaxy obtained from the Faraday rotation of extragalactic sources in units of rad/m$^{2}$ by Oppermann et al [@Oppermann:2011td]. The top panel is the full map. The middle panel shows the uncertainty in RM in rad/m$^{2}$. Most of the signal and uncertainty come from the galactic disk. The bottom panel shows the RM map with part of the sky removed using the mask applied to Planck’s $30$ GHz map [@planck:maps]. The bottom panel also shows lines of symmetric latitude cuts corresponding to sky fractions of $f_{\rm sky}=0.1$ and $0.4$.
In Fig. \[fig:rmcl\] we present the power spectra $L(L+1)C^{\rm RM}_{L}/2\pi$ of the galactic RM under different symmetric sky cuts and two Planck mask cuts: one for $30$ GHz with $f_{\rm sky}=0.6$, and one for $70$ GHz with $f_{\rm sky}=0.7$. The figure also shows the RM spectrum from a scale-invariant PMF with $B_{\rm eff}=1$ nG. To calculate the galactic RM spectra, we first use HEALPix [@healpix; @Gorski:2004by] to find $${\tilde a}_{LM} = \int d\Omega \, W(\Omega)\, R(\Omega)\, Y^\ast_{LM}(\Omega) \ , \label{tildealm}$$ where $R(\Omega)$ is the RM field, and $W(\Omega)$ is the mask, which is $0$ if a pixel is masked and $1$ if it is not. We then evaluate $${\tilde C}^{\rm RM}_L = {1 \over f_{\rm sky}(2L+1)} \sum_{M=-L}^{L} {\tilde a}^\ast_{LM} {\tilde a}_{LM} \ , \label{eq:clrm}$$ which are the so-called “pseudo-$C_L$” rescaled by $f_{\rm sky}$.
The commonly used procedure for estimating $C_L$’s from a partial sky is detailed in [@Hivon:2001jp]. It assumes statistical isotropy and involves calculating a conversion matrix that relates “pseudo-$C_L$”, obtained from a partial sky, to the “full-sky” $C_L$. In Eq. (\[eq:clrm\]), as a crude approximation, we ignore the mode-coupling and replace the matrix with a rescaling by the $f_{\rm sky}$ factor. Note that the galactic RM map is not isotropic and the “full-sky” spectrum reconstructed from a given patch is not the same as the actual spectrum evaluated from the full-sky. Thus, our estimates in Sec. \[sec:detection\] of the detectability of the galactic RM spectrum, and of the detectability of the PMF with the galactic RM as a foreground, are specific to particular masks. The error bars in Fig. \[fig:rmcl\] receive contributions from the uncertainty in the measurements of the RM map and from the cosmic variance.
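In practice, the rescaled pseudo-$C_L$ of Eqs. (\[tildealm\]) and (\[eq:clrm\]) amount to a few lines of HEALPix; a minimal sketch using the Python healpy bindings (the file names are placeholders, and both maps are assumed to share the same resolution and ordering):

```python
import numpy as np
import healpy as hp

rm_map = hp.read_map("galactic_rm.fits")        # RM in rad/m^2 (placeholder file)
mask   = hp.read_map("planck_30GHz_mask.fits")  # 1 = keep, 0 = masked (placeholder file)

f_sky = mask.mean()
alm   = hp.map2alm(rm_map * mask)               # \tilde{a}_{LM} of Eq. (tildealm)
cl_rm = hp.alm2cl(alm) / f_sky                  # pseudo-C_L rescaled by f_sky, Eq. (eq:clrm)

L = np.arange(cl_rm.size)
DL_rm = L * (L + 1) * cl_rm / (2.0 * np.pi)     # the quantity plotted in Fig. [fig:rmcl]
```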
Detectability prospects {#sec:detection}
=======================
We are interested in answering two questions: 1) [*How well can CMB polarization experiments measure the galactic RM spectrum?*]{} 2) [*What bounds on a scale-invariant PMF can be placed from measurements of FR?*]{} The galactic FR and the primordial FR contribute in the same way to the quadratic estimators discussed in Sec. \[basics\]. Let ${\hat \alpha}^{EB}_{LM}$ be the estimator of the rotation angle that one reconstructs from linear combinations of the quadratic terms $E^*_{lm}B_{l'm'}$. The variance in ${\hat \alpha}^{EB}_{LM}$, for a statistically isotropic rotation angle, is defined as $$\langle {\hat \alpha}^{EB*}_{LM} {\hat \alpha}^{EB}_{L'M'} \rangle =\delta_{LL'} \delta_{MM'}
\sigma^2_L \ ,
\label{variance-alm}$$ with $\sigma^2_L$ given by [@Kamionkowski:2008fp; @Gluscevic:2009mm] $$\sigma^2_L = C_L^{\alpha \alpha} + N_L^{EB} \ , \label{eq:variance}$$ where $C_L^{\alpha \alpha}$ is the rotation power spectrum, and $N_L^{EB}$ is the part of the variance that does not depend on the rotation angle and contains contributions from the variances of the individual estimators ${\hat D}_{ll'}^{LM,{\rm map}}$ (see Eq. (\[eq:dll\])), while accounting for their covariance. It is given by [@Kamionkowski:2008fp; @Gluscevic:2009mm] $$(N_L^{EB})^{-1} = {1 \over 4\pi} \sum_{\ell \ell'} { (2\ell+1)(2\ell'+1)\left[C^{EE}_{\ell'} H^L_{\ell \ell'}\right]^2 \over {\tilde C}_{\ell'}^{EE}\, {\tilde C}_{\ell}^{BB}} \ , \label{eq:ebnoise}$$ where $\ell+\ell'+L=$ even, and $${\tilde C}^{XX}_\ell \equiv [C^{XX}]^{\rm prim}_\ell+f_{\rm DL}[C^{XX}]^{\rm WL}_\ell+I^{XX}_\ell \ , \label{clvariance}$$ is the measured spectrum, with $XX$ standing for either EE or BB, that includes the primordial contribution, $[C^{XX}]^{\rm prim}$, the weak lensing contribution, $[C^{XX}]^{\rm WL}$, and the noise term associated with the experiment, $I^{XX}_\ell$. We allow for the possibility of partial subtraction of the weak lensing contribution by introducing a de-lensing fraction, $f_{\rm DL}$, in (\[clvariance\]). The efficiency of de-lensing depends on the method used to de-lens [@Hirata:2002jy; @Seljak:2003pn], as well as on the noise and resolution parameters of the experiment. In [@Seljak:2003pn] it was found that the quadratic estimator method of de-lensing [@Hu:2001kj], which is similar but orthogonal [@Kamionkowski:2008fp; @Yadav:2009eb] to the one we use to extract the FR rotation, can reduce the lensing contribution to BB by up to a factor of $7$ as it reaches the white noise limit. On the other hand, the so-called iterative method [@Seljak:2003pn] can further reduce the lensing contribution by a factor of $10$ or larger. In our analysis we have only considered two cases – one with no de-lensing ($f_{\rm DL}=1$) and one with a factor of $100$ reduction of the lensing B-mode ($f_{\rm DL}=0.01$). We adopt the latter optimistic assumption with the aim of illustrating the relative significance of the lensing contamination for an experiment of given noise and resolution. Note that the numerator in Eq. (\[eq:ebnoise\]) contains the unlensed E-mode spectrum, while the denominator includes the lensing contribution to the E- and B-mode spectra. Predictions for both are readily obtained from CAMB [@camb; @Lewis:1999bs] for a given cosmological model.
The instrumental noise term, $I^{XX}_\ell$, accounts for the detector noise as well as the suppression of the spectrum on scales smaller than the width of the beam [@Bond:1997wr]: $$I^{EE}_\ell=I^{BB}_\ell= \Delta^2_P \exp\left(\ell^2 \Theta^2_{\rm FWHM} / 8\ln 2\right) \ , \label{eq:polnoise}$$ where $\Delta_P$ quantifies the detector noise associated with measurements of CMB polarization and $\Theta_{\rm FWHM}$ is the full width at half maximum of the Gaussian beam.
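For reference, Eq. (\[eq:polnoise\]) in code, with the usual unit conversions made explicit; a sketch:

```python
import numpy as np

def polarization_noise(ell, Delta_P_uK_arcmin, theta_FWHM_arcmin):
    """Instrumental noise I_ell^EE = I_ell^BB of Eq. (eq:polnoise), in uK^2.
    Delta_P is given in uK-arcmin and theta_FWHM in arcmin."""
    arcmin = np.pi / (180.0 * 60.0)            # arcmin -> radians
    Delta_P = Delta_P_uK_arcmin * arcmin       # uK rad
    theta = theta_FWHM_arcmin * arcmin         # rad
    return Delta_P**2 * np.exp(ell**2 * theta**2 / (8.0 * np.log(2.0)))
```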
In the case of TB, the expression for the variance analogous to Eq. (\[eq:ebnoise\]) is [@Gluscevic:2009mm] $$(N_L^{TB})^{-1} = {1 \over 4\pi} \sum_{\ell \ell'} { (2\ell+1)(2\ell'+1)\left[C^{TE}_{\ell'} H^L_{\ell \ell'}\right]^2 \over {\tilde C}_{\ell'}^{TT}\, {\tilde C}_{\ell}^{BB}} \ . \label{eq:tbnoise}$$
Eq. (\[variance-alm\]) assumes that the distribution of the rotation angle is isotropic, and the angular brackets denote an ensemble average over many realizations of $\alpha$ as well as realizations of the primordial CMB sky. This is a well-motivated assumption for the PMF and for the galactic RM near galactic poles. Away from the poles, the galactic RM shows a strong dependence on the galactic latitude [@Oppermann:2011td]. This means that the forecasted variance can differ from the actual uncertainty for larger values of $f_{\rm sky}$, although we expect it to be of the correct order of magnitude. For the galactic RM map, we take the rotation angular spectrum to be $C^{\alpha \alpha}_L \equiv \nu^{-4} C_L^{RM}$, with $C_L^{RM}$ calculated from the RM map of [@Oppermann:2011td] after applying sky cuts, as plotted in Fig. \[fig:rmcl\] and explained in Sec. \[galaxy\].
We will consider several sets of experimental parameters corresponding to ongoing, future and hypothetical experiments. Namely, we consider Planck’s $30$ GHz LFI and $100$ GHz HFI channels based on actual performance characteristics [@Ade:2013ktc], POLARBEAR [@Keating:2011iq] at $90$ GHz with parameters compiled in [@Ma:2010yb], QUIET Phase II [@Samtleben:2008rb] at $40$ GHz using the parameters compiled in [@Ma:2010yb], the $30$, $45$, $70$ and $100$ GHz channels of a proposed CMBPol satellite [@Baumann:2008aq], as well as optimistic hypothetical sub-orbital and space-based experiments at $30$ and $90$ GHz. The assumed sky coverage ($f_{\rm sky}$), resolution ($\Theta_{\rm FWHM}$), and instrumental noise ($\Delta_P$) parameters are listed in Tables \[table:galaxy\] and \[table:pmf\].
Detectability of the galactic rotation measure
----------------------------------------------
![Contribution of individual multipoles to the net SNR of detection of the galactic RM spectrum. Plotted are $d(S/N)_L^2 / d \ln L$ for an optimistic $30$ GHz sub-orbital experiment with (solid blue) and without (blue dot) de-lensing, as well as for a hypothetical future $30$ GHz space probe with (red short dash) and without (red long dash) de-lensing ($f_{\rm DL}=0.01$). The spiky nature of the plot is due to the spikiness of the RM spectra, as seen in Fig. \[fig:rmcl\]. Experimental parameters assumed in this plot are given in the text and in Table \[table:galaxy\].[]{data-label="fig:dsngal"}](dlog-gal){height="45.00000%"}
| Name - freq (GHz) | $f_{\rm sky}$ | FWHM (arcmin) | $\Delta_P$ ($\mu$K-arcmin) | $(S/N)_{EB}$ (+DL) | $(S/N)_{TB}$ (+DL) | $(S/N)_{BB}$ (+DL) |
|---|---|---|---|---|---|---|
| Planck LFI - 30 | 0.6 | 33 | 240 | 5.3E-4 (same) | 2.2E-3 (same) | 2.3E-4 (same) |
| Planck HFI - 100 | 0.7 | 9.7 | 106 | 1.4E-3 (same) | 7.5E-4 (same) | 6E-5 (same) |
| Polarbear - 90 | 0.024$^a$ | 6.7 | 7.6 | 1.3E-2 (1.5E-2) | 1.6E-3 (2.0E-3) | 4.6E-4 (6.0E-4) |
| QUIET II - 40 | 0.04$^a$ | 23 | 1.7 | 0.3 (0.8) | 0.05 (0.2) | 0.02 (0.08) |
| CMBPOL - 30 | 0.6 | 26 | 19 | 1.0 (same) | 0.4 (same) | 0.05 (same) |
| CMBPOL - 45 | 0.7 | 17 | 8.25 | 2.1 (2.3) | 0.8 (0.9) | 0.12 (0.15) |
| CMBPOL - 70 | 0.7 | 11 | 4.23 | 2.0 (2.6) | 0.6 (0.9) | 0.08 (0.14) |
| CMBPOL - 100 | 0.7 | 8 | 3.22 | 1.4 (2.0) | 0.3 (0.6) | 0.03 (0.07) |
| Suborbital - 30 | 0.1 | 1.3 | 3 | 2.0 (3.1) | 0.3 (0.7) | 0.08 (0.2) |
| Space - 30 | 0.6 | 4 | 1.4 | 18 (28) | 7 (14) | 5 (30) |
| Space - 90 | 0.7 | 4 | 1.4 | 3.3 (6.8) | 1.0 (2.4) | 0.09 (0.64) |
Let us estimate the significance level at which the galactic RM spectrum can be detected. The signal in this case is the rotation angle spectrum $C_L^{\alpha \alpha,G}$, related to the galactic RM spectrum via $C_L^{\alpha \alpha,G} = C_L^{\rm RM,G}/\nu^4$. Our forecasts will be specific to the spectra estimated after applying particular masks to the RM map from [@Oppermann:2011td], and the RM spectra are different for different sky cuts, as can be seen in Fig. \[fig:rmcl\]. The approximate expression for the variance is obtained under the assumption of a statistically isotropic and Gaussian rotation angle: $$\sigma^2_{C_L} = {2\, \sigma^4_L \over f_{\rm sky}(2L+1)} \ ,$$ where $\sigma^2_L$ is the variance of the estimator ${\hat \alpha}^{XB}_{LM}$ defined in Eq. (\[eq:variance\]), and $X$ stands for either E or T, depending on whether one uses the EB or TB estimator. The total SNR squared of the detection of the galactic FR spectrum is the sum of $(C_L^{\alpha \alpha,G}/\sigma_{C_L})^2$ at each $L$: $$\left( {S \over N} \right)^2_{XB} = \sum_{L=1}^{L_{\rm max}} {f_{\rm sky}\, (2L+1)\, [C_L^{\alpha\alpha,G}]^2 \over 2\, [C_L^{\alpha\alpha,G}+N^{XB}_L]^2} \ . \label{eq:ebsnrG}$$ Note that even for a small sky coverage, the SNR receives a non-zero contribution from the smallest multipoles $L$ of the rotation angle. This is because large angle features of the RM couple small angle features of the CMB, so there will be $ll'$ pairs (see Eq. (\[alphallpr\])) giving an estimate of the RM at small $L$ no matter how small $f_{\rm sky}$ is. However, the smaller the sky coverage, the smaller the number of available $ll'$ pairs, leading to larger statistical errors.
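Eq. (\[eq:ebsnrG\]) is a straightforward sum over multipoles once the rotation spectrum and the estimator noise are tabulated; a minimal sketch:

```python
import numpy as np

def snr_squared(C_rotation, N_estimator, f_sky, L_min=1):
    """Total (S/N)^2 of Eq. (eq:ebsnrG). C_rotation[L] is the rotation angle spectrum
    and N_estimator[L] the noise N_L^{XB}; both arrays are indexed by L starting at L=0."""
    C = np.asarray(C_rotation, dtype=float)
    N = np.asarray(N_estimator, dtype=float)
    L = np.arange(C.size)
    sel = L >= L_min
    terms = f_sky * (2 * L[sel] + 1) * C[sel] ** 2 / (2.0 * (C[sel] + N[sel]) ** 2)
    return terms.sum()
```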
In addition to the EB estimator, one may also want to know how detectable the Milky Way RM is in the B-mode spectrum. In this case, the signal is the B-mode spectrum generated by the FR inside the galaxy, $C_L^{BB,G}$ (see Fig. \[fig:cl\]). The squared SNR is $$\left( {S \over N} \right)^2_{BB} = \sum_{L=2}^{L_{\rm max}} {f_{\rm sky} \over 2}\, (2L+1) \left( {C_L^{BB,G} \over {\tilde C}^{BB}_L} \right)^2 \ , \label{eq:bbsnr}$$ where ${\tilde C}^{BB}_L$ is given by Eq. (\[clvariance\]) and includes contributions from the galaxy, weak lensing and instrumental noise.
In Fig. \[fig:dsngal\], we plot the contributions to the net SNR in detection of the galactic RM spectrum, given by Eq. (\[eq:ebsnrG\]), received per $\ln L$. We show four different cases, all at $30$ GHz, corresponding to hypothetical future sub-orbital and space-based experiments, with and without de-lensing by a factor $f_{\rm DL}=0.01$. We assume that the sub-orbital experiment will cover $f_{\rm sky}=0.1$ with $\Delta_P=3\,\mu$K-arcmin and FWHM of $1.3'$, while the space-based probe will cover $f_{\rm sky}=0.6$ (based on Planck’s $30$ GHz sky mask) with $\Delta_P=1.4\,\mu$K-arcmin and FWHM of $4'$. Note that, as shown in Fig. \[fig:rmcl\], elimination of the galactic plane significantly reduces the amplitude of the galactic RM spectrum signal. Thus, the overall SNR of detection of the galactic RM spectrum is only of ${\cal O}(1)$ for the most optimistic sub-orbital experiment, with de-lensing making a relatively minor difference. In contrast, a space-based probe can detect the galactic RM at a high confidence level, with most of the signal coming from $4<L<100$. This means that CMB polarization can, in principle, be used to reconstruct the galactic RM map at a resolution of up to a degree. De-lensing the CMB maps can further improve the accuracy of the reconstruction.
In Table \[table:galaxy\], we forecast the SNR in detection of the galactic RM spectrum for several ongoing, proposed and hypothetical experiments. For experiments with $f_{\rm sky}<0.1$, such as QUIET and POLARBEAR, we use $C_L^{RM}$ with the $f_{\rm sky}=0.1$ cut of the RM map (the bottom line in Fig. \[fig:rmcl\]) around the galactic plane. Thus, our estimates assume that QUIET and POLARBEAR will observe in patches that are close to the galactic poles where statistical properties of the galactic RM become independent of the exact size of the patch. This expectation is justified by the fact that latitude dependence essentially disappears as one approaches the poles, which is quantified in Fig. 1 of [@Oppermann:2011td].
We separately show results based on the EB, TB and BB estimators, with and without de-lensing. The results allow us to make the following conclusions. Firstly, the galactic RM will be invisible in Planck’s polarization maps. Secondly, the galactic RM spectrum is unlikely to be detected by a sub-orbital experiment covering a small fraction of the sky near the galactic poles, at least not via the mode-coupling quadratic estimators or its contribution to the B-mode spectrum, unless very optimistic assumptions are made about its resolution and sensitivity. Thirdly, space-based polarization probes, such as the proposed CMBPOL mission, should take the galactic RM into account. We note that our analysis does not cover the possibility of the galactic RM being detected directly from Eq. (\[qu-rotation\]) by utilizing the frequency dependence of Stokes parameters. This may turn out to be a more sensitive method for future multi-frequency CMB experiments with sufficiently low instrumental noise. We leave investigation of this possibility for future work.
Detectability of the primordial magnetic field {#detect-pmf}
----------------------------------------------
![Contribution of individual multipoles to the net SNR of detection of a PMF of $0.1$ nG. Plotted are $d(S/N)_L^2 / d \ln L$ for an optimistic $30$ GHz sub-orbital experiment without de-lensing (blue dot), after de-lensing ($f_{\rm DL}=0.01$, solid blue) and after de-lensing and partially subtracting the galactic RM ($f_{\rm DG}=0.1$, blue long dash-dot), as well as for a hypothetical future $30$ GHz space probe with (red short dash) and without (red long dash) de-lensing ($f_{\rm DL}=0.01$), and with partial ($f_{\rm DG}=0.1$) subtraction of the galactic RM (green dash-dot). The red short and long dash lines correspond to $f^{\rm opt}_{\rm sky}=0.2$, while the green dash-dot uses $f^{\rm opt}_{\rm sky}=f_{\rm sky}=0.5$. The lines are spiky because the RM spectra, as seen in Fig. \[fig:rmcl\], contribute to the noise part of the SNR. Note that the spikes disappear when the galactic RM is partially subtracted. Experimental parameters assumed in this plot are given in the text and in Table \[table:pmf\].[]{data-label="fig:dsnpmg"}](dlog-pmf){height="45.00000%"}
| Name - freq (GHz) | $f_{\rm sky}$ ($f^{\rm opt}_{\rm sky}$) | FWHM (arcmin) | $\Delta_P$ ($\mu$K-arcmin) | $B_{\rm eff}$ (2$\sigma$, nG) | +DL (nG) | +DL+DG (nG) |
|---|---|---|---|---|---|---|
| Planck LFI - 30 | 0.6 | 33 | 240 | 16$^b$ | same | same |
| Planck HFI - 100 | 0.7 | 9.7 | 106 | 23 | same | same |
| Polarbear - 90 | 0.024$^a$ | 6.7 | 7.6 | 3.3 | 3.0 | same |
| QUIET II - 40 | 0.04$^a$ | 23 | 1.7 | 0.46 | 0.26 | 0.25 |
| CMBPOL - 30 | 0.6 | 26 | 19 | 0.56 | 0.55 | 0.51 |
| CMBPOL - 45 | 0.7 | 17 | 8.25 | 0.38 | 0.35 | 0.29 |
| CMBPOL - 70 | 0.7 | 11 | 4.23 | 0.39 | 0.32 | 0.26 |
| CMBPOL - 100 | 0.7 | 8 | 3.22 | 0.52 | 0.4 | 0.34 |
| Suborbital - 30 | 0.1 | 1.3 | 3 | 0.09 | 0.07 | 0.05 |
| Suborbital - 90 | 0.1 | 1.3 | 3 | 0.63 | 0.45 | same |
| Space - 30 | 0.6 (0.2) | 4 | 1.4 | 0.06 | 0.04 | 0.02 |
| Space - 90 | 0.7 (0.4) | 4 | 1.4 | 0.26 | 0.15 | 0.12 |
![image](fig6_DL1p0.pdf) ![image](fig6_DL0p01.pdf) ![image](fig6_DGDL0p01.pdf)
The SNR of the detection of the primordial FR rotation angle is given by $$\left( {S \over N} \right)^2_{XB} = \sum_{L=1}^{L_{\rm max}} {f_{\rm sky}\, (2L+1)\, [C_L^{\alpha\alpha,{\rm PMF}}]^2 \over 2\,[C_L^{\alpha\alpha,{\rm PMF}} + f_{\rm DG}\,C_L^{\alpha\alpha,G}+N^{XB}_L]^2} \ , \label{eq:ebsnrP}$$ where $X$ stands for either E or T, depending on whether one uses the EB or TB estimator, $C_L^{\alpha \alpha,{\rm PMF}}$ is the primordial contribution, and $f_{\rm DG}$ is the “de-galaxing” factor, [*i.e.*]{} the fraction of the galactic rotation angle spectrum that remains after subtracting the part known from other sources, and which therefore acts as a residual foreground.
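The same sum as in Eq. (\[eq:ebsnrG\]) applies here, with the residual galactic rotation spectrum folded into the effective noise; a sketch reusing the `snr_squared` helper above (the arrays `C_pmf`, `C_gal` and `N_L` are assumed to be precomputed and indexed by $L$):

```python
# (S/N)^2 of Eq. (eq:ebsnrP): the residual galactic term f_DG * C_gal acts as noise.
f_DG = 0.1
snr2_pmf = snr_squared(C_pmf, f_DG * C_gal + N_L, f_sky)
```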
As in the case of the galactic RM, the SNR receives a non-zero contribution from all multipoles $L$, including the smallest, even for a small sky coverage. In the case when a large fraction of the sky is available, there is an optimal cut of the sky, $f^{\rm opt}_{\rm sky}$, for which the S/N is maximal. This is because the galactic RM contribution is weaker when large parts of the sky are cut, but there are more available modes when more of the sky is kept. In what follows, we will state when values of $f^{\rm opt}_{\rm sky}$ are different from $f_{\rm sky}$.
In Fig. \[fig:dsnpmg\], we plot contributions per $\ln L$ to the net SNR in detection of the FR from a scale-invariant PMF with $B_{\rm eff}=0.1$ nG for the same two hypothetical experiments considered in Fig. \[fig:dsngal\]. In addition to the four cases considered in Fig. \[fig:dsngal\], the green dot-dash line shows the case when the galactic contribution is partially subtracted for the space-based probe with $f_{\rm DG}=0.1$. A few observations can be made from this plot. First is that both experiments can detect a $0.1$ nG PMF at high significance, which constitutes a big improvement on prospects of detecting a PMF in CMB via non-FR signatures [@Ade:2013lta; @Paoletti:2008ck; @Paoletti:2012bb]. Second is that de-lensing makes a significant difference for both the space-based and sub-orbital experiments. Third is that subtracting the galactic RM moderately improves the detection at multipoles in the $4\lesssim L \lesssim 70$ range. Finally, we can see that most of the information comes from $4<L<200$, meaning that future CMB experiments can reconstruct FR maps corresponding to PMF of $0.1$ nG strength up to a degree resolution.
In Table \[table:pmf\] we present our forecasts for $2\sigma$ bounds on $B_{\rm eff}$ that can be expected from various ongoing and future CMB experiments. We present results obtained from the EB estimator, because they lead to the strongest bounds, except for the case of Planck’s $30$ GHz channel, for which the TB estimator gives better constraints. For each experiment, we check the effect of de-lensing (+DL) with $f_{\rm DL}=0.01$ as well as further partial subtraction of the galactic RM (+DL+DG) with $f_{\rm DG}=0.1$. The optimal sky cuts are indicated when relevant. One of the interesting facts one can extract from this table is that very competitive bounds can be placed by POLARBEAR and QUIET, while even ${\cal O}(10^{-11}{\rm G})$ fields can be constrained in principle by future experiments.
To show the dependence of the PMF bound on the instrumental noise and resolution, we plot contours of constant $2\sigma$ bounds on $B_{\rm eff}$ in Fig. \[fig:contour\]. The three panels show the cases with and without de-lensing and with an additional partial subtraction of the galactic RM. An optimal sky cut was used for each set of parameters.
Summary and Outlook {#sec:summary}
===================
We have studied the detectability of a scale-invariant PMF due to FR of the CMB using polarization correlators in the presence of the foreground imposed by the Milky Way magnetic field. We have found that the galactic RM is not a serious barrier to observing a scale-invariant PMF and that the EB quadratic correlator is in general more powerful than the BB correlator.
We have also studied the detectability of the PMF as a function of the sky coverage. We find that suborbital experiments can be almost as effective as space-borne experiments, and that the obstruction caused by the galaxy is relatively weak if the observed patch is near the poles. Also, as the mode-coupling correlations of the CMB are mostly sourced by the largest scale features (low $L$) of the rotation measure, a full sky CMB map is not necessary to access the PMF on the largest scales.
Our results on the sensitivities of various experiments to the galactic magnetic field are summarized in Table \[table:galaxy\]. Similarly, in Table \[table:pmf\] we show the effective magnetic field strength that can be constrained by those experiments, taking into account the lensing of the CMB and the FR due to the Milky Way. The dependence of the observable PMF on the parameters of the experiment is shown in Fig. \[fig:contour\].
Here we have focussed on the observability of CMB FR at a single frequency. Cross-correlating polarization maps at multiple frequencies with comparable sensitivity to FR can further boost the significance of detection. We will address this possibility in a future publication. Also, frequency dependence of polarization can be used to separate out FR induced B-modes from those induced by gravitational waves.
We thank the two anonymous referees for their careful reading of the initially submitted version and constructive suggestions that helped to improve the paper. We are grateful to Niels Oppermann and collaborators [@Oppermann:2011td] for making their rotation measure maps publicly available at [@RMmaps]. Some of the results in this paper have been derived using the HEALPix [@healpix; @Gorski:2004by] package. LP and TV benefited from previous collaborations with Amit Yadav. We thank Steven Gratton, Yin-Zhe Ma, Phil Mauskopf and Vlad Stolyarov for discussions and helpful comments. SD is supported by a SESE Exploration postdoctoral fellowship; LP is supported by an NSERC Discovery grant; TV is supported by the DOE at Arizona State University.
[99]{}
D. Grasso and H. R. Rubinstein, Phys. Rept. [**348**]{}, 163 (2001) \[astro-ph/0009061\]. L. M. Widrow, Rev. Mod. Phys. [**74**]{}, 775 (2002) \[astro-ph/0207240\]. P. A. R. Ade [*et al.*]{} \[ Planck Collaboration\], arXiv:1303.5076 \[astro-ph.CO\]. D. Paoletti, F. Finelli and F. Paci, Mon. Not. Roy. Astron. Soc. [**396**]{} (2009) 523 \[arXiv:0811.0230 \[astro-ph\]\]. D. Paoletti and F. Finelli, arXiv:1208.2625 \[astro-ph.CO\]. M. S. Turner and L. M. Widrow, Phys. Rev. D [**37**]{}, 2743 (1988). B. Ratra, Astrophys. J. [**391**]{}, L1 (1992). T. Kahniashvili, Y. Maravin, A. Natarajan, N. Battaglia and A. G. Tevzadze, Astrophys. J. [**770**]{}, 47 (2013) \[arXiv:1211.2769 \[astro-ph.CO\]\]. R. A. C. Croft, D. H. Weinberg, M. Bolte, S. Burles, L. Hernquist, N. Katz, D. Kirkman and D. Tytler, Astrophys. J. [**581**]{}, 20 (2002) \[astro-ph/0012324\]. T. Kahniashvili, Y. Maravin and A. Kosowsky, Phys. Rev. D [**80**]{}, 023009 (2009) \[arXiv:0806.1876 \[astro-ph\]\]. A. Yadav, L. Pogosian and T. Vachaspati, Phys. Rev. D [**86**]{}, 123009 (2012) \[arXiv:1207.3356 \[astro-ph.CO\]\]. T. Vachaspati, Phys. Lett. B [**265**]{}, 258 (1991). R. Durrer and C. Caprini, JCAP [**0311**]{}, 010 (2003) \[astro-ph/0305059\]. K. Jedamzik and G. Sigl, Phys. Rev. D [**83**]{}, 103005 (2011) \[arXiv:1012.4794 \[astro-ph.CO\]\]. K. Jedamzik and T. Abel, arXiv:1108.2517 \[astro-ph.CO\]. N. Oppermann, H. Junklewitz, G. Robbers, M. R. Bell, T. A. Ensslin, A. Bonafede, R. Braun and J. C. Brown [*et al.*]{}, arXiv:1111.6186 \[astro-ph.GA\]. M. Haverkorn, P. Katgert and A. G. de Bruyn, Astron. Astrophys. [**403**]{}, 1045 (2003) \[astro-ph/0303644\].
A. H. Minter and S. R. Spangler, Ap. J. [**458**]{}, 194 (1996).
L. Pogosian, A. P. S. Yadav, Y.-F. Ng and T. Vachaspati, Phys. Rev. D [**84**]{}, 043530 (2011) \[Erratum-ibid. D [**84**]{}, 089903 (2011)\] \[arXiv:1106.1438 \[astro-ph.CO\]\]. A. Kosowsky and A. Loeb, Astrophys. J. [**469**]{}, 1 (1996) \[astro-ph/9601055\]. D. D. Harari, J. D. Hayward and M. Zaldarriaga, Phys. Rev. D [**55**]{}, 1841 (1997) \[astro-ph/9608098\]. M. Kamionkowski, Phys. Rev. Lett. [**102**]{}, 111302 (2009) \[arXiv:0810.1286 \[astro-ph\]\]. A. P. S. Yadav, R. Biswas, M. Su and M. Zaldarriaga, Phys. Rev. D [**79**]{}, 123009 (2009) \[arXiv:0902.4466 \[astro-ph.CO\]\]. V. Gluscevic, M. Kamionkowski and A. Cooray, Phys. Rev. D [**80**]{}, 023510 (2009) \[arXiv:0905.1687 \[astro-ph.CO\]\]. V. Gluscevic, D. Hanson, M. Kamionkowski and C. M. Hirata, arXiv:1206.5546 \[astro-ph.CO\]. A. P. S. Yadav, M. Shimon and B. G. Keating, Phys. Rev. D [**86**]{}, 083002 (2012) \[arXiv:1207.6640 \[astro-ph.CO\]\]. W. Hu and T. Okamoto, Astrophys. J. [**574**]{}, 566 (2002) \[astro-ph/0111606\]. A. R. Pullen and M. Kamionkowski, Phys. Rev. D [**76**]{}, 103529 (2007) \[arXiv:0709.1144 \[astro-ph\]\]. Monin A. S., Iaglom A. M., Statistical fluid mechanics: Mechanics of turbulence. Volume 2, Cambridge, Mass., MIT Press, (1975).
<http://pla.esac.esa.int/pla/aio/planckProducts.html>
<http://healpix.jpl.nasa.gov>
K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke and M. Bartelman, Astrophys. J. [**622**]{}, 759 (2005) \[astro-ph/0409513\]. E. Hivon, K. M. Gorski, C. B. Netterfield, B. P. Crill, S. Prunet and F. Hansen, Astrophys. J. [**567**]{}, 2 (2002) \[astro-ph/0105302\]. C. M. Hirata and U. Seljak, Phys. Rev. D [**67**]{}, 043001 (2003) \[astro-ph/0209489\]. U. Seljak and C. M. Hirata, Phys. Rev. D [**69**]{}, 043005 (2004) \[astro-ph/0310163\]. <http://camb.info/>
A. Lewis, A. Challinor and A. Lasenby, Astrophys. J. [**538**]{}, 473 (2000) \[arXiv:astro-ph/9911177\]. J. R. Bond, G. Efstathiou and M. Tegmark, Mon. Not. Roy. Astron. Soc. [**291**]{}, L33 (1997) \[astro-ph/9702100\]. P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5062 \[astro-ph.CO\]. B. Keating, S. Moyerman, D. Boettger, J. Edwards, G. Fuller, F. Matsuda, N. Miller and H. Paar [*et al.*]{}, arXiv:1110.2101 \[astro-ph.CO\]. Y. -Z. Ma, W. Zhao and M. L. Brown, JCAP [**1010**]{}, 007 (2010) \[arXiv:1007.2396 \[astro-ph.CO\]\]. D. Samtleben \[QUIET Collaboration\], Nuovo Cim. B [**122**]{}, 1353 (2007) \[arXiv:0802.2657 \[astro-ph\]\]. D. Baumann [*et al.*]{} \[CMBPol Study Team Collaboration\], AIP Conf. Proc. [**1141**]{}, 10 (2009) \[arXiv:0811.3919 \[astro-ph\]\]. <http://www.mpa-garching.mpg.de/ift/faraday/>
[^1]: In the context of this paper, “primordial” means that the magnetic field was generated prior to last scattering.
A subsystem of a flat system of differential dimension $2$ is flat.
May 2017
Abridged English version {#abridged-english-version .unnumbered}
========================
The result that will be proved corresponds in control theory to the fact that a system with $2$ controls which is linearizable by exogenous feedback is linearizable by endogenous feedback, or also that any subsystem of a flat system of differential dimension $2$ is flat. We refer to [@fliess; @fliess3; @Sira-Ramirez2004; @Levine2009] for more details on flat systems, a notion that goes back to Monge’s problem [@Monge1787; @Hilbert1912; @Cartan1915; @Zervos1932], and to [@vinogradov; @zharinov; @fliess2] for the formalism of diffiety theory. We will use Rouchon’s lemma [@Rouchon1994; @Ollivier1998], following the notations and definitions of [@OllivierSadik2006a]. A *diffiety extension*, or *system*, $V/U$ will be a diffiety with a projection $V\mapsto U$ that is a diffiety morphism, and a *subsystem* $W/U$ of $V/U$ is such that $\cO(W)\subset\cO(V)$. For brevity, we denote $\partial/\partial x$ by $\partial_{x}$.
A *system with $m$ controls* $V/U_{\delta_{0}}$ is an open subset of $\R^{n}\times\left(\R^{\N}\right)^{m}\times U$ with a derivation $\delta_{0}+\sum_{i=1}^{n}f_{i}(x,z,u)\partial_{x_{i}}+\sum_{j=1}^{m}\sum_{k=0}^{\infty} z_{j}^{(k+1)}\partial_{z_{j}^{(k)}}$.
The *trivial extension* of differential dimension $m$, $T^{m}/U_{\delta_{0}}$, is $\left(\R^{\N}\right)^{m}\times U$ equipped with the derivation $\delta_{0}+\sum_{j=1}^{m}\sum_{k=0}^{\infty} z_{j}^{(k+1)}\partial_{z_{j}^{(k)}}$. We may then define flat systems.
\[e-flatness\] A system $V/U_{\ast}$ is *parametrizable* if there exist a diffiety extension $U/U_{\ast}$ and $\phi:W/U\mapsto V\times_{U_{\ast}}U$ a morphism of extensions of $U$, where $W$ is an open subset of $T^{\mu}/U$, such that $\mathrm{Im}\phi$ is dense in $V\times_{U_{\ast}}U$.
A system $V/U$ is *flat* if there exists a dense open set $W\subset V$ such that any $x\in W$ admits a neighbourhood $O$ isomorphic by $\phi: O\mapsto \tilde O$ to an open subset $\tilde O$ of $T^{m}\times U$; the functions $\phi^{\ast}(z_{j})$ that generate $\cO(O)$ are called *flat outputs*.
By convention, $\ord_{z}A=-\infty$ if $A$ is free from $z$ and its derivatives. In the case of a parametrizable diffiety, we identify a function $x\in\cO(V)$ with the function $x(z)=\phi^{\ast}(x)\in\cO(W)$.
Our theorem may be stated as follows. The case $m=1$ is classical [@Charlet1989]. The proof will make repeated use of Rouchon’s lemma [@Rouchon1994; @Ollivier1998], given below.
\[e-endo\] A parametrizable extension of differential dimension at most $2$ is flat.
\[e-Rouchon\] Let $V/U_{\ast}$ be a parametrizable diffiety extension, $x_{i}\in\cO(V)$, $i=1\ldots n$ a family of nonconstant functions on $V$, $H(x)=0$ a differential equation satisfied by the $x_{i}$ and $e_{i}:=\ord_{x_{i}}H$.
Using the notations of def. \[e-flatness\], let $z_{j}$ be coordinates on $T^{\mu}$ and assume that $\max_{i=1}^{n}\ord_{z_{1}}
x_{i}=r>-\infty$. If $e_{i}=0$ $\Rightarrow$ $\ord_{z_{1}}
x_{i_{0}}<r$, then using the notation $D:=\sum_{e_{i}>0}C_{i}\partial_{x_{i}^{(e_{i})}}$ where the $C_{i}$ are new variables with $DC_{1}=0$, the $n$-tuple $(\partial_{z_{1}^{(r)}}
x_{i}^{(e_{i})})$ is a solution of the equations $\phi^{\ast}(D^{k}P)=0$.
It suffices to remark that $\partial_{z_{1}^{(r+1)}}x_{i}^{(e_{i})}=\partial_{z_{1}^{(r)}}x_{i}$ for $e_{i}>0$, so that $\partial_{z_{1}^{(r+1)}}^{2}x_{i}=0$. Substituting $\partial_{z_{1}^{(r+1)}}x_i$ for $C_{i}$ in $D^{k}P$, one gets $\partial_{z_{1}^{(r+1)}}^{k}P=0$.
From now on, we only consider differential dimension $2$. We may complete some local coordinates on $V$, considered as functions of the $z_{j}$ using the morphism $\phi$, to get local coordinates on $W$, say by choosing $z_{i}, \ldots, z_{i}^{(s_{i})}$, $i=1,2$, ($s_{1}\le r$), and the $z_{j}^{(k)}$, $j>2$, $k\in\N$. We may then express the derivations $\partial/\partial z_{1}^{(k)}$ in the $V$ coordinates. In this setting, for state equations defining $V$, Rouchon’s lemma means that $\partial_{z_{1}^{(r+1)}}$ and $\partial_{z_{1}^{(r)}}=[\delta,\partial_{z_{1}^{(r+1)}}]$ do commute. The next lemma goes one step further by considering $\partial_{z_{1}^{(r-1)}}=[\delta,\partial_{z_{1}^{(r)}}]$ (*cf.* [@zharinov1996]).
\[e-Rouchon2\] Under the hypotheses of th. \[e-Rouchon\], assume that the system $V/U$ is locally defined by explicit equations of order $1$: $P_{i}:=x_{i}'-h_{i}(x,x_{1}',x_{2}',u)=0$, $2< i\le n$.
If $V/U$ is parametrizable, the homogeneous ideal $(D^{k}P_{i}|1\le i\le n-2,\> k\in\N)$ is of projective dimension $0$, iff at least one of the equations $P_{i}$ is non linear in the derivatives $x_{i}'$.
In this case, the state equations can be rewritten $x_{i}'=f_{i}(x,v_{2},u)v_{1}+g_{i}(x,v_{2},u)$, where the $v_{i}$ are functions of the $x_{i}$ and the $x_{j}'$, $j=1,2$, such that $\partial_{z_{1}^{(r+1)}}v_{2}=0$, $f_{1}=1$ (i.e. $v_{1}=x_{1}'$) and $f_{2}=v_{2}$ if $f_{2}$ depends on $v_{2}$.
We have moreover $\partial_{z_{1}^{(r)}}f_{i}(x,v_{2},u)=0$.
— By th. \[e-Rouchon\], the dimension is at least $1$. Now, if $P_{i}$ is non linear in $x_{i}'$, $i=1,2$, then there exists a non trivial relation $D^{2}P_{i}=0$, so that the dimension is at most $1$. — Up to a permutation of indices, we may assume that $C_{1}\neq0$. Now, $C_{2}=F(x,x_{1}',x_{2}')C_{1}$ with $\partial_{z_{1}^{(r)}}F=0$. We take $v_{1}=x_{1}'$. If $F$ depends on $x_{2}'$, we choose $v_{2}=F$, $f_{2}(v_{2})=v_{2}$ and $g_{2}=x_{2}'-v_{2}$, or else $f_{2}(x)=F(x)$, $v_{2}=x_{2}'-f_{2}(x)v_{1}$ and $g_{2}(v_{2})=v_{2}$. — We have $\partial_{z_{1}^{(r+1)}}=A\partial_{v_{1}}+2A'\partial_{v_{1}'}+\cdots+B\partial_{v_{2}'}+\cdots$ So $\partial_{z_{1}^{(r)}}=\sum_{i=1}^{n}Af_{i}\partial_{x_{i}}+A'\partial_{v_{1}}+\cdots+B\partial_{v_{2}}+\cdots$ Going one step further, we get, as the terms in $A'$ cancel: $\partial_{z_{1}^{(r-1)}}=\sum_{i=1}^{n}\left(Af_{i}'+\hbox{terms of order at most $r$ in $z_{1}$}\right)\partial_{x_{i}}+\cdots$ As $[\partial_{z_{1}^{(r+1)}},\partial_{z_{1}^{(r-1)}}]=0$, we must have $\partial_{z_{1}^{(r+1)}}f_{i}'=\partial_{z_{1}^{(r)}}f_{i}=0$.
<span style="font-variant:small-caps;">Sketch of the proof of th. \[e-endo\].</span> — Denote by $z$, $v$, $u$, $u_{\ast}$ coordinate functions on $T^{\mu}$, $V$, $U$, $U_{\ast}$. If the result is false, there exists an open subset $O=\phi(\tilde O)\subset V$ such that no open subset $O_{\prime}=\phi(\tilde O_{\prime})\subset O$ is isomorphic to an open subset of $T^{m}$. We will look for a contradiction.
In the case $m=1$, we choose $x_{i}$, $1\le i\le n$ such that the $x_{i}$ and their derivatives are local coordinates of $O_{\prime}\subset O$. We assume moreover that these functions satisfy equations of order $1$: $x_{i}'=h_{i}(x, x_{1}', u_{\ast})$, $1<i\le n$, and that $n$ is minimal. If $n=1$, $O_{\prime}$ is flat.
If not, th. \[e-Rouchon\] implies that the $h_{i}$ are linear in $x_{1}'$: $h_{i}=f_{i}x_{1}'+g_{i}$. Then, we may replace the $x_{i}$ by $n-1$ independent solutions $y(x,u_{\ast})$ of the differential system $\partial_{x_{1}} Y+\sum_{i=2}^{n}f_{i}\partial_{x_{i}} Y=0$. The derivatives $y_{i}'$ do not depend on $x_{1}'$, and the $y_{i}$ satisfy a new system of order $1$, contradicting the minimality of $n$.
In the case $m=2$, we also consider $x_{i}(z,u,u_{\ast})$, $1\le i\le n$, that satisfy a system of order $1$: $x_{i}'=h_{i}(x, x_{1}', x_{2}',u_{\ast})$, $2<i\le n$, with $\max_{i=1}^{n}\ord_{z_{1}} x_{i}=r>-\infty$. We assume that the couple $(r, n)$ is minimal for the lexicographic ordering.
If $n=2$, $O_{\prime}$ is flat. If the $h_{i}$ do not depend on $x_{1}'$ or $x_{2}'$, we are reduced to the case $m=1$, already considered. We distinguish two cases.
i\) If the $h_{i}$ are all linear in $x_{1}'$ and $x_{2}'$: $h_{i}=\sum_{j=1}^{2}f_{i,j}x_{j}'+g_{i}$, $1\le i\le n$, we replace the $x_{i}$ by $n-1$ independent solutions $y(x, u_{\ast})$, of the differential equation $\partial_{x_{1}}
Y+\sum_{i=3}^{n}f_{i,1}\partial_{x_{i}} Y=0$. The order $\max_{i=1}^{n-1}\ord_{z_{1}} y_{i}$ is at most $r$. The $y_{i}'$ do not depend on $x_{1}'$, so that they satisfy a system of order $1$. If the $y_{i}'$ depend on $x_{1}$, they satisfy our hypotheses, which contradicts the minimality of $n$. If not, we are reduced to the case $m=1$ and we just have to complete the flat output for the diffiety defined by the $y_{i}$ with $x_{1}$ to conclude.
ii\) If the $h_{i}$ are not linear in $x_{1}'$ and $x_{2}'$, by Lemma \[e-Rouchon2\] the state equations can be rewritten $x_{i}'=f_{i}(x,v_{2},u_{\ast})\,v_{1}+g_{i}(x,v_{2},u_{\ast})$, with $\partial_{z_{1}^{(r+1)}}v_{2}=0$ and $\partial_{z_{1}^{(r)}}f_{i}(x,v_{2},u)=0$. We can replace the $x_{i}$ by $n-1$ independent solutions $y_{i}(x,v_{2},u_{\ast})$ of the differential equation $\sum_{i=1}^{n}f_{i}\partial_{x_{i}}Y=0$, completed with $v_{2}$ if the $f_{i}$ depend on $v_{2}$. The $y_{i}'$ must depend on $x_{1}$; if not, the $f_{i}$ would be constants and the $y_{i}'$ would satisfy linear equations, so that the $h_{i}$ would have been linear in the $x_{j}'$, $j=1,2$.
So, the $y_{i}$ (and $v_{2}$ if $f_{2}=v_{2}$) must satisfy a system of order $1$ and, according to Lemma \[e-Rouchon2\], are of order less than $r$ in $z_{1}$: a final contradiction that concludes the proof.
*E.g.*, the diffieties $U$ and $U_{\ast}$ may be respectively $\R$, standing for the time variable $t$ with a derivation $\partial/\partial t$, and a single point with derivation $0$, if $V$ is associated with a stationary model. Then, the theorem asserts that $V$ is flat with flat outputs not depending on time, answering a problem raised by Pereira Da Silva and Rouchon [@PereiraRouchon2004].
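As a concrete illustration of flat outputs for a stationary model, one may recall the standard computation for the kinematic car model used in the concluding example, $\theta'=(u/d)\tan\phi$, $x'=u\cos\theta$, $y'=u\sin\theta$; a sketch, valid wherever $x'^2+y'^2\neq 0$: $$\theta=\arctan{y'\over x'},\qquad u=\pm\sqrt{x'^2+y'^2},\qquad
\tan\phi={d\,\theta'\over u}=\pm\,{d\,(x'y''-y'x'')\over (x'^2+y'^2)^{3/2}},$$ so that $(x,y)$ is a flat output that does not involve the time variable.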
Introduction {#intro .unnumbered}
============
While the mathematical notion goes back to the work of Monge [@Monge1787] and was studied at the beginning of the 20th century by Hilbert [@Hilbert1912], Cartan [@Cartan1915] or Zervos [@Zervos1932], flat systems were invented under this name for the needs of control theory [@fliess; @fliess3; @Levine2009]. In this setting, the result that will be proved here, in differential dimension at most $2$, means that a system linearizable by exogenous feedback is linearizable by endogenous feedback. Briefly, a parametrizable system is flat, that is, if the solutions of a system of ODEs can be parametrized by $m$ arbitrary functions, then there exists such a parametrization which is locally bijective. The case $m=1$ is a consequence of the results of Charlet *et al.* [@Charlet1989] or, in the case of a rational parametrization, of the Lüroth–Ritt theorem, which has no analogue in differential dimension $2$ [@Ollivier1998].
The definition of flatness may vary according to whether or not one requires a stationary system to possess a time-independent parametrization (*cf.* Pereira da Silva and Rouchon [@PereiraRouchon2004]). We will show that, in differential dimension at most $2$, these two definitions coincide, that is, a flat stationary system admits flat outputs that do not depend on time.
Flat diffieties
================
For the notion of a diffiety [@vinogradov; @zharinov], we adopt the conventions of [@OllivierSadik2006a]. Let $I$ be a countable set; we call a *diffiety* an open subset $V$ of $\R^{I}$, for the coarsest topology making, for every $i_{0}\in I$, the projections $\pi_{i_{0}}:(x_{i})_{i\in I}\mapsto x_{i_{0}}$ continuous, equipped with a derivation $\delta=\sum_{i\in I} c_{i}(x){\partial\over\partial x_{i}}$, where the $c_{i}$ belong to $\cO(V)$, the ring of $\cC^{\infty}$ maps from $V$ to $\R$ depending only on a finite number of coordinates. For brevity, $\partial/\partial x$ will be denoted by $\partial_{x}$.
A *diffiety morphism* is a map $\phi:V_{\delta_{1}}\mapsto V_{\delta_{2}}$, defined by functions in $\cO(V)$ and such that $\delta_{1}\circ\phi^{\ast}=\phi^{\ast}\circ\delta_{2}$, where $\phi^{\ast}:\cO(V_{2})\mapsto\cO(V_{1})$ is the dual map of $\phi$.
A *diffiety extension*, or a *system*, denoted $V/U$, is a pair of diffieties equipped with a surjective projection $\pi:V\mapsto U$ which is a diffiety morphism. It is thus a bundle over $U$, with a projection compatible with the diffiety structure. A morphism of extensions $\phi:V_{1}/U\mapsto V_{2}/U$ is a morphism from $V_{1}$ to $V_{2}$ such that $\pi_{2}\circ\phi=\pi_{1}$. A *subsystem* $W/U$ of $V/U$ is an extension of $U$ such that $\cO(W)\subset\cO(V)$. Given two diffiety extensions $V/U$ and $W/U$, their fibered product carries a natural structure of an extension of $U$ (as well as of $V$ or of $W$), denoted $V\times_{U} W$.
A *system with $m$ controls* $V/U_{\delta_{0}}$ is an open subset of $\R^{n}\times\left(\R^{\N}\right)^{m}\times U$ equipped with a derivation of the form $\delta_{0}+\sum_{i=1}^{n}f_{i}(x,z)\partial_{x_{i}}+\sum_{j=1}^{m}\sum_{k=0}^{\infty} z_{j}^{(k+1)}\partial_{z_{j}^{(k)}}$. The *trivial extension* of differential dimension $m$, denoted $T^{m}/U_{\delta_{0}}$, is $\left(\R^{\N}\right)^{m}\times U$ equipped with the derivation $$\delta_{0}+\sum_{i=1}^{m}\sum_{k=0}^{\infty} z_{i}^{(k+1)}\partial_{z_{i}^{(k)}}.$$
\[platitude\] A system $V/U_{\ast}$ is *parametrizable* if there exist a system $U/U_{\ast}$ and a morphism of extensions of $U$, $\phi:W/U\mapsto V\times_{U_{\ast}}U$, where $W$ is an open subset of $T^{\mu}/U$, such that $\mathrm{Im}\phi$ is dense in $V\times_{U_{\ast}}U$.
A system $V/U$ will be said to be *flat* if there exists a dense open subset $W$ of $V$ such that every $x$ belonging to $W$ admits a neighbourhood $O$ isomorphic, via $\phi: O\mapsto \tilde O$, to an open subset $\tilde O$ of $\T^{m}\times U$. The functions ${\phi^{\ast}}(z_{i})$, where the $z_{i}$ define the trivial extension, are called *flat outputs*.
\[endogene=exogene\] A parametrizable extension of differential dimension at most $2$ is flat.
The diffieties $U_{\ast}$ and $U$ (def. \[platitude\]) may, for example, involve a time variable $t$ with $t'=0$: if the latter does not appear in $U_{\ast}$, it is absent from the flat outputs of $V/U_{\ast}$.
Rouchon's lemma and iteration
=============================
By convention, $\ord_{z}A=-\infty$ if $A$ does not depend on $z$ and its derivatives. For a parametrizable diffiety, we will identify the function $x\in\cO(V)$ with the function $x(z)=\phi^{\ast}(x)\in\cO(W)$.
\[Rouchon\] Let $V/U_{\ast}$ be a parametrizable system, $x_{i}\in\cO(V)$, $i=1\ldots n$ a family of nonconstant functions on $V$, $H(x)=0$ a differential equation satisfied by the $x_{i}$ and $e_{i}:=\ord_{x_{i}}H$.
With the notations of def. \[platitude\], let $z_{j}$ be coordinates on $T^{\mu}$ such that $\max_{i=1}^{n}\ord_{z_{1}} x_{i}=r>-\infty$. If $e_{i}=0$ $\Rightarrow$ $\ord_{z_{1}} x_{i_{0}}<r$, then, writing $D:=\sum_{e_{i}>0}C_{i}\partial_{x_{i}^{(e_{i})}}$ where the $C_{i}$ are new variables with $DC_{1}=0$, the $n$-tuple $(\partial_{z_{1}^{(r)}} x_{i}^{(e_{i})})$ is a solution of the equations $\phi^{\ast}(D^{k}P)=0$.
It suffices to remark that $\partial_{z_{1}^{(r+1)}}x_{i}^{(e_{i})}=\partial_{z_{1}^{(r)}}x_{i}$ for $e_{i}>0$, so that $\partial_{z_{1}^{(r+1)}}^{2}x_{i}=0$. Substituting $\partial_{z_{1}^{(r+1)}}x_i$ for $C_{i}$ in $D^{k}P$, one gets $\partial_{z_{1}^{(r+1)}}^{k}P=0$.
We now restrict ourselves to differential dimension $2$. One may complete some coordinates $x_{i}$ on $V$, regarded as functions of the $z_{j}$ through the morphism $\phi$, to obtain local coordinates on $W$, *e.g.* by choosing $z_{i}, \ldots, z_{i}^{(s_{i})}$, $i=1,2$, ($s_{1}\le r$), and the $z_{j}^{(k)}$, $j>2$, $k\in\N$. One may then express the derivations $\partial_{z_{1}^{(k)}}$ in the coordinates of $V$. In this way, for state equations defining $V$, *lemma* \[Rouchon\] means that $\partial_{z_{1}^{(r+1)}}$ and $\partial_{z_{1}^{(r)}}=[\delta,\partial_{z_{1}^{(r+1)}}]$ commute. The next lemma goes one step further by considering $\partial_{z_{1}^{(r-1)}}=[\delta,\partial_{z_{1}^{(r)}}]$ (*cf.* [@zharinov1996]).
\[Rouchon2\] Under the hypotheses of th. \[Rouchon\], assume that the system $V/U$ is locally defined by explicit equations of order $1$: $$\label{state-eq}
P_{i}:=x_{i}'-h_{i}(x,x_{1}',x_{2}',u)=0,\quad 2< i\le n.$$
If $V/U$ is parametrizable, the homogeneous ideal $(D^{k}P_{i}|1\le i\le n-2,\> k\in\N)$ has projective dimension $0$ iff one of the equations $P_{i}$ is nonlinear in the derivatives $x_{i}'$.
Then, the equations (\[state-eq\]) can be rewritten $x_{i}'=f_{i}(x,v_{2},u)v_{1}+g_{i}(x,v_{2},u)$, where the $v_{i}$ are functions of the $x_{i}$ and the $x_{j}'$, $j=1,2$, such that $\partial_{z_{1}^{(r+1)}}v_{2}=0$, $f_{1}=1$ (i.e. $v_{1}=x_{1}'$) and $f_{2}=v_{2}$ if $f_{2}$ depends on $v_{2}$.
Moreover, $\partial_{z_{1}^{(r)}}f_{i}(x,v_{2},u)=0$.
— By th. \[Rouchon\], the dimension is at least $0$. If $P_{i}$ is nonlinear in the $x_{i}'$, $i=1,2$, then there exists a non-trivial equation $D^{2}P_{i}=0$, and the dimension is at most $0$. — Up to a permutation, we may assume $C_{1}\neq0$. Then, $C_{2}=F(x,x_{1}',x_{2}')C_{1}$ with $\partial_{z_{1}^{(r)}}F=0$. We take $v_{1}=x_{1}'$. If $F$ depends on $x_{2}'$, we set $v_{2}=F$, $f_{2}(v_{2})=v_{2}$ and $g_{2}=x_{2}'-v_{2}$, or else $f_{2}(x)=F(x)$, $v_{2}=x_{2}'-f_{2}(x)v_{1}$ and $g_{2}(v_{2})=v_{2}$. — We have $\partial_{z_{1}^{(r+1)}}=A\partial_{v_{1}}+2A'\partial_{v_{1}'}+\cdots+B\partial_{v_{2}'}+\cdots$ Hence $\partial_{z_{1}^{(r)}}=\sum_{i=1}^{n}Af_{i}\partial_{x_{i}}+A'\partial_{v_{1}}+\cdots+B\partial_{v_{2}}+\cdots$ At the next step, since the terms in $A'$ cancel, we get: $\partial_{z_{1}^{(r-1)}}=\sum_{i=1}^{n}\left(Af_{i}'+\hbox{terms of order at most $r$ in $z_{1}$}\right)\partial_{x_{i}}+\cdots$ Since $[\partial_{z_{1}^{(r+1)}},\partial_{z_{1}^{(r-1)}}]=0$, we must have $\partial_{z_{1}^{(r+1)}}f_{i}'=\partial_{z_{1}^{(r)}}f_{i}=0$.
Implementation of Rouchon's lemma
=================================
Let $z$, $v$, $u$, $u_{\ast}$ be coordinates on $T^{\mu}$, $V$, $U$, $U_{\ast}$. If the theorem is false, there exists an open set $O=\phi(\tilde O)\subset V$ such that no open set $O_{\prime}=\phi(\tilde O_{\prime})\subset O$ is flat. We look for a contradiction.
In the case $m=1$, let $x_{i}$, $1\le i\le n$, be functions which, together with their derivatives, define local coordinates on $O_{\prime}\subset O$. We further assume that they satisfy order-$1$ equations $x_{i}'=h_{i}(x, x_{1}', u_{\ast})$, $1<i\le n$, and that $n$ is minimal. If $n=1$, $O_{\prime}$ is flat.
Otherwise, Th. \[Rouchon\] implies that the $h_{i}$ are linear in $x_{1}'$: $h_{i}=f_{i}x_{1}'+g_{i}$. One may then replace the $x_{i}$ by $n-1$ independent solutions $y(x,u_{\ast})$ of the equation $\partial_{x_{1}} Y+\sum_{i=2}^{n}f_{i}\partial_{x_{i}} Y=0$. The derivatives $y_{i}'$ do not depend on $x_{1}'$, so the $y_{i}$ satisfy a new order-$1$ system, contradicting the minimality of $n$.
In the case $m=2$, we likewise consider $x_{i}(z,u,u_{\ast})$, $1\le i\le n$, satisfying an order-$1$ system $x_{i}'=h_{i}(x, x_{1}', x_{2}',u_{\ast})$, $2<i\le n$, with $\max_{i=1}^{n}\ord_{z_{1}} x_{i}=r>-\infty$. We assume the pair $(r, n)$ minimal for the lexicographic order.
If $n=2$, $O_{\prime}$ is flat. If the $h_{i}$ do not depend on $x_{1}'$ or $x_{2}'$, we are reduced to the case $m=1$, already treated. We distinguish two situations.
i\) If the $h_{i}$ are all linear in $x_{1}'$ and $x_{2}'$: $h_{i}=\sum_{j=1}^{2}f_{i,j}x_{j}'+g_{i}$, $1\le i\le n$, we replace the $x_{i}$ by $n-1$ independent solutions $y(x, u_{\ast})$ of the equation $\partial_{x_{1}} Y+\sum_{i=3}^{n}f_{i,1}\partial_{x_{i}} Y=0$. The order $\max_{i=1}^{n-1}\ord_{z_{1}} y_{i}$ is at most $r$. The $y_{i}'$ do not depend on $x_{1}'$, and therefore satisfy a new order-$1$ system. If the $y_{i}'$ depend on $x_{1}$, they satisfy our hypotheses, contradicting the minimality of $n$. Otherwise, we are reduced to the case $m=1$ and it suffices to complete the flat output of the diffiety defined by the $y_{i}$ with $x_{1}$ to conclude.
ii\) If the $h_{i}$ are not all linear in $x_{1}'$ and $x_{2}'$, by Lemma \[Rouchon2\] the state equations can be rewritten as $x_{i}'=f_{i}(x,v_{2},u_{\ast})v_{1}+g_{i}(x,v_{2},u_{\ast})$, with $\partial_{z_{1}^{(r+1)}}v_{2}=0$ and $\partial_{z_{1}^{(r)}}f_{i}(x,v_{2},u)=0$. We replace the $x_{i}$ by $n-1$ independent solutions $y_{i}(x,v_{2},u_{\ast})$ of the differential equation $\sum_{i=1}^{n}f_{i}\partial_{x_{i}}Y=0$, completed with $v_{2}$ if the $f_{i}$ depend on $v_{2}$. The $y_{i}'$ must depend on $x_{1}$; otherwise the $f_{i}$ would be constants, and the $y_{i}'$ would satisfy a linear system, so that the $h_{i}$ would also be linear in the $x_{j}'$, $j=1,2$.
Hence the $y_{i}$ (and $v_{2}$ if $f_{2}=v_{2}$) generate $\cO(O_{\prime})$, must satisfy an order-$1$ system and, by Lemma \[Rouchon2\], are of order strictly less than $r$ in $z_{1}$: a final contradiction, which completes the proof.
Consider $V$ defined by a car model $\theta'=(u/d)\tan \phi$, $x'=u\cos\theta$, $y'=u\sin\theta$ (*cf.* [@fliess (18)]). The diffieties $U=U_{0}$ are defined by $d'=0$. Consider the parametrizations $\phi$ given by $\theta
=\arctan(z_{2}'/z_{1}') \mp
\arctan(z_{3}'/((z_{1}')^{2}+(z_{2}')^{2}-(z_{3}')^{2})^{1/2})+(1\mp
1)\pi/2$, $x=z_{1}+z_{3}\sin\theta$ and $y=z_{2}-z_{3}\cos\theta$. When ${x_{3}'}^2$ tends to ${x_{1}'}^{2}+{x_{2}'}^{2}$, the two parametrizations tend to the locus where $x'=y'=0$. A family of flat outputs is given, *e.g.*, by $B_{1}:=x+C\sin\theta$ and $B_{2}:=y-C\cos\theta$, $C\in\R$.
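A quick check of these formulas: they must reproduce the rolling constraint $x'\sin\theta-y'\cos\theta=0$, equivalently $x'=u\cos\theta$, $y'=u\sin\theta$ for some $u$. A minimal SymPy sketch, with an arbitrarily chosen test trajectory $z_i(t)$ of our own (purely illustrative), verifies this for the upper-sign branch:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Hypothetical smooth test trajectory for z_1, z_2, z_3, chosen so that
# z_1'^2 + z_2'^2 - z_3'^2 > 0 and z_1' > 0 at the points probed below.
z1 = t + sp.sin(t) / 4
z2 = t**2 / 2
z3 = sp.sin(t) / 3
z1p, z2p, z3p = (sp.diff(z, t) for z in (z1, z2, z3))

# Upper-sign branch of the parametrization quoted above.
theta = sp.atan(z2p / z1p) - sp.atan(z3p / sp.sqrt(z1p**2 + z2p**2 - z3p**2))
x = z1 + z3 * sp.sin(theta)
y = z2 - z3 * sp.cos(theta)

# Rolling constraint: x' sin(theta) - y' cos(theta) must vanish, i.e.
# (x', y') is parallel to (cos(theta), sin(theta)).
residual = sp.diff(x, t) * sp.sin(theta) - sp.diff(y, t) * sp.cos(theta)
for tv in (0.3, 1.0, 2.2):
    print(float(residual.subs(t, tv)))   # ~1e-16, zero up to rounding
```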
Any choice of functions $B_{i}$ of $x$, $y$ and $\theta$ minimizes $\epsilon(=2)$, but not necessarily $e_{2}(\in\{0,1\})$. If $e_{2}=1$, it can be lowered by replacing $P_{1}$ with a solution of (\[state-eq\]) (*cf.* i) *supra*). If $e_{2}=0$ and $s=1$, then $s=0$ is obtained by replacing $A_{2}$ with $P_{1}$ (*cf.* ii) a) *supra*).
Taking $U$ defined by $t'=1$ and substituting $t$ for $z_{3}$ in the formulas of the previous example, one indeed obtains new parametrizations independent of $t$.
Conclusion {#conclusion .unnumbered}
==========
We hope for an adaptation of this proof in the framework of differential algebra, or in differential dimension greater than $2$, but many difficulties arise that we do not know how to overcome.
[00]{}
É. <span style="font-variant:small-caps;">Cartan</span>, «Sur l’intégration de certains systèmes indéterminés d’équations différentielles», *Journal für die reine und angewandte Mathematik*, [**145**]{}, 86–91, 1915.
B. Charlet, J. Lévine et R. Marino, On dynamic feedback linearization, *Systems & Control Letters*, [**13**]{}, 143–151, North-Holland, 1989.
M. <span style="font-variant:small-caps;">Fliess</span>, J. <span style="font-variant:small-caps;">Lévine</span>, Ph. <span style="font-variant:small-caps;">Martin</span> et P. <span style="font-variant:small-caps;">Rouchon</span>, Flatness and defect of nonlinear systems: introductory theory and examples, *Internat. J. Control*, 61, p. 1327–1361, 1995.
M. <span style="font-variant:small-caps;">Fliess</span>, J. <span style="font-variant:small-caps;">Lévine</span>, Ph. <span style="font-variant:small-caps;">Martin</span> et P. <span style="font-variant:small-caps;">Rouchon</span>, “Deux applications de la géométrie locale des diffiétés”, *Annales de l’IHP, section A*, **66**, (3), 275–292, 1997.
M. <span style="font-variant:small-caps;">Fliess</span>, J. <span style="font-variant:small-caps;">Lévine</span>, Ph. <span style="font-variant:small-caps;">Martin</span> et P. <span style="font-variant:small-caps;">Rouchon</span>, A Lie-Bäcklund approach to equivalence and flatness of nonlinear systems, *IEEE AC.* 44:922–937, 1999.
D. <span style="font-variant:small-caps;">Hilbert</span>, Über den Begriff der Klasse von Differentialgleichungen, [ *Math. Annalen*]{}, 73, 95–108, 1912.
G. <span style="font-variant:small-caps;">Monge</span>, «Supplément où l’on fait savoir…», [ *Histoire de l’Académie royale des sciences*]{}, Paris, 502–576, 1787.
F. <span style="font-variant:small-caps;">Ollivier</span>, Une réponse négative au problème de Lüroth différentiel en dimension 2, *C. R. Acad. Sci. Paris*, t. 327, Série I. p. 881–886, 1998.
J. Lévine, *Analysis and Control of Nonlinear Systems: A Flatness-based Approach*, Springer, 2009.
F. Ollivier et B. Sadik, La borne de Jacobi pour une diffiété définie par un système quasi régulier, *C. R. Acad. Sci. Paris, Sér. I*, [**345**]{}, 3, 139–144, 2007.
P.S. Pereira da Silva et P. Rouchon, On time-invariant systems possessing time-dependent flat outputs, actes de *NOLCOS 2004*, Elsevier, 2004.
P. Rouchon, Necessary condition and genericity of dynamic feedback linearization, *Journal of Mathematical Systems Estimation and Control*, Birkhäuser Boston, **4**, (2), 1–14, 1994.
H. <span style="font-variant:small-caps;">Sira-Ramírez</span> and S. <span style="font-variant:small-caps;">Agrawal</span>, *Differentially Flat Systems*, Marcel Dekker, New York, 2004.
I.S. <span style="font-variant:small-caps;">Krasil’shchik</span>, V.V. <span style="font-variant:small-caps;">Lychagin</span> et A.M. <span style="font-variant:small-caps;">Vinogradov</span>, *Geometry of Jet Spaces and Nonlinear Partial Differential Equations*, Gordon and Breach, New York, 1986.
V.V. <span style="font-variant:small-caps;">Zharinov</span>, *Geometrical aspects of partial differential equations*, Series on Soviet and East European Mathematics, vol. 9, World Scientific, Singapore, 1992.
V.V. <span style="font-variant:small-caps;">Zharinov</span>, “On differentiations in differential algebras”, *Integral Transforms and Special Functions*, **4**, (1,2), 163–180, 1996.
P. <span style="font-variant:small-caps;">Zervos</span>, *Le problème de Monge*, Mémorial des Sciences Math., fasc. LIII, Gauthier-Villars, Paris, 1932.
---
abstract: 'We propose a mechanism to actively tune the operation of plasmonic cloaks with an external magnetic field by investigating electromagnetic scattering by a dielectric cylinder coated with a magneto-optical shell. In the long wavelength limit we show that the presence of a magnetic field may drastically reduce the scattering cross-section at all observation angles. We demonstrate that the application of magnetic fields can modify the operation wavelength without the need of changing material and/or geometrical parameters. We also show that applied magnetic fields can reversibly switch on and off the cloak operation. These results, which could be achieved for existing magneto-optical materials, are shown to be robust to material losses, so that they may pave the way for developing actively tunable, versatile plasmonic cloaks.'
author:
- 'W. J. M. Kort-Kamp'
- 'F. S. S. Rosa'
- 'F. A. Pinheiro'
- 'C. Farina'
title: Tuning plasmonic cloaks with an external magnetic field
---
The idea of rendering an object invisible in free space, which had been restricted to human imagination for many years, has become an important scientific and technological endeavour since the advent of metamaterials. Progress in micro- and nanofabrication have fostered the construction of artificial metamaterials with unusual electromagnetic (EM) properties [@zheludev2012], which have been exploited in the development of a variety of approaches for designing EM cloaks and achieving invisibility. Among these approaches one can highlight the coordinate-transformation method [@pendry2006; @leonhardt2006; @schurig2006] and the scattering cancelation techniques [@alu2005; @alu2008; @edwards2009; @filonov2012; @chen2012; @rainwater2012; @KortKamp2013; @nicorovici1994]. The coordinate-transformation method, grounded in the emerging field of transformation optics [@chen2010], requires metamaterials with anisotropic and inhomogeneous profiles, which are able to bend the incoming EM radiation around a given region of space, rendering it invisible to an external observer. This method has been first experimentally realized for microwaves [@schurig2006], and later extended to infrared and visible frequencies [@valentine2009; @ergin2010]. The scattering cancellation technique, which constitutes the basis for the development of plasmonic cloaks, was proposed in [@alu2005]. Applying this technique, a dielectric or conducting object can be effectively cloaked by covering it with a homogeneous and isotropic layer of plasmonic material with low-positive or negative electric permittivity. In these systems, the incident radiation induces a local polarization vector in the shell that is out-of-phase with respect to the local electric field so that the in-phase contribution given by the cloaked object may be partially or totally canceled [@alu2005; @alu2008; @chen2012]. Experimental realizations of cylindrical plasmonic cloaks for microwaves exist in 2D [@edwards2009] and 3D [@rainwater2012], paving the way for many applications in camouflaging, low-noise measurements and non-invasive sensing [@alu2008; @chen2012].
Despite the notable progresses in all cloaking techniques, the development of a practical, versatile cloaking device remains a challenge. One of the reasons is that the operation frequency of many cloaking devices is typically narrow, although some photonic crystals with large, complete photonic band gaps exist and could be explored in the design of wide band cloaking systems [@lin1]. Another reason is that the majority of cloaking devices developed so far are generally tailored to operate at a given frequency band, that cannot be freely modified after fabrication. As a result, if there is a need to modify the operation frequency band it is usually necessary to engineer a new device, with different geometric parameters and materials, limiting its applicability [@alu2008]. To circumvent this limitation, some proposals of tunable cloaks have been developed [@peining; @zharova2012; @milton2009]. One of them is to introduce a nonlinear layer in multi-shell plasmonic cloaks, so that the scattering cross-section can be controlled by changing the intensity of the incident EM field [@zharova2012]. Core-shell nonlinear plasmonic particles can also be designed to exhibit Fano resonances, allowing a swift switch from being completely cloaked to being strongly resonant [@monticone2013]. Another implementation of tunable plasmonic cloaks involves the use of a graphene shell [@chen2011]. However, most of these proposals are based either on a [*passive*]{} mechanism of tuning the cloaking device ([*e.g.*]{}, by chemically modifying the graphene surface by carboxylation and thiolation [@chen2011; @chuang2007]) or depends on a given range of intensities for the incident excitation, as in the case of nonlinear plasmonic cloaks [@zharova2012].
Here we propose an alternative mechanism to [*actively*]{} tune the operation of plasmonic cloaks with an external magnetic field ${\bf B}$. We show that this is feasible by investigating EM scattering by a dielectric cylinder coated by a magneto-optical shell. The application of ${\bf B}$ drastically reduces the differential scattering cross-section; this reduction can be as high as $95\%$ if compared to the case without ${\bf B}$. The presence of ${\bf B}$ modifies the operation wavelength without the need of changing material and/or geometrical parameters. Besides, ${\bf B}$ could be used as an external agent to reversibly switch on and off the operation of the cloak. Our results are robust to material losses and could be achieved for existing magneto-optical materials and moderate magnetic fields, so that they might be useful for developing actively tunable, versatile plasmonic cloaks.
In Fig. \[Figura1\] we show the scheme of the system: an infinitely long, non-magnetic cylinder with permittivity $\varepsilon_c$ and radius $a$ coated with a magneto-optically active cylindrical shell with magnetic permeability $\mu_0$ and radius $b>a$. The system is subjected to a uniform ${\bf B}$ parallel to the cylindrical symmetry axis. Let us consider a TM plane wave of frequency $\omega$ impinging normally on the cylinder. The background medium is vacuum. In the presence of ${\bf B}$ the shell permittivity $\stackrel{\leftrightarrow}{\varepsilon}_s$ is anisotropic; for the geometry of Fig. \[Figura1\] it reads [@Stroud1990]
![Schematic representation of the scattering system.[]{data-label="Figura1"}](Fig1.eps)
$$\begin{aligned}
\label{TensoresEeM}
\stackrel{\leftrightarrow}{\varepsilon}_s =
\left(
\begin{array}{ccc}
\varepsilon_{xx} & \varepsilon_{xy} & 0 \\
\varepsilon_{yx} & \varepsilon_{yy} & 0 \\
0 & 0 & \varepsilon_{zz} \\
\end{array}
\right)
= \left(
\begin{array}{ccc}
\varepsilon_{s} & i\gamma_s & 0 \\
-i\gamma_s & \varepsilon_{s} & 0 \\
0 & 0 & \varepsilon_{zz} \\
\end{array}
\right)\end{aligned}$$
where the off-diagonal term $\gamma_s$ has a dispersive character and depends upon $B$, vanishing in its absence. EM scattering from a coated cylinder can be analyzed with Mie theory [@BohrenHuffman; @Stroud1990]. The scattering cross-section efficiency $Q_{\textrm{sc}}$ is $$\label{Qsc}
Q_{\textrm{sc}} = \dfrac{2}{k_0b}\,\displaystyle{\sum_{m=-\infty}^{+\infty}|D_m|^2},
$$ where $D_m$ are the scattering coefficients and $k_0 = 2\pi/\lambda$ is the incident wavenumber. We consider the dipole approximation, i.e. $k_0b\ll 1\, , \ k_cb\ll 1\, , \ k_sb\ll1$, where the dominant scattering terms are $m = 0\, , \ m = \pm 1$, and $k_c$ ($k_s$) is the core (shell) wavenumber. The condition for invisibility is determined by imposing that each scattering coefficient vanishes separately. In the dipole approximation, $D_0$ is identically zero for non-magnetic core and shell. For $D_{-1}$ and $D_{+1}$ to vanish, the ratio $\eta \equiv a/b$ must be $$\label{razaoraios}
\eta_{\pm 1} = \sqrt{\dfrac{(\varepsilon_0\pm \gamma_s-\varepsilon_s)(\varepsilon_c \pm \gamma_s+\varepsilon_s)}
{(\varepsilon_c \pm \gamma_s-\varepsilon_s)(\varepsilon_0 \pm \gamma_s+\varepsilon_s)}},$$ where $\eta_{-1}$ and $\eta_{+1}$ are the values of $\eta$ for which $D_{-1}$ and $D_{+1}$ vanish, respectively.
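To illustrate how the condition (\[razaoraios\]) is used in practice, the short Python sketch below (the function name and the working unit $\varepsilon_0=1$ are our own conventions) evaluates $\eta_{\pm1}$ for the parameters discussed below; for $\gamma_s=0$ it recovers the field-free invisibility ratio $\eta\simeq0.92$, and only values with $0\le\eta_{\pm1}\le1$ correspond to a realizable coated cylinder.

```python
import numpy as np

def eta_invisibility(eps_c, eps_s, gamma_s, eps0=1.0, sign=+1):
    """Radii ratio a/b cancelling the m = +1 (sign=+1) or m = -1 (sign=-1)
    dipole scattering coefficient, following the invisibility condition above.
    Returns NaN when the ratio inside the square root is negative."""
    g = sign * gamma_s
    num = (eps0 + g - eps_s) * (eps_c + g + eps_s)
    den = (eps_c + g - eps_s) * (eps0 + g + eps_s)
    ratio = num / den
    return np.sqrt(ratio) if ratio >= 0 else np.nan

# Example: eps_c = 10 eps0 and eps_s = 0.1 eps0 (the case of Fig. 3a below)
for gs in (0.0, 1.0, 3.0, 5.0):
    print(gs, eta_invisibility(10.0, 0.1, gs, sign=+1),
          eta_invisibility(10.0, 0.1, gs, sign=-1))
# gamma_s = 0 gives eta ~ 0.92 for both coefficients, as quoted in the text.
```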
The conditions for achieving the cancellation of $D_{-1}$ and $D_{+1}$, obtained from Eq. (\[razaoraios\]), are shown in Fig. \[Figura2\], as a function of the permittivities of the inner core and of the outer cloaking shell. In Fig. \[Figura2\], blue regions correspond to situations where either $D_{-1}$ or $D_{+1}$ vanish for a given $\gamma_s$ and different ratios $\eta_{\pm1}$ ($0\leq \eta_{\pm1} \leq 1$). From Fig. \[Figura2\] and Eq. (\[razaoraios\]) we conclude that it is not possible to obtain $\eta_{-1} = \eta_{+1}$ for $\gamma_s \neq 0$, [*i.e.*]{} the contributions of $D_{-1}$ and $D_{+1}$ to $Q_{sc}$ cannot be simultaneously canceled for the same $\eta$ in the presence of ${\bf B}$. However, it is still possible to drastically reduce $Q_{sc}$ and, for a given $\eta$, obtain values for the scattering cross-section that are substantially smaller in the presence of ${\bf B}$. Indeed, guided by Fig. \[Figura2\] and by inspecting Eq. (\[razaoraios\]), one can choose a ratio $\eta_{-1} $ or $\eta_{+1}$ that vanishes the contribution of either $D_{-1}$ or $D_{+1}$ to $Q_{sc}$, respectively. As a result, a strong reduction of $Q_{sc}$ can be achieved since $D_{0}$ is already zero for non-magnetic materials.
![Regions for which the invisibility condition (\[razaoraios\]) is satisfied for the corresponding $\eta_{\pm1}$ between 0 and 1 in the presence of off-diagonal terms in electric permittivity, proportional to the applied magnetic field.[]{data-label="Figura2"}](Fig2.eps)
In Fig. \[Figura3\], $Q_{sc}$ in the presence of ${\bf B}$ ($\gamma_s \neq 0$) is calculated as a function of $\eta$ for different $\varepsilon_c$ and $\varepsilon_s$. $Q_{sc}$ is normalized by the scattering efficiency in the absence of ${\bf B}$, $Q_{sc}^{(0)}$, and $b=0.01 \lambda$ (i.e. dipole approximation is valid). Figure \[Figura3\]a corresponds to the situation where, given the parameters $\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = 0.1 \varepsilon_0$, it is possible to achieve invisibility for $\eta \simeq 0.92$ in the absence of the magnetic field [@alu2005]. To see this it suffices to put $\gamma_s = 0$ and substitute the set of material parameters used in Fig. \[Figura3\]a in Eq. (\[razaoraios\]). The application of ${\bf B}$ shifts the values of $\eta$ that cause a strong reduction of $Q_{sc}$ to higher values. Interestingly, by increasing ${\bf B}$ one can not only shift the operation range of the device for $\eta$ close to 1, but also significantly decrease $Q_{sc}$. Indeed, for $0.98 \leq \eta < 1$ the application of ${\bf B}$ leads to a reduction of the order of 50% in $Q_{sc}$ if compared to the case where ${\bf B} = {\bf 0}$. This result indicates that, for this set of parameters, an optimal performance of the cloak could be achieved for very thin magneto-optical films.
In Fig. \[Figura3\]b, for the chosen set of parameters ($\varepsilon_c = 0.1\varepsilon_0$ and $\varepsilon_s = 10\varepsilon_0$) EM transparency also occurs for $\eta \simeq 0.92$ in the absence of ${\bf B}$. In the presence of ${\bf B}$ achieving perfect invisibility (vanishing scattering cross-section) is no longer possible for $\eta \simeq 0.92$, since a peak in $Q_{sc}$ emerges precisely for this value of $\eta$. Physically, this peak in $Q_{sc}$ can be explained by the fact that the EM response of the shell is no longer isotropic in the presence of ${\bf B}$; hence the induced electric polarization within the shell is not totally out-of-phase with respect to the one induced within the inner cylinder. The net effect is that these two induced electric polarizations are not capable to cancel each other anymore in the presence of ${\bf B}$, resulting in a non-vanishing scattered field. The fact that the application of ${\bf B}$ induces a peak in $Q_{sc}$ in a system originally conceived to achieve perfect invisibility (in the absence of ${\bf B}$) suggests that a magnetic field could be used as an external agent to reversibly switch on and off the operation of the cloak. On the other hand, for $\eta \lesssim 0.9$ and this set of parameters the application of ${\bf B}$ always lead to a reduction of $Q_{sc}$ with respect to $Q_{sc}^{(0)}$, increasing the efficiency of the cloak. For example, for $\gamma_s = 5\varepsilon_0$ and $\eta = 0.88$ it is possible to achieve a reduction of $Q_{sc}$ of the order of 93% with respect to $Q_{sc}^{(0)}$; for $\gamma_s = 3\varepsilon_0$ and $\eta \simeq 0.9$ this reduction is of the order of 80%. These results demonstrate that the reduction of $Q_{\textrm{sc}}$ is robust against the variation of $B$. For $\eta \lesssim 0.9$ the increase of ${\bf B}$ not only improves the efficiency of the cloak, by further decreasing $Q_{sc}$ with respect to $Q_{sc}^{(0)}$, but also shifts the operation of the device to lower values of $\eta$. This result contrasts to the behavior of $Q_{sc}$ shown in Fig. \[Figura3\]a for different $\varepsilon_c$ and $\varepsilon_s$, where an increase in $B$ shifts the operation range of the cloak to higher values of $\eta$. This means that, for fixed $\eta$, the scattering signature of the system, and hence the cloaking operation functionality, can be drastically modified by varying either $B$ or the frequency ([*i.e.*]{}, by varying $\varepsilon_c$ and $\varepsilon_s$), demonstrating the versatility of the magneto-optical cloak.
![Scattering efficiency $Q_{sc}$ (normalized by its value in the absence of ${\bf B}$, $Q_{sc}^{(0)}$) as a function of the ratio between the external and internal radii $\eta$ for (a) $\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = 0.1 \varepsilon_0$, (b) $\varepsilon_c = 0.1\varepsilon_0$ and $\varepsilon_s = 10\varepsilon_0$, (c) $\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = -1.5\varepsilon_0$, and (d) $\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = 1.5\varepsilon_0$. Different curves correspond to different magnitudes of the off-diagonal term of the electric permittivity $\gamma_s$, proportional to $B$: $\gamma_s = \varepsilon_0$ (dash-dotted curve), $\gamma_s = 3\varepsilon_0$ (solid curve), $\gamma_s = 5\varepsilon_0$ (dashed curve), $\gamma_s = 12\varepsilon_0$ (dotted curve).[]{data-label="Figura3"}](Fig3.eps)
In Fig. \[Figura3\]c, $Q_{sc}$ is calculated as a function of $\eta$ for $\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = -1.5\varepsilon_0$, that are values of permittivity that preclude the possibility of invisibility for any $\eta$ and ${\bf B} = {\bf 0}$. Figure \[Figura3\]c reveals that for ${\bf B} \neq {\bf 0}$ a very strong reduction in $Q_{sc}$ can occur for a wide range of values of $\eta$, showing that one can drastically reduce the scattered field by applying a magnetic field to a system that is not originally designed to perform as an EM cloak. Indeed, for $\gamma_s = 3\varepsilon_0, 5\varepsilon_0, 12\varepsilon_0$ the reduction in $Q_{sc}$ ranges from $80\%$ to $95\%$ for $0 < \eta \leq 0.5$. This result complements the one depicted in Fig. \[Figura3\]b, where the presence of ${\bf B}$ was also shown to be capable to switch off the functionality of the system as a cloak. The crossover between these distinct scattering patterns could be achieved by varying the incident wave frequency.
Figure \[Figura3\]d also corresponds to the case where invisibility cannot occur for ${\bf B} = {\bf 0}$ ($\varepsilon_c = 10\varepsilon_0$ and $\varepsilon_s = 1.5\varepsilon_0$). For ${\bf B} \neq {\bf 0}$ again there is a reduction of $Q_{sc}$ for $ \eta > 0.8$, which can be of the order of $50\%$ for $\gamma_s = 12\varepsilon_0$, further confirming that the application of ${\bf B}$ to a cylinder coated with a magneto-optical shell can switch on the functionality of the device as an invisibility cloak.
![Spatial distribution of the scattered field $H_{z}$ for $B=0$ (a) and (b) $B=9\,T$ for $\varepsilon_c = 10\varepsilon_0$. The incident frequency is $\omega/2\pi = 2.93$THz and $a/b= 0.6$ with $b = 0.01\lambda$. Differential scattering cross-section corresponding to the cases in (a) (blue solid curve) and (b) (dashed red curve) is shown in panel (c). (d) Scattering efficiency (normalized by $Q_{sc}^{(0)}$) as a function of the frequency and $B$. []{data-label="Figura4"}](Fig4.eps)
To discuss a possible experimental realization of the system proposed here, let us consider the existing cylindrical implementation of a cloaking device described in Ref. [@rainwater2012], but without the parallel-plate implants. The shell is assumed to be made of a magneto-optical dielectric material described by the Drude-Lorentz model [@King2009] with realistic material parameters, including losses: oscillator strength frequency $\Omega/2\pi = 3$THz, resonance frequency $\omega_0/2\pi = 1.5$THz, and damping frequency $\Gamma/2\pi = 0.03$THz. Drude-Lorentz permittivities are extensively used to describe a wide range of media, including magneto-optical materials. A particularly recent example is monolayer graphene epitaxially grown on SiC [@crassee2012], where the Faraday rotation is very well described by an anisotropic Drude-Lorentz model, with similar parameters to the ones proposed here. For concreteness, we set the device to operate around the frequency $f = 2.93$ THz; the inner and outer radii are $a = 0.6b$ and $b = 0.01\lambda$, respectively. The core is made of a dielectric with $\varepsilon_c = 10\varepsilon_0$, and negligible losses. The spatial distribution of the scattered field $H_z$ in the $xy$ plane is shown in Fig. \[Figura4\]. For this set of parameters invisibility cannot occur for ${\bf B}= {\bf 0}$, and the corresponding spatial distribution of the scattered field is dipole-like (Fig. \[Figura4\]a), as expected since for $b\ll \lambda$ the electric dipole term is dominating. Figure \[Figura4\]b reveals that the presence of ${\bf B}$ strongly suppresses the scattered field for all angles, further indicating that ${\bf B}$ could play the role of an external agent to switch on and off the cloaking device. This result is corroborated by Fig. \[Figura4\]c, where the differential scattering efficiency is calculated without and with the presence of ${\bf B}$. Figure \[Figura4\]c confirms that the scattered radiation, which is initially dipole-like for ${\bf B}= {\bf 0}$, is strongly suppressed in all directions when ${\bf B}$ is applied, even in the presence of material losses. Figure \[Figura4\]d exhibits a contour plot of the scattering efficiency $Q_{sc}$ (normalized by $Q_{sc}^{(0)}$) as a function of both $B$ and the incident wave frequency $f$. From Fig. \[Figura4\]d one can see that the reduction of $Q_{\textrm{sc}}$ can be as large as $95$% if compared to the case without ${\bf B}$. We emphasize that these findings are robust against material losses, illustrated by the fact that we are allowing for quite typical dissipation parameters and the cloak works at the levels discussed above. Furthermore, we checked that even for much larger dissipative systems, characterized by $b=0.1\lambda$ and $b=0.2\lambda$, the cross section reduction (calculated including up to the electric octupole contribution) can be as impressive as $92\%$ and $83\%$, respectively. In addition, our results suggest that the efficiency of the proposed system is comparable to state-of-the-art existing cloaking apparatuses [@edwards2009; @rainwater2012] with the advantage of being highly tunable in the presence of magnetic fields. Besides, Fig. \[Figura4\]d demonstrates that for $B \simeq 15 T$ the reduction of $Q_{\textrm{sc}}$ of the order of $95$% occurs for a relatively broad band of frequencies, of the order of 30 GHz.
Finally, the effect of increasing $B$ around the design operation frequency is to broaden the frequency band for which the reduction of $Q_{\textrm{sc}}$ induced by the magnetic field is as large as $95$%.
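To give a feeling for how the quoted oscillator parameters translate into a shell permittivity near the operating frequency, here is a minimal sketch using a generic single-oscillator Lorentz form with $\varepsilon_\infty=1$; this is our own illustrative convention, not the full magneto-optical model of Ref. [@King2009], whose off-diagonal response $\gamma_s(B)$ is not reproduced here. It merely shows that, under this assumption, the shell permittivity is small and negative around $f=2.93$ THz, the regime in which scattering cancellation operates.

```python
import numpy as np

TWO_PI = 2.0 * np.pi
# Oscillator parameters quoted in the text, converted to angular frequencies in THz
Omega, omega0, Gamma = TWO_PI * 3.0, TWO_PI * 1.5, TWO_PI * 0.03

def eps_lorentz(f_thz, eps_inf=1.0):
    """Generic single-oscillator Lorentz permittivity (illustrative convention only)."""
    w = TWO_PI * f_thz
    return eps_inf + Omega**2 / (omega0**2 - w**2 - 1j * Gamma * w)

print(eps_lorentz(2.93))   # ~ -0.42 + 0.02j near the chosen operating frequency
```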
There are dielectric materials that exhibit strong magneto-optical activity that could be used in the design of a magneto-optical cloak. Single- and multilayer graphene are promising candidates, as they exhibit giant Faraday rotations for moderate magnetic fields [@crassee2011]. Garnets are paramagnetic materials with huge magneto-optical response, with Verdet constants as high as $10^{4}$ deg./\[T.m.\] for visible and infrared frequencies [@barnes1992]. There are composite materials made of granular magneto-optical inclusions that show large values of $\gamma_s$ for selected frequencies and fields in the 10-100 T range [@Stroud1990; @Reynet2002]. These materials offer an additional possibility of tuning EM scattering by varying the concentration of inclusions, and have been successfully employed in plasmonic cloaks [@farhat2011]. It is worth mentioning that the magneto-optical response of typical materials, which will ultimately govern the tuning speed of the system, is usually very fast. Indeed, magneto-optical effects manifest themselves in a time scale related to the spin precession; for typical paramagnetic materials this time is of the order of nanoseconds [@kirilyuk2010]. Hence the tuning mechanism induced by magneto-optical activity is expected to be almost instantaneous after the application of ${\bf B}$.
In conclusion, we investigate EM scattering by a dielectric cylinder coated with a magneto-optical shell to conclude that one could actively tune the operation of plasmonic cloaks with an external magnetic field ${\bf B}$. In the long wavelength limit we show that the application of ${\bf B}$ may drastically reduce the scattering cross-section for all observation angles. The presence of ${\bf B}$ can also largely modify the operation range of the proposed magneto-optical cloak in a dynamical way. Indeed, for a fixed $\eta$, we demonstrate that invisibility can be achieved by applying a magnetic field for situations where the condition for transparency cannot be satisfied without ${\bf B}$. Conversely, a magnetic field can suppress invisibility in a system originally designed to act as an invisibility cloak. Together, these results suggest that one can dynamically switch on and off the magneto-optical cloak by applying ${\bf B}$. In terms of frequency, the application of ${\bf B}$ in a system with fixed $\eta$ could largely shift the cloak operation range, both to higher and lower frequencies. We also show that these results are robust against material losses and discuss the feasibility of designing a magneto-optical cloak using existing materials and moderate magnetic fields. We hope that our results could guide the design of dynamically tunable, versatile plasmonic cloaks, and optical sensors.
We thank R. M. de Souza, V. Barthem, and D. Givord for useful discussions, and FAPERJ, CNPq and CAPES for financial support.
[99]{}
N. I. Zheludev and Y. S. Kivshar, Nature Materials **11**, 917 (2012).
J. B. Pendry, D. Shurig, and D. R. Smith, Science **312**, 1780 (2006).
U. Leonhardt, Science **312**, 1777 (2006).
D. Schurig, J. J. Mock, B. J. Justice, S. A. Cummer, J. B. Pendry, A.F. Starr, and D.R. Smith, Science **312**, 977 (2006).
A. Alù and N. Engheta, **72**, 016623 (2005).
A. Alù and N. Engheta, J. Opt. A **10**, 093002 (2008).
B. Edwards, A. Alù, M. G. Silveirinha, and N. Engheta, **103**, 153901 (2009).
D. S. Filonov, A. P. Slobozhanyuk, P. A. Belov, and Y. S. Kivshar, Phys. Status Solidi (RRL) **6**, 46 (2012).
P. Y. Chen, J. Soric, and A. Alù, Adv. Mater. **24**, OP281 (2012).
D. Rainwater, A. Kerkhoff, K. Melin, J. C. Soric, G. Moreno, and A. Alú. New J. Phys. [**14**]{} 013054 (2012).
W. J. M. Kort-Kamp, F. S. S. Rosa, F. A. Pinheiro, and C. Farina, Phys. Rev. A [**87**]{}, 023837 (2013).
N. A. Nicorovici, R. C. McPhedran, and G. W. Milton, Phys. Rev. B [**49**]{}, 8479 (1994).
H. Chen, C. T. Chan, and P. Sheng, Nature Mater. **9**, 387 (2010).
J. Valentine, J. Li, T. Zentgraf, G. Bartal, and X. Zhang, Nat. Mater. **8**, 568 (2009).
T. Ergin, N. Stenger, P. Brenner, J. B. Pendry, and M. Wegener, Science **328**, 337 (2010).
L. Jia, I. Bita, and E. L. Thomas, Adv. Funct. Mater. **22**, 1150 (2012); L. Jia, and E. L. Thomas, **84**, 033810 (2011); L. Jia, I. Bita, and E. L. Thomas **84**, 023831 (2011).
P. Li, Y. Liu, Y. Meng, and M. Zhu, J. Phys. D: Appl. Phys. **43**, 175404 (2010); P. Li, Youwen Liu, Y. Meng, and M. Zhu, J. Phys. D: Appl. Phys. **43**, 485401 (2010).
N. A. Zharova, I. V. Shadrivov, A. A. Zharov, and Y. S. Kivshar, Opt. Express **20**, 14954 (2012).
F. G. Vasquez, G. W. Milton, and D. Onofrei, Phys. Rev. Lett. [**103**]{}, 073901 (2009); F. G. Vasquez, G. W. Milton, and D. Onofrei, Opt. Express [**17**]{}, 14800 (2009).
F. Monticone, C. Argyropoulos, and A. Alù, **110**, 113901 (2013).
P. Y. Chen, and A. Alù, ACS Nano **5**, 5855 (2011).
F. T. Chuang, P. Y. Chen, T. C. Cheng, C. H. Chien, and B. J. Li, Nanotechnology **18**, 395702 (2007).
C. F. Bohren, D. R. Huffman, [*Absorption and Scattering of Light by Small Particles*]{}, John Wiley and sons, Inc. (1983).
T. K. Xia, P. M. Hui and D. Stroud, J. App. Phys. **67**, 2736 (1990).
F. W. King, [*Hilbert Transforms, Vol. 2*]{}, pag. 313-316, Cambridge University Press (2009).
I. Crassee, M. Orlita, M. Potemski, A. L. Walter, M. Ostler, Th. Seyller, I. Gaponenko, J. Chen, and A. B. Kuzmenko, Nano Lett. [**12**]{}, 2470 (2012).
I. Crassee, J. Levallois, A. L. Walter, M. Ostler, A. Bostwick, E. Rotenberg, Th. Seyller, D. van der Marel, and A. B. Kuzmenko, Nat. Phys. **7** 48 (2011).
N. P. Barnes and L. B. Petway, J. Opt. Soc. Am. B **9**, 1912 (1992).
O. Reynet, A.-L. Adenot, S. Deprot, O. Acher, and M. Latrach, Phys. Rev. B [**66**]{}, 094412 (2002).
S. Mühlig, M. Farhat, C. Rockstuhl, and F. Lederer, Phys. Rev. B [**83**]{}, 195116 (2011).
A. Kirilyuk, A. V. Kimel, and T. Rasing, Rev. Mod. Phys. [**82**]{}, 2761 (2010).
---
abstract: 'Constraints on the validity of the hierarchical gravitational instability theory and the evolution of biasing are presented based upon measurements of higher order clustering statistics in the Deeprange Survey, a catalog of $\sim710,000$ galaxies with $I_{AB} \le 24$ derived from a KPNO 4m CCD imaging survey of a contiguous $4^{\circ} \times 4^{\circ}$ region. We compute the 3–point and 4–point angular correlation functions using a direct estimation for the former and the counts-in-cells technique for both. The skewness $s_3$ decreases by a factor of $\simeq 3-4$ as galaxy magnitude increases over the range $17 \le I \le 22.5$ ($0.1 \lesssim z \lesssim 0.8$). This decrease is consistent with a small [*increase*]{} of the bias with increasing redshift, but not by more than a factor of 2 for the highest redshifts probed. Our results are strongly inconsistent, at about the $3.5-4\ \sigma$ level, with typical cosmic string models in which the initial perturbations follow a non-Gaussian distribution – such models generally predict an opposite trend in the degree of bias as a function of redshift. We also find that the scaling relation between the 3–point and 4–point correlation functions remains approximately invariant over the above magnitude range. The simplest model that is consistent with these constraints is a universe in which an initially Gaussian perturbation spectrum evolves under the influence of gravity combined with a low level of bias between the matter and the galaxies that decreases slightly from $z \sim 0.8$ to the current epoch.'
author:
- István Szapudi
- Marc Postman
- 'Tod R. Lauer'
- William Oegerle
title: 'Observational Constraints on Higher Order Clustering up to $z\simeq 1$'
---
Introduction
============
The evolution of the spatial distribution of galaxies is intimately related to the physical processes of galaxy formation, to the initial spectrum and subsequent gravitational growth of matter fluctuations in the early universe, and to the global geometry of space-time. Quantifying the galaxy distribution is, thus, fundamental to cosmology and has dominated extragalactic astronomy for the past two decades. The $n-$point correlation functions provide a statistical toolkit that can be used to characterize the distribution.
The two-point correlation function is the most widely used statistic because it provides the most basic measure of galaxy clustering – the departure from a pure Poisson distribution. It is also popular because its execution is computationally straight forward. The two-point correlation function is defined as the joint moment of the galaxy fluctuation field, $\delta_g$, at two different positions $$\xi_2 = \xi = {\langle{\delta_{g,1}\delta_{g,2}}\rangle} \label{eq:xidef},$$ where ${\langle{}\rangle}$ means ensemble average. The two-point correlation function yields a full description of a Gaussian distribution only, for which all higher order connected moments are zero by definition. The galaxy distribution, however, exhibits non-Gaussian behavior on small scales due to non-linear gravitational amplification of mass fluctuations, even if they grew from an initially Gaussian field. On larger scales, where the density field is well represented by linear perturbation theory, non-Gaussian behavior may still be present if the initial perturbation spectrum was similarly non-Gaussian. In addition, the process of galaxy formation is likely to introduce biases between the clustering properties of the dark and luminous matter. In the presence of such realities, higher order moments are required to obtain a full statistical description of the galaxy distribution and to provide discrimination between different biasing scenarios ([[*e.g., *]{}]{}, Fry & Gaztañaga 1993, Fry 1994, Jing 1997).
An accurate determination of higher order clustering statistics requires a large number of galaxies and, to date, the most accurate measurements have been derived from wide-area angular surveys, such as the APM (Szapudi [[*et al. *]{}]{}1995; Gaztañaga 1994) and the EDSGC (Szapudi, Meiksin, & Nichol 1996), although recent redshift surveys are now becoming large enough to make interesting constraints (e.g., Hoyle, Szapudi, & Baugh 1999; Szapudi [[*et al. *]{}]{}2000a). These surveys are, by design, limited to the study of the current epoch galaxy distribution. Deep surveys add a further dimension to the exploration of clustering by enabling the study of its evolution. Ideally one would like to have deep, wide redshift surveys available for such analyses but it’s observationally infeasible at the current time. Projected surveys thus still provide a unique way to study the evolution of clustering, especially if photometric redshifts are available. Several deep surveys have been used to study the evolution of low order galaxy clustering (Lilly [[*et al. *]{}]{}1995; Le Fevre [[*et al. *]{}]{}1995; Neuschaffer & Windhorst 1995; Campos [[*et al. *]{}]{}1995; Connolly [[*et al. *]{}]{}1996; Lidman & Peterson 1996; Woods & Fahlman 1997; Connolly, Szalay, & Brunner 1998) but most suffer from insufficient contiguous area and are therefore barely large enough to measure the two-point correlation function. The Deeprange survey (Postman [[*et al. *]{}]{}1998; hereafter paper I) was designed to study the evolution of clustering out $z \sim 1$. The resulting catalog contains $\sim710,000$ galaxies with $I_{AB} \le 24$ derived from a KPNO 4m CCD imaging survey of a contiguous $4^{\circ} \times 4^{\circ}$ region. The photometric calibration of the catalog is precise enough over the entire survey to limit zeropoint drifts to ${\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}0.04$ mag that translates to a systematic error in the angular two-point correlation function, $\omega(\theta)$, of ${\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}0.003$ on a $4^{\circ}$ scale and proportionally less on smaller scales (see paper I for details). Accurate measurements of the $I$-band number counts over the range $12 < I_{AB} < 24$ and the two–point angular correlation function up to degree scales are presented in paper I.
The size and quality of the Deeprange catalog are sufficient to enable reliable estimation of the 3–point and 4–point angular correlation functions as well. This paper presents the first attempt to constrain higher order correlation functions down to flux limits of $I_{AB} = 23$, corresponding to an effective redshift of $z \simeq 0.75$. We briefly review the astrophysics contained within the higher moments in section \[sec:statrev\]. Section \[sec:anal\] presents a description of the galaxy sample analyzed and the computational methods used to derive the statistics and section \[sec:results\] summarizes the results. The implications of our calculations are presented in section \[sec:sum\]. Technical issues concerning the fit of the three-point correlation are presented in Appendix A.
A Brief Review of Higher Order Statistics {#sec:statrev}
=========================================
We begin with a summary of the basic definitions of the higher order statistical methods used in this paper, highlighting their most fundamental properties. For a more in depth review, the reader should consult the references cited.
The observed large-scale structures in the local universe are characterized by a high degree of coherence (e.g. de Lapperent, Geller, & Huchra 1986; Shectman et al. 1996; da Costa et al. 1998) and some features, like the “Great Wall" have undergone asymmetric gravitational collapse (dell’Antonio, Geller, & Bothun 1996). Furthermore, the galaxy distribution traces the underlying dark matter in a non-linear way that may depend on time and scale. It is the non-zero higher order correlation functions that uniquely characterize such phenomena and allow discrimination between the observed galaxy distribution and a Gaussian distribution with the same variance, [[*i.e. *]{}]{}two-point correlation function. The $N$-point correlation function $$\xi_N = {\langle{\delta_1\delta_2\ldots\delta_N}\rangle}$$ depends on a large number of parameters ($3N$ coordinates, minus 3 rotations and 3 translations). The number of parameters decreases if the function is integrated over part of the configuration space (the geometric distribution of the $N$-points is often referred to as their [*configuration*]{}, and the $N$-dimensional space describing it as the configuration space).
A simple model for the $N$-point correlation functions is the clustering hierarchy (e.g., Peebles 1980) defined as $$\xi_N({r_1},\ldots, {r_N}) = \sum_{k=1}^{K(N)}
Q_{Nk} \sum^{B_{Nk}} \prod ^{N-1} \xi(r_{ij}),
\label{hierar}$$ where $\xi(r) \equiv \xi_2(r) = (r/r_0)^\gamma$, and $Q_{Nk}$ are structure constants. Their average is $$Q_N = \frac{ \displaystyle {\sum_{k=1}^{K(N)} Q_{Nk}B_{Nk} F_{Nk}}}
{ \displaystyle {N^{(N-2)}}},$$ where $F_{Nk}$ are the form factors associated with the shape of a cell of size unity (see Boschan, Szapudi & Szalay 1994 for details). For the three-point correlation function, our main concern here, the above form factors amount to a few percent only; if neglected then $Q_3 \simeq S_3/3$ (see the definition of $S_3$ below).
If the integration domain is a particular cell of volume $v$, with the notation $\bar f = \int_v f/v$ for cell averaging, then the amplitude of the $N$-point correlation function can be expressed as $$S_N = \bar\xi_N/\bar\xi^{N-1} = {\langle{\delta^N}\rangle}_c/{\langle{\delta^2}\rangle}^{N-1}.$$ The $S_N$’s, with a suitable normalization motivated by leading order perturbation theory, are commonly used to characterize the higher order, i.e. non-Gaussian, properties of the galaxy distribution in real surveys and N-body simulations. In the second half of the above equation the integration over the cell, i.e. smoothing (or filtering) is implicit. While the $S_N$’s do not retain all the information encoded in the $N$-point correlation functions, in particular their shape dependence, it is an extremely useful measure of clustering. It is directly related to the distribution of counts in cells (in the same cell $v$), as it is the cumulant or connected moment [^1] thereof. If the shape of the cell $v$ is fixed, these quantities depend only on one parameter, the size of the cell. Note the alternative notation $Q_N = S_N/N^{N-2}$. These two notations differ only in their normalizations: for $Q_N$ it follows from the hierarchical assumption (see later), and for $S_N$ from perturbation theory. For Gaussian initial conditions with power spectrum of slope $n$, leading order (tree-level) perturbation theory of the underlying density field in an expanding universe predicts the $S_N$’s. For $N = 3$ (skewness) and $N = 4$ (kurtosis) the prediction is (Peebles 1980; Fry 1984; Juszkiewicz, Bouchet, & Colombi 1993; Bernardeau 1994; Bernardeau 1996): $$\begin{aligned}
S_3 && = \frac{34}{7} - (n+3)\cr
S_4 && = \frac{60712}{1323} - \frac{62(n+3)}{3} +
\frac{7(n+3)^2}{3}.\end{aligned}$$ These equations depend on scale through the local slope, $n$, of the initial power spectrum that, except for scale invariant initial conditions, varies slowly with scale in all popular cosmological models. According to perturbation theory and simulations, geometric corrections from $\Omega, \Lambda$ are negligible, (e.g., Bouchet [[*et al. *]{}]{}1995). The above results are valid on scales $\gtrsim 7{h^{-1}\ {\rm Mpc}}$. On smaller scales accurate measurements from $N$-body simulations exist (e.g., Colombi, Szapudi, Jenkins, & Colberg 2000, and references therein) but the measurements are subject to a particular cosmological model.
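For orientation, these tree-level predictions are easy to evaluate numerically; the following minimal Python sketch (the helper functions are our own) tabulates $S_3$ and $S_4$ for a few representative local slopes $n$.

```python
def s3_tree_level(n):
    """Tree-level perturbation-theory skewness for local spectral slope n."""
    return 34.0 / 7.0 - (n + 3.0)

def s4_tree_level(n):
    """Tree-level kurtosis amplitude for local spectral slope n."""
    return 60712.0 / 1323.0 - 62.0 * (n + 3.0) / 3.0 + 7.0 * (n + 3.0) ** 2 / 3.0

for n in (-2.0, -1.5, -1.0):
    print(n, round(s3_tree_level(n), 2), round(s4_tree_level(n), 2))
```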
If the galaxy field, $\delta_g$, is a general non-linear function of the mass density field, $\delta$, then we can express this function as a Taylor series $f(\delta) = b_1\delta+b_2 \delta^2/2 + \ldots$. The $b_N$ are the non-linear biasing coefficients, of which $b_1 = b$ is the usual bias factor connecting the two-point correlation function of galaxies with that of the dark matter as $$\xi_g = b^2 \xi.$$ For higher order correlation functions, and for the $S_N$'s, analogous calculations relate the statistics of the galaxy and dark matter density fields (Kaiser 1984; Bardeen [[*et al. *]{}]{}1986; Grinstein & Wise 1986; Matarrese, Lucchin, & Bonometto 1986; Szalay 1988; Szapudi 1994; Matsubara 1995; Fry 1996; Szapudi 1999). For example, the result for the third moment is (Fry & Gaztañaga 1993) $$S_{3,g} = \frac{S_3}{b} + \frac{3 b_2}{b^2}.$$ This formula is expected to hold on the same or larger scales as leading order (weakly non-linear) perturbation theory, i.e. on scales $\gtrsim 10{h^{-1}\ {\rm Mpc}}$. On smaller scales, Szapudi, Colombi, Cole, Frenk, & Hatton (2000) found the following phenomenological rule from N-body simulations $$S_{N,g} = \frac{S_N}{b_*^{2(N-2)}},
\label{eq:sqbias}$$ where the function $b_* = b_*(b) \simeq b$. For $b \gtrsim 1$ the typical effect of biasing is that it decreases the $S_N$’s. The above theory does not include stochastic effects, when the galaxy density field is a random function of the underlying dark matter field. Stochasticity typically introduces only a slight extra variance on the parameters of the theory and, thus, it will not be considered further in this paper.
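In practice these relations are inverted to constrain the bias from a measured galaxy skewness together with a prediction for the dark matter; a minimal sketch (our own helper functions, ignoring stochasticity) is:

```python
def bias_from_s3_smallscale(s3_gal, s3_dm):
    """Effective bias b_* from the small-scale rule S_{3,g} = S_3 / b_*^2."""
    return (s3_dm / s3_gal) ** 0.5

def b2_from_s3_treelevel(s3_gal, s3_dm, b):
    """Quadratic bias b_2 from the leading-order relation S_{3,g} = S_3/b + 3 b_2/b^2."""
    return (s3_gal - s3_dm / b) * b ** 2 / 3.0
```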
Sample Definition and Analysis Methods {#sec:anal}
======================================
The construction of the galaxy catalog is described in paper I. Briefly, the $I$-band survey consists of 256 overlapping CCD images. The field of view of each CCD is 16 arcminutes but the centers are spaced 15 arcminutes apart. The 1 arcminute overlap enables accurate astrometric and photometric calibration to be established over the entire survey. Objects were identified, photometered, and classified using automated software. For consistency, we analyze the higher order clustering properties in the same magnitude slices[^2] used to measure the two–point correlation functions in paper I.
Counts in Cells
---------------
As our goal is to describe the higher order clustering of galaxies, we adopt statistical measures that are closely related to the moments of the underlying density field. The estimation of the higher order correlation amplitudes follows closely the method described in Szapudi, Meiksin, & Nichol (1996), which can be consulted for more details. Only the most relevant definitions are given next, together with the outline of the technique.
The probability distribution of counts in cells, $P_N(\theta)$, is the probability that an angular cell of dimension $\theta$ contains $N$ galaxies. The factorial moments of this distribution, $F_k = \sum P_N (N)_k$ (where $(N)_k = N(N-1)..(N-k+1)$ is the $k$-th falling factorial of $N$), are indeed closely related to the moments of the underlying density field, ${\langle{N}\rangle}(1+\delta)$, through ${\langle{(1+\delta)^k}\rangle} = F_k/{\langle{N}\rangle}^k$ (Szapudi & Szalay 1993).
The most common method employed to relate the discrete nature of the observed galaxy distribution to the underlying continuous density field is known as infinitesimal Poisson sampling, which is effectively a shot noise subtraction technique. This corresponds to the assumption that, for an infinitesimal cell, the number of galaxies follows a Poisson distribution with the mean determined by the underlying field. This assumption must be approximate for galaxies, especially on small scales, because of possible halo interaction or overlap. Nevertheless, on the scales we are studying, Poisson sampling should be a good approximation. Moreover, even on scales where no underlying continuous process exists, factorial moments are the preferred way to deal with the inherent discreteness of galaxies (e.g., Szapudi & Szalay 1997). In what follows, the continuous version of the theory will be used for simplicity, and factorial moments are implicitly assumed wherever continuous moments are used in the spirit of the above.
Next, the mean correlation function, ${\bar{\xi}}= {\langle{\delta^2}\rangle}$, and the amplitudes of the $N$-point correlation functions, $S_N = {\langle{\delta^N}\rangle}_c/{\langle{\delta^2}\rangle}^{N-1}$, are computed [^3]. The galaxy correlation function measured via counts in cells is actually a smoothed version of this function referred to as ${\bar{\xi}}$. If the galaxy correlation function is a power-law then the average correlation function, ${\bar{\xi}}$, is a power-law as well with the same slope but with a different (increased[^4]) amplitude that is determined with Monte-Carlo integration. For square cells and galaxy correlations with the usual slopes ($\gamma \sim 1.7$), the ratio ${\bar{\xi}}/ \xi$ is typically less than a factor of 2. The calculation of ${\bar{\xi}}$ is an alternative way to measure the two-point correlation function and it is used in the computation of the normalization of the higher order cumulants, i.e the $S_N$’s.
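The chain from the measured $P_N$ to ${\bar{\xi}}$ and $S_3$ can be summarized by the simplified Python sketch below (per-cell counts in, factorial moments out); it omits the finite-volume and edge corrections discussed in the text and is intended only to make the shot-noise subtraction explicit.

```python
import numpy as np

def factorial_moments(counts, kmax=4):
    """Factorial moments F_k = < N(N-1)...(N-k+1) > of per-cell galaxy counts."""
    counts = np.asarray(counts, dtype=float)
    moments = []
    for k in range(1, kmax + 1):
        falling = np.ones_like(counts)
        for j in range(k):
            falling *= counts - j
        moments.append(falling.mean())
    return moments

def xibar_and_s3(counts):
    """Shot-noise-corrected mean correlation and skewness for one cell size."""
    F1, F2, F3 = factorial_moments(counts, kmax=3)
    m2 = F2 / F1**2                  # <(1+delta)^2>
    m3 = F3 / F1**3                  # <(1+delta)^3>
    xibar = m2 - 1.0                 # <delta^2>
    delta3 = m3 - 3.0 * m2 + 2.0     # <delta^3>, using <delta> = 0
    return xibar, delta3 / xibar**2  # (xi_bar, S_3)
```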
In the sequence of calculating the $S_N$’s from counts in cells via factorial moments, the most delicate step is the actual estimation of the distribution of counts in cells in the survey. We begin by transforming celestial coordinates into equal area Cartesian coordinates to assure proper handling of the curvature of the survey boundaries on the sky. We then adopt a series of masks ([[*e.g., *]{}]{}around bright stars) to denote regions in the survey that should be excluded from analysis. These masks were defined during the object detection phase and help minimize the inclusion of spurious detections. The infinitely oversampling method of (Szapudi 1997) was used to estimate $P_N$ in square cells. This method completely eliminates the measurement error due to the finite number of sampling cells. It can be sensitive, however, to edge effects on larger scales. When all the survey masks are used, only small scales up to $0.16^\circ$ could be studied because larger cells would nearly always contain an excluded region. As a sensible alternative, we eliminated all but the top 5% of the largest masks and repeated the analysis. This procedure extended the dynamic range of the angular scales probed and typically does not significantly alter the results on smaller scales. In addition to shrinking the possible dynamic range available to our measurement, the large number of masks compromise the accuracy of the error estimate as well: the geometry of the survey (as explained in more detail later) could not be taken into account at the level of including the geometry, position, and distribution of the masks.
Three-point correlation function {#sec:3pcf}
--------------------------------
The moments of counts in cells (i.e., the $S_N$’s) are the simplest descriptors of the non-Gaussianity of a spatial distribution, both in terms of measurement and interpretation. However, there are alternative statistical measures, such as the $N$-th order correlation functions, that reveal more information about the galaxy distribution by incorporating information about the geometry of higher order correlations. Indeed, the $S_N$’s are just higher order correlation functions that have been integrated over a portion of the available configuration space, i.e. over the $N$-points within a given cell. Although averaging over the configuration space suppresses noise, it also erases some important information that is, by contrast, kept by the $N$-th order correlation functions. Specifically, the validity of the hierarchical assumption can be tested directly (e.g., the first term in equation \[eq:expansion\]) and ultimately the effects of bias and gravitational instability can be distinguished. It is therefore desirable to compute as many of the $N$-th order correlation functions as the survey volume will allow. Three-point correlation functions have been calculated for the Zwicky, Lick, and Jagellonian galaxy samples by Peebles & Groth (1975), Groth & Peebles (1977), and Peebles (1975), for Abell clusters by Tóth, Hollósy, & Szalay (1989, hereafter THS), for the APM galaxy survey by Frieman & Gaztañaga (1999), and for an IRAS-selected galaxy sample by Scoccimarro [[*et al. *]{}]{}(2000; in this case the calculation was performed in Fourier space). Only the measurements by Peebles and Groth were performed on scales comparable to us, the rest of the work concentrated on much larger angular scales.
A larger survey is needed to directly measure the three–point correlation function, $\xi_3 = \zeta$, over a large dynamic range in scale than that required for the measurement of skewness, $S_3$. Physically, this can be understood: the total number of triplets available in the survey are divided into finer subsets in the case of $\xi_3$ than for $S_3$, in order to examine a larger number of parameters. A smaller number of triangles available for each bin of the three-point correlation function naturally causes larger fluctuations and, hence, larger statistical errors.
Szapudi & Szalay (1998, hereafter SS) proposed a set of new estimators for the $N$-point correlation functions using either Monte Carlo or grid methods. We denote these series of estimators SS$_{RN}$ and SS$_{WN}$, respectively. They have as their primary advantage the most efficient edge correction of any estimator developed to date. The two-point correlation function was estimated both with the Landy-Szalay estimator (Landy & Szalay 1993; hereafter LS), $(DD-2DR+RR)/RR$, and the grid indicator SS$_{W2}$. We employ both methods primarily to verify that the SS$_{W2}$ results agree with those derived from the well-tested and widely used LS estimator. The three-point correlation function was estimated by the grid indicator SS$_{W3}$.
The estimators SS$_{WN}$ ($N=2,3$) were implemented as follows. If $D$ denotes the data counts in a sufficiently fine grid, and $W$ is the characteristic function of the grid, i.e. taking values of $1$ inside the valid survey boundary, and $0$ elsewhere, the SS$_{W2}$ estimator for the two-point correlation function is $$\omega_2 = (DD-2DW+WW)/WW,$$ where $DD$, $DW$, and $WW$ denote the data-data, data-window, and window-window correlation functions. This is analogous to the LS estimator. The SS$_{W3}$ estimator for the three-point correlation function is $$\omega_3 = (DDD-3DDW+3DWW-WWW)/WWW.$$
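For concreteness, a schematic brute-force implementation of the simpler member of this family, SS$_{W2}$, is sketched below in Python; SS$_{W3}$ follows the same pattern with a triple loop over grid cells, which is what makes the look-up table described further below worthwhile. The grid, binning, and normalization conventions in the sketch are ours.

```python
import numpy as np

def ss_w2(data, window, bin_edges, cell=1.0):
    """Grid-based SS_W2 estimator (DD - 2DW + WW)/WW of the angular 2-pt function.

    `data`   : 2-d array of galaxy counts per grid cell;
    `window` : 2-d array equal to 1 inside the survey and 0 in masked regions;
    `bin_edges[0]` should be > 0 so that zero-separation self pairs are skipped.
    A brute-force O(n_cells^2) double loop, adequate only as an illustration."""
    ny, nx = data.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    pos = np.column_stack([xx.ravel(), yy.ravel()]).astype(float) * cell
    d, w = data.ravel().astype(float), window.ravel().astype(float)
    nb = len(bin_edges) - 1
    dd, dw, ww = np.zeros(nb), np.zeros(nb), np.zeros(nb)
    for i in range(len(pos)):
        r = np.hypot(pos[i, 0] - pos[:, 0], pos[i, 1] - pos[:, 1])
        idx = np.digitize(r, bin_edges) - 1
        ok = (idx >= 0) & (idx < nb)
        np.add.at(dd, idx[ok], d[i] * d[ok])
        np.add.at(dw, idx[ok], d[i] * w[ok])
        np.add.at(ww, idx[ok], w[i] * w[ok])
    nbar = d.sum() / w.sum()          # mean count per unmasked cell
    return (dd / nbar**2 - 2.0 * dw / nbar + ww) / ww
```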
The key difference between the LS and SS$_{W2}$ estimators is that the latter statistic is analogous to Euler’s method instead of Monte Carlo integration. In fact, SS$_{RN}$ is the direct generalization of the relation for the two-point function, which is LS $=$ SS$_{R2}$. For the grid estimators SS$_{WN}$ the random catalog R is replaced with the characteristic function of the survey estimated on a grid, $W$. The difference amounts to a slight perturbation of the radial bins.
The main advantage of using a grid instead of a random catalog is computational speed, which starts to become prohibitive for the three-point function (at least with current technology computers and the present algorithms). The CPU time for calculating two-point correlation functions with the LS estimator is on the order of hours for about $n = 10^5$ galaxies, and it roughly scales as $n^2$. For the three-point correlation function $n$-times more CPU power is thus needed, [[*i.e. *]{}]{}it would take $\sim10^5$ hours on a fast workstation to calculate the SS$_{R3}$ estimator. The SS$_{WN}$ series of grid estimators scale again as $n^N$, but here $n$ is the number of grid points. A significant speed up was achieved for the SS$_{W3}$ estimator by storing precalculated distances of the grid-point pairs in a look up table; population of the look up table required $\simeq n^2$ operations. This did not change the scaling of the actual three-point estimator, but did minimize CPU time as floating point operations are eliminated from the main calculation. With this method, the three-point function for all magnitude cuts could be calculated in about a CPU day on a $256^2$ grid.
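The look-up table exploits the fact that on a regular grid the separation of two cells depends only on their index offset. A hypothetical sketch of how such a table could be built (ours, not the authors' implementation):

```python
import numpy as np

def build_pair_bin_table(ny, nx, r_edges):
    """Precompute the radial-bin index of every grid-point pair offset.

    On a regular grid the separation of two cells depends only on the index
    offset (dy, dx), so a single (2*ny-1, 2*nx-1) table of integer bin
    indices replaces all floating-point distance evaluations in the main
    triple loop; offsets outside the binning are flagged with -1.
    """
    dy = np.arange(-(ny - 1), ny)[:, None]
    dx = np.arange(-(nx - 1), nx)[None, :]
    k = np.digitize(np.hypot(dy, dx), r_edges) - 1
    k[(k < 0) | (k >= len(r_edges) - 1)] = -1
    return k  # usage: bin = table[dy + ny - 1, dx + nx - 1]
```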
For the two-point correlation function, where it was computationally feasible, we confirmed that SS$_{W2}$ produced results that were consistent with the LS estimator on scales larger than a few grid spacings, as it should. To repeat the same test for the three-point function would have necessitated supercomputer resources (see discussion above). This was not deemed justifiable as it is clear that no qualitative differences are expected for the three-point function from the slight perturbation of the binning introduced by the grid estimators.
We computed $\omega_3$ for all magnitude slices up to scales of about 1 degree. The star-galaxy classification accuracy degrades at faint magnitudes ($I {\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}22$) in ways that are detailed in paper I. In brief, a fraction of the faintest compact galaxies are often classified as stars. The statistical correction for this effect is to include some faint “stars" into the galaxy catalog based upon a selection function that is derived from an extrapolation of the star counts at brighter ($I < 20.5$) magnitudes where classification is extremely reliable. The effect of the correction is to restore, at a statistical level, the correct galaxy-to-star ratios at faint magnitudes. While this introduces some minor contamination of the galaxy sample by stars, the stars are presumably randomly distributed on the angular scales being considered. Therefore, except for the highly improbable case when a star cluster is included, any stellar contamination dilutes the correlations by the usual factor $(1-f_s)^2$, where $f_s$ is the fraction of stars in the survey. At the magnitudes where the statistical classification corrections are required ($I > 21.5$), the number of stars introduced into the galaxy catalog as a result of the correction is estimated to be less than 10% of the galaxy population. The effect of the dilution is, thus, negligible.
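As an illustration only (the actual selection function of paper I is more detailed), a statistical correction of this kind could be sketched as follows; the function name and the uniform random draw are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_misclassified_galaxies(star_ids, n_expected_stars):
    """Hypothetical sketch of the statistical misclassification correction.

    star_ids         : identifiers of faint objects classified as stars.
    n_expected_stars : number of true stars expected in this magnitude bin,
                       extrapolated from the bright (I < 20.5) star counts.
    The excess of "stars" over the extrapolation is treated as misclassified
    compact galaxies; that many objects are drawn at random and returned so
    they can be added back to the galaxy catalog.
    """
    star_ids = np.asarray(star_ids)
    n_excess = max(len(star_ids) - int(n_expected_stars), 0)
    if n_excess == 0:
        return star_ids[:0]
    return rng.choice(star_ids, size=n_excess, replace=False)
```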
An alternative approach to dealing with faint object misclassification is to only include in the analysis those objects initially classified as galaxies. If the misclassified galaxies have the same clustering properties as the correctly classified galaxies then the correlation functions will not change significantly (small changes may occur as a result of cosmic variance). However, if the compact objects likely to be misclassified as stars cluster differently, their omission could introduce a bias. In that case, sampling the objects classified as stars statistically according to an estimated misclassification ratio, as discussed above, is the remedy.
Lastly, one can compute the correlation functions using all detected objects [*regardless*]{} of their classification. In this case, one can again estimate the “true" galaxy correlation function by extrapolating the bright stellar number counts and multiplying the correlation amplitude by the factor $N_{Obj}^2/(N_{Obj}-N_{Star})^2$ where $N_{Obj}$ is the total number of objects in a given magnitude bin and $N_{Star}$ is an estimate of the number of stars in the same bin.
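This correction is simple enough to state as a one-line helper; the function below is just a restatement of the factor quoted above, with an illustrative usage.

```python
def correct_for_stellar_dilution(w_obs, n_obj, n_star):
    """Rescale a correlation amplitude measured from all detected objects
    by the factor N_Obj^2 / (N_Obj - N_Star)^2 quoted in the text."""
    return w_obs * (n_obj / (n_obj - n_star)) ** 2

# e.g. a 10% stellar contamination boosts the corrected amplitude by ~23%:
# correct_for_stellar_dilution(0.05, 10000, 1000) -> ~0.0617
```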
To determine if misclassification effects are a significant source of bias, we estimated the two-point correlation function using all three strategies in all slices. The four brightest slices ($17 \le I < 21$) exhibit no discrepancies between any of the methods, indicating that misclassification is negligible in this flux range. For the deepest slice ($22 \le I < 22.5$), the results based on the statistical inclusion of faint stars according to an estimate of the misclassification rate are in excellent agreement with the corrected correlation function derived using all detected objects. Agreement with the results using only objects classified as galaxies was marginal. The $21 \le I < 22$ slice is an intermediate case. These results suggest that correction for misclassification is essential in this survey for $I {\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}21.5$.
For the deepest slices, we have thus performed the estimation of the three-point correlation functions using the first statistical correction discussed above (results from this method are denoted with the letter “s" in Tables 1 and 3) as well as an estimation based on using only those objects classified initially as galaxies. The difference in the results between the two methods provides an upper estimate of the systematic error introduced by star-galaxy misclassification. For the two deepest slices ($21 \le I < 22,\ 22 \le I < 22.5$), the statistical procedure should yield the correct results.
Results {#sec:results}
=======
Counts in Cells {#sec:cic}
---------------
The $s_3$ and $s_4$ results are shown in Figure \[fig:s3s4\]. In what follows lower case letters, such as $s_N$, denote the observed angular quantities and uppercase characters, such as $S_N$, represent the deprojected, spatial statistics. In each plot, open symbols show the exact estimates when all cells containing survey masks are excluded (hereafter referred to as the measurement type “E") and closed symbols show the values when only cells containing the largest 5% of the masks were excluded (hereafter referred to as measurement type “M"). The magnitude cuts are shown in the lower right hand corner of each plot. The triangular symbols refer to $s_3$ while the rectangular ones to $s_4$. The $s_N$ data ($N = 3,\ 4$) and their estimated uncertainties are given in Table 1. The results in Table 1 for $I < 21$ are for the “M" type measurements whereas those for $I \geq 21$ are for the “E" type measurements. We provide the “E" type measurements for the faintest slices because they provide the best fidelity and accuracy in the presence of any faint end systematic errors in the catalog at the expense of angular scale dynamic range (see discussion below).
For the scales considered, deprojection was performed using the hierarchical Limber’s equation: $$s_N = R_N S_N.$$ (see e.g., Szapudi, Meiksin, & Nichol 1996). Table 2 lists the $R_N$’s for a typical choice of the luminosity function. The $I-$band luminosity function is assumed to be the same evolving Schechter function as in paper I with $M^*(z) = -20.9 + 5 log h -\beta z$, $\alpha = -1.1$. Our fiducial parameters for the deprojection are $\beta = 1.5$, and $\gamma = 1.75$ for the slope of the two-point spatial correlation function. The cosmological parameters adopted are $h = 0.65$, $\Omega_o = 0.3$, and $\Lambda = 0$. The results are independent of the normalization of the luminosity function, and robust against variations in these parameters within the accepted range, or even beyond. For instance, the effect of varying all parameters simultaneously over the ranges $\beta = 1-2,\ \gamma = 1.5-1.75,\ h = 0.5-0.75,\ \Omega = 0.3-1$ is a $\sim10-15$% change in $R_3$. A non-zero cosmological constant $\Lambda$ was not considered; the effect of a reasonable non-zero $\Lambda$ on our estimates of the higher order correlation statistics is expected to be bracketed by the changes in $\Omega$ tested above. Table 2 also gives $R_3$ for $\beta = 1$ and for $\Omega = 1$ to explicitly demonstrate the relatively weak dependence of the projection coefficients on these parameters.
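As a worked example (ours, not from the paper), inverting the displayed relation gives $S_N = s_N / R_N$; using the $19 \le I < 20$ values at the $0.04^\circ$ scale from Table 1 and the fiducial coefficients from Table 2:

```python
# 19 <= I < 20 slice at the 0.04 degree scale (Table 1) and the fiducial
# projection coefficients for that slice (Table 2):
s3, s4 = 4.54, 53.93
R3, R4 = 1.020, 1.051
S3, S4 = s3 / R3, s4 / R4                  # invert s_N = R_N * S_N
print(f"S_3 ~ {S3:.2f}, S_4 ~ {S4:.1f}")   # roughly 4.45 and 51.3
```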
The problem with the “E" type measurement is that the large number of excluded cells severely limits the maximum scale for which we can derive constraints. Indeed, on scales above 10 arcminutes any cell would intersect a mask with almost unit probability. With “M" type measurements, we can increase the maximum spatial scale by a factor of 2 at the price of contaminating the estimate slightly. The typical effect of such a contamination is a slight decrease in the $s_N$’s, although it depends on the exact spatial and size distribution of the masks and, as such, is extremely complicated to correct for or predict. The complex distribution and approximately power law size distribution of the masks prevented the application of counts in cells on larger scales, which are still relatively small compared to the characteristic size of the survey. This unfortunate phenomenon can be understood in terms of the masks effectively chopping up the survey into a large number of smaller surveys. Nevertheless, the large area of the Deeprange survey is still essential for the calculation of higher order statistics, as otherwise the finite volume errors would render the measurement of the $s_N$’s meaningless, even on the present small scales. However, the contamination decreases as the scale increases, since the area-ratio of the ignored masks is decreasing, and, thus, if the “M" and “E" type measurements agree out to a given angular scale, then the “M" measurements on larger scales can be considered relatively free of contamination whereas the “E" measurements may show edge effects. The two faintest slices, however, exhibit some disagreement between the “E" and “M" measurements on the smallest scales (see Figure \[fig:s3s4\]) and therefore it is the “E" measurements that should be considered the most reliable results in these cases.
### Error Estimation {#sec:errors}
The errors of the measurements (columns 4, 6, 9, and 11 in Table 1) are estimated using the full non-linear error calculation of Szapudi & Colombi (1996) and Szapudi, Colombi, & Bernardeau (1999), obtained from the FORCE (FORtran for Cosmic Errors) package. All the parameters needed for such a calculation were estimated from the survey self-consistently. These are the perimeter and area of the survey, the cell area, the average galaxy count and the average two-point correlation function over a cell, and the average two-point correlation function over the survey (estimated from the integral constraint fits of paper I). The higher order $S_N$’s up to $N = 8$ were needed as well. They were calculated from extended perturbation theory (Colombi [[*et al. *]{}]{}1997; Szapudi, Meiksin, & Nichol 1996). This theory is defined by one parameter, $n_{eff}$, which is identical to $n$, the local slope of the power spectrum on weakly non-linear scales, and is purely phenomenological on small scales. It can be determined from $S_3$ according to Equation \[eq:sqbias\], and was measured from the average $S_3$ estimated from the survey itself. In addition, the package uses a model for the cumulant correlators, the connected joint moments of counts in cells. The model of Szapudi & Szalay (1993) was chosen, as this was shown to provide a good description of projected data (Szapudi & Szalay 1997).
To estimate the uncertainties in the $S_N$’s, the FORCE package uses a perturbative procedure that is numerically accurate for small errors. When the fractional errors become $> 1$, the value computed is meaningless and only indicates, qualitatively, that the errors are large. In those instances we renormalized the plotted error to $100\%$ and the corresponding entry in Table 1 is given as “L”.
The accuracy of the fully non-linear error calculation method has been thoroughly checked (see Szapudi, Colombi, & Bernardeau 1999; Colombi [[*et al. *]{}]{}1999). However, one cautionary note is in order: although the calculation takes into account edge effects to second order (the ratio of the survey area to survey perimeter), the use of masks introduces complex geometrical constraints and extra edge effects. This is the main reason why the counts in cells method is limited to fairly small scales; the errors associated with these effects are not accurately modeled by the FORCE.
For our two faintest bins ($I > 21$), we also provide an estimate of the fractional systematic errors associated with uncertainties in the statistical correction for object misclassification. These are given as the second error value in the parentheses in Table 1. The systematic errors are just the standard deviations in the mean $s_3$ and $s_4$ computed from 10 realizations of the statistical misclassification correction described in §\[sec:3pcf\]. The fractional systematic errors are shown by the right shifted error bars in Figure \[fig:s3s4\]. Systematic errors for the brighter bins are negligible (${\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}2$%).
Three-point correlations
------------------------
Figure \[fig:w3\] shows our measurements of the angular three-point correlation function, $z(a,b,c)$, as a function of the hierarchical term $\omega(a)\omega(b)+\omega(b)\omega(c)+\omega(c)\omega(a)$, where $\omega$ is the two-point correlation function. The magnitude ranges are shown in the lower right hand corner of each subplot. The three-point correlation function was estimated in logarithmic bins for each side of the corresponding triangle. The figure makes no attempt to display shape information, although it is available from our estimator. The interpretation of projected shape space is extremely difficult because several different three-dimensional triangles project onto identical angular configurations. In the weakly non-linear regime, it is possible to project down the firm predictions for triangular configurations from a given theory and compare them with the observed angular three-point correlation function (e.g., Frieman & Gaztañaga 1999). For the small scales we are considering, however, a typical shape would deproject to a mixture of triangular galaxy configurations that are in the highly and mildly non-linear regimes. Therefore, we have chosen a different, entirely phenomenological, approach pioneered by THS. This essentially consists of fitting the three-point correlation function with a class of functions motivated by a general expansion; details of the fitting procedure and error estimation are presented in Appendix A.
Table 3 summarizes the results of the three-point correlation function fitting procedure. Column 2 of this table gives the number of degrees of freedom in the fit. We perform fits with the third order terms ($q^{111}$ and $q^{21}$) free or locked to zero; hence the two different sets of results for each magnitude bin. On scales where the correlations are small ($\xi_2 \lesssim 10^{-4}$) and the relative fluctuations of the estimator are becoming increasingly large, the division by the hierarchical term (see Appendix A) becomes unstable and produces a multitude of outliers that are easily identified in Figure \[fig:w3\]. From the measurement of $S_3$ from Deeprange, $q_3$ can be safely bracketed with $0.1 < q_3 < 10$, at least for the brighter magnitude cuts. These are fairly conservative limits and are in agreement with other measurements (e.g., Szapudi, Meiksin, & Nichol 1996) as well. These limits are displayed as long dashes on the figures representing three-point correlation functions, and were used in the fits to eliminate outliers. For a few of the deepest slices the fit could be sensitive to the exact placement of the lower $q_3$ outlier cut. All the fits were, thus, reevaluated with the limits $0.01 < q_3 < 10$ as well. These extended fits are given in columns 7 through 10 in Table 3. The $\chi^2$ values in columns 3 and 7 of Table 3 are the full (unreduced) values.
Any significant difference between the fits in Table 3 for a given magnitude interval suggests large systematic errors. All the fits were performed by standard computer algebra packages, and the parameters were found to be robust with respect to the initial value assigned to them. The results in Table 3 can be compared with those in Table 1 using the fact that $s_3 \simeq 3 q_3$. This equation is not exact, because the shape of the cell influences the integral performed in the definition of $s_3$. The resulting form factors amount to only $\simeq 2-3\%$, which is the accuracy of the approximation (Boschan, Szapudi & Szalay 1994). Our fitting procedure cannot yield an accurate error on the fitted parameters, since the input errors for the $\chi^2$ were not accurately determined. Nevertheless, since the hierarchy is approximately true, the FORCE error bars in the previous tables should give a good indication of the expected statistical uncertainty. Furthermore, any differences in the results introduced either by neglecting the statistical correction for star/galaxy misclassification or by altering the $q_3$ fit limits (see Table 3) should give a reliable indication of the systematic errors.
We did not attempt to compute systematic errors for the three-point correlation function because it would have been very time consuming computationally. We note, however, that one can gauge the amplitude of any systematic errors by comparing the results with and without the statistical misclassification correction in Table 3. Furthermore, the realization of the misclassification-corrected galaxy catalog used in the three-point correlation function computation has $s_3$ values that are very similar to the values derived from the counts-in-cells method.
Discussion and Summary {#sec:sum}
======================
Our constraints on the higher order clustering of the brightest galaxies in our survey are in good agreement with previous work. This can be seen in Figure \[fig:s3s4\] for our $17 \le I < 18$ subsample where we also display the results reported by Szapudi & Gaztañaga (1998, see also Gaztañaga 1994) for the APM $17 \le b_j < 20$ galaxy sample and those reported by Szapudi, Meiksin & Nichol (1996) for the EDSGC with dash-dots, and long dash-dots, respectively. Comparison of our $17 \le I < 18$ subsample with the above larger surveys is justifiable because the projection coefficients only vary slowly with depth. The APM and EDSGC measurements are the present state-of-the-art for shallow angular surveys, to be superseded only with the Sloan Digital Sky Survey. The APM and EDSGC measurements agree with each other on intermediate-large scales (see Szapudi & Gaztañaga 1998 for a detailed comparison of the counts in cells measurements of the two surveys). On small scales, however, the EDSGC results are higher than those of the APM. The discrepancy between the APM and EDSGC results can most likely be attributed to differences between the deblending procedures used in the construction of the two catalogs (Szapudi & Gaztañaga 1998). Our measurements, especially for $S_3$, appear to lie mostly between the two where they disagree and to be consistent with them, although perhaps somewhat lower, where they agree. We conclude that the Deeprange counts-in-cells measurements are in agreement with previous estimates in shallow angular surveys, despite the difference in wavelength and the smaller area. Constraints from the three-point correlation function mirror the above results as well – our measurement of $Q_3 = 1.57$ from the $17 \le I < 18$ subsample is remarkably close to that of Groth and Peebles (1977), $Q_3 = 1.29 \pm 0.21$, compiled from the Zwicky, Lick, and Jagellonian samples.
For the collection of magnitude limited cuts, $S_3$ and $S_4$ are approximately scaling. The $S_N$’s appear to decrease with depth, which suggests a small evolution with $z$, even though the redshift distribution of each slice is fairly broad. This is in accordance with theories of structure formation where the initial Gaussian fluctuations grow under the influence of gravity (for perturbation theory see, e.g., Peebles 1980; Juszkiewicz, Bouchet, & Colombi 1993; Bernardeau 1992; Bernardeau 1994; for $N$-body simulations see, e.g., Colombi [[*et al. *]{}]{}1995; Baugh, Gaztañaga, & Efstathiou 1995; Szapudi [[*et al. *]{}]{}2000b), and where there is a small bias.
The wide area of the Deeprange survey enables a self-contained study of the time evolution of the bias: the survey contains a statistically sufficient number of low-$z$ galaxies that a self-consistent constraint can be derived. Our measurement of the bias evolution uses a simple model based on the parameterization of the evolution of the two-point correlation function discussed in paper I. Briefly, we parameterize the redshift dependence of the correlation function as $$\xi(r,z) = \left(\frac{r}{r_0}\right)^{-\gamma} (1+z)^{-3-\epsilon},$$ where $\gamma$ and $r_0$ are the slope and correlation length, and $\epsilon$ is a phenomenological parameter describing the evolution (e.g. Peebles 1980; Efstathiou [[*et al. *]{}]{}1991; Woods & Fahlman 1997). For $\Omega = 1$ and linear evolution, the correlation function at any $z$ can then be compared to that at the current epoch through the mapping $(1+z)^2\xi(r/(1+z),z)$. Although this mapping is strictly for $\Omega = 1$, variations in the mapping due to scenarios with $\Omega < 1$ are well within the range of uncertainties due to the exclusion of all details associated with specific galaxy formation processes in this simple model. Under the [*same*]{} mapping the $S_N$’s would be invariant. Therefore, we can define the ratio of the mapped correlation function to that at the current epoch as $$b(z)^2 = (1+z)^{\gamma-1-\epsilon}.$$ Since our higher order measurements probe galaxies in the highly non-linear regime, it is appropriate to use Equation \[eq:sqbias\] for biasing, which yields $S_3(z) = S_3(0)/b(z)^2$. Figure \[fig:bias\] displays the predicted $s_3$, normalized using the measurement from the $17 \le I < 18$ sample, and the actual measurements at $\theta = 0.04^\circ$ (solid symbols). Redshifts are assigned to the observed data by computing the median $z$ for each magnitude slice as predicted by the $\beta = 1.5$ LF evolution model described in §\[sec:cic\]. The predicted data were computed using $\gamma = 1.75$ and $\epsilon = -1$. Strictly, $S_3$ should be compared at the same comoving scale, but the flat scaling of the $s_3$ allows comparison at the same angular scale instead. This model is clearly an oversimplification of the time evolution of the bias, and yet it captures the trend presented by the data remarkably well. In addition, the time evolution of the bias is constrained, according to Equation \[eq:sqbias\], to be less than a factor of 2 between the current epoch and $z \sim 0.75$.
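A minimal sketch of this naive bias-evolution model, with the fiducial $\gamma = 1.75$ and $\epsilon = -1$ quoted above (the function names are ours):

```python
def bias_sq(z, gamma=1.75, eps=-1.0):
    """b(z)^2 = (1+z)^(gamma - 1 - eps) from the mapping defined above."""
    return (1.0 + z) ** (gamma - 1.0 - eps)

def s3_predicted(z, s3_local):
    """Predicted skewness, S_3(z) = S_3(0) / b(z)^2, normalised locally."""
    return s3_local / bias_sq(z)

# With gamma = 1.75 and eps = -1, b(0.75)^2 ~ 1.75^1.75 ~ 2.7, i.e.
# b(0.75) ~ 1.6, comfortably below the factor-of-2 limit quoted above.
print(bias_sq(0.75) ** 0.5)
```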
On the other hand, the above observations are in strong contrast with expectations from non-Gaussian scenarios. In these models, primordial non-Gaussianity (e.g., skewness) grows in linear theory. For Gaussian initial conditions, the growth of the skewness is a second order effect. Thus according to theoretical calculations (Fry & Scherrer 1994), and simulations (Colombi 1992), the $S_N$’s should have been larger in the past in non-Gaussian models compared to their Gaussian counterparts. As an example, $S_3$ is expected to be a factor of 2 larger at $z \simeq 1$ than at $z \approx 0$ for the typical initial conditions in cosmic string models (Stebbins 1996, private communication; Colombi 1992). The growth of $S_4$ with increasing $z$ is expected to be even more prominent. These effects would have been detectable in our survey, despite the dilution effects of projection and possible systematic errors at the survey magnitude limit. Formally, the non-Gaussian expectation is about $7-8\ \sigma$ from our measurement at the highest redshift. Even generously doubling our error bars (which would make the previous naive biasing model based on Gaussian initial conditions a perfectly reasonable fit to the data in Figure \[fig:bias\]) would still exclude typical cosmic string non-Gaussian initial conditions at about a $3.5-4\ \sigma$ level. The Deeprange data, thus, strongly favor Gaussian initial conditions.
While string initial conditions are not favored for many reasons (e.g., Pen, Seljak, & Turok 1997; Albrecht, Battye, & Robinson 1998), our expectation is that the above arguments would hold for a large class of non-Gaussian models. However, it is possible to invoke biasing schemes that are in accord with our observations but that also mask the signature of non-Gaussian initial conditions. For example, strong bias at early times decreases the higher order moments, counteracting the effects of the initial non-Gaussianity. Later the bias naturally decreases, resulting in an increase of the $S_N$’s. While perhaps such a scenario is not completely unimaginable physically, e.g. by invoking a strong feedback during galaxy formation, it would require unnatural fine tuning in order to assure that the time evolution of biasing effectively cancels the time evolution of the non-Gaussian initial conditions. Thus, while our results cannot rule out initial non-Gaussianity with high certainty, the most natural explanation is that Gaussian initial conditions of the fluctuation field grew via gravity. Galaxies appear to trace mass quite accurately, and the small evolution of bias predicted by our naive model appears to describe the trends of the data fairly well.
The three-point correlation function appears to be hierarchical down to $I = 22.5$. Our estimates of the non-hierarchical term, $Q^{21}$, are uniformly small in amplitude (see Table 3), although a cubic term $Q^{111}$ cannot be excluded with high significance. While the $\chi^2$ analysis is only approximate in nature (the simple error model did not attempt to estimate bin-to-bin cross-correlations), the inclusion of a cubic term does not typically result in a significant change in the goodness of fit. Gravitational instability predicts $Q^{111} \simeq 0$, thus a small cubic term is likely to mean mild bias, as predicted by general bias theory. In the case of a Gaussian field with a completely general bias function $Q^{111} = Q_3^3$ is expected under fairly general conditions (Szalay 1988). Since we find $Q^{111} \ll 1$ and $Q_3 \gtrsim 1$, the galaxy density field cannot be a biased version of a Gaussian field but, rather, a mildly biased version of an underlying non-Gaussian field. We emphasize that the non-Gaussianity being referred to here is that which is induced by non-linear gravitational amplification and does not refer to the nature of the initial perturbation spectrum. This interpretation is subject to cosmic variance on $Q^{111}$, possible systematic errors, and possible stochasticity of the bias, all of which could contribute to the cubic term.
The best fitting $S_3 = 3 Q_3$ (with $Q_3 \simeq Q^{11}$) from the three-point correlation measurements is also displayed in Figure \[fig:s3s4\] as two horizontal lines: the dotted lines show the result of the full fit, while the dashed lines display the results of the restricted fit. Despite the fact that the three-point correlation formula extends to larger scales than the counts in cells analysis, the $S_3$ derived from the best fitting $Q^{11}$ appears to be in excellent agreement with the $S_3$ obtained from counts in cells, with the exception of the $19 \le I < 20$ results where there is a factor of two discrepancy. The generally good agreement between the two methods, however, is a further indication of the insignificance of the cubic term $Q^{111}$. In Figure \[fig:w3\], the dashed lines show the best fitting $Q^{11}$ and the dotted lines display $Q_3 = S_3/3$ as obtained from the counts in cells analysis. This demonstrates the agreement between the counts in cells analysis and the direct three-point correlation function estimation from a different perspective. The disagreement seen above for the $19 \le I < 20$ sample is less significant here and, thus, it must stem from the smallest scales. The points display a slight non-hierarchical curvature as well. More accurate measurements from even larger surveys will be required to assess whether this is a significant behavior.
In summary we have measured moments of counts in cells and the three-point correlation function in the Deeprange survey. These constitute the deepest higher order clustering measurements to date. The moments measured on small scales appear to be hierarchical, and the three-point function, extending to larger scales continues this hierarchy. While the cubic term resulting from possible bias could not be excluded with high significance, the hierarchical assumption holds to a good approximation. This argues that gravity is the dominant process in creating galaxy correlations with bias having a detectable but minor role. Qualitatively, models with Gaussian initial conditions and a small amount of biasing, which increases slightly with redshift, are favored. The large area of the Deeprange survey allows us to study the evolution of bias over a relatively broad magnitude range. We find that the bias between $I-$band selected galaxies and the underlying matter distribution increases slightly with increasing redshift (up to $z \sim 0.8$) but not by more than a factor of 2.
In Durham, IS was supported by the PPARC rolling grant for Extragalactic Astronomy and Cosmology. The FORCE (FORtran for Cosmic Errors) package can be obtained from its authors, S. Colombi, and IS (http://www.cita.utoronto.ca/$^{\sim}$szapudi/istvan.html). IS would like to thank Alex Szalay for stimulating discussions.
Albrecht, A., Battye, R. A., & Robinson, J. 1998, PhysRevD, 59, 23508
Bardeen, J.R., Bond, J.R., Kaiser, N., & Szalay, A.S. 1986, , 304, 15
Baugh, C.M., Gaztañaga, E., & Efstathiou, G. 1995, MNRAS, 274, 1049
Benson, A., Szapudi, I., & Baugh, C.M. 2000, in prep
Bernardeau, F. 1992, , 292, 1
Bernardeau, F. 1994, , 433, 1
Bernardeau, F. 1996, , 312, 11
Boschan, P., Szapudi, I., & Szalay, A.S. 1994, , 93, 65
Campos, A., Yepes, G., Carlson, M., Klypin, A.A., Moles, M., & Joergensen, H. 1995, Clustering in the Universe, ed. S. Maurogordato, C. Balowski, C. Tao, & J. Trán Thanh V’an’ (Editiones Frontières: Gif-sur-Yvette), 403
Colombi, S. 1992, PhD Thesis
Colombi, S., Szapudi, I., Jenkins, A., & Colberg, J. 1999, , accepted (astro-ph/9912236)
Colombi, S., Bernardeau, F., Bouchet, F.R., & Hernquist, L. 1997, , 287, 241
Colombi, S., Bouchet, F.R., & Hernquist, L. 1995, , 281, 301
Connolly, A.J., Szalay, A.S., & Brunner, R.J. 1998, , 499, L125
Connolly, A.J., Szalay, A.S., Koo, D., Romer, K.A., Holden, B., Nichol, R.C., & Miyaji, T. 1996, , 473, L67
da Costa, L. Nicolaci, Willmer, C. N. A., Pellegrini, P. S., Chaves, O. L., Riti, C., Maia, M. A. G., Geller, M. J., Latham, D. W., Kurtz, M. J., Huchra, J. P., Ramella, M., Fairall, A. P., Smith, C., & Lmpari, S. 1998, AJ, 116, 1
de Lapparent, V., Geller, M. J., & Huchra, J. P. 1986, , 302, 1
dell’Antonio, I. P., Geller, M. J., & Bothun, G. D. 1996, , 112, 1780
Efstathiou, G., Bernstein, G., Tyson, J.A., Katz, N., & Guhathakurta, P. 1991, , 380, L47
Frieman, J.A., & Gaztañaga, E. 1999, , 521, 83
Fry, J.N. 1984, , 279, 499
Fry, J.N., & Gaztañaga, E. 1993, , 425, 392
Fry, J.N. 1994, Phys.Rev.Lett., 73, 215
Fry, J.N., & Scherrer, R.J. 1994, , 429, 36
Fry, J.N. 1996, , 461, 65L
Gaztañaga, E. 1994, , 268, 913
Grinstein, B., & Wise, M.B. 1986, , 310, 19
Groth, E.J., & Peebles, P.J.E. 1977, , 217, 385
Hoyle, F., Szapudi, I., & Baugh, C.M. 1999, , submitted (astro-ph/9911351)
Jing, Y.P. 1997, IAU Symp. 183, 18
Juszkiewicz, R., Bouchet, F. R., & Colombi, S. 1993, , 412, L9
Kaiser, N. 1984, , 273, L17
Landy, S.D., & Szalay, A. 1993, , 412, 64
Le Fevre, O., Hudon, D., Lilly, S., Crampton, D., Hammer, F., & Tresse, L. 1996, , 461, 534
Lidman, C., & Peterson, B. 1996, , 279, 1357
Lilly, S., Tresse, L., Hammer, F., Crampton, D., & Le Fevre, O. 1995, , 455, 108
Matarrese, S., Lucchin, F., & Bonometto, S.A. 1986, , 310, L21
Matsubara, T. 1995, , 101, 1
Neuschaffer, L.W., & Windhorst, R.A., , 439, 14
Peebles, P.J.E., & Groth, E.J. 1975, , 196, 1
Peebles, P.J.E. 1975, , 196, 647
Peebles, P.J.E. 1980, The Large Scale Structure of the Universe (Princeton: Princeton University Press)
Pen, U. L., Seljak, U., & Turok, N. 1997, PhysRevL, 79, 1611
Postman, M., Lauer, T.R., Szapudi, I., & Oegerle, W. 1998, , 506, 33
Scoccimarro, R., Feldman, H.A., Fry, J.N., & Frieman, J. 2000, submitted (astro-ph/0004087)
Shectman, S. A., Landy, S. D., Oemler, A., Tucker, D. L., Lin, H., Kirshner, R. P., & Schechter, P. L. 1996, , 470, 172
Szalay, A.S. 1988, , 333, 21
Szapudi, I. 1994, Ph.D. Thesis, Johns Hopkins University
Szapudi, I. 1997, , 497, 16
Szapudi, I. 1999, , 300, 35L
Szapudi, I., Branchini, E., Frenk, C., Maddox, S., & Sutherland, W. 2000a, , accepted
Szapudi, I., & Colombi, S. 1996, , 470, 131 (SC96)
Szapudi, I., Colombi, S., & Bernardeau, F. 1999, , accepted (astro-ph/9912289)
Szapudi, I., Colombi, S., Cole, S., Frenk, C.S., & Hatton, S. 2000b, in preparation
Szapudi, I., Dalton, G., Efstathiou, G.P., & Szalay, A. 1995, , 444, 520
Szapudi, I., & Gaztañaga, E. 1998, , 300, 493
Szapudi, I., Meiksin, A., & Nichol, R.C. 1996, , 473, 15
Szapudi, I., Quinn, T., Stadel, J., & Lake, G. 1999, , 517, 54
Szapudi, I., & Szalay, A.S. 1993, , 408, 43
Szapudi, I., & Szalay, A.S. 1997, , 481, L1
Szapudi, I., & Szalay, A.S. 1998, , 494, L41 (SS)
Tóth, G., Hollósy, J., & Szalay, A.S. 1989, , 344, 75
Woods, D., & Fahlman, G. 1997, , 490, 11
Fit for the Three-point Correlation Function
============================================
The three-point correlation function can be expanded in terms of powers of the two-point correlation function. This is called a Meyer clustering expansion and it is commonly used in the field of statistical physics as well. The second (leading) order term in the expansion corresponds to the hierarchical assumption, which is predicted by gravitational hierarchy (Peebles 1980). The higher order terms, which are determined by the nature of the galaxy–matter biasing (Szalay 1988), represent corrections to the hierarchy that result in non-trivial shape dependencies.
The Meyer clustering expansion, just like any spatial expansion, can be represented by a Feynman-like graphical representation, shown in Figure \[fig:Meyer\]. This figure also illuminates the terminology of connected components, since they correspond to connected components of the graph in the pictorial representation. By definition, the three-point function is a connected third order moment: it is the extra probability of a triangle above that predicted by Poisson and Gaussian (i.e. two-point) terms. Consequently, it makes sense to use a connected Meyer expansion for it.
Our aim is to project a full third order connected Meyer clustering expansion of the three dimensional three-point function to its angular analog. Specifically, the spatial three-point correlation function can be expressed as a third order connected expansion of the two-point correlation function, $\xi$, as $$\begin{aligned}
\xi_3 = \zeta(1,2,3) &=& Q^{11}\left(\xi(1)\xi(2)+\xi(2)\xi(3)+\xi(3)\xi(1)\right)+ \cr
& & Q^{111}\left(\xi(1)\xi(2)\xi(3)\right)+
Q^{21}\left(\xi(1)^2\xi(2)+sym \right).
\label{eq:expansion}\end{aligned}$$ This equation is directly translated to the graph on Figure \[fig:Meyer\]. This expansion contains all the possible [*connected*]{} terms; THS included all possible terms, including disconnected ones. Their notation is modified here slightly to avoid confusion with the cumulant correlator notation introduced since THS.
The projected three-point correlation function can be written as $$z(a,b,c) = \sum_{i = 11,111,21} q^i \omega_i(a,b,c),
\label{eq:zmod}$$ where $a,b,$ and $c$ are the angles of a triangle on the sky. The $q^i$ terms are related to the three-dimensional $Q^i$ via $q^i = R_i Q^i$. For details see THS. The projection coefficient for the hierarchical term $R_{11} \equiv R_3$ is identical to that of $S_3$. Since we have found that all other terms are consistent with zero (i.e. the hierarchy is a good approximation) we only deproject the hierarchical term. The individual terms of equation \[eq:expansion\] project down to two dimensions as (THS) $$\begin{aligned}
\omega_{11}(a,b,c) &&= \omega(a)\omega(b)+\omega(b)\omega(c)+\omega(c)\omega(a)\cr
\omega_{111}(a,b,c) &&= \left(\frac{\omega(a)\omega(b)\omega(c)}{a+b+c}\right)
\frac{\pi^2}{H(\gamma)^3}\left(\frac{180}{\pi}\right)\cr
\omega_{21}(a,b,c) &&= \left(\frac{\omega(a)^2\omega(b)+\omega(a)\omega(b)^2}{a} + sym \right)
\frac{H(2\gamma)}{H(\gamma)^2}\left(\frac{180}{\pi}\right),\end{aligned}$$ where $H(\gamma) = \int_{-\infty}^{\infty}dx(1+x^2)^{{-\gamma}/{2}}$. The second equation is only an approximation, being exact for $\gamma = 2$. Using the above terms, a minimum $\chi^2$ fit to the angular three-point correlation function was performed with the following choice $$\chi^2 = \sum_{a,b,c} \frac{(z(a,b,c) - z_{mod}(a,b,c))^2}{\sigma^2},$$ where $z_{mod}$ is equation \[eq:zmod\]. The variance, from Szapudi & Szalay (1998), is $$\sigma^2 = Var(z) = \frac{6 S_3}{S^2\lambda^3} \simeq\frac{6}{DDD}.$$ Here $S_3 = \int \Phi(a,b,c)^2$, a configuration integral over the definition of the angular bin, described by the function $\Phi(a,b,c) = 1$ when a triangle is in the bin, and zero otherwise.
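$H(\gamma)$ also has a closed form, $H(\gamma) = \sqrt{\pi}\,\Gamma((\gamma-1)/2)/\Gamma(\gamma/2)$ for $\gamma > 1$; the short check below (our own, assuming SciPy is available) evaluates it both ways for the fiducial slope $\gamma = 1.75$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def H_numeric(gam):
    """H(gamma) = integral over the real line of (1 + x^2)^(-gamma/2)."""
    val, _ = quad(lambda x: (1.0 + x * x) ** (-gam / 2.0), -np.inf, np.inf)
    return val

def H_closed(gam):
    """Closed form sqrt(pi) Gamma((gamma-1)/2) / Gamma(gamma/2), gamma > 1."""
    return np.sqrt(np.pi) * Gamma((gam - 1.0) / 2.0) / Gamma(gam / 2.0)

print(H_numeric(1.75), H_closed(1.75))   # both ~ 3.85 for gamma = 1.75
```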
The simplification arises since $S_3 = S = \int \Phi$, and $S \lambda^3 \simeq DDD$ for our choice of the characteristic function describing the bin ($\Phi = \Phi^2$) used in the three-point estimator (see Szapudi & Szalay (1998) for details). $DDD$ is the number of data triplets in the bin.
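Because the model of equation \[eq:zmod\] is linear in the $q^i$, the $\chi^2$ minimisation has a closed-form solution. The sketch below is our own illustration of the weighted least-squares step with the Poisson-like variance $\sigma^2 \simeq 6/DDD$; the $q_3$ outlier cuts described in the main text are omitted.

```python
import numpy as np

def fit_q_parameters(w11, w111, w21, z_obs, ddd):
    """Weighted least-squares fit of z = q11*w11 + q111*w111 + q21*w21.

    w11, w111, w21 : projected basis terms evaluated in each (a, b, c) bin.
    z_obs          : measured angular three-point function in the same bins.
    ddd            : data-triplet counts per bin; the variance model
                     sigma^2 ~ 6/DDD sets the weights.
    The model is linear in the q's, so the chi^2 minimum is obtained in
    closed form.
    """
    sigma = np.sqrt(6.0 / np.asarray(ddd, dtype=float))
    A = np.column_stack([w11, w111, w21]) / sigma[:, None]
    b = np.asarray(z_obs, dtype=float) / sigma
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    chi2 = float(np.sum((A @ q - b) ** 2))
    return q, chi2
```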
Error Estimation {#error-estimation}
----------------
The variance defined in the above equation was derived for a Poisson distribution and therefore accounts only for discreteness effects. Two additional error contributions, edge and finite volume effects (Szapudi & Colombi 1996; Szapudi, Colombi, & Bernardeau 1999), arising from the uneven weights given to data points and from the fluctuations of the universe on scales larger than the survey size, respectively, are not accounted for. However, edge effects are expected to be small for these edge corrected estimators, and it can be shown that at least a partial correction for finite volume effects is contained in $DDD$. The $\chi^2$’s suggest that this ansatz for the variance is within a factor of 2 of the truth. This is the limit of this simple model ignoring cross correlations of different bins. Without a complicated treatment of the full correlation matrix, it would not make sense to improve the above formula with the inclusion of an additional phenomenological term for finite volume effects to tune $\chi^2$ to the number of degrees of freedom. While the results should not sensitively depend on the exact choice of the error model, it should be kept in mind that our errors for $z(a,b,c)$ could be off by as much as a factor of 2. The most likely sense of the offset is that our errors may be overestimated, as judged from the $\chi^2$’s.
-0.78in
[Table 1. $s_3$ and $s_4$ as a function of scale and $I-$band magnitude]{}
------------------- ----------- ------- ------------------ ------- --------------- ------------------- ------- ------------------ ------- ------------------
Scale
$I_{min}-I_{max}$ (degrees) $s_3$ $\Delta s_3/s_3$ $s_4$ $\Delta s_4/s_4$ $I_{min}-I_{max}$ $s_3$ $\Delta s_3/s_3$ $s_4$ $\Delta s_4/s_4$
17-18 0.01 3.54 0.53 21.80 L 18-19 3.48 0.10 L
$''$ 0.02 5.07 0.16 75.42 L $''$ 3.07 0.05 2.76 0.54
$''$ 0.04 4.64 0.10 61.34 L $''$ 3.43 0.04 12.99 0.39
$''$ 0.08 5.04 0.11 41.44 L $''$ 4.21 0.05 24.63 0.49
$''$ 0.16 5.46 0.19 33.71 L $''$ 4.59 0.15 41.71 L
$''$ 0.32 3.68 0.63 12.30 L $''$ 3.65 0.84 20.50 0.36
19-20 0.01 3.40 0.04 41.57 0.49 20-21 2.13 0.04 13.48 L
$''$ 0.02 3.65 0.02 41.69 0.19 $''$ 2.56 0.02 16.98 0.48
$''$ 0.04 4.54 0.02 53.93 0.19 $''$ 2.82 0.03 17.91 0.48
$''$ 0.08 4.89 0.05 68.70 0.36 $''$ 2.29 0.09 12.15 0.44
$''$ 0.16 4.36 0.20 49.25 0.57 $''$ 1.11 0.59 L
$''$ 0.32 4.62 L 2.52 L $''$ 1.06 L L
21-22s 0.01 2.99 (0.15,0.13) 23.64 (L,0.52) 22-22.5s 3.56 (L,0.44) 129.8 (L,0.26)
$''$ 0.02 2.80 (0.05,0.05) 25.98 (L,0.19) $''$ 2.42 (0.35,0.21) 22.41 (L,0.55)
$''$ 0.04 2.15 (0.08,0.05) 6.86 (L,0.49) $''$ 2.38 (0.31,0.10) 6.45 (L,0.58)
$''$ 0.08 1.73 (0.33,0.12) 5.84 (L,0.57) $''$ 1.56 (0.82,0.23) 20.00 (L,0.24)
$''$ 0.16 0.85 (L,0.68) (L,L) $''$ 2.70 (L,0.44) 50.22 (L,0.53)
------------------- ----------- ------- ------------------ ------- --------------- ------------------- ------- ------------------ ------- ------------------
0.0in
[Table 2. Projection Coefficients: $R_N = s_N / S_N$]{}
$I_{min}-I_{max}$ $R_3$ $R_4$ $R_3(\beta = 1)$ $R_3(\Omega = 1)$
------------------- ------- ------- ------------------ -------------------
17-18 1.224 1.617 1.204 1.232
18-19 1.074 1.212 1.189 1.062
19-20 1.020 1.051 1.055 1.021
20-21 1.049 1.134 1.024 1.053
21-22 1.069 1.189 1.051 1.071
22-22.5 1.076 1.215 1.065 1.077
[Table 3. $\chi^2$ fits for the general connected third order expansion of the three-point function.]{}
------------------- ------- ---------- ------- ----------- ---------- ---------- ------- ----------- ----------
$I_{min}-I_{max}$ $N_f$ $\chi^2$ $q^3$ $q^{111}$ $q^{21}$ $\chi^2$ $q^3$ $q^{111}$ $q^{21}$
17 - 18 44 11.28 1.29 -0.26 0.03 11.38 1.30 -0.25 0.03
$''$ 42 12.24 1.76 12.30 1.76
18 - 19 52 15.03 0.86 0.48 0.01 15.03 0.86 0.48 0.01
$''$ 50 22.98 1.39 22.98 1.39
19 - 20 62 81.92 3.18 0.29 -0.07 88.90 3.24 0.75 -0.11
$''$ 60 85.69 2.80 96.45 2.76
20 - 21 41 100.31 0.91 -2.35 0.16 107.64 1.02 -2.11 0.10
$''$ 39 116.11 1.00 126.28 0.86
21 - 22 17 10.92 0.29 -0.27 0.01 12.96 0.15 -0.38 0.03
$''$ 15 11.66 0.27 14.06 0.18
22 - 22.5 29 28.32 0.45 -0.57 0.02 36.22 0.47 -0.47 0.00
$''$ 27 34.15 0.43 45.38 0.32
21 - 22s 19 9.95 0.51 -0.80 -0.00 13.07 0.32 -0.81 0.03
$''$ 17 12.30 0.34 15.20 0.24
22 - 22.5s 42 25.32 0.48 -1.69 0.13 29.96 0.39 -1.75 0.14
$''$ 40 33.92 0.57 39.56 0.52
------------------- ------- ---------- ------- ----------- ---------- ---------- ------- ----------- ----------
[^1]: Ensemble averages over the connected component have the statistical property of additivity for independent processes and hence receive the name “cumulants”. The alternative terminology “connected” comes from the (Feynman) graph representation of the statistical processes. It can be shown that cumulants correspond to graphs that have exactly one connected component. The mathematical definition relies on a logarithmic mapping of the generating function of ordinary moments (e.g. Szapudi & Szalay 1993)
[^2]: Magnitude ranges are given in the Kron-Cousins $I-$band; the conversion to AB magnitudes is achieved by adding $\sim0.5$ mag to the Kron-Cousins values.
[^3]: ${\langle{}\rangle}_c$ refers to ensemble averages over the connected component
[^4]: The amplitude is higher because ${\bar{\xi}}$ is the average value of $\xi$ UP TO a given scale.
---
abstract: |
We have studied the hard X-ray variability of the soft X-ray transient 04 with BATSE in the 20–100 keV energy band. Our analysis spans 180 days following the first X-ray detection of the source on 1992 August 5, fully covering its primary and secondary X-ray outbursts. We computed power density spectra (PDSs) in the 20–50, 50–100, and 20–100 keV energy bands, in the frequency interval 0.002–0.488 Hz. The PDSs of 04 are approximately flat up to a break frequency, and decay as a power law above, with index $\sim$1. During the first 70 days of the X-ray outburst, the PDSs of 04 show a significant QPO peak near $\sim$0.2 Hz, superposed on the power-law tail.
The break frequency of the PDSs obtained during the primary X-ray outburst of 04 occurs at 0.041$\pm$0.006 Hz; during the secondary outburst the break is at 0.081$\pm$0.015 Hz. The power density at the break ranged between 44 and 89% Hz$^{-1/2}$ (20–100 keV). The canonical anticorrelation between the break frequency and the power density at the break, observed in Cyg X-1 and other BHCs in the low state, is not observed in the PDSs of 04.
We compare our results with those of similar variability studies of Cyg X-1. The relation between the spectral slope and the amplitude of the X-ray variations of 04 is similar to that of Cyg X-1; however, the relation between the hard X-ray flux and the amplitude of its variation is opposite to what has been found in Cyg X-1. Phase lags between the X-ray flux variations of 04 at high and low photon energies could only be derived during the first 30 days of its outburst. During this period, the variations in the 50–100 keV band lag those in the 20–50 keV band by an approximately constant phase difference of 0.039(3) rad in the frequency interval 0.02–0.20 Hz. The time lags of 04 during the first 30 days of the outburst decrease with frequency as a power law, with index 0.9 for $\nu$$>$0.01 Hz.
author:
- 'F. van der Hooft, C. Kouveliotou, J. van Paradijs, W.S. Paciesas, W.H.G. Lewin , M. van der Klis, D.J. Crary, M.H. Finger, B.A. Harmon , and S.N. Zhang'
title: 'Hard X-ray variability of the black-hole candidate GRO J0422$+$32 during its 1992 outburst'
---
Throughout this paper, GRO J0422$+$32 is abbreviated as 04.
Introduction
============
Soft X-ray transients (SXTs) are low-mass X-ray binaries consisting of a neutron star or black-hole primary that undergoes unstable accretion from a late-type companion. Disk instabilities (van Paradijs 1996; King, Kolb & Burderi 1996) cause brief but violent outbursts, typically lasting weeks to months, during which the X-ray luminosity abruptly increases by several orders of magnitude, to $\sim$$10^{37}$–5$\times 10^{38}$ erg sec$^{-1}$, separated by quiescent intervals lasting from years to many decades.
The SXT 04 (Nova Persei 1992) was detected with the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory on 1992 August 5 (Paciesas 1992). Initially, the hard X-ray flux of the source increased rapidly, reaching a maximum of $\sim$3 Crab (20–300 keV) within three days after first detection (Paciesas 1992), at which level it remained for the following three days (Harmon 1992). Subsequently, the hard X-ray intensity (40–150 keV) of 04 decreased exponentially with a decay time of 43.6 days (Vikhlinin 1995). About 135 days after the primary X-ray maximum, the X-ray flux of 04 reached a secondary maximum, after which the flux continued to decrease with approximately the same decay time as before. The source was detected above the BATSE 3$\sigma$ one-day detection threshold of 0.1 Crab (20–300 keV) for $\sim$200 days following the start of the X-ray outburst. The daily averaged 40–150 keV flux history of 04 during this period is displayed in Figure \[0422\_fig1\].
The X-ray spectrum of 04 was hard and could be well described by a cut-off power law with photon index $\sim$1.5 and break energy $\sim$60 keV, detected up to 600 keV with OSSE (Grove, Kroeger & Strickman 1997). In observations of 04 during X-ray maximum with COMPTEL, the source was detected up to 1–2 MeV (van Dijk 1995). Timing analysis of the hard X-ray data revealed quasi-periodic oscillations (QPOs) centered at 0.03 and 0.2 Hz (20–300 keV BATSE data, Kouveliotou 1992, 1993) and 0.3 Hz (40–150 keV SIGMA data, Vikhlinin 1995). These QPOs detected in hard X-rays were confirmed by ROSAT observations in the 0.1–2.4 keV energy band (Pietsch 1993). The power density spectra (PDSs) of 04 were flat for frequencies, [*f*]{}, below the first QPO peak at 0.03 Hz, and fall as 1/[*f*]{} between both QPOs (Kouveliotou 1992). The total fractional rms variations of the 40–150 keV flux in the 10$^{-3}$–10$^{-1}$ Hz frequency interval, ranged between $\sim$15% and $\sim$25% (Vikhlinin 1995). The observed hard X-ray spectrum and rapid X-ray variability resemble the properties of dynamically proven black-hole candidates (BHCs) (Sunyaev 1991; Tanaka & Lewin 1995; see however, van Paradijs & van der Klis 1994), which led to the suggestion that 04 is also a BHC (Roques 1994).
Soon after its first X-ray detection, the optical counterpart of 04 was independently identified by Castro-Tirado (1992, 1993) and Wagner (1992) at a peak magnitude of [*V*]{}$=$13.2. This object was absent on the POSS down to a limiting magnitude of [*R*]{}$\simeq$20 (Castro-Tirado 1993). During the first 210 days of the X-ray outburst the optical light curve declined exponentially with an [*e*]{}–folding time of 170 days (Shrader 1994). Then the optical brightness dropped quickly, followed by two mini-outbursts (with an amplitude of $\sim$4 mag each) before it finally reached quiescence at [*V*]{}$=$22.35, $\sim$800 days after first detection of the X-ray source (Garcia 1996). Therefore, the outburst amplitude of 04 was about 9 mag in [*V*]{}, the largest observed in any SXT to date (van Paradijs & McClintock 1995). The optical light curve of 04, during and after the X-ray outburst, bears many characteristics similar to those of the so-called “tremendous outburst amplitude dwarf novae” (TOADs). Kuulkers, Howell & van Paradijs (1996) proposed that these similarities reflect the small mass ratios and very low mass transfer rates in SXTs and TOADs.
During the decay to quiescence, optical brightness modulations were reported at periods of $\sim$2.1, 5.1, 10.2, and 16.2 hrs, respectively (Harlaftis 1994; Callanan 1995; Kato, Mineshige & Hirata 1995; Chevalier & Ilovaisky 1995; Martin 1995). Spectroscopic observations by Filippenko, Matheson & Ho (1995) showed that the orbital period is 5.08$\pm$0.01 hrs, and determined the mass function at [*f(M)*]{}$=$1.21$\pm$0.06 M$_{\odot}$. This value of the mass function is confirmed by observations performed by Orosz & Bailyn (1995), [*f(M)*]{}$=$0.40–1.40 M$_{\odot}$, and Casares (1995), [*f(M)*]{}$=$0.85$\pm$0.30 M$_{\odot}$. The orbital inclination, $i$, of 04 was estimated from an interpretation of the double waved light curve as an ellipsoidal variation. From the $\Delta$[*I*]{}$\sim$0.03 mag semi-amplitude modulation Callanan (1996) derived $i$$\leq$45$^\circ$. This implies a mass of $\geq$3.4 M$_{\odot}$ for the compact star, i.e. slightly above the (theoretical) maximum mass of a neutron star. Independently, Beekman (1997) constrain the inclination of 04 from infrared and optical photometry to be between 10$^\circ$ and 31$^\circ$. Consequently, their lower limit to the mass of the compact star becomes $\ga$9 M$_{\odot}$. Therefore, the compact star in 04 is most probably a black hole.
Here we report on a temporal analysis of 20–100 keV BATSE data of 04 collected during 180 days following its first detection. We derive the X-ray light curve of the source and discuss its hard X-ray spectrum. We compute daily averaged power density spectra in the frequency interval 0.002–0.488 Hz and show their evolution in time. From the Fourier amplitudes we compute cross spectra and derive the phase and time lags between the hard and soft X-ray variations of 04. We compare our results with those of similar analyses of Cyg X-1.
X-ray light curve and X-ray spectrum {#lc_spectrum}
====================================
We have determined the flux of 04 by applying the Earth occultation technique (Harmon 1993) to data obtained with the BATSE large-area detectors (count rates with a 2.048 sec time resolution in 16 spectral channels). The average number of occultation steps of 04 was 13 per day, ranging from 2 to 24. The systematic error due to the variable orientation of [*CGRO*]{}, is typically $\sim$5–10%. The daily averaged flux history of 04 in the 40–150 keV energy band is displayed in Fig. \[0422\_fig1\]. The X-ray light curve rises from first detection to a maximum of $\sim$3 Crab (20–300 keV) in three days, followed by an exponential decrease with a decay time of 43.6 days (Vikhlinin 1995). During the decay of the X-ray light curve, a secondary maximum is observed at about 135 days after the main X-ray maximum. After the secondary maximum the light curve decays at approximately the same rate as before. The whole 1992 X-ray outburst of 04 lasted for $\sim$200 days.
The 40–150 keV X-ray spectrum of 04 measured by BATSE was approximated by a single power law. A single power law does not provide a good fit to the X-ray spectrum, but is useful as a coarse indicator of the spectral hardness. The power-law index of the spectrum was $\sim$2.1 at the peak of the primary outburst; the spectrum became harder during its decay (power-law index of $\sim$1.9). The distribution of one-day averaged flux measurements versus the photon power-law indices of 04 in the 40–150 keV energy band is shown in Figure \[0422\_fig2\]. The distribution is flux-limited ($F\geq0.04$ photons/cm$^{\rm 2}$ sec$^{-1}$) and contains a total of 126 data points, 23 of which correspond to the secondary outburst. For flux values above 0.20 photons/cm$^{\rm 2}$ sec$^{-1}$, flux and power-law index appear to be positively correlated. For $F\leq0.20$ photons/cm$^{\rm 2}$ sec$^{-1}$, the photon power-law index is determined less accurately, and remains constant at 1.89$\pm$0.02 (average of 84 data points, weighted by the individual errors). The weighted average of the photon power-law index obtained during the primary outburst only (1.89$\pm$0.02, 61 data points) is consistent with that of the secondary outburst (1.92$\pm$0.04, 23 data points).
Time series analysis
====================
We have used count rate data from the large-area detectors (four broad energy channels) with a time resolution of $\Delta$[*T*]{}$=$1.024 sec and applied an empirical model (Rubin 1996) to subtract the signal due to the X-ray/gamma-ray background. This model describes the background by a harmonic expansion in orbital phase (with parameters determined from the observed background variations), and includes the risings and settings of the brightest X-ray sources in the sky. It uses eight orbital harmonic terms, and its parameters were updated every three hours.
For our analysis we considered data segments of 524.288 sec (512 time bins of 1.024 sec each) on which we performed Fast Fourier Transforms (FFTs) covering the frequency interval 0.002–0.488 Hz. The average number of uninterrupted 512 bin segments available while the source was not occulted by the Earth was 32 per day. For each of the detectors which had the source within 60$^\circ$ of the normal, we calculated the FFTs of the lowest two energy channels (20–50, 50–100 keV) of each data segment separately. These FFTs were coherently summed (weighted by the ratio of the source to the total count rates) and converted to PDSs. The source count rates were obtained from the occultation analysis. The PDSs were normalized such that the power density is given in units of (rms/mean)$^{2}$ Hz$^{-1}$ (see, e.g., van der Klis 1995a) and finally averaged over an entire day.
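For illustration (this is not the BATSE analysis code, and it omits the coherent summation over detectors and the weighting by the source-to-total count-rate ratio), a single 512-bin segment could be turned into an rms-normalised PDS as follows:

```python
import numpy as np

def rms_normalised_pds(rate, dt=1.024):
    """PDS of one background-subtracted 512-bin segment in (rms/mean)^2/Hz.

    rate : source count rate in counts/s (length 512 gives the 0.002-0.488 Hz
           range quoted in the text).
    Uses the standard normalisation 2|a_k|^2 T / N_phot^2 (see van der Klis
    1995a); dead-time and background corrections are not included here.
    """
    counts = np.asarray(rate, dtype=float) * dt
    n = len(counts)
    a = np.fft.rfft(counts)
    freq = np.fft.rfftfreq(n, d=dt)
    n_phot = counts.sum()
    power = 2.0 * np.abs(a) ** 2 * (n * dt) / n_phot**2
    return freq[1:], power[1:]          # drop the zero-frequency term
```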
Power density spectra (PDSs)
----------------------------
The power spectra of 04 are approximately flat up to a break frequency, and decay as a power law above (see Figure \[0422\_fig4\]). During the first $\sim$70 days of the outburst, the PDSs show a significant peak, indicative of QPOs in the time series, superposed on the power-law tail. For all PDSs obtained during this stage of the X-ray outburst, the centroid frequency of the QPO peak is above the break frequency. Beyond day $\sim$70, the QPOs are no longer detected significantly in the PDSs; the power spectra obtained during the remaining part of the outburst are well described by a broken power law only.
The evolution of the daily averaged PDSs (20–100 keV) of 04 during the first 80 days of the outburst is illustrated in a dynamical spectrum (Figure \[0422\_fig3\]). In this representation the frequency scale is logarithmically rebinned to 61 bins; the darker the color, the higher is the power level. In Fig. \[0422\_fig3\], a dark shaded band, indicative of QPOs in the time series, is seen at centroid frequencies between 0.13 and 0.26 Hz until approximately day 70 of the outburst. The centroid frequency of the QPO increases slowly until day 20, and gradually decreases afterwards (see also Figure \[0422\_fig6\]). During days 50–70 of the outburst, there is some evidence for the presence of a second, very weak peak in the power spectra. This second feature is much weaker than the main QPO peak, and is found at frequencies between the break of the power law and the centroid frequency of the main QPO. From about 70 days after the start of the outburst onward, neither the main QPO peak nor the second feature is significantly present in the PDSs of 04.
The break in the power spectra of 04 is clearly visible at low frequencies in the dynamical spectrum of Fig. \[0422\_fig3\]. During the period displayed in this dynamical spectrum, the break frequency remains constant, and occurs at 0.037$\pm$0.004 Hz. As the outburst of 04 proceeds, the power density increases over the total frequency interval of the PDSs. This effect is most prominently seen at low frequencies in the dynamical spectrum, indicated by the darker shaded area starting at day $\sim$30, but is present in the total frequency interval of the power spectra. From Fig. \[0422\_fig3\] it also follows that the power density of the PDSs obtained during the first three days of the X-ray outburst of 04 is enhanced with respect to the remaining data, indicating increased rapid variability at the very start of the outburst compared to later stages of the outburst.
Fits to the PDSs
----------------
We made fits to the PDSs in the 20–50, 50–100 and 20–100 keV energy bands, using a combination of a Lorentzian profile and a broken power law. To improve statistics, we made fits to the average PDSs of five consecutive days during the first 110 days of the outburst (obtaining 22 five-day averaged PDSs), and averages of ten consecutive days afterward (obtaining 7 ten-day averaged PDSs). A typical five-day averaged power spectrum in the 20–100 keV energy band, together with the model fit, is shown in the top panel of Fig. \[0422\_fig4\]. The model requires seven parameters (three for the Lorentzian, four for the broken power law), which left 27 degrees of freedom as the PDSs were logarithmically rebinned into 34 frequency bins. The Lorentzian profile was included during the first 70 days of the outburst only (i.e. 14 five-day averaged PDSs). Inclusion of a Lorentzian profile during later stages of the outburst did not significantly improve the quality of the fit. We routinely obtained reduced $\chi^{2}$ values between 0.6 and 2.9 for the fits to the PDSs. As the outburst of 04 continued, the received flux decreased (see, e.g., Fig. \[0422\_fig1\]) for which reason the power spectra became ill-constrained at the local flux minimum near day $\sim$120 of the outburst. During the secondary maximum the quality of the PDSs improved slightly.
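A hedged sketch of the seven-parameter fit function (three Lorentzian plus four broken power-law parameters) is given below; the parameter names, starting values and the use of `scipy.optimize.curve_fit` are illustrative choices, not the fitting code actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(f, norm, f_break, index_lo, index_hi):
    """Power law with slope index_lo below the break and index_hi above it."""
    return np.where(f < f_break,
                    norm * (f / f_break) ** (-index_lo),
                    norm * (f / f_break) ** (-index_hi))

def lorentzian(f, amp, f0, fwhm):
    """QPO peak: Lorentzian centred at f0 with full width at half maximum fwhm."""
    hwhm = fwhm / 2.0
    return amp * hwhm**2 / ((f - f0)**2 + hwhm**2)

def pds_model(f, norm, f_break, index_lo, index_hi, amp, f0, fwhm):
    return (broken_power_law(f, norm, f_break, index_lo, index_hi)
            + lorentzian(f, amp, f0, fwhm))

# With freqs, power and err holding the 34 logarithmically rebinned points:
# popt, pcov = curve_fit(pds_model, freqs, power, sigma=err,
#                        p0=[60.0, 0.04, 0.0, 1.0, 20.0, 0.2, 0.1])
```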
The histories of the break frequency $\nu_{\rm break}$, and indices of the broken power-law component for $\nu < \nu_{\rm break}$, and $\nu > \nu_{\rm break}$ (29 data points in each energy band), are presented in Figure \[0422\_fig5\]. The evolution of the centroid frequency and full width at half maximum (FWHM) of the Lorentzian profile (14 data points per energy band only, as the Lorentzian profile was included only during the first 70 days of the X-ray outburst) are shown in Fig. \[0422\_fig6\]. From these figures it can be seen that the centroid frequency of the Lorentzian profile increased during the first 3 weeks of the outburst of 04 from an initial value of $\sim$0.15 Hz to a maximum of $\sim$0.26 Hz, but from that moment on monotonically decreased to $\sim$0.1 Hz 70 days after the onset of the outburst. The FWHM of the Lorentzian is determined best in the PDSs in the 20–100 keV energy band. The FWHM of the QPO peak in this energy band does not change significantly during the outburst, although there may be a slight tendency to increase as the significance of the QPOs peak in the power spectra diminishes towards the end of the 70 day period. During the first 100 days of the outburst, the average break frequency is 0.041$\pm$0.006 Hz (20–100 keV), followed by an increase (within less than 2 weeks) to an average value of 0.081$\pm$0.015 Hz during the last 70 days of the outburst. A local minimum in the history of the break frequencies is observed near day 60 of the outburst. The low-frequency flat part of the PDSs below the break frequency appears to steepen during this period. Note that the increase of the break frequency does not coincide with the omission of the Lorentzian profile from the fitting function.
Break frequency
---------------
The relation between the break frequency and the power density at the break in the PDSs of 04, is displayed in Figure \[0422\_fig7\] for the 20–50, 50–100, and 20–100 keV energy bands. From the 20–100 keV data it follows that the power density at the break covers a broad range of values (44–89% Hz$^{-1/2}$), while the break frequency appears to cluster at 0.041$\pm$0.006 and 0.081$\pm$0.015 Hz. Data obtained from the secondary outburst of 04 (7 ten-day averaged PDSs) are indicated in Fig. \[0422\_fig7\] by triangles, while those of the primary outburst (22 five-day averaged PDSs) are denoted by dots. The 20–100 keV data in this figure (right panel), clearly show that the break frequencies near $\sim$0.08 Hz all correspond to PDSs obtained during the secondary outburst of 04. Those of the primary outburst, all cluster at a frequency near $\sim$0.04 Hz. The distribution of the break frequency and the power density at the break determined in the 20–50 keV and the 50–100 keV data shows more scatter, but follows the same trend: the low break frequency corresponds to the primary outburst of 04, while the high break frequency occurs during the secondary outburst.
This effect is illustrated again in Figure \[0422\_fig8\], in which we show two averaged power spectra of 04 in the 20–100 keV energy band, obtained during different stages of its outburst, as well as their respective model fits. The left panel contains two PDSs with equal break frequency, but different power density at the break, i.e., both PDSs were obtained during the primary outburst of 04. The PDS of day 3–7 (indicated by dots) has a break frequency and power density at the break of ($\nu_{\rm break}$, rms)$=$(0.038$\pm$0.004 Hz, 55$\pm$2% Hz$^{-1/2}$); triangles indicate the PDS corresponding to day 68–72 with (0.037$\pm$0.002 Hz, 76$\pm$2% Hz$^{-1/2}$). The right panel displays two PDSs with equal power density at the break, but different break frequency, selected from both the primary and secondary outburst. Dots indicate the PDS obtained during day 3–7 as in the left panel, while the triangles correspond to the PDS obtained during day 153–162. The break frequency and power density at the break in this power spectrum were (0.073$\pm$0.008 Hz, 57$\pm$1% Hz$^{-1/2}$).
Fractional rms amplitudes
-------------------------
We determined fractional rms amplitudes by integrating the single-day averaged PDSs of 04 in the 20–50, 50–100, and 20–100 keV energy bands over three different frequency intervals. One interval covered the flat part of the power spectrum below the break frequency (0.005–0.03 Hz; 12 bins). A second interval was selected in the power-law tail of the PDSs (0.10–0.48 Hz; 199 bins), and the last frequency interval was chosen across the break frequency (0.01–0.10 Hz; 46 bins), including both the flat part and the power-law tail of the PDSs. The history of the integrated fractional rms amplitudes of 04 in time are given in Figure \[0422\_fig9\]. For each frequency interval over which the PDSs were integrated, the general shapes of these curves in the three different energy bands are very similar. The largest rapid X-ray variability occurs in the lowest energy band.
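The integration itself is straightforward; a minimal sketch, assuming an rms-normalized PDS on a uniform frequency grid as above, could read:

```python
import numpy as np

def fractional_rms(freqs, power, f_lo, f_hi):
    """Fractional rms amplitude (in per cent of the mean) from integrating an
    (rms/mean)^2 Hz^-1 normalized PDS between f_lo and f_hi."""
    sel = (freqs >= f_lo) & (freqs < f_hi)
    df = freqs[1] - freqs[0]             # ~0.00195 Hz for 512 bins of 1.024 s
    return 100.0 * np.sqrt(np.sum(power[sel]) * df)

# e.g. fractional_rms(freqs, power, 0.005, 0.03)  # flat part below the break
#      fractional_rms(freqs, power, 0.01, 0.10)   # across the break
#      fractional_rms(freqs, power, 0.10, 0.48)   # power-law tail
```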
During the onset of the primary X-ray outburst of 04, the fractional rms amplitudes decrease rapidly over the total frequency interval in each of the three energy bands of the daily averaged PDSs. The fractional rms amplitudes are especially large during the first two days of the outburst in the low frequency intervals, which can also be seen in the dynamical power spectrum of Fig. \[0422\_fig3\]. At the X-ray maximum of 04 and shortly thereafter, the rms amplitudes continue to decrease, but at a slower pace. About two weeks after X-ray maximum, the rms amplitudes reach an absolute minimum in each of the three energy bands and in each frequency interval. From that moment on, the fractional rms amplitudes gradually increase until day $\sim$65 of the outburst, after which they suddenly decrease to a local minimum at day $\sim$73. Such variations in the fractional rms amplitude are present in each of the histories displayed in Fig. \[0422\_fig9\]. The flux history of 04 does not show any features during this phase of the outburst, but continues its smooth exponential decay (see Fig. \[0422\_fig1\]).
After the local minimum in fractional rms amplitude at day $\sim$73 of the outburst, the amplitudes increase again, but become uncertain and are dominated by detector noise at low flux levels of 04 due to unresolved sources in the uncollimated field of view of BATSE. The largest fractional rms amplitudes are obtained shortly before the onset of the secondary maximum in the light curve. At this secondary maximum, the fractional rms amplitudes obtained in the 0.01–0.10 and 0.10–0.48 Hz frequency intervals again reach a local minimum. The fractional rms amplitudes obtained in the 0.005–0.03 Hz interval do not exhibit such a minimum, but appear to be flat during the remainder of our analysis.
We have plotted the 20–100 keV fractional rms amplitudes obtained in the 0.005–0.03, 0.01–0.10 and 0.10–0.48 Hz frequency intervals, versus the 40–150 keV flux and photon power-law index of 04 in Figure \[0422\_fig10\]. The distribution is flux-limited, similar to Fig. \[0422\_fig2\] ($F\geq$0.04 photons/cm$^{\rm 2}$ sec$^{-1}$), and consists of 103 data points of the primary, and 23 data points of the secondary outburst. The fractional rms amplitudes appear to be anticorrelated with the flux of 04; the smallest rms amplitudes were obtained at the highest flux values. The single point deviating from the general trend in the fractional rms amplitude versus flux distribution in the 0.005–0.03 and 0.01–0.10 Hz frequency intervals corresponds to day 2 of the outburst. Again, this suggests that the low frequency X-ray variability of 04 was exceptionally high during the start of its outburst. The photon power-law indices and fractional rms amplitudes also appear to be anticorrelated: the trend in Fig. \[0422\_fig10\] is one of larger fractional rms amplitudes as the X-ray spectrum of 04 hardens.
Lag spectra
===========
We have calculated lags between the X-ray variations in the 20–50 and 50–100 keV energy bands of the 1.024 sec time resolution data of 04. The cross amplitudes were created from the Fourier amplitudes $a_j^l$$=$$\sum_k c_k^l \exp(i 2 \pi k j/n)$, where $n$ is the number of time bins (512), $c_k^l$ denotes the count rate in bin $k$$=$$0, \cdots, n-1$ and channel number $l$$=$0, 1 and $j$$=$$-n/2, \cdots, n/2$ corresponds to Fourier frequencies $2\pi j/n\Delta T$. The complex cross spectra of channel 0 and 1 are given by $C_{j}^{12}$$=$$a^{2\ast}_j a^1_j$ and were averaged daily. Errors on the real and imaginary parts of these daily averaged cross spectra were calculated from the respective sample variances, and formally propagated when computing the phase and time lags. The phase lags as a function of frequency are obtained from the cross spectra via $\phi_{j}$$=$arctan\[Im($C_{j}^{12}$)/Re($C_{j}^{12}$)\], and the time lag $\tau_{j}$$=$$\phi_{j}/2\pi\nu_{j}$, with $\nu_{j}$ the frequency in Hz of the $j$-th frequency bin. With these definitions, lags in the hard (50–100 keV) with respect to the soft (20–50 keV) X-ray variations, appear as positive angles.
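A schematic Python version of these definitions for a single segment is sketched below (illustrative names only). Note that NumPy's FFT carries the opposite sign in the exponent to the definition quoted above, so the sign of the resulting lags has to be checked against that convention:

```python
import numpy as np

def lag_spectrum(soft, hard, dt=1.024):
    """Phase and time lags between two simultaneous count-rate series
    (soft = 20-50 keV, hard = 50-100 keV) for one 512-bin segment."""
    n = soft.size
    a1 = np.fft.rfft(soft)               # channel 0 amplitudes
    a2 = np.fft.rfft(hard)               # channel 1 amplitudes
    cross = np.conj(a2) * a1             # C_j = a2* a1, as defined in the text
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    phase = np.arctan2(cross.imag, cross.real)[1:]
    return freqs, phase, phase / (2.0 * np.pi * freqs)

# In practice the daily cross spectra (real and imaginary parts) are averaged
# first, and the errors propagated from their sample variances, before the
# phase and time lags are computed.
```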
The phase lags of 04 between the X-ray variations in the 20–50 and 50–100 keV energy bands, averaged over a 30 day interval at the start of the outburst, and averaged over an interval covering the following 95 days, are presented in Figure \[0422\_fig11\]. Cross spectra for a large number of days must be averaged and converted to lag values to obtain sufficiently small errors. Cross spectra of the final 55 days of our analysis were not taken into account in the second average, as inclusion of these data did not significantly improve the quality of the average cross spectrum. Figure \[0422\_fig12\] shows the corresponding time lags on a logarithmic scale. Lags at frequencies above 0.5$\nu_{\rm Nyq}$ are displayed but not taken into account in our analysis, as Crary (1998) have shown that lags between 0.5$\nu_{\rm Nyq}$ and $\nu_{\rm Nyq}$ can be affected by data binning, and therefore, decrease artificially to zero. Our results show that at the lowest frequencies the phase lags are consistent with zero (0.014$\pm$0.006 rad, 0.001–0.02 Hz; 9 bins). At frequencies $\geq$0.02 Hz, the hard X-rays lag the soft by 0.039$\pm$0.003 rad (average of 0.02–0.20 Hz; 94 bins) during the 30 days following the start of the outburst of 04. The phase lag derived for the following 95 days (0.017$\pm$0.007 rad, 0.02–0.20 Hz) is not statistically significant. During the first 30 days of the outburst of 04, the hard X-rays lag the soft by an amount of 0.02–0.2 sec in the frequency interval 0.02–0.20 Hz. The time lags decrease in this period with frequency as a power law, with index 0.88$\pm$0.04 for frequencies $\geq$0.01 Hz. Although primarily consisting of upper limits, the time lags in the 95 day average are consistent with those obtained at the beginning of the outburst (power-law index of 1.04$\pm$0.32 for $\nu\geq$0.01 Hz).
Discussion
==========
Soon after the X-ray detection of 04, a possible optical counterpart was identified by Wagner (1992) and Castro-Tirado (1992, 1993). During the decay to quiescence, its optical brightness was reported to be modulated at several periods ranging between 2.1 and 16.2 hrs. Filippenko (1995) determined the mass function and orbital period to be 1.21$\pm$0.06 M$_{\odot}$, and 5.08$\pm$0.01 hrs, respectively. This relatively low value for the mass function was confirmed by Orosz & Bailyn (1995) and Casares (1995). The mass function of 04 is one of the lowest measured values for the SXTs analyzed to date, and provides by itself no dynamical evidence that the compact star in 04 is a black hole. Callanan (1996) derived an upper limit to the orbital inclination of 45$^\circ$ from the ellipsoidal variations. Combined with the spectroscopic measurements, this limiting inclination implies a mass of the compact star in 04 in excess of 3.4 M$_{\odot}$. The limits put to the inclination of 04 by Beekman (1997) from infrared and optical photometry (10$^\circ$–31$^\circ$) imply an even higher mass limit of the compact star: $\ga$9 M$_{\odot}$. Therefore, based on dynamical arguments, it can be concluded that 04 most probably contains a black hole. Thus, the dynamical evidence on 04 supports the early suggestion (Roques 1994), made on the basis of the X-ray properties, that this system contains a black hole.
It is not possible, on the basis of BATSE observations alone, to distinguish between black hole source states. However, two independent observations at low X-ray energies during the decay of the X-ray light curve of 04 suggest that it was in the low (or hard) state. The X-ray spectrum of 04, obtained about 24 days after the start of the outburst by TTM (2–30 keV) and HEXE (20–200 keV) on board Mir-Kvant, had a power-law shape (photon index 1.5), with no strong soft component and an exponential cutoff at energies above 100 keV (Sunyaev 1993). ROSAT HRI (0.1–2.4 keV) observations $\sim$42 days after the start of the outburst also show no indication for an ultrasoft excess in the X-ray spectrum (Pietsch 1993). Therefore, these observations show that 24–42 days after the X-ray outburst had started, 04 was in the low state. The lack of significant changes in the hard X-ray properties (see Section \[lc\_spectrum\]) indicates that this conclusion applies to the whole outburst.
Comparison to Cyg X-1
---------------------
From an analysis of approximately 1100 days of BATSE data (20–100 keV) of the BHC Cyg X-1, covering the period 1991–1995, Crary (1996a) found a strong correlation between the spectral slope and both the high energy X-ray flux and the variability thereof. Although low-energy coverage was lacking, it is likely that Cyg X-1 was in the low state during almost the entire period based on its strong rapid X-ray variability and the presence of a hard spectral component (Crary 1996b). A possible transition to the high state may have lasted for $\sim$180 days, starting 1993 September, as the source flux gradually declined over a period of 150 days to a very low level. After that, the flux rose within 30 days back to approximately the level it had before the low flux episode occurred (Crary 1996b).
The strong correlation of the flux and power-law index of 04 in the 40–150 keV energy band, shown in Fig. \[0422\_fig2\], is different from the correlation between the same quantities in Cyg X-1 as observed by Crary (1996a). For flux values above 0.20 photons/cm$^{\rm 2}$ sec$^{-1}$, the flux and photon power-law index of 04 are related linearly; the larger the high-energy X-ray flux, the softer the X-ray spectrum. The power-law index is constant at 1.89$\pm$0.02 for [*F*]{}$<$0.20 photons/cm$^{\rm 2}$ sec$^{-1}$. The correlation between the flux and power-law index of Cyg X-1 in the 45–140 keV energy band, however, is in the opposite direction; the larger the 45–140 keV flux of Cyg X-1, the harder its X-ray spectrum (Crary 1996a). For comparison, the figures of Crary (1996a) are reproduced in Figure \[0422\_fig13\]. These show the correlations between the fractional rms amplitude (0.03–0.488 Hz) in the 20–100 keV energy band, and the photon power-law index and flux, for both 04 and Cyg X-1.
The correlations between the photon power-law index and fractional rms variability in Cyg X-1 and 04 are very similar; for both sources, the lower the photon power-law index, the larger the fractional rms amplitude.
As a result, the correlation between the flux and the fractional rms amplitude in the 20–100 keV energy band is entirely different for Cyg X-1 and 04. In the Cyg X-1 data, the distribution of flux and fractional rms amplitude measurements (frequency interval 0.03–0.488 Hz) follows a broad, upturning band, i.e., larger fractional amplitudes at higher flux values (Crary 1996a). However, a strong anticorrelation is found between the 20–100 keV fractional rms amplitude (both 0.01–0.10 and 0.10–0.48 Hz) of 04, and its 40–150 keV flux (see, e.g., Fig. \[0422\_fig10\]). This distribution traces a narrow path in the diagram, with the smallest fractional amplitudes occurring at the largest flux values, or equivalently, at the start of the outburst of 04.
Crary (1998) studied the Fourier cross spectra of Cyg X-1 with BATSE during a period of almost 2000 days. During this period, Cyg X-1 was likely in both the low, and high or intermediate state. Crary (1998) found that the lag spectra between the X-ray variations in the 20–50 and 50–100 keV energy bands of Cyg X-1 do not show an obvious trend with source state. The X-ray variations of Cyg X-1 in the 50–100 keV energy band lag those in the 20–50 keV energy band over the 0.01–0.20 Hz frequency interval by a time interval proportional to $\nu^{-0.8}$ (Crary 1998). The general shape and sign of the phase and time lag spectra of 04 are very similar to those of Cyg X-1. Significant phase (or equivalently, time) lags in the 04 data could only be derived during the early stage of its outburst. During this period, the variations in the 50–100 keV energy band lag those in the 20–50 keV energy band by 0.039(3) rad in the frequency interval 0.02–0.20 Hz. The time lags of 04 during the first 30 days of its outburst, decrease with frequency as a power law, with index $\sim$0.9 for $\nu>$0.01 Hz.
Break frequency
---------------
In several BHCs the break frequency of the PDSs has been observed to vary by up to an order of magnitude while the high frequency part remained approximately constant (Belloni & Hasinger 1990a; Miyamoto 1992). As a result, the break frequency and power density at the break are strongly anticorrelated. Méndez & van der Klis (1997) have collected the relevant data on all BHC power spectra in the low, intermediate and very high state for six BHCs, and show that among these sources the break frequency is anticorrelated to the power density at the break. As this correlation appears to hold across different source states, Méndez & van der Klis (1997) suggest a correlation with mass accretion rate may exist, i.e. the break frequency increases (and the power density decreases) with increasing mass accretion rate (see also van der Klis 1994a).
The break frequency and power density at the break in the PDSs of 04 are clearly not anticorrelated (see Fig. \[0422\_fig7\]). The power density at the break in the 20–100 keV energy band ranged between 44 and 89% Hz$^{-1/2}$, while the break frequency was rather constant, at either 0.041$\pm$0.006 or 0.081$\pm$0.015 Hz. All PDSs in which the break of the power spectrum was detected near $\sim$0.04 Hz were obtained during the primary X-ray outburst of 04; all PDSs obtained during the secondary outburst have a break frequency near $\sim$0.08 Hz. Therefore, the relation between the break frequency and power density at the break is different from the one observed in Cyg X-1 (Belloni & Hasinger 1990a; Miyamoto 1992). This contrast between the PDSs of 04 and Cyg X-1 was also reported by Grove (1994) based on OSSE data of the first stage of the X-ray outburst of 04.
The break frequency and power density at the break of BHC power spectra in the low, intermediate and very high state (LS, IS, VHS) collected by Méndez & van der Klis (1997) are reproduced in Figure \[0422\_fig14\], together with the 20–100 keV data of 04. However, this representation should be taken with caution, as it combines data obtained in different energy bands, and data of different source states. Fig. \[0422\_fig14\] shows a clear anticorrelation of the break frequency and power density at the break over two orders of magnitude in both frequency and power density. The low state BATSE data (20–100 keV) of GRO J1719$-$24 (van der Hooft 1996) and Cyg X-1 (Crary 1996a), and low state EXOSAT data of Cyg X-1 (Belloni & Hasinger 1990a), clearly follows this anticorrelation. However, the power density at the break detected in the PDSs of GS 2023$+$338 is approximately constant for break frequencies in the 0.03–0.08 Hz interval, but shows a sudden turnover at frequencies $<$0.03 Hz to large power densities. The absence of the anticorrelation of the break frequency and the power density at the break in the power spectra of GS 2023$+$338 was first reported by Oosterbroek (1997). Although these studies of 04 and GS 2023$+$338 show that the relation between the break frequency and power density at the break differs in detail from other BHCs in the LS, the data of 04 and GS 2023$+$338 displayed in Fig. \[0422\_fig14\] does fit the general anticorrelation, observed in several sources over two decades of frequency and power density, within a factor of $\sim$2. It is interesting to note that the absence of an anticorrelation of the break frequency and power density at the break, is found in the PDSs of the two sources which show the lowest break frequencies of all BHCs: 04 and GS 2023$+$338.
Earlier temporal analyses of 04
-------------------------------
The hard X-ray variability of 04 has been studied by Grove (1994) with OSSE (35–600 keV), and by Denis (1994) and Vikhlinin (1995) with SIGMA in the 40–300 and 40–150 keV energy bands. The OSSE observations were obtained from 1992 August 11 until September 17, i.e., days 7–44 of the X-ray outburst. Grove (1994) computed PDSs in the 35–60 and 75–175 keV energy bands. The power spectra show breaks at frequencies of $\sim$10$^{-2}$ Hz and a few Hz, and a strong peaked noise component at 0.23 Hz. Statistically significant noise is detected at frequencies above 20 Hz. The total fractional rms in the 0.01–60 Hz frequency interval for the entire OSSE pointing is $\sim$40% (35–60 keV) and $\sim$30% (75–175 keV), respectively.
SIGMA observed 04 from 1992 August 15 until September 25 (days 11–52). Vikhlinin (1995) computed power spectra in the frequency interval 10$^{-4}$–10 Hz. The power density is nearly constant below a break frequency at 0.03 Hz, and decreases as a power law above, with index $\sim$0.9. A strong QPOs peak is detected at a frequency of 0.3 Hz. At the start of the SIGMA observation of 04 (August 15–27), the PDSs show a significant decrease of the power densities for frequencies lower than 3.5$\times$10$^{-3}$ Hz. Such turnover of the PDSs at low frequencies is unusual; PDSs typically exhibit strong very-low frequency noise, or are constant for such low frequencies. The total fractional rms variations (40–150 keV) of 04 in the frequency interval 10$^{-3}$–10$^{-1}$ Hz increased from $\sim$15% at the start of the SIGMA observations to $\sim$25% at the end. Denis (1994) reported an increase of the fractional rms variation over the same period from $\leq$25% (2$\sigma$ upper limit) to $\sim$50% (150–300 keV) in the 2$\times$10$^{-4}$–1.25$\times$10$^{-1}$ Hz frequency interval.
These results of Vikhlinin (1995) and Grove (1994) are consistent with the results presented here. The shape of the power spectrum of 04 in the 0.01–0.5 Hz interval is essentially identical in the three studies, although different, but partly overlapping, energy bands have been used. The values of the break frequency and power-law index (both, below and above the break frequency) determined in the OSSE and SIGMA data, are consistent with those presented here. However, OSSE and SIGMA only covered the early phases of the X-ray outburst of 04, with the exception of its very start. Therefore, the significant change of the break frequency during later stages of the X-ray outburst was missed. Fractional rms amplitudes were determined on a daily basis in the OSSE data (0.01–60 Hz) and SIGMA data (3.6$\times$10$^{-4}$–8$\times$10$^{-2}$ Hz). At the beginning of the OSSE and SIGMA observations, the fractional rms amplitudes remained constant, or perhaps slightly decreasing, followed by a turnover and gradual increase, similar to the fractional rms amplitude histories displayed in Fig. \[0422\_fig9\]. However, due to data gaps, the moment of turnover is difficult to determine in the OSSE and SIGMA data.
Low-frequency QPOs (0.04–0.8 Hz) have been observed in the PDSs of several BHCs: Cyg X-1, LMC X-1, GX 339$-$4 and GRO J1719$-$24 (van der Klis 1995b; van der Hooft 1996). These QPOs were observed while the sources were likely in the low state, with the exception of LMC X-1 where a 0.08 Hz QPO was found while an ultrasoft component dominated the energy spectrum, showing that the source was in the high state. During our observations 04 was probably in the low state and its PDSs showed a strong QPOs peak with a centroid frequency between 0.13 and 0.26 Hz (20–100 keV). Therefore, 04 is the fourth BHC which shows low-frequency QPOs in its PDSs while in the low state.
Peaked noise components and QPOs peaks in the power spectra of 04 have been reported by various groups. Kouveliotou (1992, 1993) reported QPO peaks centered at $\sim$0.03 and 0.2 Hz in different BATSE energy bands covering 20–300 keV. Grove (1994) detected a strong peaked noise component in the OSSE data (35–60, 75–175 keV) at a centroid frequency of 0.23 Hz (FWHM $\sim$0.2 Hz), and evidence for additional peaked components near 0.04 and 0.1 Hz with a day to day variable intensity. Vikhlinin (1995) report a strong QPO peak in the SIGMA data (40–150 keV) at 0.31 Hz (FWHM 0.16 Hz), with a fractional rms variability of $\sim$12%. Likely, the reports of strong noise components at a few 10$^{-1}$ Hz all refer to the same feature in the PDSs of 04; the strong QPO peak with a centroid frequency ranging between 0.13 and 0.26 Hz (20–100 keV), present in the power spectra of 04 during the first $\sim$70 days of its outburst. The peaked noise at a few 10$^{-2}$ Hz reported by Kouveliotou (1992, 1993) and Grove (1994), is detected near the break frequency of the power spectra. These detections may be supported by Vikhlinin (1992), who reported peaked noise at 0.035 Hz (FWHM 0.02 Hz) in 40–70 keV SIGMA data, obtained early in the SIGMA pointing. However, Kouveliotou (1993) report that this peaked noise structure is only detected occasionally in the PDSs of 04. In the analysis presented here, we do not find significant evidence for such peaked noise structures at a few 10$^{-2}$ Hz, possibly due to our daily averaging routine, or the fact that these noise components occur close to the break frequency.
Comptonization models
---------------------
The power law hard X-ray spectral component of accreting BHCs, which dominates the X-ray emission in the low state, can be described well by Compton upscattering of low-energy photons by a hot electron gas (Sunyaev & Trümper 1979). In such a case, the energy of the escaping photons on average increases with the number of scatterings, and therefore, with the time they reside in the cloud. As a result, high-energy photons lag those with lower energies by an amount proportional to the photon travel time. If the hard X-rays are emitted from a compact region in the immediate vicinity of the black hole, the resulting time lags should be independent of Fourier frequency and of the order of milliseconds.
The hard X-ray photons (50–100 keV) of 04 lag the low-energy photons (20–50 keV) by as much as $\sim$0.1–1 sec at low frequencies. The time lags are strongly dependent on the Fourier frequency, and decrease roughly as $\nu^{-1}$. Similar lag behaviour has been observed in Cyg X-1 (Cui 1997; Crary 1998) and GRO J1719$-$24 (van der Hooft 1998) while in the low state. Recently, Kazanas, Hua & Titarchuk (1997) argued that the Comptonization process takes place in an extended non-uniform cloud around the central source. Such a model can account for the form of the observed PDS and energy spectra of compact sources. Hua, Kazanas & Titarchuk (1997) showed that the time lags of the X-ray variability depend on the density profile of such an extended but non-uniform scattering atmosphere. Their model produces time lags between the hard and soft bands of the X-ray spectrum that increase with Fourier period, in agreement with the observations. Therefore, analysis of the hard time lags in the X-ray variability of black-hole candidates could provide information on the density structure of the accretion gas (Hua 1997). The time lags observed in GRO J0422$+$32, Cyg X-1 and GRO J1719$-$24 are quite similar and support the idea that the Comptonizing regions around these black holes are similar in density distribution and size.
However, the observed lags require that the scattering medium has a size of order 10$^3$ to 10$^4$ Schwarzschild radii. It is unclear how a substantial fraction of the X-ray luminosity, which must originate from the conversion of gravitational potential energy into heat close to the black hole, can reside in a hot electron gas at such large distances. This is a generic problem for Comptonization models of the hard X-ray time lags. Perhaps very detailed high signal-to-noise cross-spectral studies of the rapid X-ray variability of accreting BHCs, and combined spectro-temporal modeling can solve this problem.
Conclusions
===========
We have analyzed the hard X-ray variability of 04. The canonical anticorrelation between the break frequency and the power at the break observed in Cyg X-1 and other BHCs in the low state, is not present in the PDSs of 04. The relation between the photon power-law index of the X-ray spectrum and the amplitude of the X-ray variations of 04 has similarities to that of Cyg X-1; however, the relation between the hard X-ray flux and the amplitude of its variation is opposite to what has been found in Cyg X-1.
FvdH acknowledges support by the Netherlands Foundation for Research in Astronomy with financial aid from the Netherlands Organisation for Scientific Research (NWO) under contract number 782-376-011. FvdH also thanks the “Leids Kerkhoven–Bosscha Fonds” for a travel grant. CK acknowledges support from NASA grant NAG-2560. JvP acknowledges support from NASA grants NAG5-2755 and NAG5-3674. MvdK gratefully acknowledges the Visiting Miller Professor Program of the Miller Institute for Basic Research in Science (UCB). This project was supported in part by NWO under grant PGS 78-277.
Beekman, G., Shahbaz, T., Naylor, T., Charles, P.A., Wagner, R.M. & Martini, P. 1997, , 290, 303
Belloni, T. & Hasinger, G. 1990a, , 227, L33
Belloni, T. & Hasinger, G. 1990b, , 230, 103
Belloni, T., van der Klis, M., Lewin, W.H.G., van Paradijs, J., Dotani, T., Mitsuda, K. & Miyamoto, S. 1997, , 322, 857
Castro-Tirado, A.J., Pavlenko, P., Shlyapnikov, A., Gershberg, R., Hayrapetyan, V., Brandt, S. & Lund, N. 1992, , 5588
Castro-Tirado, A.J., Pavlenko, E.P., Shlyapnikov, A.A., Brandt, S., Lund, N. & Ortiz, J.L. 1993, , 276, L37
Callanan, P.J., Garcia, M.R., McClintock, J.E., Zhao, P., Remillard, R.A., Bailyn, C.D., Orosz, J.A., Harmon, B.A. & Paciesas, W.S. 1995, , 441, 786
Callanan, P.J., Garcia, M.R., McClintock, J.E., Zhao, P., Remillard, R.A. & Haberl, F. 1996, , 461, 351
Casares, J., Martin, A.C., Charles, P.A., Martin, E.L., Rebolo, R., Harlaftis, E.T. & Castro-Tirado, A.J. 1995, , 276, L35
Chevalier, S. & Ilovaisky, S.A. 1995, , 297, 103
Crary, D.J., Kouveliotou, C., van Paradijs, J., van der Hooft, F., Scott, D.M., Paciesas, W.S., van der Klis, M., Finger, M.H., Harmon, B.A. & Lewin, W.H.G. 1996a, , 462, L71
Crary, D.J., Kouveliotou, C., van Paradijs, J., van der Hooft, F., Scott, D.M., Zhang, S.N., Rubin, B.C., Finger, M.H., Harmon, B.A., van der Klis, M. & Lewin, W.H.G. 1996b, A&A Supp., 120, 153
Crary, D.J., Finger, M.H., Kouveliotou, C., van der Hooft, F., van der Klis, M., Lewin, W.H.G. & van Paradijs, J. 1998, , 493, L71
Cui, W., Zhang, S.N., Focke, W., Swank, J.H. 1997, , 484, 383
Denis, M., Olive, J.-F., Mandrou, P., Roques, J.P., Ballet, J., Goldwurm, A., Laurent, Ph., Cordier, B., Vikhlinin, A., Churazov, E., Gilfanov, M., Sunyaev, R., Dyachkov, A., Khavenson, N., Kremnev, R. & Kovtunenko, V. 1994, , 92, 459
Filippenko, A.V., Matheson, T. & Ho, L.C. 1995, , 455, 614
Garcia, M.R., Callanan, P.J., McClintock, J.E. & Zhao, P. 1996, , 460, 932
Grove, J.E., Johnson, W.N., Kinzer, R.L., Kroeger, R.A., Kurfess, J.D. & Strickman, M.S. 1994, in AIP Conf. Proc. 304, The Second Compton Symposium, 192
Grove, J.E., Kroeger, R.A. & Strickman, M.S. 1997, in The Transparent Universe: Proc. 2nd INTEGRAL Workshop, ed. C. Winkler, T.J.-L. Courvoisier & Ph. Durouchoux (Noordwijk: ESTEC/ESA), 197
Harlaftis, E., Jones, D., Charles, P. & Martin, A. 1994, in AIP Conf. Proc. 308, The Evolution of X-ray Binaries, ed. S. Holt & C.S. Day (New York: AIP), 91
Harmon, B.A., Wilson, R.B., Fishman, G.J., Meegan, C.A., Paciesas, W.S., Briggs, M.S., Finger, M.H., Cameron, R., Kroeger, R. & Grove, E. 1992, , 5584
Harmon, B.A., Wilson, C.A., Brock, M.N., Wilson, R.B., Fishman, G.J., Meegan, C.A., Paciesas, W.S., Pendleton, G.N., Rubin, B.C. & Finger, M.H. 1993, in: AIP Conf. Proc. 280, “1$^{\rm st}$ Compton Gamma Ray Observatory Symp.”, eds. M. Friedlander, N. Gehrels & D. Macomb (New York: AIP), 314
Hua, X.-M., Kazanas, D., Titarchuk, L. 1997, , 482, L57
Kato, T., Mineshige, S. & Hirata, R. 1995, , 47, 31
Kazanas, D., Hua, X.-M., Titarchuk, L. 1997, , 480, 735
King, A.R., Kolb, U. & Burderi, L. 1996, , 464, L127
Kouveliotou, C., Finger, M.H., Fishman, G.J. Meegan, C.A., Wilson, R.B. & Paciesas, W.S. 1992, , 5592
Kouveliotou, C., Finger, M.H., Fishman, G.J., Meegan, C.A., Wilson, R.B., Paciesas, W.S., Minamitani, T. & van Paradijs, J. 1993, in AIP Conf. Proc. 280, Proc. Compton Gamma Ray Observatory, ed. M. Friedlander, N. Gehrels & D.J. Macomb (New York: AIP), 319
Kuulkers, E., Howell, S.B. & van Paradijs, J. 1996, , 462, L87
Martin, A.C., Charles, P.A., Wagner, R.M., Casares, J., Henden, A.A. & Pavlenko, E.P. 1995, , 274, 559
Méndez, M. & van der Klis, M. 1997, , 479, 926
Miyamoto, S., Kimura, K., Kitamoto, S., Dotani, S. & Ebisawa, K. 1991, , 383, 784
Miyamoto, S., Kitamoto, K., Iga, S., Negoro, H. & Terada, K. 1992, , 391, L21
Miyamoto, S., Iga, S., Kitamoto, S. & Kamado, Y. 1993, , 403, L39
Orosz, J.A. & Bailyn, C.D. 1995, , 446, L59
Oosterbroek, T., van der Klis, M., van Paradijs, J., Vaughan, B., Rutledge, B., Lewin, W.H.G., Tanaka, Y., Nagase, F., Dotani, T., Mitsuda, K. & Miyamoto, S. 1997, , 321, 776
Paciesas, W.S., Briggs, M.S., Harmon, B.A., Wilson, R.B. & Finger, M.H. 1992, , 5580
Pietsch, W., Haberl, F., Gehrels, N. & Petre, R. 1993, , 273, L11
Roques, J.P., Bouchet, L., Jourdain, E., Mandrou, P., Goldwurm, A., Ballet, J., Claret, A., Lebrun, F., Finoguenov, A., Churazov, E., Gilfanov, M., Sunyaev, R., Novikov, B., Chulkov, I., Kuleshova, N. & Tserenin, I. 1994, , 92, 451
Rubin, B.C., Lei, F., Fishman, G.J., Finger, M.H., Harmon, B.A., Kouveliotou, C., Paciesas, W.S., Pendleton, G.N., Wilson, R.B. & Zhang, S.N. 1996, A&A Supp., 120, 687
Shrader, C.R., Wagner, R.M., Hjellming, R.M., Han, X.H. & Starrfield, S.G. 1994, , 434, 698
Sunyaev, R.A. & Trümper, J. 1979, Nature, 279, 506
Sunyaev, R. Churazov, E., Gilfanov, M., Pavlinsky, M., Grebenev, S., Babalyan, G., Dekhanov, I., Vamburenko, N., Bouchet, L., Niel, M, Roques, J.P., Mandrou, P., Goldwurm, A., Cordier, B., Laurent, P. & Paul, J. 1991, , 247, L29
Sunyaev, R.A., Kaniovsky, A.S., Borozdin, K.N., Efremov, V.V., Aref’ev, V.A., Melioransky, A.S., Skinner, G.K., Pan, H.C., Kendziorra, E., Maisack, M., Doebereiner, S. & Pietsch, W. 1993, , 280, L1
Tanaka, Y. & Lewin, W.H.G. 1995, in: X-ray Binaries, eds. W.H.G. Lewin, J. van Paradijs & E.P.J. van den Heuvel (Cambridge: Cambridge University Press) p. 126
van der Hooft, F., Kouveliotou, C., van Paradijs, J., Rubin, B.C., Crary, D.J., Finger, M.H., Harmon, B.A., van der Klis, M., Lewin, W.H.G., Norris, J.P. & Fishman, G.J. 1996, , 458, L75
van der Hooft, F. 1998, subm. to ApJ
van der Klis, M. 1994, , 283, 469
van der Klis, M. 1995a, in: The lives of the neutron stars, eds. M.A. Alpar, Ü. Kiziloǧlu & J. van Paradijs (Dordrecht: Kluwer Academic Publishers), 301
van der Klis, M. 1995b, in: “X-ray Binaries”, eds. W.H.G. Lewin, J. van Paradijs & E.P.J. van den Heuvel (Cambridge: Cambridge University Press), 252
van Dijk, R., Bennett, K., Collmar, W., Diehl, R., Hermsen, W., Lichti, G.G., McConnell, M., Ryan, J., Schönfelder, V., Strong, A., van Paradijs, J. & Winkler, C. 1995, , 296, L33
van Paradijs, J. & van der Klis, M. 1994, , 281, L17
van Paradijs, J. & McClintock, J.E. 1995, in: X-ray Binaries, eds. W.H.G. Lewin, J. van Paradijs & E.P.J. van den Heuvel (Cambridge: Cambridge University Press), 58
van Paradijs, J. 1996, , 464, L139
Vikhlinin, A., Finoguenov, A., Sitdikov, A., Sunyaev, R., Goldwurm, A., Paul, J., Mandrou, P. & Techine, P. 1992, , 5608
Vikhlinin, A., Churazov, E., Gilfanov, M., Sunyaev, R., Finoguenov, A., Dyachkov, A., Kremnev, R., Sukhanov, K., Ballet, J., Goldwurm, A., Cordier, B., Claret, A., Denis, M., Olive, J.F., Roques, J.P. & Mandrou, P. 1995, , 441, 779
Wagner, R.M., Bertram, R., Starrfield, S.G. & Shrader, C.R. 1992, , 5589
---
abstract: 'We show that hadron production in relativistic heavy ion collisions at transverse momenta larger than 2 GeV/$c$ can be explained by the competition of two different hadronization mechanisms. Above 5 GeV/$c$ hadron production can be described by fragmentation of partons that are created perturbatively. Below 5 GeV/$c$ recombination of partons from the dense and hot fireball dominates. This can explain some of the surprising features of RHIC data like the constant baryon-to-meson ratio of about one and the small nuclear suppression for baryons between 2 to 4 GeV/$c$.'
address:
- ' Physics Department, Duke University, P.O.Box 90305, Durham, NC 27708, USA'
- ' RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA'
author:
- 'R J Fries, B Müller, C Nonaka and S A Bass'
title: 'Hadronization in heavy ion collisions: recombination or fragmentation?'
---
The Relativistic Heavy Ion collider (RHIC) has provided exciting data about hadron production at transverse momenta of a few GeV/$c$ in central Au+Au collisions. The production of pions at high $P_T$ was found to be suppressed compared to the scaled yield from $p+p$ collisions [@PHENIX]. This jet quenching effect can be understood by final state interaction of fast partons with the dense and hot medium produced in central heavy ion collisions. Fast partons lose energy via induced bremsstrahlung before they can fragment into high $P_T$ hadrons [@GyulWang:94]. The suppression effect is dramatic and can be as large as a factor of 6 above $P_T=5$ GeV/$c$.
On the other hand the suppression of protons and antiprotons seems to be much less [@PHENIX-B]. Experimental data from PHENIX show a proton/pion ratio of about 1 between 1.5 GeV/$c$ and 4 GeV/$c$ [@Chujo:02]. This is surprising since the production of protons and antiprotons is usually suppressed compared to the production of pions because of the much larger mass. At high transverse momentum this can be understood in terms of perturbative quantum chromodynamics (pQCD) [@CoSo:81]. The fragmentation functions $D_{a\to h}(z)$ describe the probability for a parton $a$ with momentum $p$ to turn into a hadron with momentum $zp$, $0<z<1$. These fragmentation functions were measured for pions and protons, mainly in $e^+ e^-$ collisions, and give a ratio $p/\pi^0 < 0.2$ for $P_T > 2$ GeV/$c$ when used in $p+p$ and $N+N$ collisions. The energy loss of partons in a medium can be taken into account by a rescaling of the parton momentum [@GW:00]. However this should affect baryons and mesons in the same way.
The lack of nuclear suppression for baryons is a challenge for our understanding of hadron production. The currently accepted picture assumes that a parton with large transverse momentum is produced in a hard scattering reaction between initial partons, propagates through the surrounding hot matter and loses energy by interactions, and finally hadronizes. Apparently the creation and interaction of a parton will happen independently of its later fate during hadronization. Therefore any unusual behavior that is different between baryons and mesons can only be attributed to hadronization itself. We propose to use recombination of quarks from the surface of the hot fireball as an alternative hadronization mechanism [@FMNB:03].
In the fragmentation process a parton with transverse momentum $p_T$ is leaving the interaction zone while still being connected with other partons by a color string. The breaking of the string creates quark antiquark pairs which finally turn into hadrons. The distribution of one of these hadrons, which is bound to have less transverse momentum $P_T = zp_T$, is then described by a fragmentation function. The average value $\langle z\rangle$ is about 0.5 for pions in $p+p$ collisions. In other words, the production of a, say, 5 GeV/$c$ pion has to start on average with a 10 GeV/$c$ parton, which is rare to find due to the steeply falling parton spectrum. Jet quenching further exacerbates the lack of high $p_T$ partons. On the other hand, the 5 GeV/$c$ pion could be produced by the recombination of a quark and an antiquark with about 2.5 GeV/$c$ each on average. 2.5 GeV/$c$ and 10 GeV/$c$ are separated by orders of magnitude in the parton spectrum. The price to pay is of course that two of these partons have to be found close to each other in phase space. However we do have a densely populated phase space in central heavy ion collisions at RHIC where we even expect the existence of a thermalized quark gluon plasma.
Recombination of quarks has been considered before in hadron collisions [@DasHwa:77] and was also applied to heavy ion collisions [@Gupt:83]. In QCD the leading particle effect in the forward region of a hadron collision is well known. This is the phenomenon that the production of hadrons that share valence partons with the beam hadrons are favored in forward direction. It has been realized that this can only be explained by recombination. This has fueled a series of theoretical work, see e.g. [@BJM:02] and references therein. Recently the recombination idea for heavy ion collisions, stimulated by the RHIC results, has been revived for elliptic flow [@Voloshin:02] and hadron spectra and ratios [@FMNB:03; @GreKoLe:03].
The formalism of recombination has already been developed in a covariant setup for the process of baryons coalescing into light nuclei and clusters in nuclear collisions [@DHSZ:91; @ScheiHei:99]. We give a brief derivation for the case of mesons. By introducing the density matrix $\hat \rho$ for the system of partons, the number of quark-antiquark states that will be measured as mesons is given by $$\label{eq:pinumb}
N_M = \sum_{ab}\int \frac{d^3 P}{(2\pi)^3} \> \langle M ;{\bf P} | \> \hat
\rho_{ab} \> | M ;{\bf P} \rangle$$ Here $| M ;{\bf P} \rangle$ is a meson state with momentum ${\bf P}$ and the sum is over all combinations of quantum numbers — flavor, helicity and color — of valence partons that contribute to the given meson $M$. This can be cast in covariant form using a hypersurface $\Sigma$ for hadronization [@CoFr:74] $$\fl
E \frac{d N_M}{d^3 P} = {C_M} \int\limits_\Sigma
\frac{d \sigma \> P\cdot u(\sigma)}{(2\pi)^3} \int \frac{d^3 q
}{(2\pi)^3} \>
w_a\bigg( {\sigma} ; \frac{\bf P}{2}-{\bf q} \bigg)
\> \Phi^W_M ({\bf q}) \>
w_b\bigg( {\sigma}; \frac{\bf P}{2}+{\bf q}
\bigg).$$ Here $E$ is the energy of the four vector $P$, $d \sigma$ a measure on $\Sigma$ and $u(\sigma)$ is the future oriented unit vector orthogonal to the hypersurface $\Sigma$. $w_a$ and $w_b$ are the phase space densities for the two partons $a$ and $b$, $C_M$ is a degeneracy factor and $$\Phi^W_M ({\bf q}) = \int d^3 r \> \Phi^W_M ({\bf r},{\bf q})$$ where $\Phi^W_M ({\bf r},{\bf q})$ is the Wigner function of the meson [@ScheiHei:99].
Since the hadron structure is best known in a light cone frame, we write the integral over ${\bf q}$ in terms of light cone coordinates in a frame where the hadron has no transverse momentum but a large light cone component $P^+$. This can be achieved by a simple rotation from the lab frame. Introducing the momentum $k=P/2-q$ of parton $a$ in this frame we have $k^+ = x P^+$ with $0<x<1$. We make an ansatz for the Wigner function of the meson in terms of light cone wave functions $\phi_M(x)$. The final result can be written as [@FMNB:03] $$\fl
\label{eq:res3}
E \frac{d N_M}{d^3 P} = C_M \int\limits_\Sigma d\sigma
\frac{P\cdot u(\sigma)}{(2\pi)^3} \int\limits_0^1 {d x} \>
w_a\big( {\sigma} ; x {P^+} \big)
\> \left| \phi_M (x) \right|^2 \>
w_b\big( {\sigma}; (1-x) {P^+} \big).$$ For a baryon with valence partons $a$, $b$ and $c$ we obtain $$\begin{aligned}
\label{eq:protres}
E \frac{d N_B}{d^3 P} &= C_B \int\limits_\Sigma d\sigma
\frac{P\cdot u(\sigma)}{(2\pi)^3} \int\limits_0^1 {d x_1 \, d x_2 \, d x_3}
\delta(x_1+x_2+x_3-1) \\ & \times
w_a\big( {\sigma} ; x_1 {P^+} \big)
w_b\big( {\sigma}; x_2 {P^+} \big)
w_c\big( {\sigma}; x_3 {P^+} \big)
\> \left| \phi_B (x_1,x_2,x_3) \right|^2 . \nonumber\end{aligned}$$ $\phi_B(x_1,x_2,x_3)$ is the effective wave function of the baryon in light cone coordinates.
A priori these wave functions are not equal to the light cone wave functions used in exclusive processes. We are recombining effective quarks in a thermal medium and not perturbative partons in an exclusive process. Nevertheless, as an ansatz for a realistic wave function one can adopt the asymptotic form of the light cone distribution amplitudes $$\begin{aligned}
\label{eq:barlcwf}
\phi_M(x) &= \sqrt{30} x(1-x), \\
\phi_B(x_1,x_2,x_3) &= 12\sqrt{35} \, x_1 x_2 x_3 \nonumber\end{aligned}$$ as a model. However, it turns out that for a purely exponential spectrum the shape of the wave function does not matter. In that case the dependence on $x$ drops out of the product of phase space densities $$w_a\big( {\sigma} ; x {P^+} \big) w_b\big( {\sigma}; (1-x) {P^+} \big)
\sim e^{-xP^+/T} e^{-(1-x)P^+/T} = e^{-P^+/T}.$$ One can show that a narrow width approximation, using $\delta$ peaked wave functions that distribute the momentum of the hadron equally among the valence quarks — 1/2 in the case of a meson and 1/3 in the case of a baryon — deviates by less than 20% from a calculation using the wave functions in (\[eq:barlcwf\]). That there is a small deviation can be attributed to the violation of energy conservation. Since recombination is a $2\to 1$ or $3\to 1$ process, energy will generally not be conserved if we enforce momentum conservation and a mass shell condition for all particles. However, one can show that violations of energy conservation are of the order of the effective quark masses or $\Lambda_{\rm QCD}$ and can therefore be neglected for transverse hadron momenta larger than 2 GeV/$c$.
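To make the narrow width approximation explicit (this is our shorthand rewriting of (\[eq:res3\]) and (\[eq:protres\]), not an additional result), replacing $\left| \phi_M (x) \right|^2 \to \delta(x-1/2)$ and $\left| \phi_B (x_1,x_2,x_3) \right|^2 \to \delta(x_1-\frac{1}{3})\,\delta(x_2-\frac{1}{3})$ yields $$E \frac{d N_M}{d^3 P} \simeq C_M \int\limits_\Sigma d\sigma
\frac{P\cdot u(\sigma)}{(2\pi)^3} \,
w_a\Big( {\sigma} ; \frac{P^+}{2} \Big) \,
w_b\Big( {\sigma} ; \frac{P^+}{2} \Big), \qquad
E \frac{d N_B}{d^3 P} \simeq C_B \int\limits_\Sigma d\sigma
\frac{P\cdot u(\sigma)}{(2\pi)^3} \,
w_a\Big( {\sigma} ; \frac{P^+}{3} \Big) \,
w_b\Big( {\sigma} ; \frac{P^+}{3} \Big) \,
w_c\Big( {\sigma} ; \frac{P^+}{3} \Big),$$ so that each valence quark carries an equal share of the hadron momentum and, for an exponential parton spectrum, the hadron yield simply tracks $e^{-P^+/T}$.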
Fragmentation of partons is given by [@Owens:86] $$\label{eq:fracmaster}
E \frac{d \sigma_h}{d^3 P} = \sum_a \int\limits_0^1 \frac{d z}{z^2}
D_{a\to h}(z) E_a \frac{d \sigma_a}{d^3 P_a}.$$ The sum runs over all parton species $a$ and $\sigma_a$ is the cross section for the production of parton $a$ with momentum $P_a = P/z$. We use a leading order (LO) pQCD calculation of $\sigma_a$ [@FMS:02] together with LO KKP fragmentation functions [@KKP:00]. Energy loss of the partons is taken into account by a shift of the parton spectrum by $$\Delta p_T = \sqrt{ \lambda p_T}.$$
We summarize that the transverse momentum dependent yield of mesons from recombination can be written as $\sim C_M w^2(P_T/2)$ in the simple narrow width approximation, whereas from fragmentation we expect $\sim D(z) \otimes w(P_T/z)$. For an exponential parton spectrum $w=e^{-P_T/T}$ the ratio of recombination to fragmentation is $$\frac{R}{F} = \frac{C_M}{\langle D \rangle}
e^{-\frac{P_T}{T} \left( 1- \frac{1}{\langle z \rangle} \right) }$$ where $\langle D \rangle <1$ and $\langle z \rangle <1 $ are average values of the fragmentation function and the scaling variable. Therefore $R/F > 1$. In fact, recombination always wins over fragmentation from an exponential spectrum (as long as the exponential is not suppressed by small fugacity factors). The same is true in the case of baryons.
Now let us consider a power law spectrum $w=A (P_T/\mu)^{-\alpha}$ with a scale $\mu$ and $\alpha >0 $. Then the ratio of recombination over fragmentation is $$\frac{R}{F} = \frac{C_M A}{\langle D \rangle}
\left(\frac{4}{\langle z \rangle} \right)^\alpha
\left(\frac{P_T}{\mu}\right)^{-\alpha}$$ and fragmentation ultimately has to win at high $P_T$. We also note that we can expect a constant baryon/meson ratio from recombination, when the parton spectrum is exponential. In this case the ratio is only determined by the degeneracy factors $$\frac{dN_B}{dN_M} = \frac{C_B}{C_M}.$$
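A small numerical check (illustrative numbers; $C_M$, $\langle D \rangle$ and $\langle z \rangle$ are placeholders, and the temperature anticipates the fitted value quoted below) makes the difference between the two spectral shapes explicit:

```python
import numpy as np

T = 0.35                                    # effective temperature in GeV
C_M, avg_D, avg_z = 1.0, 0.5, 0.5           # placeholder degeneracy/averages

p_T = np.linspace(2.0, 10.0, 9)             # hadron transverse momentum, GeV/c

# Exponential parton spectrum w(p_T) = exp(-p_T/T):
reco = C_M * np.exp(-p_T / T)               # ~ w(p_T/2)^2 = exp(-p_T/T)
frag = avg_D * np.exp(-(p_T / avg_z) / T)   # ~ <D> w(p_T/<z>)
print(np.all(reco > frag))                  # True: recombination always wins

# For a power-law tail w(p_T) = A (p_T/mu)^(-alpha), R/F falls like
# (p_T/mu)^(-alpha), so fragmentation eventually dominates at high p_T.
```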
For our numerical studies we consider fragmentation of perturbative partons and recombination from a thermal phase $$w_{\rm th} = e^{-p_T \cosh(\eta-y)/T \, e^{-y^2/2 \Delta^2}}$$ with an effective temperature $T$. $\eta$ is the space-time rapidity and $\Delta \approx 2$ the width of the rapidity distribution. We fix the hadronization hypersurface $\Sigma$ by the condition $\tau_f = \sqrt{t^2-z^2} ={\rm const}.$ [@DHSZ:91]. We set $\tau_f=5$ fm/$c$. A two phase parton spectrum with a perturbative power law tail and an exponential part at low transverse momentum is also predicted by parton cascades like [VNI/BMS]{} [@PCM].
The parameters, $\lambda$ for the average energy loss and $T$ for the slope of the exponential parton spectrum, are determined by a fit to the inclusive charged hadron spectrum measured by PHENIX [@Jia:02]. We obtain $\lambda\approx 1$ GeV and $T\approx 350$ MeV. The temperature contains the effect of a blue-shift because of the strong radial flow. An additional normalization factor of about $1/30$ for the recombination part is necessary to describe the data. This is due to the use of an effective temperature which gives a too large particle number compared to the physical temperature that we expect to be around 175 MeV.
In Figure \[fig:2\] we show the inclusive charged hadron spectrum using recombination and fragmentation for pions, kaons, protons and antiprotons. We note that the crossover between the recombination dominated and fragmentation dominated regimes is around 5 GeV/$c$. It is in fact earlier for pions ($\sim$4 GeV/$c$) than for protons ($\sim$6 GeV/$c$). In Figure \[fig:2\] we also give the spectrum of up quarks as an example for the partonic input of our calculation. We note that the crossover between the perturbative and the thermal domain is here around 3 to 3.5 GeV/$c$. It is characteristic for recombination that features of the parton spectrum are pushed to higher transverse momentum in the hadron spectrum.
In Figure \[fig:3\] we give the proton/pion ratio and the nuclear modification factor $R_{AA}$ for pions, protons and charged hadrons. The proton/pion ratio shows a plateau between 2 and 4 GeV/$c$ in accordance with experiment and a steep decrease beyond that. This decrease has not yet been seen and is a prediction of our work.
In the quantity $R_{AA}$ the effect of jet quenching is manifest for all hadrons at large transverse momentum. For pions $R_{AA}$ grows only moderately at low $P_T$ in accordance with PHENIX data [@Miod:02]. On the other hand recombination is much more important for protons, because they are suppressed in the fragmentation process. Therefore $R_{AA}$ nearly reaches a value of 1 below 4 GeV/$c$ for protons.
In summary, we have discussed recombination as a possible hadronization mechanism in heavy ion collisions. We have shown that recombination can dominate over fragmentation up to transverse momenta of 5 GeV/$c$. Recombination provides a natural explanation for the baryon/meson ratio and the nuclear suppression factors observed at RHIC.
[**Acknowledgments.**]{} This work was supported by RIKEN, Brookhaven National Laboratory, DOE grants DE-FG02-96ER40945 and DE-AC02-98CH10886, and by the Alexander von Humboldt Foundation.
[99]{}
K. Adcox [*et al.*]{} \[PHENIX\], Phys. Rev. Lett. [**88**]{}, 022301 (2002); C. Adler [*et al.*]{} \[STAR\], nucl-ex/0210033. M. Gyulassy and X. Wang, Nucl. Phys. B [**420**]{}, 583 (1994); R. Baier [*et al.*]{}, Nucl. Phys. B [**484**]{}, 265 (1997). K. Adcox [*et al.*]{} \[PHENIX\], Phys. Rev. Lett. [**88**]{}, 242301 (2002); C. Adler [*et al.*]{} \[STAR\], Phys. Rev. Lett. [**86**]{}, 4778 (2001); C. Adler [*et al.*]{} \[STAR\], Phys. Rev. Lett. [**89**]{}, 092301 (2002); K. Adcox [*et al.*]{} \[PHENIX\], Phys. Rev. Lett. [**89**]{}, 092302 (2002); T. Chujo \[PHENIX Collaboration\], arXiv:nucl-ex/0209027. J. C. Collins and D. E. Soper, Nucl. Phys. B [**194**]{}, 445 (1982). X. F. Guo and X. N. Wang, Phys. Rev. Lett. [**85**]{}, 3591 (2000); Nucl. Phys. A [**696**]{}, 788 (2001).
R. J. Fries, B. Müller, C. Nonaka and S. A. Bass, Phys. Rev. Lett. [**90**]{}, 202303 (2003). K. P. Das and R. C. Hwa, Phys. Lett. B [**68**]{}, 459 (1977); Erratum [*ibid.*]{} [**73**]{}, 504 (1978); R. G. Roberts, R. C. Hwa and S. Matsuda, J. Phys. G [**5**]{}, 1043 (1979). C. Gupt [*et al.*]{}, Nuovo Cim. A [**75**]{}, 408 (1983); T. S. Biro, P. Levai and J. Zimanyi, Phys. Lett. B [**347**]{}, 6 (1995); T. S. Biro, P. Levai and J. Zimanyi, J. Phys. G [**28**]{}, 1561 (2002). J. C. Anjos, J. Magnin and G. Herrera, Phys. Lett. B [**523**]{}, 29 (2001); E. Braaten, Y. Jia and T. Mehen, Phys. Rev. Lett. [**89**]{}, 122002 (2002). S. A. Voloshin, nucl-ex/0210014; Z. W. Lin and C. M. Ko, Phys. Rev. Lett. [**89**]{}, 202302 (2002); D. Molnar and S. A. Voloshin, nucl-th/0302014. V. Greco, C. M. Ko and P. Levai, Phys. Rev. Lett. [**90**]{}, 202302 (2003). C. B. Dover [*et al.*]{}, Phys. Rev. C [**44**]{}, 1636 (1991). R. Scheibl and U. W. Heinz, Phys. Rev. C [**59**]{}, 1585 (1999). F. Cooper and G. Frye, Phys. Rev. D [**10**]{}, 186 (1974).
J. F. Owens, Rev. Mod. Phys. [**59**]{}, 465 (1987). R. J. Fries, B. Müller, and D. K. Srivastava, Phys. Rev. Lett. [**90**]{}, 132301 (2003); D. K. Srivastava, C. Gale and R. J. Fries, Phys. Rev. C [**67**]{}, 034903 (2003). B. A. Kniehl, G. Kramer and B. Pötter, Nucl. Phys. B [**582**]{}, 514 (2000). S. A. Bass, B. Müller, and D. K. Srivastava, Phys. Lett. B [**551**]{}, 277 (2003).
J. Jia \[PHENIX\], nucl-ex/0209029. S. Mioduszewski \[PHENIX\], nucl-ex/0210021.
---
abstract: 'The significant integration of wind power into the grid has had a remarkable impact on power system operation, mainly in terms of security and reliability, due to the inherent loss of rotational inertia caused by these new generation units being decoupled from the grid.'
address:
- 'Dept. of Electrical Engineering, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain'
- 'Dept. of Hydraulic, Energy and Environmental Engineering, Universidad Politécnica de Madrid, 28040, Spain'
- 'Dept. of Civil Engineering, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain'
author:
- 'Ana Fernández-Guillamón'
- José Ignacio Sarasúa
- Manuel Chazarra
- 'Antonio Vigueras-Rodríguez'
- 'Daniel Fernández-Muñoz'
- 'Ángel Molina-García'
bibliography:
- 'biblio.bib'
title: 'Frequency Control Analysis based on Unit Commitment Schemes with High Wind Power Integration: a Spanish Isolated Power System Case Study'
---
Power system stability, Wind energy integration, Wind frequency control, Unit commitment
Nomenclature {#nomenclature .unnumbered}
============
Introduction {#sec.introduction}
============
Conventional power plants with synchronous generators have traditionally determined the inertia of power systems [@tielens16]. However, during the last decades, most countries have promoted large-scale integration of Renewable Energy Sources (RES) [@zhang17; @fernandez19power]. RES are usually not connected to the grid through synchronous machines, but through power electronic converters electrically decoupled from the grid [@junyent15; @tian16]. As a consequence, by increasing the amount of RES and replacing synchronous conventional units, the effective rotational inertia of power systems can be significantly reduced [@akhtar15; @yang18; @fernandez19analysis]. Actually, Albadi *et al.* consider that the impact of RES on power systems mainly depends on the RES integration and the system inertia [@albadi10], with the negative effects of RES being more severe in isolated power systems [@martinez18]. Among the different RES, wind power is the most developed and relatively mature technology [@BREEZE2014223], especially variable speed wind turbines (VSWT) [@edrah15; @syahputra16; @artigao18; @cardozo18; @li18]. Indeed, Toulabi *et al.* affirm that the participation of wind power in frequency control services becomes inevitable due to the significant integration of this resource [@toulabi17].
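As a minimal numerical illustration of this effect (hypothetical figures, not data from any of the systems cited above), the equivalent inertia constant $H_{eq}=\sum_i H_i S_i / S_{base}$ only receives contributions from the synchronous machines that remain online:

```python
def equivalent_inertia(sync_units, s_base):
    """sync_units: list of (H_i [s], S_i [MVA]) synchronous machines online;
    converter-interfaced wind contributes no rotational inertia."""
    return sum(h * s for h, s in sync_units) / s_base

s_base = 500.0                        # MVA base of the whole system
thermal = [(5.0, 100.0)] * 5          # five 100 MVA units with H = 5 s

print(equivalent_inertia(thermal, s_base))       # 5.0 s, no wind online
print(equivalent_inertia(thermal[:3], s_base))   # 3.0 s after wind displaces two units
```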
Power imbalances between generation and demand can occur due to, among other causes, the loss of power generators [@sokoler16]. Actually, the loss of a power generator can be the most severe contingency when it is the largest unit in operation [@zhang18]. These imbalances cause frequency fluctuations, and subsequently the grid becomes unstable, even leading to black-outs [@khalghani16; @marzband16]. Hence, frequency control services play an essential role in keeping power systems secure and reliable [@ozer15]. Moreover, frequency stability is the most critical issue in isolated power systems due to their low rotational inertia [@aghamohammadi14; @jiang15; @jiang16]. Frequency control has a hierarchical structure, and in Europe it is usually organized into up to five layers: $(i)$ frequency containment, $(ii)$ imbalance netting, $(iii,iv)$ frequency restoration (automatic and/or manual) and $(v)$ replacement, from fast to slow timescales [@entsoe_europe].
According to the specific literature, several studies have proposed wind frequency control approaches. However, the authors note that in those contributions: $(i)$ the energy schedule scenarios considered are usually arbitrary and unrealistic, without considering a unit commitment (UC) scheme and individual generation units [@keung09; @el11; @mahto17; @abazari18; @alsharafi18]; $(ii)$ the power imbalance is usually taken as a fixed random value (between 3 and 20%), excluding the $N-1$ criterion [@ma10; @wang13; @kang16; @ochoa18]; $(iii)$ load shedding is not taken into account in the frequency response analysis [[@bevrani11; @ma18; @8667397]]{}; and $(iv)$ only a few wind power integration scenarios are commonly analyzed to evaluate the wind frequency controller —usually one or two different scenarios— [@zhang12; @mi16; @bao18; @chen18]. As a consequence, simulations can yield unrealistic and inaccurate results. For instance, recent studies considering two wind integration share rates provide different —and even opposite— conclusions regarding frequency nadir and RoCoF (Rate of Change of Frequency): some authors conclude that these parameters can improve [@wang13; @wilches15], others that they could get worse [@alsharafi18; @aziz18] or even be similar [@kang16] as wind penetration increases.
In view of previous contributions, the aim of this paper is to analyze the frequency response of an isolated power system with high integration of wind power generation, including wind frequency control and load shedding. The energy schedule scenarios are determined by a UC model, taking into account technical and economic constraints [@farrokhabadi18] and guaranteeing the frequency recovery of the system after the largest power plant outage ($N-1$ criterion) [@teng16]. A realistic load shedding program is also included, as well as wind frequency control. With this aim, our study is carried out on the Gran Canaria Island power system, in the Canary Islands archipelago (Spain), where the wind power integration has increased from 90 to 180 MW in the last two years. Moreover, in the Canary Islands archipelago, more than 200 loss of generation events per year were registered between 2005 and 2010. In fact, the number of such incidents even surpassed 300 per year, subsequently triggering the activation of the load shedding programs [@padron2015reducing]. This analysis can be extended to other isolated power systems with significant wind energy potential, such as Madagascar [@praene17] or Japan [@meti15]. The main contributions of this paper can thus be summarized as follows:
- Evaluation of wind frequency control responses, including load shedding and rotational inertia changes from realistic operation conditions under generation unit tripping.
- Analysis of frequency deviations (nadir, RoCoF) in isolated power systems with high penetration of wind power, using energy schedules and unit commitments obtained from an optimization model.
The rest of the paper is organized as follows: Section \[sec.power\_system\_description\] describes the Gran Canaria power system and the generation scheduling process. The power system model, including both the optimization and dynamic models, is described in Section \[sec.power\_system\_model\]. Simulation results are analyzed and discussed in Section \[sec.results\]. Finally, Section \[sec.conclusions\] outlines the main conclusions of the paper.
Power System and Generation Scheduling Process {#sec.power_system_description}
==============================================
Preliminaries
-------------
[\[sec.power\_system\_general\_overview\]]{}
Different frequency analysis studies based on specific power systems have been carried out previously. For instance, Zertek *et al.* considered a modified Nordic 32-bus test system [@zertek12]; in [@nazari2014distributed], the power system of Flores Island and the electric power system of Sao Miguel Island were used; Moghadam *et al.* focused on the power system of Ireland [@moghadam2014distributed]; in [@wang2017system], the Singapore power system was used; Pradhan *et al.* tested the three-area New England system [@pradhan2019online]; and [@sarasua2019analysis] considered the Spanish isolated power system located on El Hierro Island. In this paper, we focus on Gran Canaria Island (Spain), where the wind power integration has increased from 90 to 180 MW in the last two years.
Gran Canaria Island belongs to the Canarian archipelago, one of the outermost regions of the European Union, located off the north-west coast of the African continent. From the energy point of view, Gran Canaria Island is an isolated power system. Traditionally, Gran Canaria Island’s generation has been exclusively associated with fossil fuels: diesel, steam, gas and combined cycle units from two different power plants: *Jinámar power plant* and *Barranco de Tirijana power plant*. However, this fossil fuel dependence has involved important economic and environmental drawbacks. To overcome these problems, the Canary Government promoted the installation of wind power plants in the 1990s, accounting for 70 MW in 2002. In the following decade, the installation of wind power plants stalled at around 95 MW and, since 2015, the wind power capacity has doubled, nearly reaching 180 MW.
The wind power generation and system demand in Gran Canaria Island during 2018 are both shown in Fig. \[fig.demand\]. The system demand is discretized into six different intervals, spanning the lowest and highest demand of Gran Canaria Island. Wind power generation is discretized into five intervals. According to the ranges of system demand and wind power generation shown in Fig. \[fig.demand\], thirty energy scenarios are proposed to analyse the frequency response of the system including wind frequency control. Each energy scenario is based on a demand-wind power generation pair, as further described in Section \[sec.results\].
![image](wp-demand.pdf){width="0.75\linewidth"}
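For illustration, the thirty scenarios can be enumerated as the Cartesian product of the demand and wind-generation intervals, as in the minimal sketch below; the representative interval values are placeholders, not the actual bounds of Fig. \[fig.demand\].

```python
from itertools import product

# Placeholder representative values (MW) for the six demand and five wind intervals;
# the real interval bounds are those of Fig. [fig.demand].
demand_levels_mw = [275, 325, 375, 425, 475, 525]
wind_levels_mw = [20, 50, 80, 110, 150]

# Each scenario is a demand-wind pair; 6 x 5 = 30 scenarios in total.
scenarios = [{"demand_mw": d, "wind_mw": w}
             for d, w in product(demand_levels_mw, wind_levels_mw)]
print(len(scenarios))  # 30
```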
Generation Scheduling Process
-----------------------------
[\[sec.power\_system\_generation\_scheduling\]]{} The generation scheduling of the Gran Canaria Island power system is regulated by [@miet12; @miet15]. It is carried out by the Spanish Transmission System Operator (TSO) according to the economic criterion of the variable costs of each power plant. The schedules are obtained for different time horizons: weekly and daily. Each energy schedule depends on the previous time horizon and, consequently, both weekly and daily schedules are required to determine the hourly generation scheduling used in the present paper. An overview of these schedules is summarized in Fig. \[fig.schedule\].
1. *Weekly scheduling:* Estimation of the hourly start-up and shut-down decisions from each Saturday (00:00 h) to the following Friday (23:59 h). This initial generation schedule is determined following two steps: $(i)$ an economic dispatch is carried out to minimize the total variable costs to meet the net power system demand (i.e., the power system demand minus the renewable generation). The result of such an economic dispatch includes both the hourly energy and the reserve schedules (labeled as [`Schedule-A`]{}). $(ii)$ an economic and security dispatch is determined, taking into account the transmission lines and minimizing the total variable costs to support the net power system demand and a certain level of power quality. The result of this economic and security dispatch is also an hourly energy and reserve schedule (labeled as [`Schedule-B`]{}).
2. *Daily scheduling:* Updates the [`Schedule-B`]{} of a certain day $D$ with the updated information available on the power system: generation from suppliers, demand from consumers and the state of the transmission lines. The result of the daily scheduling for day $D$ is a new hourly energy and reserve schedule (labeled as [`Schedule-C`]{}). It is obtained before 14:00 h on day $D-1$. This [`Schedule-C`]{} is determined following a similar process to the weekly scheduling: $(i)$ an economic dispatch is firstly carried out and $(ii)$ an economic and security dispatch is then calculated. The daily scheduling process aims to minimize the total variable costs to meet the net power system demand with a certain minimum level of power quality.
Methodology: Unit Commitment and Frequency Model {#sec.power_system_model}
================================================
Frequency deviations are analyzed according to possible generation tripping and power system reserves by considering explicitly individual generation units and technologies. With this aim, a series of energy scenarios for each scenario of system demand and wind power generation based on a real isolated power system (the Gran Canaria Island) are estimated and evaluated accordingly, considering current wind power integration percentages and load shedding programs. Fig. \[fig.flow\_chart\] shows the proposed methodology, highlighting the novelties and differences presented in this paper compared to other approaches focused on frequency analysis (i.e., carry out a UC to determine the energy scenarios, consider the loss of the largest power plant as imbalance, and include load shedding with and without wind frequency control). The following subsections describe respectively the unit commitment model and frequency models used in this work. Fig. \[fig.power\_system\] shows a simplified scheme of the modelled Gran Canaria Island power system, where conventional and wind power plants are depicted.
![[]{data-label="fig.flow_chart"}](flowchart.pdf){width="0.9\linewidth"}
![image](power_system.pdf){width="0.595\linewidth"}
Unit commitment model: creation of scenarios {#sec.optimization_model}
--------------------------------------------
In order to analyze frequency deviations in the Gran Canaria power system, a UC model is required to estimate the number of thermal units connected to the grid for each generation mix scenario. These thermal units remain unchanged during the subsequent frequency control analysis. The UC model used in this paper has been recently proposed by the authors in [[@FernandezMunoz2019]]{}, based on [[@fernandez19]]{}, which is a deterministic thermal model based on mixed integer linear programming (MILP). Other contributions focused on probabilistic unit commitment can be also found in the specific literature. In this way, [[@AZIZIPANAHABARGHOOEE2016634]]{} proposes an optimal allocation of up/down spinning reserves under high integration of wind power. The planning horizon of our model is adapted to 24 hours with a time resolution of one hour consistent with the approach used by the TSO for the next-day generation scheduling [[@pezic13]]{}; and the hydropower technology is excluded from the model formulation in order to be consistent with the generation mix of the Gran Canaria power system. The model formulation is partially based on [[@morales12]]{} which is, to the author’s knowledge, the most computationally efficient formulation available in the literature when considering different types of start-up costs of thermal units.
The Gran Canaria power system is operated in a centralized manner by the TSO in order to minimize the total system costs, according to [@miet15]. Therefore, the objective function of the UC model used here consists of minimizing the start-up cost, the fuel cost, the operation and maintenance cost and the wear and tear cost of all thermal units. The optimal solution of the UC model is formed by the hourly energy schedule of each thermal unit of the system, taking into account their minimum on-line and off-line times. Among others, the energy schedule is restricted by the following constraints. The production-cost curve of each thermal unit is modeled as a piece-wise linear function discretized into ten pieces. The number of pieces is determined as a trade-off between the accuracy of the solution and the computation time cost limits. In addition, the system demand and the spinning reserve requirements must be fulfilled in each hour. According to the P.O. SEIE 1 in [@miet12], the hourly spinning reserve must be greater than or equal to the maximum of the following three values: $(i)$ the expected inter-hourly increase in the system demand between two consecutive hours; $(ii)$ the most likely wind power loss calculated by the TSO from historical data; $(iii)$ the loss of the largest spinning generating unit in each hour. It is important to bear in mind that there are two conceptual differences between the formulation presented in [@morales12] and the one used here:
- The meaning of each start-up type of each thermal unit is different. In [@morales12], each start-up type corresponds to a different power trajectory of the thermal unit whereas the approach of the model here used is the following: each start-up type refers to the start-up cost as a function of the time that the unit has remained off-line since the previous shut-down. The start-up cost calculation and the involved parameters of the thermal units are defined in [@miet15].
- For those thermal units that have a start-up process longer than one hour, a single output power trajectory ranging from zero to the unit’s minimum output power is considered.
Further details of the UC model formulation can be found in [@FernandezMunoz2019].
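As a rough illustration of the commitment logic only (not the MILP formulation of [@FernandezMunoz2019], which includes start-up costs, minimum on/off times and piece-wise linear cost curves), the following sketch enumerates on/off combinations for a toy set of units and keeps the cheapest dispatch covering demand plus spinning reserve; all unit data are invented.

```python
from itertools import product

# Toy thermal units: (name, p_min MW, p_max MW, cost in currency/MWh); values are illustrative only.
units = [("steam", 20.0, 80.0, 60.0), ("gas", 10.0, 60.0, 90.0), ("diesel", 5.0, 30.0, 120.0)]

def cheapest_commitment(demand, reserve):
    """Enumerate on/off combinations and pick the cheapest feasible merit-order dispatch."""
    best = None
    for status in product([0, 1], repeat=len(units)):
        on = [u for u, s in zip(units, status) if s]
        p_min = sum(u[1] for u in on)
        p_max = sum(u[2] for u in on)
        # Feasibility: committed units must cover demand plus the spinning reserve requirement.
        if not on or p_min > demand or p_max < demand + reserve:
            continue
        # Merit-order dispatch among committed units, starting from their minimum output.
        rest, cost = demand - p_min, sum(u[1] * u[3] for u in on)
        for _, lo, hi, c in sorted(on, key=lambda u: u[3]):
            take = min(rest, hi - lo)
            cost += take * c
            rest -= take
        if best is None or cost < best[0]:
            best = (cost, status)
    return best

print(cheapest_commitment(demand=100.0, reserve=30.0))
```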
Frequency dynamic model {#sec.dynamic_model}
-----------------------
### General overview {#sec.dynamic_model_general_overview}
Frequency deviations in power systems are usually modeled by means of an aggregated inertial model. This assumption has been successfully applied to isolated power systems, as the Irish power system [@mansoor00]. In this paper, frequency system variations are the result of an imbalance between the supply-side (*Barranco de Tirijana* power plant $P_{T}$, *Jinámar* power plant $P_{J}$ and wind power plants $P_{w}$, which are explicitly considered for simulations) and the demand-side $P_{d}$. A load frequency sensitivity parameter $D$ is also included to model the load sensitivity under frequency excursions [@o14], $$\label{eq.swing}
f\,\dfrac{df}{dt}=\dfrac{1}{T_{m}(t)}\left(P_{T}+P_{J}+P_{w}-P_{d}-D\cdot\Delta f \right) ,$$ where $T_{m}(t)$ is the total inertia of the power system, which corresponds to the equivalent sum of the rotational inertias of all synchronous generators under operating conditions, expressed on the system base, $$\label{eq.Tm}
T_{m} (t)= \sum_{m=1}^{4}2\;H_{s,m}+\sum_{q=1}^{5}2\;H_{g,q}+\sum_{k=1}^{5}2\;H_{ds,k}+\sum_{l=1}^{2}2\;H_{cc,l}\,.$$ Frequency and power variables also depend on time, but this dependence is not written explicitly to simplify the expressions. Fig. \[fig.dynamic\_model\] shows the general block diagram of the proposed simulation model, which has been developed in Matlab/Simulink. The block [`Power system`]{} in Fig. \[fig.dynamic\_model\] contains eq. ([\[eq.swing\]]{}), modeled with the corresponding block diagram. The powers provided by the power plants, $P_{T}$ and $P_{J}$ respectively, are the sum of the power supplied by each thermal generation unit (steam $s$, gas $g$, combined cycle $cc$ and diesel $ds$) under operating conditions, expressed as follows: $$\label{eq.pt}
P_{T}=\sum_{m=1}^{2}P_{s,m}+\sum_{q=4}^{5}P_{g,q}+\sum_{l=1}^{2}P_{cc,l}\;,$$ $$\label{eq.pj}
P_{J}=\sum_{m=3}^{4}P_{s,m}+\sum_{q=1}^{3}P_{g,q}+\sum_{k=1}^{5}P_{ds,k}\;.$$ The $k$, $l$, $m$ and $q$ indices refer to the number of diesel, combined cycle, steam and gas units of each power plant, respectively. The proposed dynamic model depicted in Fig. \[fig.dynamic\_model\] is thus composed of the different thermal units belonging to the *Barranco de Tirijana* and *Jinámar* power plants —explicitly considered in the model—, as well as the wind power plants, the power system and the power load of consumers. This consumer load block includes the load shedding program, activated when the grid frequency exceeds certain thresholds. The dynamic response of each generation unit is simulated according to the transfer functions discussed in Section [\[sec.thermal\_units\]]{}.
![image](dynamic_model.pdf){width=".6\linewidth"}
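A minimal numerical sketch of eq. (\[eq.swing\]) is given below, using forward Euler integration and purely illustrative per-unit parameters; governor, AGC and wind contributions are omitted, so it only shows the open-loop inertial response.

```python
import numpy as np

# Illustrative values only; the actual equivalent inertia follows eq. (eq.Tm).
T_M, D = 10.0, 1.0      # equivalent inertia constant (s) and load-frequency sensitivity (pu)
F_NOM = 50.0            # nominal frequency (Hz)

def simulate_step_imbalance(delta_p, t_end=10.0, dt=0.01):
    """Integrate f*df/dt = (dP - D*df)/T_m (all in per unit) after a step power imbalance dP."""
    f, trace = 1.0, [1.0]
    for _ in np.arange(0.0, t_end, dt):
        dfdt = (delta_p - D * (f - 1.0)) / (T_M * f)
        f += dfdt * dt
        trace.append(f)
    return F_NOM * np.array(trace)

freq_hz = simulate_step_imbalance(delta_p=-0.10)   # sudden loss of 10% of the generation
print(f"open-loop nadir: {freq_hz.min():.2f} Hz")
```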
### Thermal generation units {#sec.thermal_units}
The frequency response of the thermal generation units has been modeled through the transfer functions proposed in [@kundur94; @neplan16]. Parameters have been selected from typical values, see Table \[tab.thermal\]. The frequency behavior of the combined cycle generation units is assumed to be similar to that of the gas generation units, see Fig. \[fig.thermal\_generation\_models\]. The two inputs for these three frequency models are $(i)$ frequency deviations —including constraints provided by the frequency containment—, and $(ii)$ AGC conditions for the frequency restoration in isolated power systems. Both inputs are linked by the corresponding droop.
![image](thermal.pdf){width="0.755\linewidth"}
According to the Spanish insular power system requirements, the AGC system is in charge of removing the steady-state frequency error after the frequency containment control. This is usually known as ‘frequency restoration’, and modeled in a similar way to [@perez14]. The equivalent regulation effort $\Delta RR$ is then estimated as: $$\label{eq.rr}
\Delta RR = - K_{f} \cdot \Delta f.$$
This expression is included in the block diagram shown in Fig. \[fig.dynamic\_model\], in the block [`AGC`]{}. $K_{f}$ is determined following the ENTSO-E recommendations [@entsoe]. This regulation effort is conducted by all thermal generation units and distributed depending on the participation factors $K_{u,i}$, assuming that: $(i)$ all thermal generation units connected to the power system equally participate in secondary regulation control and $(ii)$ the participation factors are obtained as a function of the speed droop of each unit. The participation factors $K_{u,i}$ of thermal generation units disconnected from the grid are considered to be zero [@wood12]: $$\label{eq.pref}
\begin{split}
\Delta P_{ref,th}=\dfrac{1}{T_{u,th}}\int\Delta RR\cdot K_{u,th}\;dt = \\
= \dfrac{-1}{T_{u,th}} \cdot K_{u,th} \cdot K_f \int \Delta f \;dt.
\end{split}$$
$$\label{eq.Kuth}
\begin{split}
\sum K_{u,th}=\sum_{m=1}^{4}K_{u,s,m}+\sum_{q=1}^{5}K_{u,g,q} +\\
+\sum_{k=1}^{5}K_{u,d,k}+\sum_{l=1}^{2}K_{u,cc,l}=1
\end{split}$$
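A discrete-time sketch of eqs. (\[eq.rr\])-(\[eq.pref\]) is given below; the AGC gain, the integration time constant and the participation factors are illustrative placeholders rather than the tuned values used in the simulations.

```python
# Illustrative AGC parameters; the numbers below are placeholders, not the tuned values.
K_F = 50.0                                                      # regulation gain (pu/Hz)
T_U = 10.0                                                      # set-point time constant (s)
PARTICIPATION = {"steam_1": 0.4, "gas_1": 0.35, "cc_1": 0.25}   # sums to 1 over committed units

def agc_step(delta_f_hz, integral_state, dt=0.1):
    """One AGC step: the effort dRR = -K_f * df is integrated and shared via participation factors."""
    delta_rr = -K_F * delta_f_hz
    integral_state += delta_rr * dt
    set_point_changes = {unit: k * integral_state / T_U
                         for unit, k in PARTICIPATION.items()}
    return integral_state, set_point_changes

state = 0.0
state, refs = agc_step(delta_f_hz=-0.2, integral_state=state)   # 200 mHz under-frequency
print(refs)
```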
### Load shedding {#sec.load_shedding}
A realistic load shedding scheme is considered in the proposed model by means of sudden load disconnections when frequency excursions are higher than certain thresholds. Table \[tab.load\] summarizes these frequency thresholds, time delay and load shedding values for different scenarios. This load shedding scheme depends on the islanding power system operation conditions required by the Spanish TSO and thus, the responses are in line with certain frequency excursion thresholds. When the scenario corresponds to an intermediate load case, the load shedding value is interpolated from the corresponding steps.
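The load shedding logic can be sketched as follows; the thresholds, delays and shed fractions below are placeholders rather than the values of Table \[tab.load\], and the interpolation for intermediate load cases is omitted.

```python
# Placeholder under-frequency load-shedding steps (not the values of Table [tab.load]).
SHED_STEPS = [
    {"f_threshold_hz": 49.0, "delay_s": 0.2, "shed_fraction": 0.05},
    {"f_threshold_hz": 48.7, "delay_s": 0.2, "shed_fraction": 0.10},
    {"f_threshold_hz": 48.4, "delay_s": 0.1, "shed_fraction": 0.15},
]

def load_to_shed(freq_trace_hz, dt, demand_mw):
    """MW shed once each threshold has been violated for longer than its time delay."""
    shed = 0.0
    for step in SHED_STEPS:
        longest, run = 0.0, 0.0
        for f in freq_trace_hz:
            run = run + dt if f < step["f_threshold_hz"] else 0.0
            longest = max(longest, run)
        if longest >= step["delay_s"]:
            shed += step["shed_fraction"] * demand_mw
    return shed

print(load_to_shed([50.0, 49.4, 48.9, 48.6, 48.9, 49.2], dt=0.2, demand_mw=450.0))
```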
### Wind power plants {#sec.wind_turbines}
A single equivalent VSWT scaled to $n_{WT}$ times the size of an individual turbine —$n_{WT}$ being the number of wind turbines [@pyller03; @mokhtari14]— is proposed as the aggregated model for the wind power plants. An equivalent averaged wind speed ($v_{w}=10.25$ m/s) is assumed for the simulations. This wind speed is considered constant, an assumption previously used in the specific literature for short-term frequency analysis including wind power plants [@chang10; @erlich10; @margaris12; @zertek12; @vzertek12]. With this wind speed, the wind generation accounts for 80% of the installed wind power capacity.
Wind turbines are modeled according to the turbine control model, mechanical two-mass model and wind power model described in [@ullah08; @clark10]. The two-mass model assumes the rotor and blades as a single mass, and the generator as another mass [@jafari17; @liu17]. Huerta *et al.* consider that the two-mass model is the most suitable to evaluate the grid stability [@huerta17]. Wind turbines also include a frequency control response. The strategy for VSWTs implemented in this paper is based on the technique described in [@fernandez18; @fernandez18fast], see Fig. \[fig.control\]. It was evaluated in [@fernandez18] for isolated power systems with up to a 45% of wind power integration and compared to the approach of [@tarnowski09], providing a more appropriate frequency response under power imbalance conditions. In [@fernandez18fast], the proposed frequency control strategy was studied for multi-area power systems. Three operation modes are considered: $(i)$ normal operation mode, $(ii)$ overproduction mode and $(iii)$ recovery mode. Different set-point active power $P_{sp}$ values are then determined aiming to restore the grid frequency under power imbalance conditions. Fig. \[fig.power\] depicts the VSWTs active power variations $\Delta P _{WF}$ submitted to an under-frequency excursion, being $\Delta P_{w}=P_{sp}-P_{MPPT}(\Omega_{MPPT})$.
With the aim of reducing load shedding contributions under high wind power integration scenarios, two modifications are made to the preliminary frequency controller, in both the overproduction and recovery periods. According to [@fernandez18], the overproduction power $\Delta P_{OP}$ is estimated proportionally to the evolution of the frequency excursion, with a maximum value of 10%. In this paper, the maximum $\Delta P_{OP}$ is increased to 15%, in order to provide more power after the imbalance and minimize load shedding situations. In the recovery mode, the power at point $P_{2}$ is defined as $P_{MPPT} (\Omega_{V})+x\cdot \left( P_{mt}(\Omega_{V})-P_{MPPT}(\Omega_{V})\right)$ —see Fig. \[fig.control\]—, $x$ being a scale factor set to 0.75 in the original approach [@fernandez18]. However, in this case, the recovery period of the wind turbines is faster than the AGC action of the frequency restoration control, which yields an undesirable frequency evolution, see Fig. \[fig.potencia\]. As a consequence, $x$ has been increased to 0.95, smoothing and slowing down the recovery period of the wind frequency controller. This alternative frequency controller is included in the VSWT model as seen in Fig. \[fig.aero\_control\].
![image](aero_control_mod.pdf){width="0.75\linewidth"}
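The three operating modes of the wind frequency controller can be sketched as a simple set-point selector, as below; the MPPT and mechanical power curves and the overproduction gain are toy placeholders, while the 15% cap and the $x=0.95$ recovery factor follow the modifications described above.

```python
OP_CAP, X_RECOVERY = 0.15, 0.95        # modified overproduction cap and recovery scale factor

def p_mppt(omega):                     # placeholder maximum-power-point-tracking curve (pu)
    return min(1.0, omega ** 3)

def p_mt(omega):                       # placeholder available mechanical power at speed omega (pu)
    return min(1.0, 1.1 * omega ** 3)

def wind_set_point(mode, delta_f_hz, omega):
    """Active power set-point P_sp of the aggregated VSWT in each operation mode."""
    if mode == "normal":
        return p_mppt(omega)
    if mode == "overproduction":
        extra = min(OP_CAP, 0.1 * abs(delta_f_hz))   # proportional to the excursion (toy gain)
        return p_mppt(omega) + extra
    if mode == "recovery":
        return p_mppt(omega) + X_RECOVERY * (p_mt(omega) - p_mppt(omega))
    raise ValueError(mode)

print(wind_set_point("overproduction", delta_f_hz=-0.8, omega=0.9))
```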
Results {#sec.results}
=======
Scenarios under consideration
-----------------------------
According to the demand distribution in Gran Canaria Island during 2018, previously discussed in Section [\[sec.power\_system\_general\_overview\]]{}, six different power demand conditions are considered for the study. Each system demand is analyzed under different wind power generation percentages following Fig. \[fig.demand\]. Thus, thirty different energy scenarios are under study, which is significantly more than in other contributions focused on frequency control under contingencies including wind power plants [@lalor2005frequency; @sigrist2009representative]. To determine the energy schedule of each supply-demand scenario, the Unit Commitment model described in Section \[sec.optimization\_model\] is run with the GAMS software and the Cplex 12.2 solver, which uses a branch and cut algorithm to solve MILP problems. Fig. \[fig.scenarios2\] depicts the energy schedule of each scenario aggregated by generation technology. Following the $N-1$ criterion, the largest generation unit is suddenly disconnected under a contingency. As a consequence, a different generation group is disconnected in each scenario, depending on the energy schedule obtained by the UC model, subsequently addressing a variety of power imbalance situations. Fig. \[fig.after\_imbalance\] summarizes the energy schedule of each scenario after these disconnections, pointing out the technology and generation unit tripping under such circumstances. Due to these sudden disconnections, the equivalent rotational inertia of the power system is reduced according to eq. ([\[eq.Tm\]]{}).
![Scenarios under study[]{data-label="fig.scenarios2"}](scenarios.pdf){width="0.995\linewidth"}
![Generation mix after disconnections[]{data-label="fig.after_imbalance"}](scenarios_mod.pdf){width="0.995\linewidth"}
Frequency response analysis {#sec.simulation_results}
---------------------------
With the aim of evaluating frequency deviation and power system performance under the sudden generation disconnection established with the $N-1$ criterion, the grid frequency response is analyzed $(i)$ excluding wind frequency control and only considering conventional units; and $(ii)$ including conventional units and the wind frequency control strategy. Firstly, nadir and RoCoF results for the 30 simulated scenarios, according to the generation unit tripping obtained from the UC model and depicted in Fig. \[fig.after\_imbalance\], are compared to results obtained following the methodologies of previous contributions [@keung09; @ma10; @el11; @alsharafi18]. These usually assume a constant 10% imbalance, neglect any modification of the power system inertia and do not include a load shedding scheme in their frequency analysis models. With this aim, Figs. \[fig.nadir\] and \[fig.rocof\] summarize nadir and RoCoF respectively, including (or not) the wind frequency response. RoCoF is calculated between 0.3 and 0.5 s after the sudden disconnection of the largest conventional generation unit for each energy scenario. As can be seen, clear differences are identified between both approaches. In fact, the most uniform results are obtained with a constant power imbalance, as was to be expected, see Fig. \[fig.nadir\_sin\] and \[fig.nadir\_con\] for nadir comparison values. If a simplified power system modeling is considered for frequency control analysis, with typical 10% power imbalance conditions —usually assumed in previous contributions as was previously discussed— a 49.4 Hz nadir and a 0.5 Hz/s RoCoF are obtained for all cases, which shows significant discrepancies with our proposal, see Fig. \[fig.nadir\] and \[fig.rocof\] respectively. Indeed, the nadir lies between 48.54 and 49.15 Hz when wind power plants are excluded from frequency control, depending on each scenario —see Fig. [\[fig.nadir\_sin\_2\]]{}—. In fact, these values would be even worse if the load shedding program were not considered, as it is activated in 21 of the 30 scenarios analyzed. However, a larger wind power integration without frequency control —see Fig. [\[fig.nadir\_sin\_2\]]{}— does not imply a worse nadir response, as could be deduced a priori, due to the loss of the largest power plant (which differs depending on the scenario). When wind frequency control is considered for simulations, the minimum frequency is increased by 110 mHz on average over all cases. Moreover, the more wind power integration providing frequency control, the smaller the frequency nadir deviation obtained. For instance, for wind power integration over 50%, the minimum frequency deviation is reduced by around 200 mHz. A homogeneous response of the considered power system subjected to realistic generation unit tripping cannot therefore be deduced. In this way, and based on the proposed methodology and modeling, it is important to point out that a higher wind power integration excluding frequency control does not always imply a worse frequency response, see Fig. \[fig.nadir\_sin\_2\]. With regard to RoCoF, it initially varies between 0.97 and 1.93 Hz/s, see Fig. [\[fig.rocof\_sin\_2\]]{}, but is reduced by 185 mHz/s on average when wind frequency control is included, Fig. [\[fig.rocof\_con\_2\]]{}. These results are substantially different from those obtained in the simplified power system analysis (where RoCoF was around 0.5 Hz/s); this is mainly due to the inertia change considered in this study and neglected in the previous one, as low inertia is related to a faster RoCoF [@daly15].
Therefore, including wind power frequency control can lead to lower frequency deviations under imbalances, as was expected.
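For reference, the nadir and the 0.3-0.5 s RoCoF window used above can be extracted from a simulated frequency trace as sketched below; the trace in the example is synthetic.

```python
import numpy as np

def nadir_and_rocof(freq_hz, dt, t_event, window=(0.3, 0.5)):
    """Nadir (Hz) after the event and average RoCoF (Hz/s) over the given post-event window."""
    i_event = int(round(t_event / dt))
    nadir = float(np.min(freq_hz[i_event:]))
    i0 = i_event + int(round(window[0] / dt))
    i1 = i_event + int(round(window[1] / dt))
    rocof = (freq_hz[i1] - freq_hz[i0]) / (window[1] - window[0])
    return nadir, rocof

# Synthetic trace: 50 Hz until the trip at t = 1 s, then a dip and partial recovery (dt = 0.1 s).
trace = np.array([50.0] * 10 + [50.0, 49.8, 49.5, 49.3, 49.2, 49.25, 49.4, 49.6, 49.8, 49.9])
print(nadir_and_rocof(trace, dt=0.1, t_event=1.0))
```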
The proposed wind frequency response analysis allows us to evaluate the impact of wind frequency control on load shedding actions in isolated power systems under different imbalances. Fig. \[fig.deslastre\_res\] summarizes the load shedding for the 30 simulated scenarios, considering the generation unit tripping obtained from the UC model, see Fig. \[fig.after\_imbalance\]. In this way, Fig. [\[fig.deslastre\_sin\_2\]]{} and Fig. [\[fig.deslastre\_con\_2\]]{} show the corresponding load shedding responses with and without wind frequency control for the considered energy scenarios. Both nadir and RoCoF improvements lead to a load shedding reduction in 11 scenarios. Moreover, in these 11 scenarios, the average load shedding reduction is 80%, reaching a 100% reduction in 5 scenarios —for example, compare 30.80 MW of load shedding under the range \[500, 550\] MW power demand and \[105, 120\] MW wind power generation, Fig. [\[fig.deslastre\_sin\_2\]]{}, to 0 MW of load shedding under the same demand and wind power values when wind frequency control is included, Fig. [\[fig.deslastre\_con\_2\]]{}—.
Table \[tab.overview\] shows a comparison of results between the proposed analysis described in this paper and the conventional methodologies previously considered, where a constant imbalance is assumed, the inertia of the power system is kept constant during the imbalance and load shedding is not included in the simulations. In the table, the average $\mu$ and variance $\sigma^{2}$ of the nadir, RoCoF, inertia change and load shedding values for the 30 different generation mix and imbalance scenarios are shown with and without wind frequency control.
Conclusion {#sec.conclusions}
==========
The case study is focused on the real isolated power system located on Gran Canaria Island (Spain), which has doubled its wind power capacity in the last two years. With regard to the frequency analysis, by including wind power generation in frequency control, the nadir and RoCoF deviations are reduced in most of the energy scenarios considered (by 110 mHz and 185 mHz/s on average, respectively). Regarding load shedding, it is reduced in 11 out of the 30 power imbalance scenarios analyzed. This improvement is more significant in high wind power integration scenarios (regardless of the power demand), and for high power demands (regardless of the wind power integration). Therefore, wind frequency control can be considered a remarkable solution to reduce load shedding in isolated power systems with high wind power integration.
Acknowledgment
==============
The authors thank Ignacio Ares for the preliminary analyses carried out as part of his final master's project.
Funding
=======
This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the project ‘Value of pumped-hydro energy storage in isolated power systems with high wind power penetration’ of the National Plan for Scientific and Technical Research and Innovation 2013-2016 (Ref. ENE2016-77951-R), and by the Spanish Education, Culture and Sports Ministry (Ref. FPU16/04282).
---
abstract: 'Many deep learning algorithms can be easily fooled with simple adversarial examples. To address the limitations of existing defenses, we devised a probabilistic framework that can generate an exponentially large ensemble of models from a single model with just a linear cost. This framework takes advantage of neural network depth and stochastically decides whether or not to insert noise removal operators such as VAEs between layers. We show empirically the important role that model gradients have when it comes to determining transferability of adversarial examples, and take advantage of this result to demonstrate that it is possible to train models with limited adversarial attack transferability. Additionally, we propose a detection method based on metric learning in order to detect adversarial examples that have no hope of being cleaned of maliciously engineered noise.'
author:
- |
George A. Adam\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
`alex.adam@mail.utoronto.ca`\
Petr Smirnov\
Medical Biophysics\
University of Toronto\
Toronto, ON M5S 3G4\
David Duvenaud\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
Benjamin Haibe-Kains\
Medical Biophysics\
University of Toronto\
Toronto, ON M5S 3G4\
Anna Goldenberg\
Department of Computer Science\
University of Toronto\
Toronto, ON M5S 3G4\
bibliography:
- 'sample.bib'
- 'Zotero.bib'
title: |
Stochastic Combinatorial Ensembles for\
Defending Against Adversarial Examples
---
Introduction
============
Deep Neural Networks (DNNs) perform impressively well in classic machine learning areas such as image classification, segmentation, speech recognition and language translation [@hinton_deep_2012; @krizhevsky_imagenet_2012; @sutskever_sequence_2014]. These results have led to DNNs being increasingly deployed in production settings, including self-driving cars, on-the-fly speech translation, and facial recognition for identification. However, like previous machine learning approaches, DNNs have been shown to be vulnerable to adversarial attacks at test time [@szegedy_intriguing_2013]. The existence of such adversarial examples suggests that DNNs lack robustness, and might not be learning the higher-level concepts we hope they would learn. Increasingly, it seems that the attackers are winning, especially when it comes to white box attacks where access to network architecture and parameters is granted.
Several approaches have been proposed to protect against adversarial attacks. Traditional defense mechanisms are designed with the goal of maximizing the perturbation necessary to trick the network, and making it more obvious to a human eye. However, iterative optimization of adversarial examples by computing gradients in a white box environment or estimating gradients using a surrogate model in a black-box setting have been shown to successfully break such defenses. While these methods are theoretically interesting as they can shed light on the nature of potential adversarial attacks, there are many practical applications in which being perceptible to a human is not a reasonable defense against an adversary. For example, in a self-driving car setting, any deep CNN applied to analyzing data originating from a non-visible light spectrum (e.g. LIDAR), could not be protected even by an attentive human observer. It is necessary to generate ‘complete’ defenses which preclude the existence of adversarial attacks against the model. This requires deeper understanding of the mechanisms which make finding adversarial examples against deep learning models so simple. In this paper, we review characteristics of such mechanisms and propose a novel defense method inspired by our understanding of the problem. In a nutshell, the method makes use of the depth of neural networks to create an exponential population of defenses for the attacker to overcome, and employs randomness to increase the difficulty of successfully finding an attack against the population.
Related Work
============
Adversarial examples in the context of DNNs came into the spotlight after Szegedy et al. [@szegedy_intriguing_2013] showed the imperceptibility of the perturbations which could fool state-of-the-art computer vision systems. Since then, adversarial examples have been demonstrated in many other domains, notably including speech recognition [@carlini_audio_2018], and malware detection [@grosse_adversarial_2016]. Nevertheless, Deep Convolutional Neural Networks (CNNs) in computer vision provide a convenient domain to explore adversarial attacks and defenses, both due to the existence of standardized test datasets, high performing CNN models reaching human or super-human accuracy on clean data, and the marked deterioration of their performance when subjected to adversarial examples to which human vision is robust.
In order to construct effective defenses against adversarial attacks, it is important to understand their origin. Early work speculated that DNN adversarial examples exist due to the highly non-linear nature of DNN decision boundaries and inadequate regularization, leading to the input space being scattered with small, low probability adversarial regions close to existing data points [@szegedy_intriguing_2013]. Follow-up work [@goodfellow_explaining_2014] speculated that adversarial examples are transferable due to the linear nature of some neural networks. While this justification did not help to explain adversarial examples in more complex architectures like ResNet 151 on the ImageNet dataset [@Liu], the authors found significant overlap in the misclassification regions of a group of CNNs. In combination with the fact that adversarial examples are transferable between machine learning approaches (for example from SVM to DNN) [@papernot_transferability_2016], this increasingly suggests that the linearity or non-linearity of DNNs is not the root cause of the existence of adversarial examples that can fool these networks.
Recent work to systematically characterize the nature of adversarial examples suggests that adversarial subspaces can lie close to the data submanifold [@tanay_boundary_2016], but that they form high-dimensional, contiguous regions of space [@tramer_space_2017; @ma_characterizing_2018]. This corresponds to empirical observation that the transferability of adversarial examples increases with the allowed perturbation into the adversarial region [@carlini_towards_2016], and is higher for examples which lie in higher dimensional adversarial regions [@tramer_space_2017]. Other work [@gilmer_adversarial_2018] further suggests that CNNs readily learn to ignore regions of their input space if optimal classification performance can be achieved without taking into account all degrees of freedom in the feature space. In summary, adversarial examples are likely to exist very close to but off the data manifold, in regions of low probability under the training set distribution.
Proposed Approaches to Defending Against Adversarial Attacks
------------------------------------------------------------
The two approaches most similar to ours for defending against adversarial attacks are MagNets [@meng_magnet:_2017] and MTDeep [@sengupta_mtdeep:_2017]. Similar to us, MagNets combine a detector and reformer, based off of the Variational Autoencoder (VAE) operating on the natural image space. The detector is designed by examining the probability divergence of the classification for the original image and the autoencoded reconstruction, with the hypothesis that for adversarial examples, the reconstructions will be classified into the adversarial class with much lower probability. We adapt the probability divergence approach, but instead of relying on the same classification network, we compute the similarity using an explicit metric learning technique, as described in Section \[need\_for\_detection\]. MagNets also incorporated randomness into their model by training a population of 8 VAEs acting directly on the input data, with a bias towards differences in their encoding space. At test time, they chose to randomly apply one of the 8 VAEs. They were able to reach 80% accuracy on defending against a model trained to attack one of the 8 VAEs, but did not evaluate the performance on an attack trained with estimation of the expectation of the gradient over all eight randomly sampled VAEs. We hypothesize that integrating out only 8 discrete random choices would add a trivial amount of computation to mount an attack on the whole MagNet defense. Our method differs in that we train VAEs at each layer of the embedding generated by the classifier network, relying on the differences in embedding spaces to generate diversity in our defenses. This also gives us the opportunity to create a combinatorial growth of possible defenses with the number of layers, preventing the attacker from trivially averaging over all possible defenses.
In MTDeep (Moving Target Defense) [@sengupta_mtdeep:_2017], the authors investigate the effect of using a dynamically changing defense against an attacker in the image classification framework. The framework they consider is a defender that has a small number of possible defended classifiers to choose for classifying any one image, and an attacker that can create adversarial examples with high success for each one of the models. They also introduce the concept of differential immunity, which directly quantifies the maximal defense advantage to switching defenses optimally against the attacker. Our method also builds on the idea of constantly moving the target for the attacker to substantially increase the difficulty the attacker has to fool our classifier. However, instead of randomizing only at test time, we use the moving target to make it increasingly costly for the attacker to generate adversarial examples.
Methods
=======
Probabilistic Framework
-----------------------
Deep neural networks offer many places at which to insert a defense against adversarial attacks. Our goal is to exploit the potential for exponential combinations of defenses. Most defenses based on cleaning inputs tend to operate in image space, prior to the image being processed by the classifier. Attacks that are aware of this preprocessing step can simply create images that fool the defense and classifier jointly. However, an adversary would have increased difficulty attacking a model that is constantly changing. For example, a contracted version of VGG-net has 7 convolutional layers, thus offering $2^{1 + 7} = 256$ (1 VAE before input, and 1 VAE for each of the 7 convolutional layers) different ways to arrange defenses (Figure \[figure\_1\]); this can be thought of as a bag of $256$ models. If the adversary creates an attack for one possible defense arrangement, there is only a $\frac{1}{256}$ chance that the same defense will be used when they try to evaluate the network on the crafted image. Hence, assuming that the attacks are not easily transferable between defense arrangements, the adversary’s success rate decreases exponentially as the number of layers increases just linearly. Even if an adversary generates malicious images for all 256 versions of the model, the goal post when testing these images is always moving, so it would take 256 attempts to fool the model on average. An attacker trying to find malicious images fooling all the models together would have to contend with gradient information sampled from random models resulting in possibly orthogonal goals. An obvious defense to use at any given layer is an autoencoder with an information bottleneck such that it is difficult to include adversarial noise in the reconstruction. This should have minimal impact on classifier performance when given normal images, and should be able to clean the noise at various layers in the network when given adversarial images.
![Illustration of probabilistic defense framework for a deep CNN architecture. Pooling and activations are left out to save space.[]{data-label="figure_1"}](figures/figure_1_report.pdf){width="\textwidth"}
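A minimal PyTorch-style sketch of the stochastic arrangement is given below; it assumes the classifier is available as a list of blocks and that a trained denoiser (e.g. a VAE used as an input-to-reconstruction map) exists for each insertion point, neither of which is specified here.

```python
import random
import torch.nn as nn

class StochasticDefense(nn.Module):
    """Classifier blocks with a denoiser optionally applied before each block.

    `denoisers[i]` is applied before `blocks[i]` with probability `p`, so the first
    denoiser acts on the raw input and the remaining ones act on intermediate feature
    embeddings; each forward pass samples one of 2^len(blocks) arrangements.
    """

    def __init__(self, blocks, denoisers, p=0.5):
        super().__init__()
        assert len(blocks) == len(denoisers)
        self.blocks = nn.ModuleList(blocks)
        self.denoisers = nn.ModuleList(denoisers)
        self.p = p

    def forward(self, x):
        for block, denoiser in zip(self.blocks, self.denoisers):
            if random.random() < self.p:
                x = denoiser(x)
            x = block(x)
        return x
```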
Need for Detection {#need_for_detection}
------------------
Some adversarial examples are generated with large enough perturbations that they no longer resemble the original image, even when judged by humans. It is unreasonable to assume that basic image processing techniques can restore such adversarial examples to their original form since the noise makes the true class ambiguous. Thus, there is a need to flag images with substantial perturbations. Carlini and Wagner [@Carlini2017] demonstrated that methods that try to detect adversarial examples are easily fooled, and also operate under the assumption that adversarial examples are fundamentally different from natural images in terms of image statistics. However, this was in the context of small perturbations. We believe that a good detection method should be effective in detecting adversarial examples that have been generated with large enough perturbations. Furthermore, we believe that detection methods should be generalizable to new types of attacks in order to be of practical relevance, so in our work we do not make any distributional assumptions regarding adversarial noise in building our detector.
Our proposed detection method might not seem like a detection method at all. In fact, it is not explicitly trained to differentiate between adversarial examples and natural images. Instead, it relies on the consensus between the predictions of two models: the classifier we are trying to defend, and an auxiliary model. To assure low transferability of adversarial examples between the classifier and auxiliary model, we choose an auxiliary model that is trained in a fundamentally different way than a softmax classifier. This is where we leverage recent developments in metric learning.
As an auxiliary model here we use a triplet network [@Hoffer], which was previously introduced in a face recognition application[@FaceNet]. A triplet network is trained using 3 different training examples at once: an anchor, a positive example, and a negative example as seen in Supplementary Figure 3; this results in semantic image embeddings that are then used to cluster and compute similarities between images. We use a triplet network for the task of classification via an unconventional type of KNN in the embedding space. This is done by randomly sampling $50$ embeddings for each class from the training set, and then computing the similarity of these embeddings to the embedding for a new test image (Figure \[triplet\_detector\]a). Doing so gives a distribution of similarities that can be then converted to a probability distribution by applying the softmax function (Figure \[triplet\_detector\]b). To classify an image as adversarial or normal, we first take the difference between the probabilities from classifier and from the embedding network (the probabilities are compared for the most likely class of the original classifier). If this difference in probability is high, then the two models do not agree, so the image is classified as adversarial (Figure \[triplet\_detector\]c). Note that for this setup to work, classifiers have to agree in classification of most unperturbed images. In our experiments, we have confirmed the agreement between LeNet and the triplet network is 90% as seen in Supplementary Table 2. More formally, the detector uses the following logic
$$\begin{aligned}
k &= \mathrm{argmax}(p_c(y | x)) & \Delta &= |p_c(y = k | x) - p_t(y = k | x)| & D(\Delta) =
\begin{cases}
1 & \Delta \geq \eta \\
0 & \mathrm{otherwise}
\end{cases}\end{aligned}$$
![a) The test image is projected into embedding space by the embedding network. A fixed number of examples from all possible classes from the training set are randomly sampled. The similarity of the test image embedding to the randomly sampled training embeddings is computed and averaged to get a per-class similarity. b) The vector of similarities is converted to a probability distribution via the softmax function. c) The highest probability class (digit 0 in this case) of the classifier being defended is determined, and the absolute difference between the classifier and triplet probability for that class is computed.[]{data-label="triplet_detector"}](figures/classification_with_triplet.pdf){width="80.00000%"}
where $p_c(y | x)$ and $p_t(y | x)$ are the probability distributions from the classifier and triplet network respectively, $k$ is the most probable class output by the classifier, and $\Delta$ is the difference in probability between the most probable class output by the classifier and the same class output by the triplet network. In our experiments we set the threshold $\eta=0.4$. Note that while we used a triplet network as an auxiliary model in our examples, the goal is to find a model that is trained in a distinct manner and will thus have different biases from the original classifier, so other models can certainly be used in place of a triplet network if desired.
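The detector logic $D(\Delta)$ can be sketched as follows; the classifier probabilities and the per-class embedding similarities are assumed to be given, and the example inputs are synthetic.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def detect_adversarial(p_classifier, class_similarities, eta=0.4):
    """Flag the input when the classifier's top-class probability and the triplet
    network's probability for that same class differ by at least eta."""
    p_triplet = softmax(np.asarray(class_similarities, dtype=float))
    k = int(np.argmax(p_classifier))
    delta = abs(float(p_classifier[k]) - float(p_triplet[k]))
    return delta >= eta

p_c = np.array([0.05, 0.90, 0.05])     # classifier is confident in class 1
sims = np.array([2.0, 0.1, 1.8])       # but the embedding space disagrees
print(detect_adversarial(p_c, sims))   # True -> flagged as adversarial
```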
Defense Analysis
================
Before discussing the experiments and corresponding results, here is the definition of adversarial examples that we are working with in this paper. An image is an adversarial example if
1. It is classified as a different class than the image from which it was derived.
2. It is classified incorrectly according to ground truth label.
3. The image from which it was derived was originally correctly classified.
Since we are evaluating attack success rates in the presence of defenses, point 3 in the above definition ensures that the attack success rate is not confounded by a performance decrease in the classifier potentially caused by a given defense. In our analysis, we use the fast gradient sign (FGS) [@goodfellow_explaining_2014], iterative gradient sign (IGS) [@kurakin_adversarial_2016], and Carlini and Wagner (CW) [@carlini_towards_2016] attacks. We are operating under the white-box assumption that the adversary has access to network architecture, weights, and gradients.
Additionally, we did not use any data augmentation when training our models since that can be considered as a defense and could further confound results. To address possible concerns regarding poor performance on unperturbed images when defenses are used, we performed the experiment below. For the FGS and IGS attacks, unless otherwise noted, an epsilon of 0.3 was used as is typical in the literature.
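For concreteness, a standard PyTorch sketch of the FGS attack is shown below (IGS simply repeats this step with a smaller step size and re-clipping); it is not the exact attack configuration used in the experiments.

```python
import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, epsilon=0.3):
    """Fast gradient sign attack: one signed gradient step on the cross-entropy loss,
    clipped back to the valid pixel range. `model` returns logits; x is assumed in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```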
Performance of Defended Models on Clean Data
--------------------------------------------
One of the basic assumptions of our approach is that there exist operations that can be applied to the feature embeddings generated by the layers of a deep classification model, which preserve the classification accuracy of the network while removing the adversarial signal. As an example of such transformations, we propose a variational autoencoder. We have evaluated the effect of inserting VAEs on two models: Logistic Regression (LR) and a 2 convolutional layer LeNet on the MNIST dataset. The comparison of the performance of these methods is summarized in Table \[table:performance\_reduction\]. Surprisingly, on MNIST it is possible to train quite simple variational autoencoding models to recreate feature embeddings with sufficient fidelity to leave the model performance virtually unchanged. Reconstructed embeddings are visualized in Supplementary Materials. Supplementary Figure 3 shows how the defense reduces the distance between adversarial and normal examples in the various layers of LeNet.
Model Undef. Accuracy Deterministic Def. Accuracy Stochastic Def. Accuracy
----------- ----------------- ----------------------------- --------------------------
LR-VAE 0.921 0.907 0.914
LeNet-VAE 0.990 0.957 0.972
: Performance reduction caused by defenses on the MNIST dataset.[]{data-label="table:performance_reduction"}
Transferability of Attacks Between Defense Arrangements
-------------------------------------------------------
The premise of our defense is that the exponentially many arrangements of noise removing operations are not all exploitable by the same set of adversarial images. The worst case scenario is if the adversary creates malicious examples when noise removing operations are turned on in all possible locations. It is possible that such adversarial examples would also fool the classifier when the defense is only applied in a subset of the layers. Fortunately, we note that for FGS, IGS, and CW2, transferability of attacks between defense arrangements is limited as seen in Table \[table:transferability\_layer\_lenet\]. The column headers are binary strings indicating presence or absence of defense at the 3 relevant points in LeNet: input layer, after the first convolutional layer, after the second convolutional layer. Column \[0, 0, 0\] shows the attack success rate against an undefended model, and column \[1, 1, 1\] shows the attack success rate against a fully deterministically-defended model. The remaining columns show the transfer attack success rate of the perturbed images created for the \[1, 1 ,1\] defense arrangement. The most surprising result is that defense arrangements using 2 autoencoders are more susceptible to a transfer attack than defense arrangements using a single autoencoder. Specifically, \[1, 0, 0\] is more robust than \[1, 1, 0\] which does not make sense intuitively. Although the perturbed images are engineered to fool a classifier with VAEs in all convolutional layers in addition to the input layer, it is possible that the gradient used to generate such images is orthogonal to the gradient required to fool just a single VAE in the convolutional layers.
------------------------------ ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
Attack \[0, 0, 0\] \[1, 1, 1\] \[0, 0, 1\] \[0, 1, 0\] \[1, 0, 0\] \[0, 1, 1\] \[1, 0, 1\] \[1, 1, 0\]
FGS 0.176 0.1451 0.035 0.019 0.117 0.018 0.133 0.131
IGS 0.434 0.270 0.016 0.011 0.193 0.014 0.231 0.223
CW2 0.990 0.977 0.003 0.002 0.775 0.003 0.959 0.892
------------------------------ ------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
: Transferability of attacks from strongest defense arrangement \[1, 1, 1\] to other defense arrangements for LeNet-VAE.[]{data-label="table:transferability_layer_lenet"}
Investigating Cause of Low Attack Transferability
-------------------------------------------------
To confirm the suspicion that orthogonal gradients are responsible for the unexpected transferability results between defense arrangements seen in Table \[table:transferability\_layer\_lenet\], we computed the cosine similarity of the gradients of the output layer w.r.t to the input images. Table \[table:cosine\_similarities\_lenet\] shows the average cosine similarity between the strongest defense arrangement \[1, 1, 1\] and other defense arrangements. To summarize the relationship between cosine similarity and attack transferability, we computed the correlations of the transferabilities in Table \[table:transferability\_layer\_lenet\] with the cosine similarities in Table \[table:cosine\_similarities\_lenet\]. These correlations are shown in Table \[table:pearson\_correlations\]. It is quite clear that cosine similarity between gradients is an almost perfect predictor of the transferability between defense arrangements. Thus, training VAEs with the goal of gradient orthogonality, or training conventional ensembles of models with this goal has the potential to drastically decrease the transferability of adversarial examples between models.
----------------------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
Base Arrangement \[0, 0, 1\] \[0, 1, 0\] \[0, 1, 1\] \[1, 0, 0\] \[1, 0, 1\] \[1, 1, 0\] \[1, 1, 1\]
[\[1, 1, 1\]]{} 0.219 **0.190** 0.249 0.648 0.773 0.728 0.949
----------------------------- ------------- ------------- ------------- ------------- ------------- ------------- -------------
: Cosine similarities of LeNet probability output vector gradients w.r.t. input images.[]{data-label="table:cosine_similarities_lenet"}
--------------------- ------- ------- -------
Correlation Type FGS IGS CW2
Pearson 0.990 0.997 0.997
Spearman 0.829 0.943 0.986
--------------------- ------- ------- -------
: Correlations of cosine similarities between different defense arrangements (using \[1, 1, 1\] as the baseline defense) and the transferability between the attacks on the \[1, 1, 1\] defense to other defense arrangements.[]{data-label="table:pearson_correlations"}
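The gradient cosine similarities discussed above could be computed with a sketch like the following; it uses the cross-entropy loss gradient with respect to the input as a stand-in for the probability-output gradient, and assumes both models accept the same inputs.

```python
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    """Gradient of the loss w.r.t. the input, flattened to one vector per example."""
    x = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return grad.flatten(start_dim=1)

def gradient_cosine(model_a, model_b, x, y):
    """Average cosine similarity between the two models' input gradients over a batch."""
    g_a = input_gradient(model_a, x, y)
    g_b = input_gradient(model_b, x, y)
    return F.cosine_similarity(g_a, g_b, dim=1).mean().item()
```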
Training for Gradient Orthogonality
-----------------------------------
In order to test the claim that explicitly training for gradient orthogonality will result in lower transferability of adversarial examples, we focus on a simple scenario. We trained 16 pairs of LeNet models, 8 of which were trained to have orthogonal input-output Jacobians, and 8 of which were trained to have parallel input-output Jacobians. As can be seen in Figure \[fig:parallel\_vs\_perpendicular\], the differences in transfer rates and relevant transfer rates are quite vast between the two approaches. The median relevant transfer attack success rate for the parallel approach is approx. 92%, whereas it is only approx. 17% for the perpendicular approach. This result further illustrates the importance of the input-output Jacobian cosine similarity between models when it comes to transferability.
![Baseline attack success rates and transfer success rates for an IGS attack with an epsilon of 1.0 on LeNet models trained on MNIST. 8 pairs of models were trained for the parallel Jacobian goal, and 8 pairs of models were trained for the perpendicular goal to obtain error bars around attack success rates.[]{data-label="fig:parallel_vs_perpendicular"}](figures/parallel_vs_perpendicular.pdf){width="90.00000%"}
Effect of Gradient Magnitude
----------------------------
When training the dozens of models used for the results of this paper, we noticed that attack success rates varied greatly between models, even though the differences in hyperparameters seemed negligible. We posited that a significant confounding factor that determines susceptibility to attacks such as FGS and IGS is the magnitude of the input-output Jacobian for the true class. Intuitively, if the magnitude of the input-output Jacobian is large, then the perturbation required to cause a model to misclassify an image is smaller than had the magnitude of the input-output Jacobian been small. This is seen in Figure \[fig:effect\_of\_grad\_magnitude\], where there is a clear increasing trend in attack success rate as the input-output Jacobian norm increases. This metric can be a significant confounding factor when analyzing robustness to adversarial examples, so it is important to measure it before concluding that differences in hyperparameters are the cause of varying levels of adversarial robustness.
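The confounding quantity itself is cheap to monitor; a sketch, assuming a PyTorch classifier and taking the true-class logit as the scalar being differentiated (an assumption on our part):

```python
import torch

def input_jacobian_norms(model, x, y):
    # Per-image L2 norm of the gradient of the true-class logit w.r.t. the
    # input; this is the magnitude regularized towards a target ("goal").
    x = x.clone().requires_grad_(True)
    true_logit = model(x).gather(1, y.unsqueeze(1)).sum()
    g = torch.autograd.grad(true_logit, x)[0]
    return g.flatten(1).norm(dim=1)
```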
![Relationship between input-output Jacobian L2 norm and susceptibility to IGS attack with epsilon of 0.3. 8 LeNet models were trained on MNIST for each magnitude with different random seeds. The ’mean’ and ’std’ reported underneath the goal note the actual input-output Jacobian L2 norms, whereas the ’goal’ is the target that was used during training to regularize the magnitude of the Jacobian.[]{data-label="fig:effect_of_grad_magnitude"}](figures/effect_of_grad_magnitude_igs.pdf){width="\textwidth"}
Detector Analysis
=================
The effect of the CW2 attack on the average of defense arrangements is investigated in Supplementary Material. While it may seem that our proposed defense scheme is easily fooled by a strong attack such as CW2, there are still ways of recovering from such attacks by using detection. In fact, there will always be perturbations that are extreme enough to fool any classifier, but perturbations with larger magnitude become increasingly easy for humans to notice. In this section, “classifier” is the model we are trying to defend, and “auxiliary model” is another model we train (a triplet network) that is combined with the classifier to create a detector.
Transferability of Attacks Between LeNet and Triplet Network
------------------------------------------------------------
The best case scenario for an auxiliary model would be if it were fooled only by a negligible percentage of the images that fool the classifier. It is also acceptable for the auxiliary model to be fooled by a large percentage of those images, provided it does not make the same misclassifications as the classifier. Fortunately, we observe that the majority of the perturbed images that fooled LeNet did not fool the triplet network in a way that would affect the detector’s viability. This is shown in Table \[table:transferability\_classifier\_detector\]. For example, 1060 perturbed images created for LeNet using FGS fooled the triplet network. However, only 70 images were missed by the detector due to the requirement for agreement between the auxiliary model and classifier. The columns with “Relevant” in the name show the success rate of transfer attacks that would have fooled the detector.
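For reference, a sketch of how we read the "relevant" transfer rate, i.e. the fraction of perturbed images on which both models agree on the same wrong label and a concordance-based detector is therefore fooled (this exact formula is our interpretation of the description above):

```python
import numpy as np

def relevant_transfer_rate(pred_clf, pred_aux, y_true):
    # pred_clf / pred_aux: predicted labels of the classifier and auxiliary
    # model on the perturbed images; y_true: ground-truth labels.
    fooled_clf = pred_clf != y_true
    same_wrong = fooled_clf & (pred_aux == pred_clf)   # detector sees agreement
    return same_wrong.sum() / max(int(fooled_clf.sum()), 1)
```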
------------ ------- ------- ------- ------- ------- -------
(r)[2-7]{}
FGS 0.089 0.106 0.164 0.015 0.007 0.004
IGS 0.130 0.094 0.244 0.004 0.007 0.002
CW2 0.990 0.121 0.819 0.013 0.049 0.008
------------ ------- ------- ------- ------- ------- -------
: Transferability of attacks between LeNet and triplet network.[]{data-label="table:transferability_classifier_detector"}
Jointly Fooling Classifier and Detector
---------------------------------------
If an adversary is unaware that a detector is in place, the task of detecting adversarial examples is much easier. To stay consistent with the white-box scenario considered in previous sections, we assume that the adversary is aware that a detector is in place, so they choose to jointly optimize fooling the VAE-defended classifier and detector. We follow the approach described in [@Carlini2017] where we add an additional output as follows
$$G(x)_i =
\begin{cases}
Z_F(x)_i & \text{if $i \leq N$} \\
(Z_D(x) + 1) \cdot \max\limits_{j} Z_F(x)_j & \text{if $i=N + 1$}
\end{cases}$$
where $Z_D(x)$ is the logit output by the detector, and $Z_F(x)$ is the logit output by the classifier. Table \[table:lenet\_detector\] shows the effectiveness of combining the detector and VAE-defended classifier. The undefended attack success rate for the CW2 attack is lower here than in Table \[table:transferability\_classifier\_detector\], probably because the gradient signal when attacking the joint model is weaker. Overall, less than 7% (0.702 - 0.635) of the perturbed images created using CW2 fool the combination of VAE defense and detector.
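A sketch of the combined output $G(x)$ defined above, assuming the classifier and detector logits have already been computed (function and variable names are ours):

```python
import torch

def combined_logits(z_f, z_d):
    # z_f: (batch, N) classifier logits Z_F(x); z_d: (batch,) detector logit Z_D(x).
    extra = (z_d + 1.0) * z_f.max(dim=1).values   # logit of the extra class N+1
    return torch.cat([z_f, extra.unsqueeze(1)], dim=1)
```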
------------------------------ -------- --------- ------- ---------- ------------ --------------- ------------
(r)[2-4]{} (r)[5-8]{} Attack Undef. Determ. Stoc. Original Undefended Deterministic Stochastic
FGS 0.197 0.178 0.179 0.906 0.941 0.957 0.962
IGS 0.323 0.265 0.146 0.903 0.938 0.949 0.967
CW2 0.787 0.703 0.702 0.909 0.848 0.635 0.657
------------------------------ -------- --------- ------- ---------- ------------ --------------- ------------
: Attack success rates and detector accuracy for adversarial examples on LeNet-VAE using a triplet network detector.[]{data-label="table:lenet_detector"}
Discussion
==========
Our proposed defense and the experiments we conducted on it have revealed intriguing, unexpected properties of neural networks. Firstly, we showed how to obtain an exponentially large ensemble of models by training a linear number of VAEs. Additionally, various VAE arrangements had substantially different effects on the network’s gradients w.r.t. a given input image. This is a counterintuitive result because qualitative examination of reconstructions shows that reconstructed images or activation maps look nearly identical to the original images. Also, from a theoretical perspective, VAE reconstructions should in general have an input-output Jacobian close to the identity, since VAEs are trained to approximate the identity function. Secondly, we demonstrated that reducing the transferability of adversarial examples between models is a matter of making the gradients orthogonal between models w.r.t. the inputs. This result is more intuitive, and gradient orthogonality is a goal that can be enforced via a regularization term when training VAEs or generating new filtering operations. Using this result can also help guide the creation of more effective concordance-based detectors. For example, a detector could be made by training an additional LeNet model with the goal of making the second model have the same predictions as the first one while having orthogonal gradients. Conducting an adversarial attack that fools both models in the same way would be difficult since making progress on the first model might have the opposite effect or no effect at all on the second model.
A limitation of training models with such unconventional regularization in place is determining how to trade off classification accuracy and gradient orthogonality. Our defense framework requires little computational overhead for filter operations such as blurs and sharpens, and is not particularly computationally intensive when only a few VAEs need to be trained. Training a number of VAEs equal to the depth of a network in order to obtain an ensemble containing an exponentially large number of models can, however, be computationally intensive; in mission-critical scenarios, such as healthcare and autonomous driving, spending more time to train a robust system is certainly warranted and is key to broad adoption.
Conclusion
==========
In this project, we presented a probabilistic framework that uses properties intrinsic to deep CNNs in order to defend against adversarial examples. Several experiments were performed to test the claim that such a setup results in an exponential ensemble of models for just a linear computation cost. We demonstrated that our defense cleans the adversarial noise in the perturbed images and makes them more similar to natural images (Supplementary). Perhaps our most exciting result is that the cosine similarity of the gradients between defense arrangements is highly predictive of attack transferability, which opens many avenues for developing defense mechanisms for CNNs and DNNs in general. As a proof of concept regarding classification biases between models, we showed that the triplet network detector was quite effective at detecting adversarial examples, and was fooled by only a small fraction of the adversarial examples that fooled LeNet. To conclude, probabilistic defenses are able to substantially reduce adversarial attack success rates, while revealing interesting properties about existing models.
Supplementary Material {#supplementary-material .unnumbered}
======================
Reconstruction of Feature Embeddings {#appdx:reconstruct .unnumbered}
------------------------------------
Since using autoencoders to reconstruct activation maps is an unconventional task, we visualize the reconstructions in order to inspect their quality. For the activations obtained from the first convolutional layer seen in Figure \[fig:conv1\_layer\_reconstructions\], it is obvious that the VAEs are effective at reconstructing the activation maps. The only potential issue is that the background for some of the reconstructions is slightly more gray than in the original activation maps. For the most part, this is also the case for the second convolutional layer activation maps. However, in the first, fourth, and sixth rows of Figure \[fig:conv2\_layer\_reconstructions\], there is an obvious addition of arbitrary pixels that were not present in the original activation maps.
![First convolutional layer feature map visualization for LeNet on MNIST. Original feature maps are on the left, VAE reconstructed features maps are on the right. As is seen, reconstructions are of very high quality.[]{data-label="fig:conv1_layer_reconstructions"}](figures/conv1_layer_reconstructions.pdf){width="5cm" height="15cm"}
![Second convolutional layer feature map visualization for LeNet on MNIST. Original feature maps are on the left, VAE reconstructed features maps are on the right. As is seen, reconstructions are of very high quality.[]{data-label="fig:conv2_layer_reconstructions"}](figures/conv2_layer_reconstructions.pdf){width="5cm" height="15cm"}
Cleaning Adversarial Examples {#cleaning-adversarial-examples .unnumbered}
-----------------------------
The intuitive notion that VAEs or filters remove adversarial noise can be tested empirically by comparing the distance between adversarial examples and their unperturbed counterparts. In Figure \[fig:lenet\_fgs\_distances\], the evolution of distances between normal and adversarial examples can be seen. When the classifier is undefended, the distance increases significantly with the depth of the network, and this confirms the hypothesis that affine transformations amplify noise. However, it is clear that applying our defense has a marked impact on the distance between normal and adversarial examples. Thus, we can conclude that part of the reason why the defense works is that it dampens the effect of adversarial noise.
[.5]{} ![L-$\infty$ distance between adversarial and normal images as a function of layer number for LeNet attacked with FGS for the MNIST dataset.[]{data-label="fig:lenet_fgs_distances"}](figures/lenet_vae_fgs_0_mnist_distances_lineplot.pdf "fig:"){width="1\linewidth"}
Effect of Attacks on Averaged Defense {#effect-of-attacks-on-averaged-defense .unnumbered}
-------------------------------------
Since the IGS and CW2 attacks are iterative, they have the ability to see multiple defense arrangements while creating adversarial examples. This can result in adversarial examples that might fool any of the available defense arrangements. Indeed, this seems to happen for the CW2 attack shown in Table \[table:defense\_success\_lenet\]. The cause of this is most easily explained by the illustration in Figure \[fig:failure\_mode\]. Since the models we trained were not deep enough, it was possible for the iterative attacks to see all defense combinations when creating adversarial examples, so our defense was defeated. We believe that given a deep enough network of 25 or more layers, it would be computationally infeasible for an adversary to create examples that fool the stochastic ensemble.
----------- ------- ------- ------- ------- -------
LR-VAE 0.920 0.032 0.922 0.473 0.921
LeNet-VAE 0.990 0.014 0.977 0.140 0.984
----------- ------- ------- ------- ------- -------
: Success rate of CW2 attack on LR and LeNet defended with VAEs. []{data-label="table:defense_success_lenet"}
![Illustration of how the defense can fail against iterative attacks. Even though the two defense arrangements have orthogonal gradients, thereby exhibiting low transferability of attacks, an iterative attack that alternates between optimizing for either arrangement can end up fooling both.[]{data-label="fig:failure_mode"}](figures/failure_mode.pdf){width="0.8\linewidth"}
Triplet Network Visualization {#triplet-network-visualization .unnumbered}
-----------------------------
Here we illustrate how a triplet network is trained. An anchor, a positive example, and a negative example are all passed through the same embedding network. The triplet loss is then computed, which encourages the anchor-positive distance to be at least a margin $\alpha$ smaller than the anchor-negative distance.
![Illustration of how a triplet network works on the MNIST dataset.[]{data-label="triplet_net"}](figures/triplet_net.pdf){width="\textwidth"}
The triplet loss function is shown here for convenience:
$$L(x, x_{-}, x_{+}) = \mathrm{max}(0, \alpha + \mathrm{d}(x, x_{+}) - \mathrm{d}(x, x_{-}))$$
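A corresponding sketch in PyTorch, taking $\mathrm{d}(\cdot,\cdot)$ to be the squared Euclidean distance between embeddings (an assumption; the default margin value is also just an example):

```python
import torch

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # Squared Euclidean distances between embedding vectors (choice of d is ours).
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return torch.clamp(alpha + d_pos - d_neg, min=0.0).mean()
```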
Agreement Between LeNet and Triplet Network {#agreement-between-lenet-and-triplet-network .unnumbered}
-------------------------------------------
We investigated the agreement between LeNet and the triplet network we trained in order to confirm that a concordance-based detector is a viable option and does not result in false positives on normal images. Importantly, the models agree about 91% of the time on normal images (Table \[table:concordance\]), so false positives are not a major concern.
Overall Concordance Correct Concordance Incorrect Concordance
--------------------- --------------------- -----------------------
0.911 0.908 0.003
: Concordance between LeNet and triplet network on predictions on normal images.[]{data-label="table:concordance"}
---
abstract: 'Derivation of a reduced effective model allows for a unified treatment and discussion of the $J_1$-$J_2$ $S=1/2$ Heisenberg model on triangular and kagome lattice. Calculating thermodynamic quantities, i.e. the entropy $s(T)$ and uniform susceptibility $\chi_0(T)$, numerically on systems up to effectively $N=42$ sites, we show by comparing to full-model results that low-$T$ properties are qualitatively well represented within the effective model. Moreover, we find in the spin-liquid regime similar variation of $s(T)$ and $\chi_0(T)$ in both models down to $T \ll J_1$. In particular, studied spin liquids appear to be characterized by Wilson ratio vanishing at low $T$, indicating that the low-lying singlets are dominating over the triplet excitations.'
author:
- 'P. Prelovšek'
- 'J. Kokalj'
bibliography:
- 'manuslprb.bib'
title: Similarity of thermodynamic properties of Heisenberg model on triangular and kagome lattices
---
Introduction
============
Studies of a possible quantum spin-liquid (SL) state in spin models on frustrated lattices have a long history, starting with Anderson’s conjecture [@anderson73] for the Heisenberg model on a triangular lattice. In the last two decades theoretical efforts have been boosted by the discovery of several classes of insulators with local magnetic moments [@lee08; @balents10; @savary17], which do not reveal long-range order (LRO) down to the lowest temperatures $T$. The first class are compounds such as the herbertsmithite ZnCu$_3$(OH)$_6$Cl$_2$ [@norman16], which can be represented with the Heisenberg $S=1/2$ model on the kagome lattice and has been the subject of numerous experimental studies [@mendels07; @olariu08; @han12; @fu15], now including also related materials [@hiroi01; @fak12; @li14; @gomilsek16; @feng17; @zorko19] confirming the SL properties, at least in a wide $T > 0$ range. Another class are organic compounds such as $\kappa$-(ET)$_2$Cu$_2$(CN)$_3$ [@shimizu03; @shimizu06; @itou10; @zhou17], where the spins reside on a triangular lattice. Recently, the charge-density-wave system 1T-TaS$_2$, which is a Mott insulator without magnetic LRO and shows spin fluctuations at $T > 0$ [@klanjsek17; @kratochvilova17; @law17; @he18], has been added to this family.
Numerical [@bernu94; @capriotti99; @white07] and analytical [@chernyshev09] studies of the nearest-neighbor (nn) quantum $S=1/2$ Heisenberg model (HM) on a triangular lattice (TL) confirm spiral long-range order (LRO) with spins pointing in $120^\circ$ tilted directions. Introducing the next-nearest-neighbor (nnn) coupling $J_2>0$ enables the SL ground state (g.s.) in part of the phase diagram [@kaneko14; @watanabe14; @zhu15; @hu15; @iqbal16; @wietek17; @gong17; @ferrari19]. There is even more extensive literature on the HM on the kagome lattice (KL) confirming the absence of g.s. LRO [@mila98; @budnik04; @iqbal11; @lauchli11; @iqbal13]. The prevailing conclusion of numerical studies of the g.s. and lowest excited states is that the HM on the KL has a finite spin triplet gap $\Delta_t$ [@yan08; @lauchli11; @depenbrock12; @lauchli19] (with some evidence pointing also to a gapless SL [@iqbal13; @liao17; @chen18]), but a much smaller or vanishing singlet gap $\Delta_s \ll \Delta_t $ [@elstner94; @mila98; @sindzingre00; @misguich07; @bernu15; @lauchli19; @schnack18]. On the other hand, extension into the $J_1$-$J_2$ model [@kolley15; @liao17] with $J_2 > 0$ again leads towards a g.s. with magnetic LRO. Still, the HM on both lattices in their respective SL parameter regimes has been studied and considered separately, without recognizing or stressing their similarity.
Our goal is to put the extended $J_1$-$J_2$ HM on the TL and KL on a common ground, stressing the similarity of their (in particular thermodynamic) properties within their presumable SL regimes. To this purpose we derive and employ a reduced effective model (EM), which is based on keeping only the lowest four $S=1/2$ states in a single triangle. Such an EM has been previously introduced and analyzed for the case of the KL [@subrahmanyan95; @mila98; @budnik04; @capponi04] and, as the starting point for a block perturbation approach, also for the TL [@subrahmanyan95], but so far has not been used to evaluate and capture $T>0$ properties. Such an EM has the evident advantage of a reduced number of states in an exact-diagonalization (ED) study and hence allows for somewhat larger lattices (in our study up to $N=42$ sites). Still, this is not the most important message, since the EM also allows an insight into the character of low-energy excitations, now separated into spin (triplet) and chirality (singlet) ones. The main focus of this work is on the numerical evaluation of thermodynamic quantities and their understanding, whereby we do not resort to perturbative limits (extreme breathing limit) of weakly coupled triangles [@repellin17; @iqbal18], which apparently do not fully represent the same SL physics. We concentrate on the entropy density $s(T)$ and uniform susceptibility $\chi_0(T)$ within the SL parameter regimes, approached before mostly by high-$T$ expansion [@elstner93; @elstner94; @sindzingre00; @misguich07; @rigol07; @rigol07a; @bernu15; @bernu19] and only recently with numerical methods adequate for lower $T \ll J_1$, both on the TL [@prelovsek18] and KL [@chen18; @schnack18]. However, the most universal property is the temperature-dependent Wilson ratio $R(T)$, introduced and discussed further on.
Generalised Wilson ratio
------------------------
Our results in the following reveal that in both lattices and within the SL regime $s(T)$ and $\chi_0(T)$ are very similar in a broad range of $T$. In this respect a very convenient quantity is the $T$-dependent generalized Wilson ratio $R(T)$, defined as $$R= 4 \pi^2 T \chi_0 / (3 s), \label{rw}$$ which is equivalent (assuming theoretical units $k_B= g \mu_B =1$) to the standard $T=0$ Wilson ratio in the case of Fermi-liquid behavior where $s= C_V=\gamma T$. The definition Eq. (\[rw\]) is more convenient than the standard one (with $C_V$ instead of $s$) since it has a meaningful $T$ dependence due to the monotonically increasing $s(T)$, having also a finite high-$T$ limit $R_\infty = \pi^2/(3 \ln2) = 4.75$. Moreover, it can differentiate between quite distinct $T \to 0$ scenarios:
a\) in the case of magnetic LRO at $T \to 0$ one expects in 2D (isotropic HM) $\chi_0(T\to 0) \sim \chi_0^0 >0$ but $s \propto T^2$ [@manousakis91], so that $R_0 = R(T \to 0) \to \infty$,
b\) in a gapless SL with large spinon Fermi surface one would expect Fermi-liquid-like constant $R_0 \sim 1$ [@balents10; @zhou17; @law17],
c\) $R_0 \ll 1$ or a decreasing $R(T \to 0) \to 0$ would indicate that low-energy singlet excitations dominate over the triplet ones [@balents10; @lauchli19].
In the following we find in the SL regime numerical evidence for the last scenario, which within the EM we attribute to low-lying chiral fluctuations, a hallmark of the Heisenberg model in the SL regime. It should be pointed out that the same property might be a very general property of SL models, and it remains to be clarified in relation to experiments on SL materials.
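For completeness, the generalized Wilson ratio of Eq. (\[rw\]) is straightforward to evaluate from tabulated $s(T)$ and $\chi_0(T)$; a minimal sketch (in units $k_B = g\mu_B = 1$) also checks the quoted high-$T$ limit $R_\infty = \pi^2/(3 \ln 2)$ using the Curie law $\chi_0 \to 1/(4T)$ and $s \to \ln 2$:

```python
import numpy as np

def wilson_ratio(T, chi0, s):
    # Generalized Wilson ratio of Eq. (rw), with k_B = g mu_B = 1.
    return 4.0 * np.pi**2 * T * chi0 / (3.0 * s)

# High-T check: Curie law chi0 -> 1/(4T) and s -> ln 2 give
# R_infty = pi^2 / (3 ln 2) ~ 4.75.
T = np.array([50.0, 100.0])
print(wilson_ratio(T, 1.0 / (4.0 * T), np.full_like(T, np.log(2.0))))
```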
In Sec. II we derive and discuss the form of the EM for the $J_1$-$J_2$ Heisenberg model on both the TL and KL. In Sec. III we present the numerical methods employed to calculate thermodynamic quantities in the EM as well as in the full models.
Reduced effective model
=======================
We consider the isotropic $S=1/2$ extended $J_1$-$J_2$ Heisenberg model, $$H= J_1 \sum_{\langle kl \rangle} {\bf S}_k \cdot {\bf S}_l + J_2 \sum_{\langle \langle kl \rangle \rangle}
{\bf S}_k \cdot {\bf S}_l, \label{hjj}$$ on the TL and KL, where $J_1>0$ and $J_2 $ refer to nn and nnn exchange couplings (see Fig. \[figsl0\]), respectively. The role of $J_2 >0$ on the TL is to destroy the $120^\circ$ LRO allowing for a SL [@kaneko14; @watanabe14; @zhu15; @iqbal16; @prelovsek18], while for the KL it has the opposite effect [@kolley15]. Further on we set $J_1=J=1$ as the energy scale.
![(a) Triangular and (b) kagome lattice represented as (triangular) lattices of coupled basic triangles. Also shown are the model exchange couplings on both lattices.[]{data-label="figsl0"}](shema.eps){width="1.1\columnwidth"}
As shown in Fig. \[figsl0\] the model Eq. (\[hjj\]) on both lattices can be represented as coupled basis triangles [@subrahmanyan95; @mila98; @budnik04; @capponi04] where we keep in the construction of the EM only four degenerate $S=1/2$ states (local energy $\epsilon_0=-3/4$), neglecting higher $S=3/2$ states (local $\epsilon_1=3/4$), $$\begin{aligned}
|\uparrow \pm \rangle &=&\frac{1}{\sqrt{3}}[ | \downarrow\uparrow \uparrow \rangle +
\mathrm{e}^{\pm i \phi} | \uparrow\downarrow \uparrow \rangle +\mathrm{e}^{\mp i \phi} | \uparrow\uparrow \downarrow \rangle ],
\nonumber \\
|\downarrow \pm \rangle &=& \frac{1}{\sqrt{3}}[ | \uparrow\downarrow \downarrow \rangle + \mathrm{e}^{\mp i \phi} | \downarrow\uparrow \downarrow \rangle +
\mathrm{e}^{\mp i \phi} | \downarrow\downarrow \uparrow \rangle ], \label{updown}\end{aligned}$$ where $\phi = 2 \pi/3$, $\uparrow, \downarrow$ are (new) spin states and $\pm$ refer to local chirality. One can rewrite Eq. (\[hjj\]) in the new basis acting between nn triangles $\langle i,j \rangle$. The derivation is straightforward, taking care that the matrix elements of the original model, Eq. (\[hjj\]), within the new basis, Eq. (\[updown\]), are exactly reproduced with the new operators. We follow the procedure through intermediate (single triangle) orbital spin operators (see Fig. \[figsl0\]), $$S_{(0,+,-)}=S_1+(1,\omega,\omega^*) S_2+ (1,\omega^*,\omega) S_3, \label{s0}$$ with $\omega = \mathrm{e}^{ i \phi}$. Operators in Eq. (\[s0\]) have only a few nonzero matrix elements within the new basis, Eq. (\[updown\]), e.g., $$\begin{aligned}
&&\langle \uparrow + | S_0^z | \uparrow + \rangle = -\langle \downarrow + | S_0^z | \downarrow + \rangle = \frac{1}{2}, \nonumber \\
&& \langle \downarrow + | S_-^z | \downarrow - \rangle = - \langle \uparrow + | S_- ^z | \downarrow + \rangle = 1 , \nonumber \\
&&\langle \uparrow + | S_0^+ | \downarrow + \rangle = ~~\langle \uparrow - | S_0^+ | \downarrow - \rangle = -1, \nonumber \\
&&\langle \uparrow - | S_+^+ | \downarrow + \rangle = ~~\langle \downarrow + | S_-^- | \uparrow - \rangle = 2. \label{locs}\end{aligned}$$ Such operators can be fully represented in terms of standard local $s=1/2$ spin operators ${\bf s}$, and pseudospin (chirality) operators ${\bf \tau}$(again $\tau=1/2$), $$\begin{aligned}
&&S^z_0= s^z, \quad S^\pm_0=-s^\pm , \nonumber \\
&& S^z_\pm= -2 s^z \tau^\mp, \quad S^\pm_+= 2 s^\pm \tau^-, \quad S^\pm_-= 2 s^\pm \tau^+, \label{effspin}\end{aligned}$$ Since the original Hamiltonian, Eq. (\[hjj\]), acts only between two sites on the new reduced (triangular) lattice, the effective model (EM), subtracting local $\epsilon_0$, can be fully represented for both lattices (as shown by an example further on) in terms of ${\bf s}_i$ and ${\bf \tau}_i$ operators, introduced in Eq. (\[locs\]), $$\begin{aligned}
\tilde H &=& \frac{1}{2} \sum_{i ,d } {\bf s}_{i} \cdot {\bf s}_{j} ( D+ {\cal H}^d_{ij}) , \label{em} \\
{\cal H}^d_{ij} &=& F_d \tau^+_i \tau^-_j + P_d \tau^+_i + Q_d \tau^+_j + T_d \tau^+_i \tau^+_j + \mathrm{H.c}, \nonumber\end{aligned}$$ where directions $d=1$–$6$ and $j=i+d$ run over all nn sites of site $i$, and the new lattice is again TL. We note that Eq. (\[em\]) corresponds to the one studied before for simplest KL [@subrahmanyan95; @mila98; @budnik04], but it is valid also for TL [@subrahmanyan95] and nnn $J_2$. It is remarkable that the spin part remains $SU(2)$ invariant whereas the chirality part is not, i.e., it is of the XY form.
It should be pointed out that the EM, Eq. (\[em\]), is not based on a perturbation expansion assuming weak coupling between triangles, although in the latter case it offers a full description of low-lying states and can be further treated analytically in the (rather artificial) strong breathing limit $|\tilde H| \ll |E_0|$ [@subrahmanyan95; @mila98]. As for several other applications of reduced effective models (a prominent example being the $t$-$J$ model as a reduced/projected Hubbard model), one expects that the low-$T$ physics is (at least qualitatively) well captured by the EM.
Triangular lattice
------------------
For the considered TL and KL we further present the actual parameters of the EM, Eq. (\[em\]). The derivation is straightforward using the representation Eqs. (\[s0\]) and (\[effspin\]). As an example we present the $J_1$-term interaction in Eq. (\[hjj\]) between (new) sites $0$ and $1$ (see Fig. \[figsl0\]a), $$\begin{aligned}
&&~~~~\tilde H^1_{01}/J_1 = {\bf S}_{01} \cdot ( {\bf S}_{12} + {\bf S}_{13} ) = \nonumber \\
&& = \frac{1}{9}(S_{00}^z + S^z_{0+} + S^z_{0-})(2 S^z_{10} - S^z_{1+} - S^z_{1-} ) + \nonumber \\
&&+\frac{1}{18} [(S_{00}^+ + S^+_{0+} + S^+_{0-})(2 S^-_{10} - S^-_{1+} - S^-_{1-}) +
\mathrm{H.c.}] = \nonumber \\
&&= - \frac{4}{9} {\bf s}_0 \cdot {\bf s}_1 (\tau^-_0 + \tau^+_0 -\frac{1}{2})(\tau^-_1 + \tau^+_1 + 1).
\label{h01}\end{aligned}$$ Deriving in the same manner also the other $\tilde H^d$ terms, now including also the $J_2$ terms (which are even simpler in the TL), we can identify the parameters (with $J_1=1$) in the EM, Eq. (\[em\]), $$D= \frac{2}{9} + \frac{1}{3} J_2, \qquad F= - \frac{4}{9} + \frac{4}{3} J_2,
\label{df}$$ and further terms depending explicitly on direction $d$, $$\begin{aligned}
P_1&=& -\frac{4}{9},\quad P_2= \frac{2}{9} \omega^* ,
\quad P_3= -\frac{4}{9} \omega , \nonumber \\
P_4&=& \frac{2}{9} , \quad P_5= -\frac{4}{9} \omega^* ,
\quad P_6= \frac{2}{9} \omega, \nonumber \\
T_1&=& -\frac{4}{9} ,\quad T_2 = -\frac{4}{9} \omega , \quad T_3 = -\frac{4}{9} \omega^* \end{aligned}$$ with $T_{d+3}=T_d$ and $Q_d = P_{d+3}$. It is worth noticing that for the latter couplings the average over all nn bonds vanishes, i.e., $$\bar P = \frac{1}{6}\sum_d P_d = 0, \qquad \bar Q = \bar T =0,$$ indicating a possibly minor importance of these terms. This is, however, only partially true since such terms also play a role in distributing the increase of entropy $s(T)$ over a wider $T$ interval.
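A quick numerical check of this cancellation, with the couplings entered directly from the expressions above (a sketch; the array names are ours), confirms $\bar P = \bar T = 0$ for the triangular-lattice EM, independently of $J_2$ since $J_2$ does not enter these terms:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                            # omega = e^{i phi}
P_d = np.array([-4, 2 * w.conj(), -4 * w, 2, -4 * w.conj(), 2 * w]) / 9
T_d = np.array([-4, -4 * w, -4 * w.conj()] * 2) / 9   # T_{d+3} = T_d
print(abs(P_d.mean()), abs(T_d.mean()))               # both vanish (up to rounding)
```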
Eqs. (\[em\]),(\[df\]) also yield some basic insight into the HM on the TL, as well as into the similarity between the models on the TL and KL. While $\chi_0(T)$ is governed entirely by the ${\bf s}$ operators, the low-$T$ entropy $s(T)$ (and specific heat $C_V(T)$) involves also chirality ${\bf \tau}$ fluctuations. In the TL at $J_2=0$ the $\tau$ coupling is ferromagnetic and favors the spiral $120^\circ$ LRO. The $\tau$ fluctuations are enhanced via $J_2 >0$, reducing $|F|$ and finally giving $F \to 0$ on approaching $J_2 \sim 0.3$. Still, before such a large $J_2$ is reached, the $P_d,Q_d,T_d$ terms become relevant and stabilize the SL at $J_2 \sim 0.1$ [@kaneko14; @watanabe14; @zhu15; @hu15; @iqbal16]. It should be stressed that in the TL a standard magnetic LRO requires LRO ordering of both the ${\bf s}$ and ${\bf \tau}$ operators.
Kagome lattice.
---------------
In analogy to the TL example, Eq. (\[h01\]), we derive also the corresponding terms for the case of the KL. Without losing generality we can include here also the third-neighbor exchange term $J_3$, see Fig. \[figsl0\]b, which also couples only neighboring triangles. Then the $D$ and $F_d$ couplings are given by $$D = \frac{1}{9} + \frac{2}{9} J_2 + \frac{2}{9} J_3, \qquad F_1 =
\frac{4}{9} \omega + \frac{8}{9} \omega^* J_2 + \frac{4}{9} J_3,$$ while $F_{d+1}= F_d^*$. In contrast to the TL, in the KL at $J_2=0$ the $F_d$ coupling is complex and alternating, with a nonzero average, which is real and negative, i.e. $\bar F = (1/6) \sum_d F_d < 0$. Moreover, $| \mathrm{Im} F_d| < |\mathrm{Re} F_d |$ indicates the absence of LRO. Here, $J_2 >0$ reduces $|\mathrm{Im} F_d|$ and on approaching $J_2 \sim 0.5$ one reaches real $F_d <0$, connecting the KL model to the TL at $J_2=0$ and the related LRO, as observed in numerical studies [@kolley15].
Further terms are given by $$\begin{aligned}
P_1&=& - \frac{2}{9} + \frac{2}{9} \omega^* J_2 - \frac{2}{9} \omega J_3, \quad
P_2 = - \frac{2}{9} \omega + \frac{2}{9} \omega^* J_2 - \frac{2}{9} J_3, \nonumber \\
P_3&=& - \frac{2}{9} \omega + \frac{2}{9} J_2 - \frac{2}{9} \omega^* J_3, \quad
P_4 = - \frac{2}{9} \omega^* + \frac{2}{9} J_2 - \frac{2}{9} \omega J_3, \nonumber \\
P_5&=& - \frac{2}{9} \omega^* + \frac{2}{9} \omega J_2 - \frac{2}{9} J_3, \quad
P_6 = - \frac{2}{9} + \frac{2}{9} \omega J_2 - \frac{2}{9} \omega^* J_3, \nonumber \\\end{aligned}$$ with $Q_d=P_{d+3}$ and $$\begin{aligned}
T_1&=& \frac{4}{9} \omega^* - \frac{4}{9} \omega^* J_2 + \frac{4}{9} J_3,
\quad T_2 = \frac{4}{9} - \frac{4}{9} J_2 + \frac{4}{9} J_3, \nonumber \\
T_3&=& \frac{4}{9} \omega - \frac{4}{9} \omega J_2 + \frac{4}{9} J_3, \qquad T_{d+3}=T_d .\end{aligned}$$ Again, terms which do not conserve $\tau^z_{tot}$ have the property $\bar P =\bar Q = \bar T =0$.
Numerical method
================
In the evaluation of thermodynamic quantities we use the FTLM [@jaklic94; @jaklic00; @prelovsek13], which is based on the Lanczos exact-diagonalization (ED) method [@lanczos50], whereby the Lanczos-basis states are used to evaluate the normalized thermodynamic sum $$Z(T) = \mathrm{Tr} \exp[- (H-E_0)/T],$$ (where $E_0$ is the ground-state energy of the system). The FTLM is particularly convenient to apply for the calculation of conserved quantities, i.e., operators $A$ commuting with the Hamiltonian, $[H,A]=0$. In this way we evaluate $Z$, the thermal average energy $\Delta E =\langle H - E_0 \rangle $ and the magnetization fluctuation $M^2 = \langle (s^z_{tot})^2 \rangle$. From these quantities we evaluate the thermodynamic observables of interest, i.e., the uniform susceptibility $\chi_0(T)$ and entropy density $s(T)$, $$\chi_0= \frac{M^2}{N T}, \qquad s = \frac{T \ln Z + \Delta E}{N T} ,$$ where $N$ is the number of sites in the original lattice.
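As a simple illustration of how these observables follow from a spectrum (a full-ED sketch for a small cluster rather than the actual FTLM sampling; variable names are ours), assuming all eigenenergies and total-$s^z$ values are available:

```python
import numpy as np

def thermodynamics(E, Sz, N, T):
    # E, Sz: eigenenergies and total-s^z values of all eigenstates of an
    # N-site cluster; T: array of temperatures. Returns (chi_0, s).
    E = E - E.min()                               # measure energies from E_0
    w = np.exp(-np.outer(1.0 / T, E))             # Boltzmann weights, (nT, nst)
    Z = w.sum(axis=1)
    dE = (w * E).sum(axis=1) / Z                  # <H - E_0>
    M2 = (w * Sz**2).sum(axis=1) / Z              # <(s^z_tot)^2>
    return M2 / (N * T), (T * np.log(Z) + dE) / (N * T)
```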
We note that for the above conserved operators $A$ there is no need to store Lanczos wavefunctions, so the requirements are essentially those of the g.s. Lanczos ED method, except that we need the summation over all symmetry sectors, and a modest sampling $N_s < 10$ over initial wavefunctions is helpful. To reduce the Hilbert space of basis states $N_{st}$ we take into account symmetries, in particular the translation symmetry (restricting subspaces to separate wavevectors $q$) and $s^z_{tot}$, while the EM, Eq. (\[em\]), does not conserve $\tau^z_{tot}$. Within such a framework we are restricted in the present study to systems with $N_{st} < 5 \cdot 10^6$ symmetry-reduced basis states, which means the EM with up to $N = 42$ sites. The same system size would require $N_{st} \sim 10^{10}$ basis states in the full HM.
An effective criterion for the macroscopic relevance of FTLM results is $Z(T) \gg 1$ (at least for systems where gapless excitations are expected), which in practice leads to a criterion $Z>Z^*=Z(T_{fs}) \gg 1$ determining the finite-size temperature $T_{fs}$. Taking $Z^* \sim 20$ implies (for $N=42$) also a threshold entropy density $s(T_{fs}) \sim 0.07$, independent of the model. It is then evident that $T_{fs}$ depends crucially on the model, so that a large $s(T)$ works in favor of using the FTLM for frustrated and SL systems. Here we do not present the finite-size analysis of FTLM results within the EM, but it is quite analogous to the previous application of the method to the HM on the KL [@schnack18], where a similarly low $T_{fs}$ was established, in contrast to the much higher $T_{fs}$ found in unfrustrated lattices and models [@jaklic00]. Moreover, in the case of models with a sizable gap, e.g. $\Delta > T_{fs}$, the results of the FTLM can remain correct even down to $T \to 0$ [@jaklic00].
![ Results for the effective model (EM) on the triangular lattice, compared with the full HM. All results are for $N=30$ sites [@prelovsek18]: a) entropy density $s(T)$ and b) uniform susceptibility $\chi_0(T)$. c) and d) the same quantities for the kagome lattice, compared to the full HM, all on $N=42$ sites [@schnack18].[]{data-label="figsl1"}](figsl1n.eps){width="\columnwidth"}
Entropy and uniform susceptibility
==================================
Let us first benchmark results within the EM against the existing results for the full HM on the TL and KL. In Figs. \[figsl1\]a,b we present $s(T)$ and $\chi_0(T)$, respectively, as obtained on the TL for $J_2=0.1$ on $N=30$ sites via the FTLM within the EM, compared with the full HM of the same size [@prelovsek18]. The qualitative behavior of both quantities within the EM is quite similar at low $T$ to that of the full HM, in particular for $s(T)$, although the EM evidently misses part of $s(T)$ with increasing $T$ due to the reduced basis space. More pronounced is the quantitative (but not qualitative) discrepancy in $\chi_0(T)$, which can be attributed to the missing higher-spin states in the EM. Still, the peak in $\chi_0(T)$ and the related spin (triplet) gap $\Delta_t >0$ at low $T$ are reproduced well within the EM. Similar conclusions emerge from Figs. \[figsl1\]c,d, where corresponding results are compared for the KL; here the full-HM results for $s(T)$ and $\chi_0(T)$ are taken from a study on $N=42$ sites [@schnack18]. The EM reproduces reasonably well not only the triplet gap $\Delta_t \sim 0.1$ but also the singlet excitations dominating $s(T \to 0)$, while it apparently underestimates the value of $\chi_0(T)$.
After testing against the full model, we present in Figs. \[figsl2\] and \[figsl3\] EM results for $s(T)$ and $\chi_0(T)$ for both lattices as they vary with $J_2 > 0$. In Figs. \[figsl2\]a,b we follow the behavior on the TL for different $J_2=0, 0.1, 0.15$. From the inflection (vanishing second derivative) point of $s(T)$, defining the singlet temperature $T=T_s$, one can speculate on the coherence scale (in the case of LRO) or a possible (singlet) excitation gap $\Delta_s \lesssim T_s$ (in the case of a SL), at least provided that $T_s > T_{fs}$. Although the influence of $J_2 >0$ does not appear large, it still introduces a qualitative difference. From this perspective, $s(T)$ within the TL EM at $J_2=0$ reveals a higher effective $T_s$, consistent with $s(T<T_s) \propto T^2$ and a spiral LRO at $T=0$. Still, in this case we get $T_s \sim T_{fs}$ within the EM, so we can hardly make stronger conclusions.
On the other hand, for the TL at $J_2 = 0.1, 0.15$, where the SL can be expected [@kaneko14; @iqbal16; @gong17], the EM reveals a smaller $T_s \sim 0.05$, which is the signature of a singlet gap (which could still be finite-size dependent). More importantly, the results confirm a large residual entropy $s(T) \sim 0.1 = 0.14 s_{max}$ even at $T \sim 0.1$. This is in contrast with $\chi_0(T)$ in Fig. \[figsl2\]b, which reveals a $T$-variation weakly dependent on $J_2$. While for $J_2=0$ the drop of $\chi_0(T < T_t)$ is the signature of a finite-size spin gap (where due to magnetic LRO $\chi_0^0 = \chi_0( T \to 0) >0 $ is expected), the $J_2 =0.1, 0.15$ examples are different, since the vanishing $\chi_0^0$ could indicate a spin triplet gap $\Delta_t > 0.1$ beyond the finite-size effects, i.e. $T_t \sim 0.1 > T_{fs}$.
![ Results within EM on triangular lattice for different $J_2 =0, 0.1, 0.15$; a) for $s(T)$, and b) $\chi_0(T)$.[]{data-label="figsl2"}](figsl2n.eps){width="0.9\columnwidth"}
In Figs. \[figsl3\]a,b we present the same quantities for the case of the KL, now for $J_2=0, 0.2, 0.4$. The effect of $J_2>0$ is opposite, since it is expected to recover the LRO at $J_2 \sim 0.4$ [@kolley15], with a $120^\circ$ spin orientation analogous to the $J_2=0$ TL. The largest low-$T$ entropy $s(T)$ is found for the KL with $J_2=0$. Moreover, the EM here yields a quantitative agreement with the full HM [@schnack18], revealing a large remanent $s(T)$ due to singlet (chirality) excitations down to $T \sim 0.01$ [@lauchli19]. The evident effect of $J_2 >0$ is to reduce $s(T)$ and finally lead to $s(T) \propto T^2$ at large $J_2 \sim 0.4$, which should be a regime of magnetic LRO [@kolley15]. Again, at $J_2=0$ and in contrast to the entropy, $\chi_0(T)$ has a well-pronounced downturn at $T \sim 0.1$, consistent with the triplet gap $\Delta_t \sim 0.1$ found in most other numerical studies [@sindzingre00; @misguich07; @bernu15; @schnack18; @lauchli19]. Introducing $J_2 > 0$ does not change $\chi_0(T)$ qualitatively.
![ Results within EM on kagome lattice for different $J_2 = 0, 0.2, 0.4$; a) for $s(T)$, and b) $\chi_0(T)$.[]{data-label="figsl3"}](figsl3n.eps){width="0.9\columnwidth"}
Wilson ratio: results
=====================
To calculate $R(T)$, Eq. (\[rw\]), let us first use the available results for the full HM on the TL [@prelovsek18] and KL [@schnack18], comparing in Fig. \[figsl4\] also the result for the unfrustrated HM on a square lattice [@prelovsek18]. Here, we take into account data for $T> T_{fs}$, acknowledging that the $T_{fs}$ are quite different (taking $s(T_{fs}) \sim 0.1$ as the criterion) for these systems, being representative also of the degree of frustration. Fig. \[figsl4\] already confirms different scenarios for $R(T)$. In the HM on a simple square lattice, starting from the high-$T$ limit, $R(T)$ reaches a minimum at $T^* \sim 0.7$ and then increases, which is consistent with $R(T\to 0) \to \infty$ for a 2D system with $T=0$ magnetic LRO. The same behavior appears for the TL at $J_2=0$, with a shallow minimum shifted to $T^* \sim 0.3$. In contrast, results for the KL as well as for the TL with $J_2=0.1$ do not reveal such an increase, at least not for $T>T_{fs}$, and they are more consistent with the interpretation that $R(T\to 0) \to 0$.
![ Wilson ratio $R(T)$, evaluated from $s(T)$ and $\chi_0(T)$ for full HM on square lattice, triangular lattice with $J_2=0, 0.1$ [@prelovsek18], and kagome lattice [@schnack18]. Results are presented for $T>T_{fs}$.[]{data-label="figsl4"}](figsl4.eps){width="0.7\columnwidth"}
Finally, results for $R(T)$ within the EM are shown in Figs. \[figsl5\]a,b as they follow from Figs. \[figsl2\] and \[figsl3\] for different $J_2 \geq 0$. We recognize that the EM qualitatively reproduces the numerical data of the full HM in Fig. \[figsl4\]. Although for $J_2=0$ the TL results in Fig. \[figsl5\]a fail to clearly reveal the minimum down to $T_{fs}\sim 0.1$, there is still a marked difference from the SL regime $J_2=0.1, 0.15$, where the EM confirms $R_0 \ll 1$. Results within the EM for the KL, as shown in Fig. \[figsl5\]b, are an even better demonstration of a vanishing $R_0$. Here, for $J_2=0$ the EM yields a quite similar $R(T)$, decreasing and tending towards $R_0 \sim 0$. On the other hand, the effect of finite $J_2 >0$ is well visible and leads towards magnetic LRO with $R_0 \to \infty$ for $J_2 =0.4$.
![ $R(T)$, evaluated within EM for: a) TL with $J_2=0,0.1,0.15$, and b) KL for $J_2 = 0, 0.2, 0.4$.[]{data-label="figsl5"}](figsl5n.eps){width="0.9\columnwidth"}
Conclusions
===========
The main message of the presented results for the thermodynamic quantities (apparently not stressed in previously published studies of the SL models) is that the entropy $s(T)$, the susceptibility $\chi_0(T)$, and in particular the Wilson ratio $R(T)$ behave similarly in the extended $J_1$-$J_2$ Heisenberg model on the TL and KL in their presumed SL regimes. Moreover, the results on both lattices follow a quite analogous development when varying the nnn exchange $J_2 >0$, whereby the effect on the magnetic LRO and SL phases is evidently opposite between the TL and KL [@kolley15].
While the above similarities can be extracted already from full-model results obtained via the FTLM [@schnack18; @prelovsek18], the introduction of the reduced effective model appears crucial for the understanding and for useful analytical insight. Apart from offering the numerical advantage of a reduced Hilbert space of basis states, essential for ED methods, the EM clearly puts on display two (separate) degrees of freedom: a) the effective spin $s=1/2$ degrees of freedom ${\bf s}_i$, determining $\chi_0(T)$ as well as dynamical quantities such as the dynamical spin structure factor $S({\bf q}, \omega)$ not discussed here [@prelovsek18], and b) the chirality pseudospin degrees of freedom ${\bf \tau}_i$, which do not contribute to $\chi_0(T)$ but enter the entropy $s(T)$ and the related specific heat. From the EM and its dependence on $J_2$ it is also quite evident where to expect large fluctuations of ${\bf \tau}_i$ and consequently the SL regime, which is not very apparent within the full HM. As the EM is based on a direct reduction/projection of the basis (and not on a perturbation expansion) of the original HM, it is expected that the correspondence is qualitative and not fully quantitative. In any case, the EM is also by itself a valuable and highly nontrivial model, and could serve as such to better understand the onset and properties of SL.
The essential common feature of the SL regimes in the HM on both lattices is a pronounced remanent $s(T)>0$ at $T \ll J_1$, which within the EM has its origin in dominant low-energy chiral fluctuations, well below the effective spin triplet gap $\Delta_t$, which is revealed by the drop of $\chi_0(T)$. As a consequence we observe the vanishing of the Wilson ratio $R_0 = R(T\to 0) \to 0$, which seems to be a quite generic feature of 2D SL models [@prelovsek19]. Clearly, due to finite-size restrictions we can hardly distinguish a spin-gapped system from scenarios with a more delicate gap structure, which could also lead to a renormalized $R_0 \ll 1$. Moreover, it is even harder to decide beyond finite-size effects whether the singlet excitations are gapless or have a finite singlet gap $\Delta_s >0$, which should in any case be very small, $\Delta_s \ll J$, and, given the result $R(T\to 0) \to 0$, evidently smaller than the triplet one, i.e. $\Delta_s<\Delta_t$. From our finite-size results it is also hard to exclude the scenario of a valence-bond-ordered ground state (with broken translational symmetry), although we do not see an indication for it, and a vanishing $R_0$ is not easily compatible with it either.
Quantities discussed above are measurable in real materials and have indeed been discussed for some of them. There are evident experimental difficulties, i.e., $\chi_0(T)$ can have significant impurity contributions while $s(T)$ may be masked by the phonon contribution at $T > 0$. The essential hallmark for material candidates for the presented SL scenario should be a substantial entropy $s(T)$ persisting well below $T \ll J_1$. There are indeed several studies of $s(T)$ reported for different SL candidates (with some of them revealing transitions to magnetic LRO at very low $T$), e.g., for the KL systems volborthite [@hiroi01] and YCu$_3$(OH)$_6$Cl$_3$ [@zorko19; @arh19], and the recent TL systems 1T-TaS$_2$ [@kratochvilova17] and Co-based SL materials [@zhong19]. While existing experimental results on SL materials do not seem to indicate a vanishing (or very small) $R_0$, it might also happen that the (above) considered SL models do not fully capture the low-$T$ physics. In particular, there could be an important role played by additional terms, e.g., the Dzyaloshinskii-Moriya interaction [@cepas08; @rousochatzakis09; @arh19] and/or 3D coupling, which can reduce $s(T)$ or even induce magnetic LRO at $T \to 0$.
This work is supported by the program P1-0044 of the Slovenian Research Agency. Authors thank J. Schnack for providing their data for kagome lattice and J. Schnack, F. Becca, A. Zorko, K. Morita and T. Tohyama for fruitful discussions.
---
author:
- 'G. Allan$^{*}$, C. Delerue, C. Krzeminski'
- 'M. Lannoo'
title: NANOELECTRONICS
---
Introduction
============
In this chapter we intend to discuss the major trends in the evolution of microelectronics and its eventual transition to nanoelectronics. As is well known, there is a continuous exponential tendency of microelectronics towards miniaturization, summarized in G. Moore’s empirical law. There is consensus that the corresponding decrease in size must end in 10 to 15 years due to physical as well as economic limits. It is thus necessary to prepare new solutions if one wants to pursue this trend further. One approach is to start from the ultimate limit, i.e. the atomic level, and design new materials and components which will replace the present-day MOS (metal-oxide-semiconductor) based technology. This is exactly the essence of nanotechnology, i.e. the ability to work at the molecular level, atom by atom or molecule by molecule, to create larger structures with fundamentally new molecular organization. This should lead to novel materials with improved physical, chemical and biological properties. These properties can be exploited in new devices. Such a goal would have been thought out of reach 15 years ago, but the advent of new tools and new fabrication methods has boosted the field. We want to give here an overview of two different subfields of nanoelectronics. The first part is centered on inorganic materials and describes two aspects: i) the physical and economic limits of the tendency to miniaturization; ii) some attempts which have already been made to realize devices of nanometric size. The second part deals with molecular electronics, where the basic building blocks are now molecules, which might offer new and quite interesting possibilities for the future of nanoelectronics.
Devices built from inorganic materials
======================================
These are mainly silicon-based microelectronic devices which have invaded our lives. Integrated circuits are now found everywhere, not only in personal computers but also in much of the equipment we use every minute, such as cars, telephones, etc. We always need more memory as well as faster and cheaper processors. The race for miniaturization began just after Kahng and Atalla [@ref1] demonstrated in 1960 the first metal-oxide semiconductor field effect transistor (MOSFET) (Fig. \[fig:fig1\]). It turned out to be a success because a large number of transistors and their interconnections could be easily built on the surface of a single silicon chip. Ten years later, the first 1-kilobit memory chip was on the market and this trend has been followed until now (a 64-megabit one contains more than one hundred million electronic components). In 1965, Gordon Moore predicted what is known as Moore’s law: for each new generation of memory chip on the market, the number of components on a chip would quadruple every three years. Miniaturization not only decreases the average cost per function (historically, $\sim~$ 25%/year) but also improves the cost-performance ratio. At the same time, the market growth was close to 15%/year. Most of the improvement trends are exponential and are summarized by the scaling theory [@ref2]. It shows that a MOSFET operates at a higher speed without any degradation of reliability if the device size is scaled by a factor $1/k$ and at the same time the operating voltage is scaled by the same factor. The speed of the circuits has increased up to one gigahertz in today’s personal computers. Continued improvements in lithography and processing have made it possible for the industry to decrease the minimum feature sizes used to fabricate integrated circuits. For four decades, semiconductor technology has distinguished itself by the rapid pace of improvement of its products, and many predicted technological limitations have been overcome. At the same time it is difficult for any single company to support the progressively increasing R&D investments necessary to evolve the technology. Many forms of cooperation have been established, such as the «International Technology Roadmap for Semiconductors» [@ref3], which is the result of a large worldwide consensus among leading semiconductor manufacturers. It looks 10-15 years into the future, showing that most of the known technological capabilities will be approaching or will have reached their limits. Continued gains in making VLSI (Very Large Scale Integration) circuits seem to be limited first by technological considerations [@ref4]. The first one, known as the «power crisis», is a heat dissipation problem. While the size of the components is reduced by a factor $k$, the scaling theory shows that the power density increases like $k^{0.7}$ [@ref2]. Today’s single-chip processors in production require 100 W and this value will rise to 150 W during the next decade. Low-power design of VLSIs is also necessary because they are more and more used in mobile electronic systems which need long-lasting batteries. The reduction of the supply voltage ($\sim$ 0.37 V in 2014) will be accompanied by an increase of the current needed to operate VLSIs to huge values (500 A). This contributes to the second limitation: the «interconnect crisis». A large operating current gives rise to voltage drop problems due to the resistance of the interconnections and to reliability degradation due to electro-migration of defects in the conducting wires. 
On the other hand, to be attractive, a scaling of the components must be accompanied by a scaling of the interconnect line thickness, width and separation. Signal integrity is then also becoming a major design issue. High crosstalk noise is due to larger capacitive couplings between interconnects. A smaller geometry also increases the RC (resistance-capacitance) delay (it increases as $k^{1.7}$ [@ref2]). If this delay increases, the signal cannot propagate anywhere within the chip within a clock cycle. According to T. Sakurai [@ref2], a «complexity crisis» will appear. Design complexity is increasing superexponentially. The first obvious reason is the increased density and number of transistors. The complexity is also growing due to designs with a diversity of design styles, integrated passive components and the need to incorporate embedded software. The integrated circuit is built on several interconnected levels which interact. Verification complexity grows with the need to test and validate the designs. Finally, the tests must be done at higher speed, with higher levels of integration and greater design heterogeneity. There are potential solutions to some of these problems during the next 15 years [@ref3]. In some other cases the solution is still unknown and will have a price that people may no longer be able to afford. Moreover, until now we have considered mostly technological limitations, but new fundamental quantum phenomena will also appear. The first one, which has already been investigated, is the tunnel effect through the gate insulator, which is made of the native silicon oxide SiO$_{2}$ [@ref4]. One important factor of the MOSFET is the gate capacitance of the parallel plate capacitor made by the gate and the conducting channel in the semiconductor (Fig. \[fig:fig1\]). This capacitor is filled by an insulator which is at present silicon dioxide. The charge of this capacitor controls the current between the drain and the source. When the gate (which is one of the capacitor plates) is scaled by $1/k$, the gate oxide thickness scales as $1/k$ to maintain the same capacitance. It will be reduced to 0.7 nm in 2014. At the same time the gate voltage should be reduced to maintain the electric field across the oxide below an undesirable value. This is equivalent to 3 layers of oxygen and 2 of silicon in the oxide. When the insulator thickness is reduced, the overlap of the electronic wavefunctions on both barrier sides increases exponentially and this gives rise to a non-zero probability for the electrons to tunnel across the barrier. Another formidable challenge is to replace the silicon oxide by a dielectric material with a higher dielectric constant. A review of current work and literature in the area of alternate gate dielectrics is given in reference [@ref5]. Doping the semiconductors with impurities, donors (n type) or acceptors (p type), is essential to obtain free carriers and thus a substantial conductivity. With current impurity concentrations one can show that below a $0.1 \times 0.05$ $\mu$m$^2$ gate, there are only about 100 free carriers. This means that a fluctuation of plus or minus one charged impurity in the channel gives a 1% error, which represents another type of limitation. Some other new quantum effects appear due to the size reduction:
- [as in a molecule or an atom, the electron energy can only take discrete value as opposed to the classical value or to the existence of bands of allowed energies for bulk materials.]{}
- [when the wave function associated with an electron takes several distinct channels, interference effects occur which give rise to conductance fluctuations with a root mean square deviation equal to $\displaystyle \frac{e^{2}}{h}$. Such effects are only observed when the phase is not destroyed by inelastic collisions, i.e. when the distance covered is lower than the inelastic mean free path. This is close to 0.1 $\mu$m at room temperature for GaAlAs-GaAs heterojunctions or in Si.]{}
- [a one-dimensional quantum wire is analogous to a wave guide connected to electron reservoirs. When a voltage is applied between these reservoirs, electrons are injected into the wave guide. The number of one-dimensional channels for an electron depends on the number of energy levels below the Fermi energy. This number N of discrete levels, due to the confinement perpendicular to the wire, is fixed by the width of the quantum wire. Under these conditions the wire conductance is quantized and equal to $\displaystyle \frac{2Ne^{2}}{h}$ (a numerical illustration is given after this list).]{}
- [discrete energy levels in different wells can interact through a tunnel effect. When a polarization is applied between the outer reservoirs, the conductance is small except when the quantum wells energy levels are aligned. Then the current is maximum and decreases for a further increase of the applied voltage. This gives rise to a negative differential resistance (see Fig. \[fig:fig2\])]{}
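As announced above, a minimal numerical illustration of the quantized wire conductance (using SciPy's physical constants; the function name is ours):

```python
from scipy import constants

def wire_conductance(n_channels):
    # Quantized conductance of an ideal N-channel quantum wire: G = 2 N e^2 / h.
    return 2 * n_channels * constants.e**2 / constants.h

print(wire_conductance(1))   # one conductance quantum, about 7.75e-5 S
```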
These effects, which limit the feasibility of reducing the device size below a certain value, can also be used to invent new architectures. Single-electron devices are certainly the most surprising and promising example. The simplest one is the tunneling junction shown in Fig. \[fig:fig3\] [@ref6]. It is a metal-insulator-metal junction between two electron reservoirs. When the barrier width or height is large, the probability for an electron to cross the barrier when a polarization is applied is small and the tunneling is quantized: the electrons cross the barrier one by one. The system is then equivalent to a capacitor $C$ with an electron leak due to tunneling across the insulating layer (Fig. \[fig:fig3\].b). When a voltage $V$ is applied, the tunneling of an electron is possible if the energy difference
$$\Delta E =\frac{Q^{2}}{2C}-\frac{(Q-e)^{2}}{2C}
\label{eq:eqone}$$
between the capacitor charged to $Q=CV$ and the capacitor charged to $Q-e$ becomes positive. When it is negative (i.e. $\displaystyle |V|<\frac{e}{2C}$), no electron can cross the barrier and we have a “Coulomb blockade”. To maintain a constant current $I$, one must apply a sawtooth potential with a frequency $I/e$. It is difficult to realize such a system and the following electron box is much easier to make. Let us take a metallic quantum box separated from two electron reservoirs, on one side by an ideal capacitor $C_{s}$ and on the other one by a tunnel junction with a capacitance $C$ (Fig. \[fig:fig4\]). Electrons can tunnel into the quantum box one by one until its charge reaches $-Ne$, with $N$ depending on the applied voltage. The simplest way to calculate $N(V)$ is to define the ionization level $\epsilon_{i}(N,N+1)$. For the box, $\epsilon_{i}(N,N+1)$ is equal to the total energy difference $E_{TOT}(N+1)-E_{TOT}(N)$, which is roughly equal to $\displaystyle \frac{dE_{TOT}}{dN}$. Applying Koopmans' theorem, this quantity is equal to the lowest one-electron energy level which can accommodate an extra electron when the box is already charged with $N$ electrons. A classical electrostatic calculation gives the potential of the charged box, which one must add to the HOMO energy level of the isolated box, $\epsilon_{i,metal}$, to get the ionization potential:
$$\epsilon_{i}(N,N+1)=\epsilon_{i,metal}+\frac{e^{2}}{2(C+C_{s})}(N+\frac{1}{2}-C_{s}\frac{V}{e})
\label{eq:eqtwo}$$
As shown in Fig. \[fig:fig4\].b, a stable $-Ne$ box charge occurs when the metal Fermi level is located between $\epsilon_{i}(N-1,N)$ and $\epsilon_{i}(N,N+1)$. A step in $N(V)$ occurs each time the metal Fermi level is aligned with an ionization potential, and we can easily control the box charge. The next step is to realize a one-electron transistor [@ref6], shown in Fig. \[fig:fig5\], with two tunneling junctions and an ideal capacitor. The gate potential controls the charge of the dot and the current. More complicated devices allow one to control the electrons one by one, like the electron pump [@ref7] and the single electron turnstile [@ref8]. Experimental results and a good introduction can be found in Refs. [@ref9; @ref10; @ref11]. To observe a Coulomb blockade, the energy difference between two ionization levels, $\displaystyle \frac{e^{2}}{2(C+C_{s})}$ as calculated from Eq. \[eq:eqtwo\], must be larger than $kT$. The first experiments were done at low temperature, but one must reduce the size to work at room temperature. Single-electron electronics was proposed by Tucker in 1992 [@ref12]. This would allow a further reduction of size and electrical power, both conditions being necessary to increase the speed of a device. Some components have already been realized [@ref13; @ref14; @ref15; @ref16; @ref17; @ref18; @ref19], but difficulties remain, notably due to non-reproducibility.
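To make the orders of magnitude explicit, the short Python sketch below (our own illustration, not taken from the references) evaluates the charging energy $e^{2}/2(C+C_{s})$ and the corresponding maximum operating temperature for a few illustrative values of the total capacitance:

```python
# Illustrative estimate of Coulomb-blockade energy scales (capacitance values are examples only).
e = 1.602e-19      # elementary charge [C]
k_B = 1.381e-23    # Boltzmann constant [J/K]

def charging_energy(C_total):
    """Spacing e^2 / (2 C_total) between two ionization levels, in joules."""
    return e**2 / (2.0 * C_total)

def max_blockade_temperature(C_total):
    """Temperature below which e^2/(2 C_total) > k_B T, so the blockade is observable."""
    return charging_energy(C_total) / k_B

for C_total in (1e-15, 1e-16, 1e-17, 1e-18):   # total capacitance C + C_s in farads
    E_c = charging_energy(C_total)
    print(f"C = {C_total:.0e} F  ->  E_c = {1e3 * E_c / e:.2f} meV, "
          f"T_max ~ {max_blockade_temperature(C_total):.1f} K")
```

A femtofarad junction confines the blockade to sub-kelvin temperatures, whereas attofarad capacitances, i.e. islands of nanometre size, push the limit towards room temperature; this is why the size reduction mentioned above is essential.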
Molecular electronics
=====================
If the reduction in size of electronic devices continues at its present exponential pace, the size of entire devices will approach that of molecules within a few decades. However, major limitations will occur well before this happens, as discussed previously. For example, whereas in current devices electrons behave classically, at the scale of molecules they behave as quantum mechanical objects. Also, due to the increasing cost of microelectronic factories, there is an important need for much less expensive manufacturing processes. Thus, an important area of research in nanotechnology and nanoscience is molecular electronics, in which molecules with electronic functionality are designed, synthesized and then assembled into circuits through the processes of self-organization and self-alignment. This could lead to new electronics with a very high density of integration and with a lower cost than present technologies. In this section, we review recent progress in molecular electronics and we describe the main concepts at the origin of its development. We show that the latter is strongly connected to the invention of new tools to identify, to characterize, to manipulate and to design materials at the molecular scale. We also stress the importance of the basic knowledge which still has to be obtained on the way towards the integration of molecules into working architectures.
Concepts and origins of molecular electronics
---------------------------------------------
Quite surprisingly, the first ideas of using specific molecules as electronic devices and of assembling molecules into circuits were proposed more than 30 years ago. At this time, the electronic processors were only in their infancy, and adequate tools to perform experiments on single molecules were not available. However, the concept of molecular electronics is appealing, and two components were proposed, the molecular diode and the molecular wire, which can be seen as elementary bricks to build more complex devices or circuits. In spite of these early proposals, practical realizations came only recently due to limitations in chemistry, physics and technology. In the following, we briefly describe the basic principles of the two components.
Molecular diode
---------------
In 1974, Aviram and Ratner [@ref20] proposed to make an electrical rectifier based on D-$\sigma$-A molecules between two metallic electrodes, where $D$ and $A$ are, respectively, an electron donor and an electron acceptor, and $\sigma$ is a covalent bridge. Figure \[fig:fig6\] describes the physical mechanism for the rectification. The electronic states are supposed to be totally localized either on the D side or on the A side. The HOMO(D) and LUMO(D) are high in energy, compared to, respectively, the HOMO(A) and LUMO(A). Therefore, a current can be established at relatively small positive bias, such that the Fermi level at the A side is higher than LUMO(A), and the Fermi level at the D side is lower than HOMO(D), provided that the electrons can tunnel inelastically through the $\sigma$ bridge. Thus an asymmetric current-voltage $I(V)$ curve is expected, as in a conventional electronic diode. In the prototype proposed by Aviram and Ratner, the donor group is made by a tetrathiafulvalene molecule (TTF), the acceptor group by a tetracyanoquinodimethane molecule (TCNQ), and the $\sigma$ bridge by three methylene groups. These molecules are seen as the analogs of n and p-type semiconductors separated by a space-charge region. Twenty years were necessary before the first experimental demonstration of rectifying effects in diodes based on molecular layers [@ref21], and studies in this direction have recently intensified [@ref22; @ref23; @ref24]. Nevertheless, the origin of the rectification is still a matter of debate [@ref25]. In addition, the Aviram-Ratner principle at the level of a single molecule has not been demonstrated yet.
Molecular wires
---------------
The discovery of the first conductive polymers based on acetylene [@ref26; @ref27] suggested that molecules could have interesting properties to make electrical wires at the molecular scale. These wires are necessary to connect molecular devices into circuits. In 1982, Carter [@ref28] suggested addressing the acceptor and donor groups of an Aviram-Ratner diode with polyacetylene chains, and even controlling several diodes at the same time. Using this technique, the expected density of components on a chip could be of the order of $10^{14}$ cm$^{-2}$, which is far beyond the possibilities of present technologies. One difficulty raised by Carter is that the transport in the molecular chains takes place in the form of solitons whose motion is slow. But this disadvantage would be largely compensated by the higher density.
Molecular circuits
------------------
Conventional microelectronic technologies presently follow a top-down approach, processing bulk semiconductor materials to make devices with smaller and smaller sizes. In molecular electronics, the basic components like molecular diodes and wires are used to build more complex devices and circuits. This is the bottom-up approach, which starts from the molecular level to build a complete chip. For example, it has recently been proposed to create AND and XOR logical functions using assemblies of molecular films, molecular diodes and nanotubes (wires) [@ref29]. These functions could be used to design adders and other operations which are currently implemented in CMOS technologies. However, the assembling of the elementary units will not necessarily lead to the expected device, as the interaction between individual molecules may perturb their own function [@ref30]. Thus, other proposals suggest considering the system as an ensemble and using chemistry directly to synthesize molecules with the required functions (e.g. adders) [@ref31].
Transport experiments on ensembles of molecules
-----------------------------------------------
A great challenge of molecular electronics is to be able to transfer information at the molecular scale in a controlled manner. In the devices described above, it consists of a charge carrier (electron or hole) which is transmitted through a single molecule. For several decades, the impossibility of working at the level of a single molecule was a major difficulty. However, the characterization of molecular ensembles has been undertaken by different means, which we describe now.
Molecules in a solvent
----------------------
The first approach consists in studying the molecules in a solvent and in probing the electronic transfer using a combination of chemical and physical methods. It is possible to characterize the electronic transport in solution through the measurement of the oxidation-reduction potentials by voltammetry and of the electronic excitations by optical absorption. If the chemical reaction takes place inside a molecule, then reaction and intra-molecular transfer become equivalent. Experiments have been made on donor-ligand-acceptor molecules, like the mixed valence compounds recently synthesized [@ref32]. These organometallic complexes contain two metallic atoms with different oxidation states. The experiments allow the measurement of the electronic transfer between the two sites. For example, Taube has synthesized a stable molecule containing two ruthenium complexes which act as donor and acceptor groups [@ref33]. When the molecule is partially oxidized, a charge transfer is observed between the two ions. Other molecules have been made with a large distance of 24 Å between the two ions, which is obtained by the intercalation of five phenyl groups [@ref34]. An electronic transfer is also observed in this system in spite of the long inter-site distance. This effect is attributed to the electronic coupling induced by the phenyl groups, which play here the role of a molecular wire between the two ruthenium ions [@ref35]. Interesting results have also been obtained on biological molecules. In the case of proteins [@ref36], an increase of 20 Å in the distance between the acceptor and donor sites leads to a decrease by a factor 10$^{12}$ of the transfer rate. Thus, in this process, proteins can be seen as a uniform barrier which limits the electronic tunneling. The importance of tunneling effects in biological molecules is presently a well studied topic.
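Assuming the usual exponential dependence of the non-resonant transfer rate on the donor-acceptor distance $d$, $k \propto \exp(-\beta d)$ (a standard description which is implicit rather than stated above), the quoted drop by a factor $10^{12}$ over an extra 20 Å corresponds to a decay constant $$\beta \simeq \frac{\ln 10^{12}}{20\,\mathrm{\AA}} \approx 1.4\,\mathrm{\AA}^{-1},$$ which is the kind of value expected for tunneling through a roughly uniform barrier.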
Molecular layers
----------------
The second approach is the study of molecules which are self-organized in two-dimensional monolayers on the surface of a conductive substrate. Self-assembled monolayers (SAMs) are obtained by the Langmuir-Blodgett (LB) technique or by chemical grafting on a surface. The SAMs have to be well organized to avoid artifacts due to disorder. Electrical measurements require a second electrode, which is made by evaporation of a metal on top of the SAM. This is usually the most difficult task because the metal must not diffuse into the SAM where it could make short circuits [@ref37]. At the same time, the contact resistance must be small, which requires a good control of the metal/SAM interface. The first results based on this technique have been published by Mann and Kuhn on LB films of alkane chains [@ref38]. Studies of the tunneling through the molecular films show that the molecules are good insulators. Similar results have been obtained on SAMs of n-alkyltrichlorosilane chemically grafted on a Si substrate [@ref39]. These chains realize a barrier of the order of 4.5 eV for the tunneling of electrons or holes [@ref40] and lead to a better insulating layer than SiO$_{2}$ at the nanometer scale. Other works on SAMs concern molecular diodes in the sense of Aviram-Ratner: they are described in previous sections and in ref. [@ref21; @ref22; @ref23; @ref24; @ref25]. Also, nice results have been obtained by Fischer [*et al*]{} [@ref41] on organic heterostructures made with palladium phthalocyanines and compounds based on perylene. Using gold electrodes at 4.2 K, they obtain $I(V)$ curves with clear steps (Figure \[fig:fig7\]) which are attributed to the resonant tunneling through the molecular levels. The $I(V)$ characteristic is symmetric when the heterostructure is symmetric, but it becomes rectifying when perylene layers are inserted on one side of the structure.
STM measurements on single molecules
-------------------------------------
The invention of the Scanning Tunneling Microscope (STM) [@ref42; @ref43] was a major step for the development of molecular electronics. The STM is based on a sharp tip (curvature radius in the nanometer range) which is placed at a small distance from a conductive surface, such that electrons can tunnel from one electrode to the other. Using piezoelectric tubes, the position of the STM tip is fixed with an accuracy better than 1 Å, horizontally and vertically. The STM is an efficient tool to study single molecules adsorbed on metallic or semiconductor surfaces [@ref44; @ref45]. It can be used to image, to displace and to characterize single molecules.
### STM imaging of single molecules
A natural application of the STM is the imaging of molecules adsorbed on a surface. In this mode, a constant bias is applied between the tip and the substrate. When the tip is displaced laterally, one measures the height of the tip, which is adjusted in order to keep a constant tunneling current. The map of the height versus the lateral position is roughly representative of the topography of the surface, and thus gives information on the adsorbates. The adsorption of small molecules like CO and benzene on Rh(111) has been studied in detail [@ref46; @ref47], showing in some cases that self-organization can take place at the surface. Larger molecules like naphthalene or phthalocyanines have also been imaged [@ref48; @ref49]. The STM allows one to study the adsorption sites. Nevertheless, the interpretation of the image is not straightforward, as it does not give information on the atomic positions but on the electronic structure. Most of the work on STM imaging of adsorbed molecules concerns metallic surfaces, and only a few semiconductor surfaces. However, as microelectronics technology is based on silicon substrates, there is an increasing need to study organic molecules linked to a silicon surface [@ref50; @ref51]. For example, Fig. \[fig:fig8\] shows high resolution STM images of a Si(100) surface after deposition of thienylenevinylene tetramers, which belong to a new class of $\pi$-conjugated oligomers of particular interest as molecular wires [@ref52]. The silicon dimers, typical of a Si(100) (2$\times$1) surface, are visible, forming rows of grey bean-shaped features. On top of these rows, bright features can be seen, with different shapes corresponding to different adsorption configurations [@ref51]. These results show that the molecules are quite conductive. Detailed studies allow a better understanding of the nature of the bonds between the molecules and the surface. Spectroscopic measurements are also possible, as detailed in the next sections.
### STM as a tool to manipulate and to fabricate molecular objects
The STM tip is also a very interesting tool to manipulate the matter at the atomic scale. The forces (Van der Waals, electrostatic, chemical) between the atoms at the tip apex and the imaged object are used to displace atoms or molecules on a surface [@ref53]. The first controlled atomic manipulation was presented by Eigler [*et al*]{} [@ref54]. Xenon atoms were displaced on a nickel surface, by moving the tip which was kept close to the atoms. Artificial structures containing a small number of atoms were made using this technique. Xenon atoms were also transferred vertically from the surface to the tip in a reversible process [@ref55; @ref56]. The manipulation of molecules only came recently. Stipe [*et al*]{} have shown that it is possible to induce the rotation of acetylene molecules by excitation of C-H vibration modes with the STM [@ref57]. This work shows the importance of inelastic processes in the tunneling. This effect can be exploited to perform vibrational spectroscopy on individual molecules [@ref58] and is therefore a powerful technique to identify adsorbates, their chemical bonding and their local chemical environment. Due to the high spatial resolution of the STM, correlations between the electronic structure and vibrational excitation of adsorbed molecules can be determined. Moreover by using inelastic tunneling, molecules can be dissociated, desorbed or even synthesized, which is of essential importance in studies of molecular manipulation. Hla [*et al*]{} have shown that the assembling of two molecules is possible using a STM [@ref59]. Starting from two C$_{6}$H$_{5}$I molecules, they first removed the iodine atoms. Then they approached the two phenyl groups and they observed the formation of a biphenyl molecule. This reaction, entirely realized using the STM, was made at 20 K whereas the conventional synthesis is impossible below 180 K. Recently, larger molecules have been manipulated, showing that their adsorption on a surface may induce important atomic reconstructions [@ref60]. The manipulation of specific parts of molecules may also lead to important changes in their $I(V)$ curve [@ref61; @ref62].
### STM spectroscopy of molecules
Under some conditions, the STM allows one to study the electronic structure of molecules in connection with their interactions with the surface. In the spectroscopic mode of the STM, the tip is placed above an adsorbed molecule, and the current is measured as a function of the applied voltage. Thus, using this approach, a direct measurement of the transport properties of a single molecule is possible. Several studies have been applied to C60 molecules because they are quite stable and easy to manipulate due to their size. Joachim [*et al*]{} [@ref63] have measured the current through molecules on a gold substrate, at room temperature and with a small applied bias (50 mV). As the tip-molecule distance is reduced, the current increases at a very high rate, which is interpreted as a distortion of the electronic levels due to the pressure induced by the tip. Other measurements at 4.2 K have been recently presented by Porath [*et al*]{} [@ref64; @ref65]. C60 molecules are adsorbed on a gold substrate covered by a thin amorphous carbon film. The $I(V)$ curves present clear steps which are interpreted as Coulomb blockade effects and resonant tunneling through the discrete states of the molecule. States close to the HOMO-LUMO gap are completely resolved in spectroscopy. The degeneracy of the HOMO and LUMO levels is broken, probably due to the tip-induced electric field or due to a Jahn-Teller effect. Another way to probe the conductivity of a single molecule is to use slightly defective SAMs. A particular system has been mainly studied, with molecules terminated by a thiol end group. Self-assembly is routinely obtained on gold surfaces (and others) using chemical grafting based on sulfur-gold bonds [@ref66]. In a well-known experiment [@ref67], dodecanethiol SAMs have been made with a small number of defects consisting of conjugated wires which are slightly longer than the dodecanethiol molecules. The dodecanethiol SAM forms a quite insulating layer [@ref38]. STM images of the surface show a smooth surface with only a few bright spots at the position of the molecular wires. This result demonstrates that the molecular wires have a higher conductivity than the surrounding molecules. Other works have been performed in this direction [@ref68]. The spectroscopy of single molecules may also be evidenced by imaging the surface at different tip-sample biases. In the example presented in a preceding section where thienylenevinylene tetramers are adsorbed on Si(100), the images of the molecules are highly voltage dependent (Fig. \[fig:fig8\]). At sufficiently high negative sample voltages, the molecules are visible, whereas at lower voltages, most of the molecules disappear. The interpretation is the following: at high voltages, electrons can tunnel through the HOMO of the molecule, whereas at lower voltages, only states close to the Fermi level of the semiconductor and associated with the Si dimers can contribute to the tunneling current. Thus, interesting developments with the STM are taking place in various areas. It remains that the experiments which have been done up to now are at the state-of-the-art level, and their transfer to create new technologies is not straightforward. If the STM is a very good tool to study the transport through a single molecule, it is clear that other approaches have to be developed to make molecular electronic devices.
Molecules connected to nanoelectrodes
-------------------------------------
Here we describe other approaches to determine the $I(V)$ characteristics of a single molecule using nanoelectrodes. Compared to STM, these methods lead to stable, permanent and symmetric junctions. Applications of these techniques are already foreseen in the field of chemical and biological sensors.
### Co-planar electrodes
This approach is based on lithographic methods to make metallic electrodes separated by a small gap, of the order of 5-10 nm for the smallest ones. This system is used to characterize long molecules which are placed in such manner that they are connected to the two electrodes (Fig. \[fig:fig9\]). The conductivity of carbon nanotubes has been studied using this approach [@ref69; @ref70]. It has been verified that the nanotubes are metallic or semiconducting depending on their geometry. Carbon nanotubes are presently considered as the best candidates to make molecular wires, as the current can flow efficiently in long nanotubes ($\sim$ 1-0.1 mm). Co-planar electrodes are also used to measure the conductivity of DNA [@ref71], and even to explore superconductivity effects in these systems [@ref72; @ref73].
### Nanopores
In this approach, small holes are made in a thin silicon nitride film deposited on a metallic electrode. The pores are filled with the molecules. A second metallic electrode is evaporated on top of the system to realize an electrical contact. Compared to techniques involving molecular films, a smaller number of molecules is probed in this approach ($\sim$ 1000). The number of junctions which are short circuited is also reduced [@ref74]. Measurements have been realized on 1,4-phenylene diisocyanide [@ref75] and \[[**1**]{}: 2’-amino-4,4’-di(ethynylphenyl)-5’-nitro-1-benzenedithiol\] [@ref76; @ref77]. In the case of the first molecule, a symmetric $I(V)$ curve is obtained. In the case of the second one, it exhibits a negative differential resistance with a high peak-to-valley ratio at low temperature (Fig. \[fig:fig10\]). The origin of this effect could be related to the electronic structure of the molecule [@ref78]. In this case, the molecule could be used to realize oscillators.
### Breaking junctions
In this third approach, a metallic wire (usually gold) is made by lithography on a flexible substrate. Then the wire is broken using a small mechanical perturbation. At the place where the wire is broken, there is a small gap of nanometer size where molecules can be deposited from a solution. Using piezoelectric tubes, the distance between the two electrodes is adjusted in such a way that some molecules make a bridge between the electrodes. One difficulty with this technique is that the number of molecules which are connected is unknown. Reed [*et al*]{} have studied the conductance of benzene-1,4-dithiol grafted on gold electrodes [@ref79]. Steps in the $I(V)$ curve are obtained, possibly due to Coulomb blockade effects. Other molecules have been studied [@ref80; @ref81], confirming for example that dodecanethiol is a good insulator at small bias. Park [*et al*]{} have inserted a C60 molecule in a breaking junction [@ref82]. Using a gate voltage applied to the substrate, this system works as a transistor at the molecular level. In addition, a new effect is observed as the electronic transport is coupled to the motion of the C60 molecule. The results are explained by the oscillation of the molecule in the electrostatic field between the two electrodes. Coupling to internal modes of vibration of the molecule is also suggested.
Molecular circuits
------------------
The fabrication of circuits based on molecules is obviously one main objective of molecular electronics. Even if much work is needed to develop the corresponding technologies, recent progress has been made in this direction. Collier et al [@ref83] have presented a molecular circuit based on the Teramac architecture [@ref84]. This computer architecture is defect-tolerant, meaning that it continues to work with up to three percent of defective components. This kind of approach is ideal for molecular electronics, since molecular circuits will necessarily contain defective components and connections. Logic gates are fabricated from an array of configurable molecular switches, each consisting of a monolayer of electrochemically active rotaxane molecules sandwiched between metal electrodes. AND and OR functions have been realized. The circuit is quite equivalent to a Programmable Read Only Memory (PROM). Its fabrication would be much cheaper than in CMOS technologies. This work illustrates an important orientation of molecular electronics. It could be used to make logic devices at a relatively low cost. It is foreseen that expensive fabrication procedures in the electronic industry could be progressively replaced by techniques derived from molecular electronics to reduce fabrication costs. Thus, the main aim is no longer to realize nanometer scale devices to follow the first Moore’s law, but to reduce costs which are presently continuously increasing (second Moore’s law).
[99]{} D. Kahng and M. Atalla, US patents 3206670 and 3102230 (1960). T. Sakurai, Jap. Soc. Appl. Phys. 3, 15 (2001). The International Technology Roadmap for Semiuconductors, Semiconductor Industry Association (Sematech Inc., Austin, Texas 2000); see also http://www.sematech.org for the most recent updates. M. Schulz, Nature 399, 729 (1999). G. D. Wilk, R. M. Wallace, and J. M. Anthony, J. Appl. Phys. 89, 5243 (2001). H. Grabert and M.H. Devoret “Single charge tunneling : Coulomb blockade phenomena in nanostructures” (Plenum Press, New York, 1992) L. J. Geerligs, V. F. Anderegg, P. A. M. Holweg, J. E. Mooij, H. Pothier, D. Esteve, C. Urbina, and M. H. Devoret, Phys. Rev. Lett. 64, 2691 (1990). H. Pothier, P. Lafarge, P. F. Orfila, C. Urbina, D. Esteve, and M. H. Devoret, Physica B 169, 573 (1991); H. Pothier, P. Lafarge, C. Urbina, D. Esteve, and M. H. Devoret, Europhysics Lett. 17, 249 (1992). Special Issue on Single Charge Tunneling, Z. Phys. B 85,317 (1991). D. V. Averin and K. K. Likharev in Mesoscopic Phenomena in Solids, B. L. Al’tshuler, P. A. Lee and R. A. Wenn eds (Elsevier, Amsterdam, 1991). D. K. Ferry and S. M. Goodnick in Transport in nanostructures (Cambridge University Press, 1997). J. R. Tucker, J. Appl. Phys. 72, 4399 (1992). K. Yano, I. Tomoyuki, T. Hashimoto, T. Kobayashi, F. Murai, and S. Moichi, IEDM Tech. Dig. 1993, p. 541.; K. Yano, T. Ishii, T. Hashimoto, T. Kobayashi, F. Murai, and K. Seki, IEEE Trans. Electron Devices 41, 1628 (1994).41 L. Guo, E. Leobandung, and S. Chou, Science 275, 649 (1997); Appl. Phys. Lett. 70, 850 (1997). A. Nakajima, T. Futatsugi, K. Kosemura, T. Fukano, and N. Yokoyama, Appl. Phys. Lett. 70, 1742 (1997); Appl. Phys. Lett. 71, 353 (1997). Y. Takahashi, M. Nagase, H. Namatsu, K. Kurihara, K. Iwadate, Y. Nakajima, S. Horiguchi, K. Murase, and M. Tabe, IEDM Tech. Dig. 1994, p. 938; Electron. Lett. 31, 136 (1995); Y. Yakahashi, H. Namatsu, K. Iwadate, M. Nagase, and K. Murase, IEEE Trans. Electron. Devices 43, 1213 (1996). H. Ishikuro, T. Fujii, T. Saraya, G. Hashiguchi, T. Hiramoto, and T. Ikoma, Appl. Phys. Lett. 68, 3585 (1996). H. Ishikuro andT. Hiramoto, Appl. Phys. Lett. 71, 3691 (1997); Appl. Phys. Lett. 74, 1126 (1999). L. Zhuang, L. Guo, and S. Y. Chou, Appl. Phys. Lett. 72, 1205 (1998); E. leobandung, L. Guo, and S.Y. Chou, Appl. Phys. Lett. 67, 2338 (1995). A. Aviram and M. A. Ratner, Chem. Phys. Letters 29, 277 (1974). G.J. Ashwell, J.R. Sambles, A.S. Martin, W.G. Parker, and M. Szablewski, J. Chem. Soc., Chem. Commun. 1374 (1990); A.S. Martin, J.R. Sambles, G.J. Ashwell, Phys. Rev. Lett. 70, 218 (1993). R.M. Metzger, B. Chen, U. Hüpfner, M.V. Lakshmikantham, D. Vuillaume, T. Kawai, X. Wu, H. Tachibana, T.V. Hughes, H. Sakurai, J.W. Baldwin, C. Hosh, M.P. Cava, L. Brehmer, and C.J. Ashwell, J. Am. Chem. Soc. 119, 10455 (1997). D. Vuillaume, B. Chen and R.M. Metzger, Langmuir 15, 4011 (1999). B. Chen and R.M. Metzger, J. Phys. Chem. B 103, 4447 (1999). R.M. Metzger, J. Mater. Chem. 9, 2027 (1999); ibid 10, 55 (2000). C. Krzeminski, C. Delerue, G. Allan, D. Vuillaume and R. Metzger, Phys. Rev. B, 64 (8), 085405. A. G. MacDiarmid and A. J. Heeger, Synthetic Metals 1, 101. (1980). W. P. Su, J. R. Schrieffer, and A. J. Heeger, J. Chem. Phys, 73, 946 (1980). F. L. Carter, Molecular Electronic Devices, Dekker, 51 (1982). J. C. Ellenbogen and J. C. Love, Proceeedings of the IEEE 70, 386 (2000). P. Ball, Nature 406, 118 (2000). C. Joachim, J. K. Gimzewski and A.Aviram, Nature 408, 541 (2000). H. Taube, Science 226, 1036 (1984). C. 
Creutz and H. Taube, J. Am. Chem. Soc. 91, 3988 (1969). C. Patoux, J-P. Launay, M. Beley, S. Chodorowski-Kimmes, J-P. Collin, S. James and J-P Sauvage, J. Am. Chem. Soc. 120, 3717 (1998). S. Larsson, Chemical Physics Letters 90, 136 (1982). C. C. Moser, J. M. Keske, K. Warncke, R. S.Farid and P. L. Dutton, Nature, 355, 796 (1992). R. M. Metzger and C. A. Panetta, New. J. Chem., 15, 209 (1991). B. Mann and H. Kuhn, J. Appl. Phys. 42, 4398 (1971). C. Boulas, J. V. Davidovits, F. Rondelez, and D. Vuillaume, Phys. Rev. Lett. 76, 4797 (1996). D. Vuillaume, C. Boulas, J. Collet, G. Allan and C. Delerue, Phys. Rev. B 58, 16491 (1998). C. M. Fischer, M. Burghard, S. Roth and K. V. Klitzing, Europhys. Lett. 28, 129 (1994). G. Binning, H. Rohrer, Ch. Gerber and E. Weibel, Phys. Rev. Lett. 49, 57 (1982). G. Binning and H. Rohrer, Rev. Mod. Phys. 59, 57 (1987). J. Gimzewski, Physics World, 29 (1998). J. Gimzewski and C. Joachim, Science 283, 1683 (1999). H.Ohtani, R. J. Wilson, S. Chiang and C. M. Mate, Phys. Rev. Lett 60, 2398 (1988). P. S. Weiss and D. M. Eigler, Phys. Rev. Lett 71, 3139 (1993). V. M. Hallmark, S. Chiang, J. K. Brown and Ch. Woll, Phys. Rev. Lett. 66, 48 (1991). J. Gimzewski and R. Moller, Phys. Rev. B, 36, 1284 (1987). J.S. Hovis, H. Liu, R.J. Hamers, Surf. Sci. 402, 1 (1998). B. Grandidier, J.P. Nys, D. Stiévenard, C. Krzeminski, C. Delerue, P. Frère, P. Blanchard, J. Roncali, Surf. Sci. 473, 1 (2001). C. Krzeminski, C. Delerue, G. Allan, V. Haghet, D. Stiévenard, E. Levillain, and J. Roncali, J. Chem. Phys. 111, 6643 (1999). S. Gauthier, Applied Surface Science 164, 84 (2000). D. M. Eigler and E. K. Schweizer, Nature 344, 524 (1990). D. M. Eigler, C. P. Lutz and W. E Rudge, Nature 352, 600 (1991). M. F. Crommie, C. P. Lutz and D. M. Eigler, Science 262, 219 (1993). B. C. Stipe, M. A. Rezaei, and W. Ho, Phys. Rev. Lett 81, 1263 (1998). B. C. Stipe, M. A. Rezaei, and W. Ho, Science 280, 1732 (1998). S-W. Hla, L. Bartels, G. Meyer and K-H. Rieder, Phys. Rev. Lett 85, 277 (2000). M. Schunack, L. Petersen, A. Kuhnle, E. Laegsgaard, I. Stensgaard, I. Johannsen and F. Besenbacher, Phys. Rev. Lett 86, 456 (2001). F. Moresco, G. Meyer, K-H. Rieder, H. Tang, A. Gourdon, and C. Joachim, Appl. Phys. Lett. 78, 307 (2001). F. Moresco, G. Meyer, K-H. Rieder, H. Tang, A. Gourdon, and C. Joachim, Phys. Rev. Lett 86, 672 (2001). C. Joachim, J. K. Gimzewski, R. R. Schlittler, C. Chavy, Phys. Rev. Lett. 74, 2102 (1995). D. Porath and O. Millo, J. Apply. Phys. Lett. 81, 2241 (1997). D. Porath, Y. Levi, M. Tarabiah, and O.Millo, Phys. Rev. B 112, 558 (1990). R. G. Nuzzo, L. H. Dubois and D. L. Allara, J. Am. Chem. Soc. 56, 9829 (1997). L. A. Bumm, J. J. Arnold, M. T. Cygan, T. D. Dunbar, T. P Burgin, L. Jones, D. L. Allara, J. M. Tour, P. S. Weiss, Science 271, 1705 (1996). L. A. Bumm, J. J. Arnold, T. D. Dunbar, D. L. Allara, and P. S. Weiss, J. Phys. Chem. B 103, 8122 (1999). S. J. Tans, M. H. Devoret, H. Dal, A. Thess, R. E. Smalley, L. J. Geerligs and C. Dekker, Nature 386, 475 (1997). C. Dekker, Physics Today, 22 (May 1999). D. Porath, A. Bezryadin, S. de Vries and C. Dekker, Nature 403, 635 (2000). M. Kociak, A. Yu. Kasumov, S. Guéron, B. Reulet, I. I. Khodos, Yu. B. Gorbatov, V. T. Volkov, L. Vaccarini, and H. Bouchiat, Phys. Rev. Lett. 86, 2416 (2001). A. Yu. Kasumov, M. Kociak, S. Guéron, B. Reulet, V. T. Volkov, D.V. Klinov and H. Bouchiat, Science 291, 280 (2001). C. Zhou, M. R. Deshpande, M. A. Reed, L. Jones and J. M. Tour, Appl. Phys. Lett. 71, 611 (1997). J. Chen, L. C. Calvet, M. A. 
Reed, D. W. Carr, D. S. Grubisha, D. W. Bennett, Chem. Phys. Lett. 313, 741 (1999). J. Chen, M. A. Reed, A. M. Rawlett, J. M. Tour, Science 286,1550 (1999). J. Chen, W. Wang, M. A. Reed, A. M. Rawlett, D.W. Price, and J. M. Tour, Appl. Phys. Lett. 77, 1224 (2000). J. M. Seminario, A. G. Zacarias and J. M. Tour, J. Am. Chem. Soc. 122, 3015 (2000). M. A. Reed, C. Zhou, C. J. Muller, T. P. Burgin, and J. M. Tour, Science 278, 252 (1997). C. Kergueris, thesis from Orsay university (1998). C. Kergueris, J.-P. Bourgoin, S. Palacin, D. Esteve, C. Urbina, M. Magoga and C. Joachim, Phys. Rev. B, 59, 12505 (1999). H. Park, J. Park, A. K. L. Lim, E. H. Anderson, A. P. Alivisatos, and P. L. McEuen, Nature, 407, 57 (2000). C. P. Collier, E. W. Wong, M. Belohradsky, F. M. Raymo, J. F. Stoddart, P. J.Kuekes, R. S. Williams, J. R. Heath, Science 285, 391 (1999). J. R. Heath, P. J. Kuekes, G. S. Snider, R. S. Williams, Science 280, 1716 (1998).
[**CAPTIONS**]{}\
Figure \[fig:fig1\]: A metal oxide semiconductor field-effect transistor (MOSFET). When the voltage of the gate is positive, electrons accumulate near the semiconductor surface making the channel between source and drain conducting.\
Figure \[fig:fig2\]: Resonant tunneling effect (a) without and (b) with an applied voltage. (c) Current as a function of the voltage.\
Figure \[fig:fig3\]: a) Metal - Insulator - Metal tunneling junction; b) Its equivalent diagram and c) the potential for a constant current I (=e/T) as a function of time.\
Figure \[fig:fig4\]: a) Electron box (a) and its energy levels (b). $C_{s}$ is a tunneling junction. The electron number $N$ stored on the island varies as a staircase as a function of the applied voltage (c).\
Figure \[fig:fig5\]: One-electron transistor. The current $I$ and the charge of the island is controlled by the gate potential.\
Figure \[fig:fig6\]: The Aviram-Ratner mechanism for molecular rectification.\
Figure \[fig:fig7\]: $I(V)$ characteristics of a symmetric Au/Polymer/PcPd/Polymer/Au heterojunction measured at 4.2 K (a) and the corresponding derivative (b); PcPd is an octasubstituted metallophthalocyanine (from ref. [@ref41]).\
Figure \[fig:fig8\]: Voltage dependent STM images of the Si(100) surface after deposition of 4TVH oligomers. The sample bias was in (a) -2.1 V and in (b) -1.3 V (from ref. [@ref51]).\
Figure \[fig:fig9\]: Nanotubes on co-planar electrodes and corresponding $I(V)$ characteristics at different gate voltages (courtesy: C. Dekker).\
Figure \[fig:fig10\]: $I(V)$ characteristics of a Au-([**1**]{})-Au device at 60 K (from ref. [@ref77]).\
[**FIGURES**]{}\
![\[fig:fig1\]](image1.eps)
![\[fig:fig2\]](image2.eps)
![\[fig:fig3\]](image3.eps)
![\[fig:fig4\]](image4.eps)
![\[fig:fig5\]](image5.eps)
![\[fig:fig6\]](image6.eps)
![\[fig:fig7\]](image7.eps)
![\[fig:fig8\]](image8.eps)
![\[fig:fig9\]](image9a.eps "fig:") ![\[fig:fig9\]](image9b.eps "fig:")
![\[fig:fig10\]](image10.eps)
---
author:
- 'M. Niklaus'
- 'W. Schmidt'
- 'J. C. Niemeyer'
bibliography:
- 'literature.bib'
title: 'Two-dimensional AMR simulations of colliding flows'
---
[Colliding flows are a commonly used scenario for the formation of molecular clouds in numerical simulations. Due to the thermal instability of the warm neutral medium, turbulence is produced by cooling.]{} [We carry out a two-dimensional numerical study of colliding flows in order to test whether statistical properties inferred from adaptive mesh refinement (AMR) simulations are robust with respect to the applied refinement criteria.]{} [We compare probability density functions of various quantities as well as the clump statistics and fractal dimension of the density fields in AMR simulations to a static-grid simulation. The static grid with $2048^2$ cells matches the resolution of the most refined subgrids in the AMR simulations.]{} [The density statistics is reproduced fairly well by AMR. Refinement criteria based on the cooling time or the turbulence intensity appear to be superior to the standard technique of refinement by overdensity. Nevertheless, substantial differences in the flow structure become apparent. ]{} [In general, it is difficult to separate numerical effects from genuine physical processes in AMR simulations.]{}
Introduction
============
Computational fluid dynamics in astrophysics relies on numerical methods that are capable of covering a huge range of scales. Apart from smoothed particle hydrodynamics [@Mona92], adaptive mesh refinement (AMR) has been applied to a variety of problems. This method was developed by @BerOli84 and @BerCol89. Among the widely used, publicly available AMR codes for astrophysical fluid dynamics are FLASH [@FryOls00], Enzo [@Osh04] and Ramses [@Teyss02]. Although there are comparative studies of AMR vs. SPH [for example, @SheaNaga05; @AgerMoore07; @CommHenne08], the reliability of AMR in comparison to non-adaptive methods has received only little attention so far.
Especially for turbulent flows, it is a non-trivial question whether the solutions obtained from AMR simulations agree with the correct solutions of the fluid dynamical equations at a given resolution level. For this reason, we systematically compare AMR and static-grid simulations for a particular test problem in this article. We chose a scenario that has been investigated in the context of molecular cloud formation, namely, the frontal collision of opposing flows of warm atomic hydrogen at supersonic speed [@HeitSly06; @VazGom07; @HenneAud07; @HenBan08; @WalFol00]. Because of the cooling instability at densities $\sim 1\,\mathrm{cm}^{-3}$ and temperatures of a few thousand Kelvin, the gas becomes highly turbulent at the collision interface. Since the instabilities develop on length scales much smaller than the integral scale, this problem is computationally extremely demanding. The two-dimensional resolution study of @HenneAud07 showed that the properties of the turbulent multi-phase medium evolving in these simulations are highly resolution-dependent, and numerical convergence is seen only at resolutions well above $1000^{2}$. In three-dimensional simulations, such high resolutions are infeasible if static grids are used. Consequently, @HenBan08 and @BanVaz08 applied refinement by fixed density thresholds and refinement by Jeans mass, respectively, in their three-dimensional high-resolution AMR simulations.
In this article, we consider two-dimensional colliding flows without self-gravity and magnetic fields for a systematic comparison of AMR simulations to a reference simulation on a static grid. We analyze both statistical properties and the morphology of the gas fragmentation due to the cooling instability. This work is organized as follows: In Section 2 the numerical methods are described and the setup of the simulations will be presented in detail. In Section 3, we compare the results from the different simulations. Section 4 concludes this paper with a summary of the main results and general remarks on AMR.
Numerical methods and simulation setup
======================================
The simulations presented in this article are accomplished using the open source code Enzo [@BryNor97; @Osh04]. The compressible Euler equations are solved by means of the staggered grid, finite difference method Zeus [@StoNor92I; @StoNor92II; @StoMih92III]. We included the cooling function ${\fam=2 L}$ defined by @AudHen05 in these equations: $$\begin{aligned}
\frac{\partial}{\partial t}\rho + \boldsymbol\nabla\cdot(\rho\boldsymbol u) &=& 0 \label{equ:euler1}\\
\frac{\partial}{\partial t}(\rho\boldsymbol u) + \boldsymbol\nabla\cdot(\rho\boldsymbol u \otimes \boldsymbol u) + \boldsymbol\nabla P &=& 0 \label{equ:euler2}\\
\frac{\partial}{\partial t} (\rho e) + \boldsymbol\nabla\cdot[\boldsymbol u(\rho e + P)] &=& -{\fam=2 L}(\rho,T)\label{equ:euler3}.\end{aligned}$$ The primitive variables are the mass density $\rho$, the velocity $\boldsymbol u$ and the specific total energy $e$ of the fluid. The total energy per unit mass is given by $$e = \frac{1}{2}u^2+\frac{P}{(\gamma-1)\rho},$$ where $\gamma$ is the adiabatic exponent and the pressure $P$ is related to the mass density $\rho$ and the temperature $T$ via the ideal gas law: $$P = \frac{\rho k_B T}{\mu m_H}.$$ The constants $k_B$, $\mu$ and $m_H$ denote the Boltzmann constant, the mean molecular weight and the mass of the hydrogen atom, respectively. The gas is assumed to be a perfect gas with $\gamma = 5/3$ and $\mu = 1.4$.
The cooling function of @AudHen05 includes the cooling by fine-structure lines of CII and OI, as well as the cooling by H (Ly$\alpha$ line) and by electron recombination onto positively charged grains. The heating is due to the photo-electric effect on small grains and polycyclic aromatic hydrocarbons (PAH) caused by the far-ultraviolet galactic background radiation. For more information about this cooling function see @Wol95 [@WolMcK03; @Spi78; @BakTie94] and @Hab68. The pressure-equilibrium curve resulting from the cooling function is plotted as the black curve in Figure \[fig:phasediag\_static5\]. For the numerical solution of the fluid dynamical equations, we used the radiative cooling routine implemented in Enzo. For each hydrodynamical time step, the state variables are iterated over several subcycles, and the resulting total energy increment for the whole time step is added.
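The subcycling procedure can be illustrated schematically as follows; this Python sketch is our own simplified illustration rather than the actual Enzo routine, and both the placeholder cooling function `Lambda(rho, T)` (net losses per unit volume) and the 10% energy-change criterion for the subcycle step are assumptions made for the example.

```python
def cool_cell(rho, e_int, dt_hydro, Lambda, gamma=5.0/3.0,
              mu=1.4, m_H=1.67e-24, k_B=1.38e-16):
    """Integrate the cooling source term over one hydro step with subcycles (cgs units).

    rho      : mass density [g cm^-3]
    e_int    : thermal energy per unit volume, P/(gamma-1) [erg cm^-3]
    dt_hydro : hydrodynamical time step [s]
    Lambda   : net cooling rate per unit volume, Lambda(rho, T) [erg cm^-3 s^-1]
    """
    t = 0.0
    while t < dt_hydro:
        T = (gamma - 1.0) * e_int * mu * m_H / (rho * k_B)   # ideal-gas temperature
        rate = Lambda(rho, T)
        # choose the subcycle step so that the thermal energy changes by at most 10%
        dt_sub = min(0.1 * e_int / max(abs(rate), 1e-30), dt_hydro - t)
        e_int -= rate * dt_sub
        t += dt_sub
    return e_int   # the total energy increment over the hydro step is e_int(new) - e_int(old)
```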
For our numerical study, the two-dimensional setup of @AudHen05 and [@HenAud07] was adopted with small modifications. The initial gas content corresponds to the warm neutral medium (WNM) in the ISM, i.e., the temperature is $T=7100$ K, the pressure is $P=7\times10^{-13}$ erg cm$^{-3}$ and the number density of neutral hydrogen is $n=0.71$ cm$^{-3}$. From the left and the right boundaries, warm gas with identical thermodynamic properties flows into the computational domain, where the cosine-shaped inflow velocity profile is modulated with small perturbations, realised by randomly shifted phases. These phase shifts are kept constant for the different simulations, so that the initial conditions are exactly the same for all runs to ensure comparability. Following @HenBan08, the top and bottom boundary conditions are periodic. The physical dimensions of the computational domain are $20\times20$ pc. The two inflows of gas collide in the middle of the domain. The supersonic collision causes a steep rise in the gas density that triggers the thermal instability, and the gas undergoes a transition into the phase of the cold neutral medium (CNM) in the ISM. In this phase, the gas has temperatures in the range $30$–$100$ K and number densities in the range $20$–$50$ cm$^{-3}$ [@Fer01]. The thermal instability produces highly turbulent structures (see Figure \[fig:densityplot\_static5\]) with Mach numbers up to $20$.
A reference simulation was run with a static grid of $2048^2$ cells. Then the same setup was evolved in AMR simulations with a root-grid resolution of $128^2$ cells and $4$ levels of refinement. The resolution between adjacent refinement levels increases by a factor of $2$. Hence, the effective resolution at the highest level of refinement is $2048^2$. In these simulations, we employed three different types of refinement criteria:
1. refinement by overdensity (OD),
2. refinement by cooling time (CT),
3. refinement by rate of compression and enstrophy (RCEN).
The first two criteria are widely used in astrophysical AMR simulations. For refinement by overdensity, the mass density must exceed the initial density on the root grid by a certain factor. This overdensity, in turn, defines the initial density for refinement at the first level of refinement and so on. We chose three different values for the overdensity factor, namely, twice the initial density (default OD), as well as three times (OD-3) and four times (OD-4) the initial density. For criterion CT, on the other hand, refinement is triggered for a grid cell if the cooling time $\tau_{\mathrm{cool}}:=P/[(\gamma-1)\rho|{\fam=2 L}|]$ becomes less than the sound crossing time over the cell width. Refinement by the rate of compression and the enstrophy uses yet another technique. It was introduced by @Sch09 for the simulation of supersonic isothermal turbulence. The control variables for refinement are the enstrophy and the rate of compression. The enstrophy is given by one-half of the square of the vorticity, while the rate of compression is defined as the substantial time derivative of the negative divergence of the velocity. The expression used by @Sch09 to evaluate the rate of compression (see their equation (12)) is easily generalized to the non-isothermal case, where the speed of sound is not a constant. To trigger refinement by RCEN, dynamic thresholds are calculated from statistical moments of the control variables: a grid cell is flagged for refinement if the local fluctuation of a control variable becomes greater than the maximum of the average and the standard deviation of the variable. On the root grid, averages and variances are computed globally, whereas averaging is constrained to individual grid patches at higher levels of refinement.
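In pseudocode form, the per-cell checks for the first two criteria can be sketched as follows (a schematic Python illustration of the conditions described above, not the actual Enzo implementation; the cooling rate is taken per unit mass, following the definition of $\tau_{\mathrm{cool}}$ given in the text):

```python
def flag_overdensity(rho, rho_ref, factor=2.0):
    """OD criterion: refine if the density exceeds the reference density of the
    current level by the chosen factor (2, 3 or 4 in runs OD, OD-3 and OD-4)."""
    return rho > factor * rho_ref

def flag_cooling_time(P, rho, cool_rate, dx, gamma=5.0/3.0):
    """CT criterion: refine if the cooling time is shorter than the sound
    crossing time over the cell width dx (cool_rate = |L| per unit mass)."""
    tau_cool = P / ((gamma - 1.0) * rho * cool_rate)
    c_s = (gamma * P / rho) ** 0.5        # adiabatic sound speed
    return tau_cool < dx / c_s
```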
For comparison of the simulation results, we calculated probability density functions (pdf) of several quantities. To analyze the gas fragmentation in each simulation, we adapted the *clumpfind* algorithm implemented by @Pad07 to non-isothermal problems. The algorithm identifies the smallest dense regions that fulfill the Jeans criterion for gravitationally unstable gas. Since the clump samples found on the two-dimensional grids used in our simulations are insufficient for the calculation of clump mass spectra, only the total number and the mean size of the clumps are used for quantitative comparisons. In addition, we computed the fractal dimension of gas at densities higher than $n=20\, \mathrm{cm}^{-3}$ (corresponding to the minimum density of gas in the cold phase) by means of the box-counting method [@FedKless09].
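For reference, the box-counting estimate of the fractal dimension can be written down in a few lines; the sketch below is a generic Python illustration (the grid handling and the list of box sizes are our own choices, and only the density threshold of $20\,\mathrm{cm}^{-3}$ is taken from the text):

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of the True region of a 2D boolean mask."""
    counts = []
    for b in box_sizes:
        n = 0
        for i in range(0, mask.shape[0], b):
            for j in range(0, mask.shape[1], b):
                if mask[i:i+b, j:j+b].any():   # the box contains at least one flagged cell
                    n += 1
        counts.append(n)
    # N(b) ~ b^(-D), so D is minus the slope of log N versus log b
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# usage: dense = number_density > 20.0   (2D array from the simulation)
#        D = box_counting_dimension(dense)
```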
Results
=======
Due to the gradual accumulation of gas in the simulation domain, no strict statistical equilibrium is approached. For this reason, we evolved the flow until noticeable small-scale structure has developed and the separation of the gas into two phases has emerged. As shown in Figure \[fig:phasediag\_static5\], two distinct phases are found at time $t=5\,\mathrm{Myrs}$. At this time, the central flow region is in a turbulent state (a contour plot of the mass density of the gas is shown in Figure \[fig:densityplot\_static5\]). Thus, we carry out our analysis for $t=5\,\mathrm{Myrs}$. While the main fraction of the gas is situated in the warm phase with temperatures between $5000$ and $10000$ K and low densities $\sim 1$ cm$^{-3}$, the cold gas with temperatures between $30$ K and $100$ K and densities in the range $30$ – $350$ cm$^{-3}$ can be found close to the equilibrium curve.
The pdfs of the mass density and the temperature obtained from different AMR simulations are plotted in Figure \[fig:pdfdenstemp\]. In principle, all refinement criteria reproduce the distributions found in the static-grid simulation quite well, although there is a trend of slightly more cold gas at the expense of warm gas. The discrepancy is more pronounced for refinement by overdensity (OD) compared to the other criteria, and it becomes worse for the higher density thresholds (OD-3 and OD-4; not shown in the Figure). Nevertheless, it appears that the thermodynamic properties of the gas are quite robust in AMR simulations.
The gravitationally unstable clumps of gas identified by the clumpfind algorithm in the static-grid simulation at time $t=5\,\mathrm{Myrs}$ are depicted in Figure \[fig:clumpsstatic\]. The corresponding results for the AMR runs are shown in Figures \[fig:clumpsOD\]–\[fig:clumpsRCEN\]. Table \[tab:clumps\] lists the total number and the mean size of the clumps for each simulation. Also listed are the fractal dimensions of the gas regions with number density $n\ge 20$ cm$^{-3}$, which are plotted in Figures \[fig:fractalstatic\]-\[fig:fractalRCEN\].
run $N$ $\langle d\rangle$ $D$
-------- -------- -------------------- --------
static $1025$ $44$ $1.45$
OD $505$ $102$ $1.34$
OD-3 $481$ $128$ $1.33$
OD-4 $420$ $157$ $1.39$
CT $1239$ $32$ $1.50$
RCEN $1015$ $85$ $1.42$
: Number $N$ and average size $\langle d\rangle$ of the clumps in the CNM; fractal dimension $D$ of gas with number density greater than $20$ cm$^{-3}$.[]{data-label="tab:clumps"}
For refinement by OD, the fragmentation of the CNM is severely underestimated. The number of clumps is roughly half of the number in the static-grid simulation, and the clumps are typically larger. The lower degree of cold gas fragmentation results in a smaller fractal dimension (also see Figure \[fig:fractalOD\]). If the criteria OD-3 and OD-4 are applied, the number of clumps decreases further, while their average size increases. In the case of criterion OD-4, a slightly higher fractal dimension is obtained, because the cold phase tends to fill broad, area-filling regions. The cooling time criterion CT yields a number of dense clumps and an average clump size that compare well with the reference simulation (see Figure \[fig:clumpsCT\]), although the degree of fragmentation appears to be slightly overestimated. However, we found that this overestimation decreases with the further evolution of the colliding flows and, thus, appears to be transient. Refinement by RCEN also reproduces the number of clumps and the fractal dimension of dense gas very well. However, there are some anomalously big clumps, which contribute to an average clump size that is systematically too large. In the plot showing gas at density $n\ge 20$ cm$^{-3}$ (see Figure \[fig:fractalRCEN\]), on the other hand, such anomalous structures are not visible. Although refinement by RCEN does not overproduce gas in the cold phase (as one can see from the excellent agreement of the density and temperature pdfs in Figures \[fig:densityRCEN\] and \[fig:tempRCEN\]), there appears to be a bias toward bigger clumps with this refinement method.
In contrast to the phase separation and gas fragmentation, striking deviations of the turbulent flow properties in the AMR vs. static grid simulations become apparent. Generally, a lot of turbulent small-scale structure is missing in the AMR simulations. Even for the criterion RCEN, which is based on control variables related to turbulence, this is apparent from the contour plots of the squared vorticity modulus shown in Figure \[fig:contvortRCENstat\]. Basically, the perturbations of the velocity field imposed at the inflow boundaries are quickly smoothed out in AMR simulations, so that turbulence is only produced by secondary (e.g., Kelvin-Helmholtz) instabilities at the collision interface in the central region of the computational domain. The reason is that all AMR criteria, including RCEN, select relatively large fluctuations, whereas smaller perturbations are suppressed. On a static grid, on the other hand, the perturbations are transported from the boundaries to the centre and actively contribute to the production of turbulence. Consequently, small eddies are present in almost the whole domain in this case. Accordingly, the probability distribution of vorticity is markedly different (see Figure \[fig:pdfvort\]). In contrast, @Sch09 found very close agreement of the vorticity pdfs in a static-grid and an AMR simulation with criterion RCEN for turbulence in a periodic box with large-scale forcing. Our results thus indicate that the merits of different refinement schemes are non-universal but rather depend on the properties of individual flow structures.
Conclusions
===========
We performed two-dimensional simulations of colliding flows of warm atomic hydrogen with a radiative cooling function as source term in the energy equation. The goal of our study was the systematic comparison of AMR simulations, where different criteria for refinement were applied, to a reference simulation on a static grid.
While the probability distributions of mass density and temperature are well reproduced in AMR simulations, regardless of the refinement technique, differences become apparent in the fragmentation properties of the cold gas phase. As indicators, we used the total number of clumps and their average size. The clumps were identified by a clumpfind algorithm. In addition, we calculated the fractal dimension of dense gas, assuming a number density threshold of $20$ cm$^{-3}$. Remarkably, the largest deviations from the clump statistics and fractal dimension extracted from the static-grid simulation, were encountered for refinement by overdensity, which is a commonly used refinement criterion in astrophysical AMR simulations. The deviations increase with the chosen density threshold. In this regard, it is important to note that @HenBan08 applied a density-based refinement criterion, where the thresholds were chosen even higher than those considered in our study. Good agreement, on the other hand, was obtained if the cooling time or enstrophy in combination with the rate of compression (the negative rate of change of the velocity divergence) were applied.
Substantial problems with AMR became apparent with regard to turbulent flow properties. Basically, none of our AMR runs were able to reproduce even remotely the small-scale structure of turbulence and the probability distributions of turbulent flow variables such as the vorticity modulus. This deficiency can be attributed to the selection effects introduced by adaptive techniques. The definition of thresholds for triggering refinement either selects strong local fluctuations (for example, large shear that gives rise to Kelvin-Helmholtz instabilities) or large-scale perturbations such as accumulations of mass that become Jeans-unstable in self-gravitating gas. In this respect, the test problem we investigated in this work is particularly tough, because turbulence stems from small-scale instabilities that are seeded by weak initial perturbations. The varying grid resolution in AMR simulations inevitably modulates the growth of these instabilities and, as a consequence, the production of turbulence is suppressed. This deficiency might be overcome by the application of a subgrid scale model, which transports turbulent energy contained in small eddies that are resolved on finer grids across coarser grid regions [see @Maier09].
The key argument for using AMR is the reduced computational cost at a given effective resolution. Indeed, Table \[tab:cputime\] demonstrates that a substantial reduction of computation time is achieved with AMR, especially if refinement by overdensity is applied. AMR is therefore essentially a trade-off, in which fast computation has to be weighed carefully against the question of whether the essential physics of the specific problem is captured. Performing three-dimensional AMR simulations with very high effective resolution brings this tension to a head. On the one hand, the reduction of computation time will definitely be even greater; on the other hand, there is the potential pitfall of inferring results that are properties of the numerics rather than the physics, since a comparison to a static-grid simulation is neither feasible nor desirable.
Method CPU time
-------- -------------------
static $1172$ h $36$ min
OD $54$ h $20$ min
CT $225$ h $18$ min
RCEN $290$ h $57$ min
: CPU time for the simulation runs.[]{data-label="tab:cputime"}
We thank Patrick Hennebelle and Edouard Audit for providing the cooling function that was used for this numerical study. We also thank Paolo Padoan for sharing his clumpfind tool. Computations described in this work were performed using the Enzo code developed by the Laboratory for Computational Astrophysics at the University of California in San Diego (http://lca.ucsd.edu).
---
abstract: 'The nature of the unknown sources of ultra-high energy cosmic rays can be revealed through the detection of the GZK feature in the cosmic ray spectrum, resulting from the production of pions by ultra-high energy protons scattering off the cosmic microwave background. Here we show that the GZK feature cannot be accurately determined with the small sample of events with energies $\sim 10^{20}$ eV detected thus far by the largest two experiments, AGASA and HiRes. With the help of numerical simulations for the propagation of cosmic rays, we find the error bars around the GZK feature are dominated by fluctuations which leave a determination of the GZK feature unattainable at present. In addition, differing results from AGASA and HiRes suggest the presence of $\sim 30\%$ systematic errors that may be due to discrepancies in the relative energy determination of the two experiments. Correcting for these systematics, the two experiments are brought into agreement at energies below $\sim 10^{20}$ eV. After simulating the GZK feature for many realizations and different injection spectra, we determine the best fit injection spectrum required to explain the observed spectra at energies above $10^{18.5}$ eV. We show that the discrepancy between the two experiments at the highest energies has low statistical significance (at the 2 $\sigma$ level) and that the corrected spectra are best fit by an injection spectrum with spectral index $\sim 2.6$. Our results clearly show the need for much larger experiments such as Auger, EUSO, and OWL, that can increase the number of detected events by 2 orders of magnitude. Only large statistics experiments can finally prove or disprove the existence of the GZK feature in the cosmic ray spectrum.'
address:
- |
INFN & Università degli Studi di Genova\
Via Dodecaneso, 33 - 16100 Genova, ITALY
- |
INAF/Osservatorio Astrofisico di Arcetri\
Largo E. Fermi, 5 - 50125 Firenze, ITALY
- |
Center for Cosmological Physics\
The University of Chicago, Chicago, IL 60637, USA
- |
Department of Astronomy & Astrophysics, & Enrico Fermi Institute,\
The University of Chicago, Chicago, IL 60637, USA
author:
- Daniel De Marco
- Pasquale Blasi
- 'Angela V. Olinto'
title: On the statistical significance of the GZK feature in the spectrum of ultra high energy cosmic rays
---
Introduction
============
The presence or lack of a feature in the spectrum of ultra-high energy cosmic rays (UHECRs) is key in determining the nature of these particles and their sources. Astrophysical proton sources distributed homogeneously in the universe produce a feature in the spectrum due to the production of pions off the cosmic microwave background. This feature, consisting of a rather sharp suppression of the flux, occurs at energies above $7 \times 10^{19}$ eV, as a result of the threshold in the production of pions in the final state of a proton-photon inelastic interaction. This important prediction was made independently by Greisen and by Zatsepin and Kuzmin [@GZK66]. The resulting spectral feature is now known as the GZK cutoff (or feature, as we prefer to call it). Alternative models for UHECR sources that involve new physical processes may evade the presence of this feature (see, e.g., [@bbv98]). Recent reviews on the origin and propagation of the ultra-high energy cosmic rays can be found in [@BS00; @Olin00], while a recent review of the observations can be found in [@watson].
The detection of cosmic ray events with energy above $E_{GZK}\sim 7\times
10^{19}$ eV does not necessarily imply that the GZK feature is not present: what characterizes the presence of the GZK feature is the relative number of events above and below $E_{GZK}$ when both sides of the spectrum can be accurately determined. The steep injection spectra required to fit the observations below $E_{GZK}$ imply that only a handful of events above $10^{20}$ eV can be detected during the operation time of experiments such as AGASA and HiRes. This makes the identification of the GZK feature by these experiments extremely difficult. The problem is exacerbated by the fluctuations due to the discreteness of the process of photo-pion production, as will be discussed below. These uncertainties need to be considered when attempting a determination of the best fit injection spectrum of the particles, and in order to quantify the statistical significance of the presence or absence of the GZK feature in the observed spectrum.
The discrepancy between the results of the two largest experiments has generated much debate. AGASA [@AGASA] reports a higher number of events above $E_{GZK}$ than expected while HiRes [@hiresprop; @HIRES1; @HIRES2] reports a flux consistent with the GZK feature. Here we investigate in detail the statistical significance of this discrepancy as well as the significance of the presence or absence of the GZK feature in the data. We find that neither experiment has the necessary statistics to establish if the spectrum of UHECRs has a GZK feature. In addition, a systematic error in the energy determination of the two experiments seems to be required in order to make the two sets of observations compatible in the low energy range, $10^{18.5}-10^{19.6}$ eV, where enough events have been detected to make the measurements reliable. The combined systematic error in the energy determination is likely to be $\sim 30\%$. If we decrease the AGASA energies by $15\%$ while increasing HiRes energies also by $15\%$, the two experiments predict compatible fluxes at energies below $E_{GZK}$ and at energies above $E_{GZK}$ the fluxes are within $\sim 2\sigma$ of each other. In this case, the best fit injection spectrum has a spectral index of $\sim 2.6$ but a determination of the GZK feature has very low significance. The detection or non-detection of the GZK feature in the cosmic ray spectrum remains open to investigation by future generation experiments, such as the Pierre Auger project [@auger] and the EUSO [@EUSO] and OWL [@OWL] experiments.
This paper is organized as follows: in §2 we describe our simulations. In §3 we illustrate the present observational situation, limiting ourselves to AGASA and HiResI, and compare the data to the predictions of our simulations. We conclude in §4.
UHECR spectrum simulations
==========================
We assume that ultra-high energy cosmic rays are protons injected with a power-law spectrum in extragalactic sources. The injection spectrum is taken to be of the form $$F(E) d E = \alpha E^{-\gamma} \exp(-E/E_{\rm max}) d E
\label{eq:inj}$$ where $\gamma$ is the spectral index, $\alpha$ is a normalization constant, and $E_{\rm max}$ is the maximum energy at the source. Here we fix $E_{\rm max} =10^{21.5}$ eV, large enough not to affect the statistics at much lower energies. As shown in [@Blanton:2000dr], the induced spectrum of a uniform distribution of sources in space is almost indistinguishable from a distribution with the observed large scale structure in the galaxy distribution. Based on this result, we assume a spatially uniform distribution of sources and do not take into account luminosity evolution in order to avoid the introduction of additional parameters.
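As an illustration of how energies distributed according to Eq. (\[eq:inj\]) can be generated in practice, the short sketch below draws injection energies by inverse-transform sampling of the pure power law followed by rejection on the exponential cutoff. The lower bound of $10^{18.5}$ eV and the function name are our own choices for the example, not part of the analysis described here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_injection_energy(gamma, e_min=10**18.5, e_max=10**21.5):
    """Draw one energy (eV) from F(E) ~ E^-gamma exp(-E/e_max) above e_min,
    valid for gamma > 1."""
    while True:
        u = rng.uniform()
        e = e_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))   # invert the power-law CDF
        if rng.uniform() < np.exp(-e / e_max):            # accept against the cutoff
            return e
```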
We simulate the propagation of protons from source to observer by including the photo-pion production, pair production, and adiabatic energy losses due to the expansion of the universe. In each step of the simulation, we calculate the pair production losses using the continuous energy loss approximation given the small inelasticity in pair production ($2 m_{\rm e}/m_{\rm p}\simeq10^{-3}$). For the rate of energy loss due to pair production at redshift $z=0$, $\beta_{\rm pp}(E,z=0)$, we use the results from [@Blumenthal:nn; @czs]. At a given redshift $z>0$, $$\beta_{\rm pp}(E,z)=(1+z)^3 \beta_{\rm pp}((1+z)E, z=0)\,.$$ Similarly, the rate of adiabatic energy losses due to redshift is calculated in each step using $$\beta_{\rm rsh}(E,z)=H_0 \left[\Omega_M (1+z)^3 +
\Omega_\Lambda\right]^{1/2}$$ with $H_0=75 ~ {\rm km~ s^{-1} Mpc}^{-1}$.
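In code, the continuous part of the losses amounts to a deterministic update of the proton energy in each step. A minimal sketch, with the Hubble constant converted to s$^{-1}$ and a user-supplied tabulation of $\beta_{\rm pp}(E,z=0)$ (the names and the explicit Euler update are illustrative assumptions, not the implementation used here):

```python
import numpy as np

H0_INV_S = 75.0 / 3.086e19          # 75 km s^-1 Mpc^-1 expressed in s^-1
OMEGA_M, OMEGA_L = 0.3, 0.7

def continuous_losses(E, z, dt, beta_pp_z0):
    """Pair-production plus adiabatic losses over a time step dt (seconds).
    beta_pp_z0(E) is the fractional pair-production loss rate at z = 0."""
    beta_pp = (1.0 + z)**3 * beta_pp_z0((1.0 + z) * E)
    beta_rsh = H0_INV_S * np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)
    return E * (1.0 - (beta_pp + beta_rsh) * dt)   # explicit Euler step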
The photo-pion production is simulated in a way similar to that described in ref. [@Blanton:2000dr]. In each step, we first calculate the average number of photons able to interact via photo-pion production through the expression: $$\langle N_{\rm ph}(E,\Delta s) \rangle=\frac{\Delta s}{l(E, z)},$$ where $l(E,z)$ is the interaction length for photo-pion production of a proton with energy $E$ at redshift $z$ and $\Delta s$ is a step size, chosen to be much smaller than the interaction length (typically we choose $\Delta s=100 ~{\rm kpc}/(1+z)^3$).
In Fig. \[fig:il\] we plot the interaction length for photopion production used in [@BS00] (solid thin line), and in [@Stanev] (triangles). The dashed line is the result of our calculations (see below), which is in perfect agreement with the results of [@BS00; @Stanev]. The apparent discrepancy at energies below $10^{19.5}$ eV with the prediction of Ref. [@BS00] is only due to the fact that we consider only microwave photons as background, while in [@BS00] the infrared background was also considered. For our purposes, this difference is irrelevant as can be seen from the loss lengths plotted in Fig. \[fig:il\]. The rightmost thick solid line is the loss length for photopion production [@BS00], while the other thick solid line is the loss length for pair production. In the present calculations, we do not use the loss length of photopion production which is related to the interaction length through an angle averaged inelasticity. Unlike what was done in [@Blanton:2000dr], in the current simulations we evaluate the inelasticity for each single proton-photon scattering using the kinematics, rather than adopting an angle averaged value.
We calculate the interaction length, $l(E)$, as: $$l(E)^{-1}=\int{\rm d}\varepsilon n(\varepsilon)\int^{+1}_{-1}
{\rm d}\mu\frac{1-\mu\beta}{2}\sigma(s)$$ where $n(\varepsilon)$ is the number density of the CMB photons per unit energy at energy $\varepsilon$, $\beta$ is the velocity of the proton, $\mu$ is the cosine of the interaction angle, and $\sigma(s)$ is the total cross section for photo-pion production for the squared center of mass energy $$s=m^2+2\varepsilon E\left(1-\mu\beta\right)\,.$$
For the calculation of the interaction length we adopt the ${\rm p}+\gamma
\rightarrow{\rm hadrons}$ cross section given in [@pdgcrosssec], considering only the photons of the cosmic microwave background as targets. The calculated interaction length (see dashed line in Fig. \[fig:il\]) is in good agreement with the interaction length calculated in [@BS00] and in [@Stanev].
Once the interaction length is known, we then sample a Poisson distribution with mean $\langle N_{\rm ph}(E,\Delta s) \rangle$, to determine the actual number of photons encountered during the step $\Delta s$. When a photo-pion interaction occurs, the energy $\epsilon$ of the photon is extracted from the Planck distribution, $n_{ph}(\epsilon,T(z))$, with temperature $T(z)=T_0 (1+z)$, where $T_0=2.728$ K is the temperature of the cosmic microwave background at present. Since the microwave photons are isotropically distributed, the interaction angle, $\theta$, between the proton and the photon is sampled randomly from a distribution which is flat in $\mu={\rm cos}\theta$. Clearly only the values of $\epsilon$ and $\theta$ that generate a center of mass energy above the threshold for pion production are considered. The energy of the proton in the final state is calculated at each interaction from kinematics. The simulation is carried out until the statistics of events detected above some energy reproduces the experimental numbers. By normalizing the simulated flux by the number of events above an energy where experiments have high statistics, we can then ask what are the fluctuations in numbers of events above a higher energy where experimental results are sparse. The fits are therefore most sensitive to the energy regions below $E_{GZK}$ and give a good estimate of the uncertainties in the present experiments for energies above $E_{GZK}$. In this way we have a direct handle on the fluctuations that can be expected in the observed flux due to the stochastic nature of photo-pion production and to cosmic variance.
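The stochastic photo-pion step described above can be summarized in a few lines of code. The sketch below uses a Poisson draw for the number of target photons, rejection sampling of the Planck distribution, an isotropic interaction angle, and, for brevity, the mean single-pion inelasticity in place of the full final-state kinematics used in the text; the masses, the rejection envelope and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M_P, M_PI = 938.272e6, 134.977e6       # proton and neutral-pion masses in eV
KT0 = 2.35e-4                          # kT of the CMB today in eV (T0 = 2.728 K)

def sample_cmb_photon(z):
    """One photon energy (eV) from a Planck spectrum at T(z) = T0 (1 + z),
    by rejection sampling against x^2 / (exp(x) - 1)."""
    kT = KT0 * (1.0 + z)
    while True:
        x = rng.uniform(1e-6, 20.0)
        if rng.uniform(0.0, 0.65) < x**2 / np.expm1(x):
            return x * kT

def photopion_step(E, z, dstep, l_int):
    """One step of length dstep for a proton of energy E (eV); l_int(E, z)
    is the photo-pion interaction length in the same units as dstep."""
    for _ in range(rng.poisson(dstep / l_int(E, z))):
        eps = sample_cmb_photon(z)
        mu = rng.uniform(-1.0, 1.0)                 # isotropic background: flat in cos(theta)
        s = M_P**2 + 2.0 * eps * E * (1.0 - mu)     # squared c.o.m. energy for beta ~ 1
        if s > (M_P + M_PI)**2:                     # above the pion-production threshold
            E *= 1.0 - (s - M_P**2 + M_PI**2) / (2.0 * s)   # mean single-pion inelasticity
    return E
```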
![Interaction length for photopion production as calculated in this paper (dashed line) compared to the interaction length of [@BS00] (solid thin line) and of [@Stanev] (triangles). The thick solid lines are the loss lengths for photopion production (on the right) and of pair production (on the left).[]{data-label="fig:il"}](new_length1.eps){width="70.00000%"}
The simulation proceeds in the following way: a source distance is generated at random from a uniform distribution in a universe with $\Omega_\Lambda=0.7$ and $\Omega_m=0.3$. In a Euclidean universe, the flux from a source would scale as $r^{-2}$ where $r$ is the distance between the source and the observer. On the other hand, the number of sources between $r$ and $r+dr$ would scale as $r^2$, so that the probability that a given event has been generated by a source at distance $r$ is independent of $r$: sources at different distances have the same probability of generating any given event. In a flat universe with a cosmological constant, this is still true provided the distance $r$ is taken to be $$r = c \int_{t_g}^{t_0} \frac{dt}{R(t)},$$ where $t_g$ is the age of the universe when the event was generated, $t_0$ is the present age of the universe, and $R(t)$ is the scale factor of the universe.
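Because events are equally likely to originate from any comoving distance, the source redshift can be drawn by sampling $r$ uniformly up to a maximal distance and inverting $r(z)$ numerically. A possible sketch is given below (SciPy is used for the quadrature; the maximal redshift, the interpolation grid and the names are placeholders):

```python
import numpy as np
from scipy.integrate import quad

H0 = 75.0                  # km s^-1 Mpc^-1, as in the text
C_KM_S = 2.998e5           # speed of light in km/s
OMEGA_M, OMEGA_L = 0.3, 0.7

def comoving_distance(z):
    """r(z) = c * int_0^z dz' / H(z') in Mpc, equivalent to c * int dt / R(t)."""
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(OMEGA_M * (1.0 + zp)**3 + OMEGA_L))
    return C_KM_S * quad(integrand, 0.0, z)[0]

def sample_source_redshift(z_max, rng=np.random.default_rng(2)):
    """Sources are equally probable per unit comoving distance, so draw r
    uniformly in [0, r(z_max)] and invert r(z) by interpolation."""
    zs = np.linspace(0.0, z_max, 2048)
    rs = np.array([comoving_distance(z) for z in zs])
    return float(np.interp(rng.uniform(0.0, rs[-1]), rs, zs))
```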
![Aperture of the HiResI experiment as a function of the energy from [@HIRES2].[]{data-label="fig:hiresexp"}](hires.eps){width="90.00000%"}
Once a source distance has been selected at random, a particle energy is also assigned from a distribution that reflects the injection spectrum, chosen as in Eq. (\[eq:inj\]). This particle is then propagated to the observer and its energy recorded. This procedure is repeated until the number of events above a threshold energy, $E_{th}$ is reproduced. With this procedure we can assess the significance of results from present experiments with limited statistics of events. There is an additional complication in that the aperture of the experiment usually depends on energy. This is taken into account by allowing the event to be detected or not detected depending upon the function $H(E)$ that describes the energy dependence of the aperture.
We only study the spectrum above $10^{18.5}$ eV where the flux is supposed to be dominated by extragalactic sources. For this energy range, we focus on the experiments that have the best statistics: AGASA and HiResI. For the AGASA experiment the exposure is basically flat above $10^{19}$ eV, while for HiRes the exposure is as plotted in Fig. (\[fig:hiresexp\]) [@HIRES2]. For AGASA data, the simulation is stopped when the number of events above $E_{th} = 10^{19}$ eV equals 866. For HiRes this number is 300. Note that while for AGASA the number of detected events actually corresponds to the generated events, for HiRes the number of detected events requires a correction due to the energy dependence of the aperture $H(E)$. This correction allows one to reconstruct the observed spectrum. The statistical error in the energy determination is accounted for in our simulation by generating a [*detection*]{} energy chosen at random from a Gaussian distribution centered at the arrival energy $E$ and with width $\Delta E/E=30\%$ for both experiments.
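Putting the pieces together, the bookkeeping for one realization of an experiment can be organized as in the following sketch. The callables `sample_injection`, `propagate` and `aperture` stand for the injection sampling, the propagation described above, and a normalized acceptance probability derived from $H(E)$; all are placeholders, and the $30\%$ Gaussian energy smearing is applied to every detected event as in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def one_realization(sample_injection, propagate, aperture,
                    e_th=1e19, n_target=866, sigma_rel=0.30):
    """Generate detected energies until the number of events above e_th
    matches the experimental count (866 for AGASA, 300 for HiResI)."""
    detected, n_above = [], 0
    while n_above < n_target:
        z_src, e_inj = sample_injection()
        e_arr = propagate(e_inj, z_src)
        if rng.uniform() > aperture(e_arr):        # event falls outside the exposure
            continue
        e_det = e_arr * (1.0 + sigma_rel * rng.standard_normal())  # 30% energy error
        detected.append(e_det)
        if e_det >= e_th:
            n_above += 1
    return np.array(detected)
```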
Our simulations reproduce well the predictions of analytical calculations, in particular at the energies where energy losses may be approximated as continuous. In Fig. \[fig:an\], we compare the results of our simulation with analytical calculations carried out as in ref. [@beregrig]. The curves plotted in the figure are the so called modification factors, defined in [@beregrig; @berebook] for three different values of the source redshift ($z=0.002$, $z=0.02$ and $z=0.2$ from top to bottom). The differential injection spectrum is taken to be a power law $E^{-2.1}$. The points with error bars are the results of our simulations with $2\times 10^6$ particles produced by sources at the redshifts given above. The agreement between the simulated and analytical calculations at low energies is clear. At the energies where photo-pion production becomes important, simulations predict a slightly larger flux than analytical calculations. This is a well known result, and is due to the discreteness of photo-pion energy losses, that allow some particles to reach the detector without appreciable losses.
![Comparison between analytical calculations and the results of our simulation for the modification factor, for injection spectrum $E^{-2.1}$ [@beregrig] and three values of the source redshift ($z=0.002$, $z=0.02$ and $z=0.2$ from top to bottom).[]{data-label="fig:an"}](conf.eps){width="80.00000%"}
In Fig. \[fig:an1\] we compare the results of our simulations (points with error bars) with analytical calculations of the diffuse flux of UHECRs (continuous line) expected if sources with no luminosity evolution inject UHECRs with a spectrum $E^{-2.7}$ [@bereAGN; @bgg2002]. The agreement is evident.
![Comparison between the results of our simulation and the analytical calculations of [@bereAGN; @bgg2002] for the case of astrophysical sources injecting cosmic rays with a spectrum $E^{-2.7}$ and no luminosity evolution.[]{data-label="fig:an1"}](comp.eps){width="80.00000%"}
AGASA versus HiResI
===================
The two largest experiments that measured the flux of UHECRs report apparently conflicting results. The data of AGASA and HiResI on the flux of UHECRs multiplied as usual by the third power of the energy are plotted in Fig. \[fig:agasahires\] (the squares are the HiResI data while the circles are the AGASA data).
![Circles show the AGASA spectrum from [@AGASA] while squares show the HiResI spectrum from [@HIRES2].[]{data-label="fig:agasahires"}](agasahires_normal.eps){width="90.00000%"}
The figure shows that HiResI data are systematically below AGASA data and that HiResI sees a suppression at $\sim 10^{20}$ eV that resembles the GZK feature while AGASA does not.
We apply our simulations here to the statistics of events of both AGASA and HiResI in order to understand whether the discrepancy is statistically significant and whether the GZK feature has indeed been detected in the cosmic ray spectrum. The number of events above $10^{19}$ eV, $10^{19.6}$ eV and $10^{20}$ eV for AGASA and HiResI are shown in Table \[tab:data\_shifted\]. For reasons that will be clear below, we also show in the same table the number of events above the same energy thresholds, in the case that AGASA has a systematic error that overestimates the energy by $15\%$ while HiResI systematically underestimates the energy by $15\%$.
---------------------------- ------------ ------------ ------------ ------------
                                AGASA        AGASA        HiResI       HiResI
$\log(E_{th})\,({\rm eV})$      data        $-15\%$       data        $+15\%$
19                              866          651           300          367
19.6                            72           48            27           39
20                              11           7             1            2.2
---------------------------- ------------ ------------ ------------ ------------
: Number of events for AGASA and HiResI detected above the energy thresholds reported in the first column.
\[tab:data\_shifted\]
In order to understand the difference, if any, between AGASA and HiRes data we first determine the injection spectrum required to best fit the observations. In order to do this, we run 400 realizations of the AGASA and HiRes event statistics, as reported in the column labelled [*data*]{} in Table \[tab:data\_shifted\].
The injection spectrum is taken to be a power law with index $\gamma$ between 2.3 and 2.9 with steps of 0.1. For each injection spectrum we calculated the $\chi^2$ indicator (averaged over 400 realizations). The errors used for the evaluation of the $\chi^2$ are taken to be the square roots of the number of observed events. As reported in Table \[tab:chiq\], the best fit injection spectrum can change appreciably depending on the minimum energy above which the fit is calculated. In these tables we give $\chi_{e}^2(N)$, where $N$ is the number of degrees of freedom, while the subscript, $e$, is the logarithm of $E_{th}$ (in eV), the energy above which the fit has been calculated. The numbers in bold-face correspond to the best fit injection spectrum. These fits are dominated by the low energy data rather than by the poorer statistics at the higher energies.
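For completeness, the $\chi^2$ indicator used here reduces to a sum over energy bins with Poisson errors, averaged over the simulated realizations. A minimal version, assuming the bin counts have already been computed, could read:

```python
import numpy as np

def chi_square(n_obs, n_sim):
    """Chi^2 between observed and simulated bin counts, with errors sqrt(n_obs)."""
    n_obs, n_sim = np.asarray(n_obs, float), np.asarray(n_sim, float)
    return np.sum((n_obs - n_sim)**2 / n_obs)

def mean_chi_square(n_obs, realizations):
    """Average over the (e.g. 400) simulated realizations of the spectrum."""
    return np.mean([chi_square(n_obs, r) for r in realizations])
```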
---------- --------------------- ------------------- -------------------- --------------------- ------------------- --------------------
             AGASA                 AGASA               AGASA                HiResI                HiResI              HiResI
  $\gamma$   $\chi^2_{18.5}(17)$   $\chi^2_{19}(12)$   $\chi^2_{19.6}(6)$   $\chi^2_{18.5}(15)$   $\chi^2_{19}(10)$   $\chi^2_{19.6}(4)$
2.3 1694 34 17 145 29 23
2.4 1215 24 12 80 19 15
2.5 724 19 [**10**]{} 36 14 11
2.6 248 [**16**]{} [**10**]{} [**14**]{} 9 7
2.7 82 17 11 33 [**7**]{} 5
2.8 [**80**]{} 21 13 126 [**7**]{} [**4**]{}
2.9 316 27 15 257 9 [**4**]{}
---------- --------------------- ------------------- -------------------- --------------------- ------------------- --------------------
: $\chi^2$ for fits to AGASA and HiResI data above $10^{18.5}$ eV, $10^{19}$ eV, and $10^{19.6}$ eV for varying spectral index $\gamma$. In parenthesis the number of degrees of freedom.[]{data-label="tab:chiq"}
If the data at energies above $10^{18.5}$ eV are taken into account for both experiments, the best fit spectra are $E^{-2.8}$ for AGASA and $E^{-2.6}$ for HiRes. If the data at energies above $10^{19}$ eV are used for the fit, the best fit injection spectrum is $E^{-2.6}$ for AGASA and between $E^{-2.7}$ and $E^{-2.8}$ for HiRes. If the fit is carried out on the highest energy data ($E>10^{19.6}$ eV), AGASA prefers an injection spectrum between $E^{-2.5}$ and $E^{-2.6}$, while $E^{-2.8}$ or $E^{-2.9}$ fit better the HiRes data in the same energy region. Note that the two sets of data uncorrected for any possible systematic errors require different injection spectra that change with $E_{th}$.
${E_{th}}$ $\gamma=2.5$ $\gamma=2.6$ $\gamma=2.8$
------------- --------------------------- --------------------------- ---------------------------
$10^{19.6}$ [$65\pm8.2\,(+0.5)\,$]{} [$58\pm7.6\,(+1.4)\,$]{} [$46\pm6.8\,(+2.8)\,$]{}
$10^{20}$ [$3.5\pm1.9\,(+2.4)\,$]{} [$2.8\pm1.7\,(+2.6)\,$]{} [$2.0\pm1.4\,(+2.8)\,$]{}
: Number of events expected above $E_{th}$(eV) for different injection spectra assuming the AGASA statistics above $10^{19}$ eV. In parenthesis are the number of standard deviations, $\sigma$, between the observed and expected number of events. In square brackets are the discrepancies calculated with a combined error bar of simulation and observation uncertainties, $\sigma_{tot}$.
\[tab:discrepanza\_agasa\]
${E_{th}}$ $\gamma=2.6$ $\gamma=2.7$ $\gamma=2.8$
------------- --------------------------- --------------------------- ---------------------------
$10^{19.6}$ [$31\pm5.6\,(-0.8)\,$]{} [$28\pm5.3\,(-0.2)\,$]{} [$26\pm5.2\,(+0.3)\,$]{}
$10^{20}$ [$1.9\pm1.4\,(-0.9)\,$]{} [$1.5\pm1.2\,(-0.5)\,$]{} [$1.3\pm1.2\,(-0.3)\,$]{}
: Number of events expected above $E_{th}$(eV) for different injection spectra assuming the HiResI statistics above $10^{19}$ eV from Table \[tab:data\_shifted\]. In parenthesis are the number of $\sigma$ between the observed and expected number of events. In square brackets are the number of $\sigma_{tot}$.
\[tab:discrepanza\_hires\]
In order to quantify the significance of the detection or lack of the GZK flux suppression, we report in Tables \[tab:discrepanza\_agasa\] and \[tab:discrepanza\_hires\] the mean number of events above the indicated energy threshold, $\langle N (E > E_{th}) \rangle$, for different injection spectra. In parentheses, we show the discrepancy between the observations as in Table \[tab:data\_shifted\] and the simulations in number of standard deviations, $\sigma$, where $\sigma^2 = \langle N(E > E_{th})^2 \rangle - \langle N (E > E_{th}) \rangle^2$. From Tables \[tab:discrepanza\_agasa\] and \[tab:discrepanza\_hires\] we can see that while HiResI is consistent with the existence of the GZK feature in the spectrum of UHECRs, AGASA detects an increase in flux, but only at the $\sim 2.5\sigma$ level for the best fit injection spectra.
![Simulations of AGASA statistics with injection spectra $E^{-2.6}$ (upper plot) and $E^{-2.8}$ (lower plot). The crosses with error bars are the results of simulations, while the grey points are the AGASA data.[]{data-label="fig:agasa_2627"}](agasa_26.eps "fig:"){width="70.00000%"} ![Simulations of AGASA statistics with injection spectra $E^{-2.6}$ (upper plot) and $E^{-2.8}$ (lower plot). The crosses with error bars are the results of simulations, while the grey points are the AGASA data.[]{data-label="fig:agasa_2627"}](agasa_28.eps "fig:"){width="70.00000%"}
![Simulations of HiRes statistics with injection spectra $E^{-2.6}$ (upper plot) and $E^{-2.7}$ (lower plot). The crosses with error bars are the results of simulations, while the squares are the HiRes data.[]{data-label="fig:hires_2528"}](hiresI_26.eps "fig:"){width="70.00000%"} ![Simulations of HiRes statistics with injection spectra $E^{-2.6}$ (upper plot) and $E^{-2.7}$ (lower plot). The crosses with error bars are the results of simulations, while the squares are the HiRes data.[]{data-label="fig:hires_2528"}](hiresI_27.eps "fig:"){width="70.00000%"}
A more graphical representation of the uncertainties involved are displayed in Figs. \[fig:agasa\_2627\] for AGASA and \[fig:hires\_2528\] for HiRes, for two choices of the injection spectrum. These plots show clearly the low level of significance that the detections above $E_{GZK}$ have with low statistics. The large error bars that are generated by our simulations at the high energy end of the spectrum are mainly due to the stochastic nature of the process of photo-pion production: in some realizations some energy bins are populated by a few events, while in others the few particles in the same energy bin do not produce a pion and get to the observer unaffected. The large fluctuations are unavoidable with the extremely small statistics available with present experiments. On the other hand, the error bars at lower energies are minuscule, so that the two data sets (AGASA and HiResI) cannot be considered to be two different realizations of the same phenomenon. Instead, systematic errors in at least one if not both experiments are needed to explain the discrepancies at lower energies.
Taking into account the (theoretical) error bars in the analysis makes the significance of the presence or absence of the GZK feature much weaker than the consistency checks shown in Tables \[tab:discrepanza\_agasa\] and \[tab:discrepanza\_hires\]. In order to estimate this effect, we proceed in the following way: we calculate the expected number of events above some threshold with its corresponding standard deviation ($\sigma_{sim}$), as determined by the fluctuations in the flux simulation. The observed number of events above the same threshold is also known with the error bar $\sigma_{obs}$. The discrepancy between the two is now calculated using the error $\sigma_{tot}=(\sigma_{sim}^2+
\sigma_{obs}^2)^{1/2}$. Our results are summarized in Tables \[tab:discrepanza\_agasa\] for AGASA and \[tab:discrepanza\_hires\] for HiRes. The numbers with error bars are the simulated expectations, while the discrepancy between simulations and observations, calculated as described above is in square brackets, in units of $\sigma_{tot}$. It becomes clear that the effective discrepancy between predictions and the AGASA data is at the level of $2.1-2.5\sigma$. Therefore a definite answer to the question of whether the GZK feature is there or not awaits a significant improvement in statistics at high energies.
As seen in Fig. \[fig:agasahires\], the difference between the AGASA and HiResI spectra is not only in the presence or absence of the GZK feature: the two spectra, when multiplied by $E^3$, are systematically shifted by about a factor of two. This shift suggests that there may be a systematic error either in the energy or the flux determination of at least one of the two experiments. Possible systematic effects have been discussed in [@astro0209422] for the AGASA collaboration and in [@HIRES2] for HiResI. At the lower end of the energy range the offset may be due to the notoriously difficult determination of the AGASA aperture at threshold, while the origin of the discrepancies at the highest energies is unclear at the moment. In any case, a systematic error of $\sim 15\%$ in the energy determination is well within the limits that are allowed by the analysis of systematic errors carried out by both collaborations.
In order to illustrate the difficulty in determining the existence of the GZK feature in the observed data in the presence of systematic errors, we split the energy gap by assuming that the energies as determined by the AGASA collaboration are overestimated by $15\%$, while the HiRes energies are underestimated by the same factor. The number of events above an energy threshold is again reported in Table \[tab:data\_shifted\], in the column labelled $15\%$. In this case the total number of events above $10^{19}$ eV is reduced for AGASA from 866 to 651, while for HiResI it is enhanced from 300 to 367. We ran our simulations with these new numbers of events and repeated the statistical analysis described above. The values of $\chi^2$ obtained in this case are reported in Table \[tab:chiq\_shift\].
---------- --------------------- ------------------- -------------------- --------------------- ------------------- --------------------
             AGASA                 AGASA               AGASA                HiResI                HiResI              HiResI
  $\gamma$   $\chi^2_{18.6}(15)$   $\chi^2_{19}(11)$   $\chi^2_{19.6}(5)$   $\chi^2_{18.6}(14)$   $\chi^2_{19}(10)$   $\chi^2_{19.6}(4)$
2.3 505 18 12 79 13 7
2.4 351 13 8.5 40 7 4
2.5 188 [**9**]{} [**5.6**]{} 13 3.7 2.0
2.6 54 [**9**]{} [**5.6**]{} [**6**]{} [**2.0**]{} [**1.1**]{}
2.7 [**20**]{} 11 6.4 23 3.1 1.4
2.8 54 15 7.2 94 6 2.4
2.9 145 20 9.1 176 10 4
---------- --------------------- ------------------- -------------------- --------------------- ------------------- --------------------
: $\chi^2$ for AGASA and HiResI in which a correction for a systematic $15\%$ overestimate of the energies has been assumed for AGASA and a $15\%$ underestimate of the energies has been assumed for HiResI.
\[tab:chiq\_shift\]
For AGASA, the best fit injection spectrum is now between $E^{-2.5}$ and $E^{-2.6}$ above $10^{19}$ eV and above $10^{19.6}$ eV (the $\chi^2$ values are identical). For the HiRes data, the best fit injection spectrum is $E^{-2.6}$ for the whole set of data, independent of the threshold. It is interesting to note that the best fit injection spectrum appears much more stable after the correction of the $15\%$ systematics has been carried out. Moreover, the best fit injection spectra as derived for each experiment independently coincide for the corrected data, unlike in the uncorrected case. This suggests that combined systematic errors in the energy determination at the $\sim$ 30% level may in fact be present.
The expected numbers of events with energy above $10^{19.6}$ eV and above $10^{20}$ eV with the deviation from the data are reported in Tables \[tab:discrepanza\_agasa085\] and \[tab:discrepanza\_hires115\]: while HiResI data remain in agreement with the prediction of a GZK feature, the AGASA data seem to depart from such prediction but only at the level of $\sim 1.8\sigma$. The systematics in the energy determination further decreased the significance of the GZK feature, such that the AGASA and HiResI data are in fact only less than $2\sigma$ away from each other.
${E_{th}}$ $\gamma=2.5$ $\gamma=2.6$ $\gamma=2.7$
------------- --------------------------- --------------------------- ---------------------------
$10^{19.6}$ [$49\pm6.9\,(+0.2)\,$]{} [$43\pm6.5\,(+0.8)\,$]{} [$39\pm6.1\,(+1.3)\,$]{}
$10^{20}$ [$2.6\pm1.6\,(+1.7)\,$]{} [$2.3\pm1.5\,(+1.8)\,$]{} [$1.8\pm1.4\,(+2.0)\,$]{}
: Number of events expected above $E_{th}$ (eV) for AGASA energies decreased systematically by $15\%$. In parenthesis is the number of standard deviations between observations and simulations, $\sigma$. In square brackets are the discrepancies calculated in units of $\sigma_{tot}$.
\[tab:discrepanza\_agasa085\]
${E_{th}}$ $\gamma=2.5$ $\gamma=2.6$
------------- --------------------------- ---------------------------
$10^{19.6}$ [$43\pm6.3\,(-0.6)\,$]{} [$38\pm6.0\,(+0.1)\,$]{}
$10^{20}$ [$2.8\pm1.7\,(-0.4)\,$]{} [$2.3\pm1.5\,(-0.1)\,$]{}
: Number of events expected above $E_{th}$ (eV) for HiResI energies increased systematically by $15\%$. In parenthesis is the number of standard deviations between observations and simulations, $\sigma$. In square brackets are the discrepancies calculated in units of $\sigma_{tot}$.
\[tab:discrepanza\_hires115\]
We can use the same procedure illustrated above to evaluate the effect of the error bars in the simulated data compared to the data corrected by 15%. The results are reported in square brackets in Tables \[tab:discrepanza\_agasa085\] (for AGASA) and \[tab:discrepanza\_hires115\] (for HiRes), showing that the effective discrepancy between expectations (with uncertainties due to discrete energy losses and cosmic variance) and AGASA data is even smaller, only at the $1.5\sigma$ level. In Fig. \[fig:26big\], we plot the simulated spectra for injection spectrum $E^{-2.6}$ and compare them to observations of AGASA (upper plot) and HiRes (lower plot).
![Simulated spectra for the best fit injection spectrum with $\gamma=2.6$. Upper panel shows simulations for the AGASA event statistics after correcting for energy by an overall shift of -15%. The lower panel shows the fluctuations expected for the event statistics of HiResI after shifting HiResI energies by +15%. The shifted data for AGASA (grey circles) and HiResI (dark squares) are shown in both panels.[]{data-label="fig:26big"}](agasa085_26.eps "fig:"){width="70.00000%"} ![Simulated spectra for the best fit injection spectrum with $\gamma=2.6$. Upper panel shows simulations for the AGASA event statistics after correcting for energy by an overall shift of -15%. The lower panel shows the fluctuations expected for the event statistics of HiResI after shifting HiResI energies by +15%. The shifted data for AGASA (grey circles) and HiResI (dark squares) are shown in both panels.[]{data-label="fig:26big"}](hiresI115_26.eps "fig:"){width="70.00000%"}
Conclusions
===========
We considered the statistical significance of the UHECR spectra measured by the two largest experiments in operation, AGASA and HiRes. The spectrum released by the HiResI collaboration seems to suggest the presence of a GZK feature. This has generated claims that the GZK cutoff has been detected, reinforced by data from older experiments [@waxmanb]. However, no evidence for such a feature has been found in the AGASA experiment. We compared the data with theoretical predictions for the propagation of UHECRs on cosmological distances with the help of numerical simulations. We find that the very low statistics of the presently available data hinders any statistically significant claim for either detection or the lack of the GZK feature.
A comparison of the spectra obtained from AGASA and HiResI shows a systematic shift of the two data sets, which may be interpreted as a systematic error in the relative energy determination of about $30\%$. If no correction for this systematic shift is carried out, the AGASA data are best fit by an injection spectrum $E^{-\gamma}$ with $\gamma=2.6$ for energies above $10^{19}$ eV. The fit steepens to $\gamma=2.8$ when considering events down to $10^{18.5}$ eV. For HiResI, the best fit is between $\gamma=2.7$ and $\gamma=2.8$ for events with energy above $10^{19}$ eV and $\gamma=2.6$ for events above $10^{18.5}$ eV. With these best fits to the injection spectrum the AGASA data depart from the prediction of a GZK feature by $2.6\sigma$ for $\gamma=2.6$. The HiRes data are fully compatible with the prediction of a GZK feature in the cosmic ray spectrum. The fit to the data with energy above $10^{19}$ eV is probably less susceptible to contamination by a possible galactic flux. In this case the AGASA departure from the expected GZK feature is, as stressed above, at the level of about $2.6\sigma$. Taking into account the uncertainties derived from the simulations, attributed to the discreteness of the photo-pion production and to cosmic variance, this discrepancy becomes even less significant ($\sim
2.3\sigma$). It is clear that, if confirmed by future experiments with much larger statistics, the increase in flux relative to the GZK prediction hinted by AGASA would be of great interest. This may signal the presence of a new component at the highest energies that compensates for the expected suppression due to photo-pion production, or the effect of new physics in particle interactions (for instance the violation of Lorentz invariance or new neutrino interactions).
Identifying the cause of the systematic energy and/or flux shift between the AGASA and the HiRes spectra is crucial for understanding the nature of UHECRs. This discrepancy has stimulated a number of efforts to search for the source of these systematic errors including the construction of hybrid detectors, such as Auger, that utilize both ground arrays and fluorescence detectors. A possible overestimate of the AGASA energies by $15\%$ and a corresponding underestimate of the HiRes energies by the same amount would in fact bring the two data sets into agreement in the region of energies below $10^{20}$ eV. In this case both experiments are consistent with a GZK feature with large error bars. The AGASA excess is at the level of $1.7\sigma$ ($1.4\sigma$ if the observational uncertainties are also taken into account). Interestingly enough, the $15\%$ correction in the energy determination implies that the best fit injection spectrum becomes basically the same for both experiments ($E^{-2.6}$).
Given the low statistical significance of either the excess flux seen by AGASA or the discrepancies between AGASA and HiResI, it is inaccurate to claim either the detection of the GZK feature or the extension of the UHECR spectrum beyond $E_{GZK}$ at this point in time. A new generation of experiments is needed to finally give a clear answer to this question. In Fig. \[fig:newgen\] we report the simulated spectra that should be achieved in 3 years of operation of Auger (upper panel) and EUSO (lower panel). The error bars reflect the fluctuations expected in these high statistics experiments for the case of injection spectrum $E^{-2.6}$. (Note that the energy threshold for detection by EUSO is not yet clear.) It is clear that the energy region where statistical fluctuations dominate the spectrum is moved to $\sim 10^{20.6}$ eV for Auger, allowing a clear identification of the GZK feature. The [*fluctuation*]{}-dominated region lies beyond $10^{21}$ eV for EUSO.
![Predicted spectra and error bars for 3 years of operation of Auger (upper plot) and EUSO (lower plot).[]{data-label="fig:newgen"}](auger.eps "fig:"){width="70.00000%"} ![Predicted spectra and error bars for 3 years of operation of Auger (upper plot) and EUSO (lower plot).[]{data-label="fig:newgen"}](euso.eps "fig:"){width="70.00000%"}
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Douglas Bergman for providing the number of events in different bins for the HiRes results. We also thank Venya Berezinsky, James Cronin, Masahiro Teshima, Mario Vietri and Alan Watson for a number of helpful comments. This work was supported in part by the NSF through grant AST-0071235 and DOE grant DE-FG0291-ER40606 at the University of Chicago.
[99]{}
K. Greisen, Phys. Rev. Lett. 16 (1966) 748; G. T. Zatsepin and V. A. Kuzmin, Sov. Phys. JETP Lett. 4 (1966) 78
V. S. Berezinsky, P. Blasi and A. Vilenkin, Phys. Rev. D [**58**]{} (1998) 103515
P. Bhattacharjee and G. Sigl, Phys. Rept. 327 (2000) 109
A. Olinto, Phys. Rep. 333 (2000) 329
A.A. Watson, Phys. Rep., 333 (2000) 309
M. Takeda et al., Phys. Rev. Lett. 81 (1998) 1163; M. Takeda et al. preprint astro-ph/9902239 N. Hayashida et al. Phys. Rev. Lett. 73 (1994) 3491; D. J. Bird et al. Astrophys J. 441 (1995) 144; Phys. Rev. Lett. 71 (1993) 3401; Astrophys. J. 424 (1994) 491; M. A. Lawrence, R. J. O. Reid and A. A. Watson, J. Phys. G. Nucl. Part. Phys. 17 (1991) 773; N. N. Efimov et al., Ref. Proc. International Symposium on [ *Astrophysical Aspects of the Most Energetic Cosmic Rays*]{}, eds. M. Nagano and F. Takahara (World Scientific, Singapore, 1991), p. 20
S. C. Corbató et al., Nucl. Phys. B (Proc. Suppl.) 28B (1992) 36
T. Abu-Zayyad, et al., preprint astro-ph/0208301
T. Abu-Zayyad, et al., preprint astro-ph/0208243
J. W. Cronin, Proceedings of ICRC 2001 (2001).
see http://www.euso-mission.org
R. E. Streitmatter, Proc. of [*Workshop on Observing Giant Cosmic Ray Air Showers from $>10^{20}$ eV Particles from Space*]{}, eds. J. F. Krizmanic, J. F. Ormes, and R. E. Streitmatter (AIP Conference Proceedings 433, 1997).
M. Blanton, P. Blasi and A. V. Olinto, Astropart. Phys. [**15**]{} (2001) 275
G. R. Blumenthal, Phys. Rev. [**D1**]{} (1970) 1596.
M. J. Chodorowski, A. A. Zdziarski, and M. Sikora, Astrophys. J. [ **400**]{} (1992) 181.
T. Stanev, R. Engel, A. Mucke, R. J. Protheroe and J. P. Rachen, Phys. Rev. D [**62**]{} (2000) 093005.
D.E. Groom et al., The European Physical Journal C[**15**]{} (2000) 1. [ http://pdg.lbl.gov/\~sbl/gammap\_total.dat]{}
V. Berezinsky and S. Grigorieva, Astron. Astroph. 199 (1988) 1
V. S. Berezinsky, S.V. Bulanov, V. A. Dogiel, V. L. Ginzburg, and V. S. Ptuskin, Astrophysics of Cosmic Rays, (Amsterdam: North Holland, 1990)
V. Berezinsky, A. Z. Gazizov and S. Grigorieva, preprint astro-ph/0210095
V. Berezinsky, A. Z. Gazizov and S. Grigorieva, preprint hep-ph/0204357.
M. Takeda et al., preprint astro-ph/0209422
J.N. Bahcall and E. Waxman, preprint hep-ph/0206217
---
abstract: 'We prove that the variation in a smooth projective family of varieties admitting a good minimal model forms a lower bound for the Kodaira dimension of the base, if the dimension of the base is at most five and its Kodaira dimension is non-negative. This gives an affirmative answer to the conjecture of Kebekus and Kovács for base spaces of dimension at most five.'
address: 'Behrouz Taji, University of Notre Dame, Department of Mathematics, 278 Hurley, Notre Dame, IN 46556 USA.'
author:
- Behrouz Taji
bibliography:
- 'bibliography/general.bib'
title: Remarks on the Kodaira dimension of base spaces of families of manifolds
---
Introduction and main results {#sect:Section1-Introduction}
=============================
Introduction
------------
A conjecture of Shafarevich [@Shaf63] and Viehweg [@Viehweg01] predicted that if the geometric general fiber of a smooth projective family $f_U:U\to V$ is canonically polarized, then $\kappa(V) = \dim(V)$, assuming that $f_U$ has *maximal variation*. The problem was subsequently generalized to the case of fibers with good minimal models, cf. [@Vie-Zuo01] and [@PS15]. This conjecture is usually referred to as the *Viehweg Hyperbolicity Conjecture* and has been recently settled in full generality.
Generalizing Viehweg’s conjecture, in their groundbreaking series of papers, Kebekus and Kovács predicted that variation in the smooth family $f_U$, which we denote by $\operatorname{Var}(f_U)$, should be closely connected to $\kappa(V)$, even when it is *not* maximal.
\[conj:kk0\] Let $f_U: U\to V$ be a smooth projective family whose general fiber admits a good minimal model [^1]. Then, either
1. $\kappa(V) = -\infty$ and $\operatorname{Var}(f_U) < \dim V$, or
2. $\kappa(V)\geq 0$ and $\operatorname{Var}(f_U)\leq \kappa(V)$.
Once the family $f_U$ arises from a moduli functor with an algebraic coarse moduli scheme, then an even stronger version of Conjecture \[conj:kk0\], that is due to Campana, can be verified (see Section \[sect:Section5-Future\]). However, the main focus of this paper is to study Conjecture \[conj:kk0\], when the family $f_U$ is not associated with a well-behaved moduli functor, even after running a relative minimal model program. Most families of higher dimensional projective manifolds that are not of general type but have pseudo-effective canonical bundle belong to this category.
The following theorem is the main result of this paper.
\[thm:main\] Conjecture \[conj:kk0\] holds when $\dim(V)\leq 5$.
When dimension of the base and fibers are equal to one, Viehweg’s hyperbolicity conjecture (or equivalently Conjecture \[conj:kk0\]) was proved by Parshin [@Parshin68], in the compact case, and in general by Arakelov [@Arakelov71]. For higher dimensional fibers and assuming that $\dim(V)=1$, this conjecture was confirmed by Kovács [@Kovacs00a], in the canonically polarized case (see also [@Kovacs02]), and by Viehweg and Zuo [@Vie-Zuo01] in general. Over Abelian varieties Viehweg’s conjecture was solved by Kovács [@Kovacs97c]. When $\dim(V)=2$ or $3$, it was resolved by Kebekus and Kovács, cf. [@KK08] and [@KK10]. In the compact case it was settled by Patakfalvi [@MR2871152]. Viehweg’s conjecture was finally solved in complete generality by the fundamental work of Campana and Păun [@CP16] and more recently by Popa and Schnell [@PS15]. For the more analytic counterparts of these results please see [@Vie-Zuo03a], [@Sch12], [@TY15], [@BPW], [@TY16], [@PTW] and [@Deng].
By using the solution of Viehweg’s hyperbolicity conjecture, one can reformulate Conjecture \[conj:kk0\] as follows.
\[conj:kk\] Let $f_U : U \to V$ be a smooth family of projective varieties and $(X,D)$ a smooth compactification of $V$ with $V \cong X\smallsetminus D$. Assume that the geometric general fiber of $f_U$ has a good minimal model. Then, the inequality
$$\operatorname{Var}(f_U) \leq \kappa(X, D)$$ holds, if $\kappa(X,D)\geq 0$.
When $f_U$ is canonically polarized, Conjecture \[conj:kk\] was confirmed by Kebekus and Kovács in [@KK08], assuming that $\dim(X)=2$, in [@KK10], if $\dim(X)=3$, and as a consequence of [@taji16] in full generality. The latter result establishes an independent, but closely related, conjecture of Campana; the so-called *Isotriviality Conjecture*.
Campana’s conjecture (Conjecture \[conj:iso\]) predicts that once the fibers of $f_U$ are canonically polarized (or more generally have semi-ample canonical bundle, assuming that $f_U$ is polarized with a fixed Hilbert polynomial), then $f_U$ is isotrivial, if $V$ is *special*. We refer to Section \[sect:Section5-Future\] for more details on the notion of special varieties and various other particular cases where the Isotriviality Conjecture and Conjecture \[conj:kk\] can be confirmed.
Brief review of the strategy of the proof
-----------------------------------------
A key tool in proving Conjecture \[conj:kk\], in the canonically polarized case, is the celebrated result of Viehweg and Zuo [@Vie-Zuo01 Thm. 1.4.(i)] on the existence of an invertible subsheaf ${\scr{L}}\subseteq \Omega^{\otimes i}_X\log(D)$, for some $i\in {\mathbb{N}}$, whose Kodaira dimension verifies the inequality:
$$\label{eq:VZ-ineq}
\kappa(X, {\scr{L}}) \geq \operatorname{Var}(f_U).$$
The sheaf ${\scr{L}}$ is usually referred to as a *Viehweg-Zuo subsheaf*. In general, once the canonically polarized condition is dropped, in the absence of a well-behaved moduli functor associated to the family, the approach of [@Vie-Zuo01] cannot be directly applied. Nevertheless, we show that one can still construct a subsheaf of $\Omega_X^{\otimes i}\log(D)$ arising from the variation in $f_U$, as long as the geometric general fiber of $f_U$ has a good minimal model. But in this more general context the sheaf ${\scr{L}}$ injects into $\Omega^{\otimes i}_X\log(D)$ only after it is twisted by some pseudo-effective line bundle ${\scr{B}}$. As such we can no longer guarantee that this (twisted) subsheaf verifies the inequality (\[eq:VZ-ineq\]).
\[thm:VZ\] Let $f_U: U\to V$ be a smooth non-isotrivial family whose general fiber admits a good minimal model. Let $(X, D)$ be a smooth compactification of $V$. There exist a positive integer $i$, a line bundle ${\scr{L}}$ on $X$, with $\kappa(X, {\scr{L}}) \geq \operatorname{Var}(f_U)$, and a pseudo-effective line bundle ${\scr{B}}$ with an inclusion $${\scr{L}}\otimes {\scr{B}}\subseteq \Omega_X^{\otimes i}\log(D).$$
To mark the difference between the two cases we refer to these newly constructed sheaves as *pseudo-effective Viehweg-Zuo subsheaves*.
For the proof of Theorem \[thm:VZ\] we follow the general strategy of [@VZ02] and more recently [@PS15], where one combines the positivity properties of direct images of relative dualizing sheaves with the negativity of curvature of Hodge metrics along certain subsheaves of variation of Hodge structures to construct subsheaves of $\Omega_X^{\otimes i}\log(D)$ that are sensitive to the variation in the birational structure of the members of the family. We note that, in our constructions, we do not make use of the theory of Hodge modules. In particular Saito’s decomposition theorem will not be used.
It is worth pointing out that when $\dim(X) = \operatorname{Var}(f_U)$, ${\scr{L}}$ in Theorem \[thm:VZ\] is big and we have $\kappa(X, {\scr{L}}\otimes {\scr{B}}) = \dim(X)$. Thus, in this case, Theorem \[thm:VZ\] coincides with the result of [@PS15 Thm. B] and [@VZ02 Thm. 1.4.(i)] (the latter in the canonically polarized case), while providing a simpler proof.
When variation is not maximal, Theorem \[thm:VZ\] is notably different from—and in some sense weaker than—the theorem of Viehweg and Zuo [@VZ02 Thm. 1.4.(i)] in the canonically polarized case. The reason for this difference is that without a reasonable functor associated to the family, the existence of Viehweg-Zuo subsheaves can no longer be extracted from their construction at the level of moduli stacks, where the variation is maximal. We refer the reader to Section \[sect:Section5-Future\] for more details and a brief review of the cases where this difficulty can be overcome.
Once Theorem \[thm:VZ\] is established, it remains to trace a connection between $\kappa({\scr{L}})$ and $\kappa(X,D)$. In the maximal variation case, Viehweg-Zuo subsheaves are guaranteed to be big (as soon as the geometric general fiber has a good minimal model). In this case a key result of Campana and Păun then implies that $\kappa(X,D) = \dim(X)$, cf. [@CP16 Thm 7.11]. But when $\operatorname{Var}(f_U)<\dim(X)$, as ${\scr{L}}$ is not big, this strategy can no longer be applied. Instead, our proof of Theorem \[thm:main\] relies on the following vanishing result.
\[thm:vanishing\] Let $(X,D)$ be a pair consisting of a smooth projective variety $X$ of dimension at most equal to $5$ and a reduced effective divisor $D$ with simple normal crossing support. If $\kappa(X, D)\geq 0$, then the equality
$$\label{eq:CamVanishing}
H^0 \big(X, \Omega^{\otimes i}_X\log(D) \otimes ({\scr{L}}\otimes {\scr{B}})^{-1} \big) = 0$$
holds, assuming that ${\scr{B}}$ is pseudo-effective and $\kappa({\scr{L}}) > \kappa(X,D)$.
Theorem \[thm:main\] is now an immediate consequence of Theorems \[thm:vanishing\] and \[thm:VZ\].
In the presence of a flat Kähler-Einstein metric, or assuming that the main conjectures of the minimal model program hold, various analogues of Theorem \[thm:vanishing\] can be verified. When $D = 0$ and $c_1(X)=0$, and ${\scr{B}}\cong {\scr{O}}_X$, Theorem \[thm:vanishing\] follows, in all dimensions, directly from Yau’s solution to Calabi’s conjecture [@MR0451180] and the Bochner formula. When ${\scr{B}}\cong {\scr{O}}_X$ and $D=0$, the vanishing in Theorem \[thm:vanishing\] was conjectured by Campana in [@Ca95] where he proved that it holds for an $n$-dimensional variety $X$, if the Abundance Conjecture holds in dimension $n$. As was shown by Campana, such vanishing results are closely related to compactness properties of the universal cover of algebraic varieties. We refer to [@Ca95] for details of this very interesting subject (see also the book of Kollár [@Kollar95s] and [@Kollar93]). We also invite the reader to consult [@CP16 Thm. 7.3] where the authors successfully deal with a similar problem with $\kappa$ replaced by another invariant $\nu$; the latter being the numerical Kodaira dimension.
Structure of the paper
----------------------
In Section \[sect:Section2-Hodge\] we provide the preliminary constructions needed for the proof of Theorem \[thm:VZ\]. The proof of Theorem \[thm:VZ\] appears in Section \[sect:Section3-VZ\]. In Section \[sect:Section4-Birational\] we prove the vanishing result; Theorem \[thm:vanishing\]. Section \[sect:Section5-Future\] is devoted to further results and related problems, including a discussion on the connection between Conjecture \[conj:kk\] and a conjecture of Campana.
Acknowledgements
----------------
I would like to thank Sándor Kovács, Frédéric Campana and Mihnea Popa for their interest and suggestions. I am also grateful to Christian Schnell for pointing out a mistake in an earlier draft of this paper.
Preliminary results and constructions {#sect:Section2-Hodge}
=====================================
Our aim in this section is to establish two key background results that we will need in order to construct the pseudo-effective Viehweg-Zuo subsheaves in the next section.
Positivity of direct images of relative dualizing sheaves
---------------------------------------------------------
In [@Kawamata85 Thm 1.1] Kawamata shows that, assuming that the general fiber admits a good minimal model, for any algebraic fiber space $f: Y\to X$ of smooth projective varieties $Y$ and $X$, the inequality
$$\label{eq:KAW}
\kappa\Big(X, \big( \det(f_*\omega^m_{Y/X}) \big)^{**} \Big) \geq \operatorname{Var}(f),$$
holds, for all sufficiently large integers $m\geq 1$.
One can use (\[eq:KAW\]) to extract positivity results for $f_*\omega^m_{Y/X}$. We refer the reader to [@Vie-Zuo03a], [@VZ02] and [@PS15] for the case where $\operatorname{Var}(f) = \dim(X)$ (see also [@Kawamata85] and [@Viehweg83]). The key ingredient is the fiber product trick of Viehweg, where, given $f: Y\to X$ as above, one considers the $r$-fold fiber product
$$Y^r : = \underbrace{Y\times_X Y \times_X \ldots \times_X Y}_{\text{$r$ times}}.$$ We denote by $Y^{(r)}$ a desingularization of $Y^{r}$ and the resulting morphism by $f^{(r)}: Y^{(r)} \to X$. The next proposition is the extension of the arguments of [@PS15 pp. 708–709] to the case where variation is not maximal.
\[prop:positivity\] Let $f: Y\to X$ be a fiber space of smooth projective varieties $Y$ and $X$. If the general geometric fiber admits a good minimal model, then, for every sufficiently large $m\geq 1$, there exists $r:= r(m)\in {\mathbb{N}}$ and a line bundle ${\scr{L}}$ on $X$, with $\kappa(X, {\scr{L}})\geq \operatorname{Var}(f)$, and an inclusion
$$\label{eq:include}
{\scr{L}}^m \subseteq f_*^{(r)} \big( \omega^m_{Y^{(r)}/ X} \big)$$
which holds in codimension one.
According to [@Viehweg83 Prop. 6.1] there is a finite, flat and Galois morphism $\gamma: X_1 \to X$, with $G:= \operatorname{Gal}(X_1/ X)$, such that the induced morphism $f_1: Y_1 \to X_1$ from the $G$-equivariant resolution $Y_1$ of the fiber product $Y \times_{X} X_1$ is semistable in codimension one:
$$\xymatrix{
Y_1 \ar[r] \ar[dr]_{f_1} & Y \times_{X} X_1 \ar[d] \ar[rr] && Y \ar[d]^f \\
& X_1 \ar[rr]^{\gamma}_{\text{flat and Galois}} && X.
}$$
By [@Viehweg83 Sect. 3] (see also [@Mori87 Thm 4.10]), for any $m\in {\mathbb{N}}$, there is an inclusion $$(f_1)_*\omega^m_{Y_1/X_1} \subseteq \gamma^*(f_*\omega^m_{Y/X})$$ and thus $\det((f_1)_*\omega^m_{Y_1/X_1}) \subseteq \det(\gamma^*(f_*\omega^m_{Y/X}))$. Let us define
$${\scr{B}}_1 : = \det((f_1)_*\omega^m_{Y_1/X_1}) \; \; \; \text{and} \; \; \; {\scr{B}}:= \det(\gamma^*(f_*\omega^m_{Y/X})).$$ We can allow ourselves to remove codimension two subsets from $X$. In particular we may assume that ${\scr{B}}_1$ and ${\scr{B}}$ are locally free and that
$$\label{eq:pullback}
{\scr{B}}_1 \subseteq \gamma^*({\scr{B}}).$$
Next, we observe that, as $\Omega^1_{X_1}$ and $\Omega^1_{Y_1}$ are naturally equipped with the structure of $G$-sheaves (or linearized sheaves), $\omega^m_{Y_1/X_1}$ is also a $G$-sheaf. It follows that $(f_1)_*\omega^m_{Y_1/X_1}$ is a $G$-sheaf. The inclusion (\[eq:pullback\]) then implies that ${\scr{B}}_1$ descends, that is there is a line bundle ${\scr{L}}$ on $X$ such that ${\scr{B}}_1 \cong \gamma^*({\scr{L}})$, cf. [@MR2665168 Thm. 4.2.15]. Moreover, as $\gamma$ is Galois, it follows that $\kappa(X_1, {\scr{B}}_1) = \kappa(X, {\scr{L}})$. Note that we also have $\kappa({\scr{B}}_1)\geq \operatorname{Var}(f)$, again by using [@Kawamata85 Thm 1.1].
Our aim is now to show that ${\scr{L}}$ is the line bundle admitting the inclusion (\[eq:include\]). To this end, let $t: = \mathrm{rank}\bigl( (f_1)_* \omega^m_{Y_1/X_1} \bigr)$ and consider the injection $${\scr{B}}_1^m {\lhook\joinrel\longrightarrow}\bigotimes^{(t+m)} (f_1)_* \omega^m_{Y_1/X_1}.$$ Set $r:= (t+m)$. Let $Y_1^{(r)}$ be a desingularization of the $r$-fold fiber product $Y_1\times_{X_1} \ldots \times_{X_1} Y_1$ such that the resulting morphism from $Y_1^{(r)}$ to $Y^r$ factors through the desingularization map $Y^{(r)}\to Y^{r}$. Let $f_1^{(r)} : Y_1^{(r)} \to X_1$ be the induced map. Using the fact that $f_1$ is semistable in codimension one (and remembering that we are arguing in codimension one), thanks to [@Viehweg83 Lem. 3.5], we have
$$(f_1^{(r)})_*\big( \omega^m_{Y_1^{(r)}/ X_1} \big) = \bigotimes^r (f_1)_* \big( \omega^m_{Y_1/X_1} \big).$$
On the other hand, since $\gamma$ is flat, according to [@Viehweg83 Lem. 3.2], we have
$$(f_1^{(r)})_* \big( \omega^m_{Y_1^{(r)} / X_1} \big) \subseteq \gamma^* \big( f^{(r)}_* \omega^m_{Y^{(r)}/ X} \big),$$ where $Y^{(r)}$ is a desingularization of $Y^r$ with the induced map $f^{(r)} : Y^{(r)} \to X$. Therefore, there is an injection
$${\scr{B}}_1^m = \gamma^*({\scr{L}}^m) {\lhook\joinrel\longrightarrow}\gamma^*\big( f_*^{(r)} \omega^m_{Y^{(r)}/ X} \big).$$ By applying the $G$-invariant section functor $\gamma_*(\cdot)^G$ to both sides we find the required injection
$${\scr{L}}^m {\lhook\joinrel\longrightarrow}f_*^{(r)} \omega^m_{Y^{(r)}/ X}.$$
Hodge theoretic constructions {#subsect:Hodge}
-----------------------------
Let $f: Y \to X$ be a surjective, projective morphism of smooth quasi-projective varieties $Y$ and $X$ of relative dimension $n$ and let $D=\operatorname{disc}(f)$ be the discriminant locus of $f$. Set $\Delta: = f^*(D)$ and assume that the support of $D$ and $\Delta$ is simple normal crossing.
Let ${\scr{W}}$ be a coherent sheaf on $X$. We call the graded torsion free sheaf ${\scr{F}}= \bigoplus {\scr{F}}_{\bullet}$ a system with ${\scr{W}}$-valued operator, if it can be equipped with a sheaf morphism $\tau: {\scr{F}}\to {\scr{F}}\otimes {\scr{W}}$ satisfying the Griffiths transversality condition $\tau|_{{\scr{F}}_{\bullet}} : {\scr{F}}_{\bullet} \to {\scr{F}}_{\bullet+1}\otimes {\scr{W}}$.
Throughout the rest of this section we will allow ourselves to discard closed subsets of $X$ of $\operatorname{codim}_X\geq 2$, whenever necessary.
Our goal is to construct a system $({\scr{F}}, \tau)$ with $\Omega^1_X\log(D)$-operator $\tau$ which can be equipped with compatible maps to a system of Hodge bundles underlying the variation of Hodge structures of a second family that arises from $f$ via certain covering constructions.
To this end, let ${\scr{M}}$ be a line bundle on $Y$ with $H^0(Y, {\scr{M}}^m) \neq 0$. Let $\psi: Z\to Y$ be a desingularization of the finite cyclic covering associated to taking roots of a non-zero section $s\in H^0(Y, {\scr{M}}^m)$ so that
$$\label{eq:NV}
H^0(Z, \psi^*{\scr{M}})\neq 0,$$
cf. [@Laz04-I Prop. 4.1.6].
Now, consider the exact sequence of relative differential forms
$$\label{eq:exact}
\xymatrix{
0 \ar[r] & f^*(\Omega^1_X \log (D)) \ar[r] & \Omega^1_Y\log(\Delta) \ar[r] & \Omega^1_{Y/X}\log(\Delta) \ar[r] & 0,
}$$
which is locally free over $X$ (after removing a codimension two subset of $X$). Define $h:= f\circ \psi$. Using the pullback of (\[eq:exact\]) the bundle $\psi^*(\Omega^{\bullet}_Y\log(\Delta))$ can be filtered by a decreasing filtration $F_{i}^{\bullet}$, $i\geq 0$, with $F^{\bullet}_i /F^{\bullet}_{i+1} \cong \psi^*(\Omega_{Y/X}^{\bullet -i}\log(\Delta)) \otimes h^*(\Omega^i_X\log (D) )$. After taking the quotient by $F^{\bullet}_2$, the exact sequence corresponding to $F^{\bullet}_0 / F^{\bullet}_1 \cong \psi^*( \Omega^{\bullet}_{Y/ X}\log(\Delta) )$ reads as
$$\label{eq:long}
\xymatrix{
0 \ar[r] & {\psi^*( \Omega_{Y/X}^{\bullet -1} \log(\Delta))} \otimes h^*( \Omega^1_X\log(D)) \ar[r] & F^{\bullet}_0/F^{\bullet}_2 \ar[r] & \psi^*(\Omega^{\bullet}_{Y/X} \log(\Delta) ) \ar[r] & 0,
}$$
where ${\psi^*( \Omega_{Y/X}^{\bullet -1} \log(\Delta) )} \otimes h^*( \Omega^1_X\log(D))\cong F^{\bullet}_1/F^{\bullet}_2$ and
$$\label{eq:1st}
F^{\bullet}_0/F^{\bullet}_2 \cong \psi^*(\Omega^{\bullet}_Y\log(\Delta))/ \big( h^* \Omega^2_X\log(D) \otimes
\psi^*\Omega^{\bullet-2}_{Y/X}\log(\Delta) \big).$$
We tensor the sequence (\[eq:long\]) by $\psi^*({\scr{M}}^{-1})$ to get the exact sequence
$$\begin{multlined}
A_{\bullet}: \; \; 0 \to \psi^*(\Omega_{Y/X}^{\bullet -1} \otimes {\scr{M}}^{-1}) \otimes h^*( \Omega^1_X\log(D)) \to
(F^{\bullet}_0/F^{\bullet}_2)\otimes \psi^*({\scr{M}}^{-1}) \to \\
\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \longrightarrow \psi^*(\Omega^{\bullet}_{Y/X} \log(\Delta) \otimes {\scr{M}}^{-1}) \to 0.
\end{multlined}$$
Now, define a $\Omega^1_X\log(D)$-valued system $({\scr{F}}= \bigoplus {\scr{F}}_{\bullet}, \tau)$ of weight $n$ on $X$ by
$$\label{eq:Fsystem}
{\scr{F}}_{\bullet} : = \operatorname{R}^{\bullet} h_* \big( \psi^*( \Omega^{n-\bullet}_{Y/X} \log(\Delta) \otimes {\scr{M}}^{-1} ) \big) / {\text{torsion}},$$
whose map $\tau$ is determined by $\tau|_{{\scr{F}}_{\bullet}}: {\scr{F}}_{\bullet} \to {\scr{F}}_{\bullet+1} \otimes \Omega^1_X\log(D)$ being equal to the corresponding morphism in the relative log complex $\operatorname{\mathbf{R}}h_{*}(A_{n-\bullet})$.
Next, let $\operatorname{disc}(h) = (D+S)$. By removing a subset of $X$ of $\operatorname{codim}_X\geq 2$ we may assume that $(D+S)$ has simple normal crossing support. We further assume that, after replacing $Z$ by a higher birational model, if necessary, the divisor $h^*(D+S)$ has simple normal crossing support, which we denote by $\Delta'$.
Similar to the above construction, we can consider the exact sequence
$$B_{\bullet} : \; \; 0 \to \Omega^{\bullet -1}_{Z/X} \log(\Delta') \otimes h^*(\Omega_X^1\log(D+S))
\to G^{\bullet}_0/{G^{\bullet}_2} \to \Omega^{\bullet}_{Z/X}\log(\Delta') \to 0,$$ with $G^{\bullet}_i$ being the decreasing filtration associated to the sequence
$$\xymatrix{
0 \ar[r] & h^*\Omega^1_X\log(D+S) \ar[r] & \Omega^1_Z\log(\Delta') \ar[r] & \Omega^1_{Z/X} \log(\Delta') \ar[r] & 0.
}$$ In this case we have
$$\label{eq:2nd}
G^{\bullet}_0 / G^{\bullet}_2 \cong \Omega_Z^{\bullet}\log(\Delta') / \big( h^*\Omega^2_X\log(D+S) \otimes \Omega_{Z/X}^{\bullet-2}
\log(\Delta') \big).$$
Now, let $({\scr{E}}, \theta)$ with $\theta: {\scr{E}}\to {\scr{E}}\otimes \Omega^1_X\log(D+S)$ be the logarithmic Higgs bundles underlying the canonical extension of the local system $\operatorname{R}^n h_*({\mathbb{C}}_{Z\smallsetminus \Delta'})$, whose residues have eigenvalues contained in $[0,1)$, cf. [@Deligne70 Prop. I.5.4]. According to [@Ste76 Thm. 2.18] the associated graded module of the Hodge filtration induces a structure of a Hodge bundle on $({\scr{E}}, \theta)$ whose gradings ${\scr{E}}_{\bullet}$ are given by
$$\label{eq:Esystem}
{\scr{E}}_{\bullet} = \operatorname{R}^{\bullet} h_* \big( \Omega^{n-\bullet}_{Z/X} \log(\Delta') \big),$$
with morphisms $\theta: {\scr{E}}_{\bullet} \to {\scr{E}}_{\bullet +1}\otimes \Omega^1_X\log(D+S)$ determined by those in the complex $\operatorname{\mathbf{R}}h_*(B_{n-\bullet})$.
Our aim is now to construct a morphism from $({\scr{F}}, \tau)$ to $({\scr{E}}, \theta)$ that is compatible with $\tau$ and $\theta$. To this end, we observe that the non-vanishing (\[eq:NV\]) implies that there is a natural injection
$$\psi^*\big( \Omega^{n-\bullet}_{Y/X} \log(\Delta) \otimes {\scr{M}}^{-1} \big)
{\lhook\joinrel\longrightarrow}\Omega^{n-\bullet}_{Z/X} \log(\Delta').$$ Together with the two isomorphisms (\[eq:1st\]) and (\[eq:2nd\]) it then follows that the induced map
$$\frac{F^{n-\bullet}_0}{F^{n - \bullet}_2} \otimes \psi^*({\scr{M}}^{-1} ) \longrightarrow \frac{G^{n-\bullet}_0}{G^{n-\bullet}_2}$$ is an injection. Therefore, the sequence defined by $A_{n-\bullet}$ is a subsequence of $B_{n-\bullet}$, inducing a morphism of complexes between $\operatorname{\mathbf{R}}h_*(A_{n-\bullet})$ and $\operatorname{\mathbf{R}}h_*(B_{n-\bullet})$. In particular there is a sheaf morphism
$$\Phi_{\bullet}: \underbrace{\operatorname{R}^{\bullet} h_* \big( \psi^*(\Omega^{n-\bullet}_{Y/X} \log(\Delta)) \otimes {\scr{M}}^{-1} \big)/{\text{torsion}}}_{{\scr{F}}_{\bullet}}
\longrightarrow \underbrace{\operatorname{R}^{\bullet} h_* \big( \Omega^{n-\bullet}_{Z/X} \log(\Delta') \big)}_{{\scr{E}}_{\bullet}}.$$ The compatibility of $\Phi_{\bullet}$, with respect to $\tau$ and $\theta$, follows from the fact that, by construction, each $\tau|_{{\scr{F}}_{\bullet}}$ is defined by the corresponding morphism in the complex $\operatorname{\mathbf{R}}h_*(A_{n-\bullet})$;
$$\xymatrix{
\operatorname{\mathbf{R}}h_*(A_{n-\bullet})/\text{torsion} : & ... \ar[r] & {\scr{F}}_{\bullet} \ar[rr]^(0.35){\tau} \ar[d]^{\Phi_{\bullet}} &&
{\scr{F}}_{\bullet+1}\otimes \Omega^1_X\log(D) \ar[d]^{\Phi_{\bullet+1}} \ar[r] & ...\\
\operatorname{\mathbf{R}}h_*(B_{n-\bullet}) : & ... \ar[r] & {\scr{E}}_{\bullet} \ar[rr]^(0.3){\theta} &&
{\scr{E}}_{\bullet+1}\otimes \Omega^1_X\log(D+S) \ar[r] & ... \; .
}$$
Constructing pseudo-effective Viehweg-Zuo subsheaves {#sect:Section3-VZ}
====================================================
In the current section we will prove Theorem \[thm:VZ\]. We will be working in the context of the following set-up.
\[setup\] Let $f: Y\to X$ be a smooth compactification of the smooth projective family $f_U: U\to V$ whose general fiber has a good minimal model and $\operatorname{Var}(f) \neq 0$. Set $D$ to be the divisor defined by $X\smallsetminus D\cong V$ and $\Delta = \operatorname{Supp}(f^*D)$ (both $D$ and $\Delta$ are assumed to have simple normal crossing support).
\[prop:Hodge\] In the situation of Set-up \[setup\], after removing a subset of $X$ of $\operatorname{codim}_X\geq 2$, the following constructions and properties can be verified.
1. \[item:H1\] There exists a system of logarithmic Hodge bundles $({\scr{E}}= \bigoplus {\scr{E}}_{\bullet}, \theta)$ on $X$ with $\theta: {\scr{E}}_{\bullet} \to {\scr{E}}_{\bullet + 1}\otimes \Omega^1_{X}\log (D+S)$.
2. \[item:H2\] The torsion free sheaf $\ker(\theta|_{{\scr{E}}_{\bullet}})$ is seminegatively curved.
3. \[item:H3\] There exists a subsystem $({\scr{G}}= \bigoplus {\scr{G}}_{\bullet}, \theta)$ of $({\scr{E}}, \theta)$ such that $\theta({\scr{G}}_{\bullet}) \subseteq {\scr{G}}_{\bullet+1} \otimes \Omega^1_X\log(D)$.
4. \[item:H4\] There is a line bundle ${\scr{L}}$ on $X$, with $\kappa(X, {\scr{L}})\geq \operatorname{Var}(f)$, equipped with an injection ${\scr{L}}{\lhook\joinrel\longrightarrow}{\scr{G}}_0$.
Here, and following the terminology introduced in [@MR2449950 p. 357] by Berndtsson and Păun, we say that a torsion free sheaf ${\scr{N}}$ is seminegatively curved if it carries a seminegatively curved (possibly singular) metric $h$ over its smooth locus $X_{{\scr{N}}}$ (where ${\scr{N}}$ is locally free). We refer to [@MR2449950] for more details (see also the survey paper [@Pau16]). An immediate consequence of this property is the fact that, once it is satisfied, $\det({\scr{N}})$ extends to an anti-pseudo-effective line bundle on the projective variety $X$, and this is all that is needed for our purposes in the current paper.
Granting Proposition \[prop:Hodge\] for the moment let us proceed with the proof of Theorem \[thm:VZ\].
*Proof of Theorem \[thm:VZ\].* Let $X^\circ\subseteq X$ be the open subset over which Items \[item:H1\]–\[item:H4\] in Proposition \[prop:Hodge\] are valid. By iterating the morphism $$\theta\otimes \operatorname{id}: {\scr{G}}_j \otimes \Omega^{\otimes j}_{X^\circ} \log(D) \longrightarrow
{\scr{G}}_{j+1} \otimes \Omega^{\otimes(j+1)}_{X^\circ} \log(D),$$ for every $j \in {\mathbb{N}}$, we can construct a map
$$\theta^j : {\scr{G}}_{0} \longrightarrow {\scr{G}}_j \otimes \Omega^{\otimes j}_{X^\circ} \log(D).$$
Note that $\theta({\scr{G}}_0)\neq 0$. Otherwise, there is an injection
$${\scr{L}}|_{X^\circ} {\lhook\joinrel\longrightarrow}\ker(\theta|_{{\scr{G}}_0}).$$ On the other hand, by construction we have $\ker(\theta|_{{\scr{G}}_0}) \subseteq \ker(\theta|_{{\scr{E}}_0})$ and according to Item \[item:H2\] $\ker(\theta|_{{\scr{E}}_0})$ is seminegatively curved. This implies that ${\scr{L}}$ is anti-pseudo-effective on $X$ and therefore $\kappa(X, {\scr{L}})\leq 0$, which, in view of Item \[item:H4\], contradicts our assumption that $\operatorname{Var}(f)\neq 0$.
Now, let $k$ be the positive integer defined by $k = \max\{ j \in {\mathbb{N}}\; | \; \theta^j({\scr{G}}_0) \neq 0 \}$, so that $$\theta^k ({\scr{G}}_0) \subset \underbrace{\ker(\theta|_{{\scr{G}}_k})}_{{\scr{N}}_k} \otimes \Omega^{\otimes k}_{X^\circ} \log(D).$$ From Item \[item:H4\] it follows that there is a non-trivial morphism
$${\scr{L}}\longrightarrow {\scr{N}}_k \otimes \Omega^{\otimes k}_{X^\circ} \log(D),$$ which implies the existence of a non-zero map
$$\label{eq:FinalInject}
{\scr{L}}\otimes \big(\det({\scr{N}}_k) \big)^{-1} \longrightarrow \Omega^{\otimes i}_{X^\circ} \log(D),$$
for some $i\in {\mathbb{N}}$. Now, let ${\scr{B}}$ be the line bundle on $X$ defined by the extension of $\det({\scr{N}}_k)^{-1}$ so that (\[eq:FinalInject\]) extends to the injection
$${\scr{L}}\otimes {\scr{B}}{\lhook\joinrel\longrightarrow}\Omega^{\otimes i}_X \log(D).$$
It remains to verify that ${\scr{B}}$ is pseudo-effective. But again, according to Item \[item:H2\], the torsion free sheaf ${\scr{N}}_{k} \subseteq \ker(\theta|_{{\scr{E}}_k})$ is seminegatively curved and therefore so is $\det({\scr{N}}_k)$, which thus extends to an anti-pseudo-effective line bundle ${\scr{B}}^{-1}$ on $X$.
*Proof of Proposition \[prop:Hodge\].* The proof is a consequence of Proposition \[prop:positivity\] combined with the Hodge theoretic constructions in Subsection \[subsect:Hodge\]. To lighten the notation we will replace the initial family $f: Y\to X$ in Set-up \[setup\] by $f^{(r)}: Y^{(r)}\to X$, which was constructed in Proposition \[prop:positivity\].
After removing a codimension two subset from $X$ over which the inclusion (\[eq:include\]) holds, define the line bundle $${\scr{M}}= \omega_{Y/X}(\Delta)\otimes f^*({\scr{L}}^{-1})$$ so that $H^0(Y, {\scr{M}}^m ) \neq 0$. The arguments in Subsection \[subsect:Hodge\] can now be used to construct the two systems $({\scr{E}}, \theta)$ and $({\scr{F}}, \tau)$ defined in (\[eq:Fsystem\]) and (\[eq:Esystem\]), with logarithmic poles along $(D+S)$ and $D$, respectively. Here after deleting a codimension two subset we have assumed that $\operatorname{Supp}(D+S)$ is simple normal crossing.
Item \[item:H2\] follows from Zuo’s result [@Zuo00]—based on Cattani, Kaplan and Schmid’s work on asymptotic behaviour of Hodge metrics, cf. [@CKS] and [@PTW Lem 3.2]—and Brunbarbe [@Bru15], and more generally from Fujino and Fujisawa [@FF17]. The reader may wish to consult Simpson [@MR1040197] and [@Bru17] where these problems are dealt with in the more general setting of tame harmonic metrics.
For \[item:H3\], define $({\scr{G}}= \bigoplus {\scr{G}}_{\bullet}, \theta) \subset ({\scr{E}}, \theta)$ to be the image of the system $({\scr{F}}= \bigoplus {\scr{F}}_{\bullet}, \tau)$ under the morphism $\Phi_{\bullet}$, constructed in Subsection \[subsect:Hodge\].
It remains to verify \[item:H4\]. According to the definition of $\Phi_{\bullet}$ we have
$$\Phi_0 : \operatorname{R}^0 h_* \big( \psi^*( \omega_{Y/X}(\Delta)\otimes {\scr{M}}^{-1} ) \big) \longrightarrow \operatorname{R}^0 h_*(\omega_{Z/X}(\Delta')),$$ which is an injection; for the map
$$\psi^*\big( \omega_{Y/X}(\Delta) \otimes {\scr{M}}^{-1} \big) \longrightarrow \omega_{Z/X}(\Delta')$$ is injective. Item \[item:H4\] now follows from the isomorphism $$h_*\big( \psi^*( \omega_{Y/X}(\Delta) \otimes \underbrace{\omega^{-1}_{Y/X}(\Delta) \otimes f^*{\scr{L}}}_{{\scr{M}}^{-1}} ) \big)
\cong {\scr{L}}.$$
Vanishing results {#sect:Section4-Birational}
=================
In this section we will prove Theorem \[thm:vanishing\]. The methods will heavily rely on birational techniques and results in the minimal model program. For an in-depth discussion of preliminary notions and background we refer the reader to the book of Kollár and Mori [@KM98] and the references therein.
Not surprisingly an important construction that we will repeatedly make use of is the Iitaka fibration. Please see [@Mori87 Sect. 1] and [@Laz04-I Sect. 2.1.C] for the definition and a review of the basic properties.
Let ${\scr{L}}$ be a line bundle on a normal projective variety $X$ with $\kappa(X, {\scr{L}})>0$. By $\phi^{(I)}: X^{(I)}\to Y^{(I)}$ we denote the Iitaka fibration of ${\scr{L}}$ with an induced birational morphism $\pi^{(I)}: X^{(I)} \to X$.
Proof of Theorem \[thm:vanishing\]
----------------------------------
We begin by stating the following lemma concerning the behaviour of the Kodaira dimension on fibers of the Iitaka fibration. The proof follows from standard arguments; see for example [@Laz04-I pp. 136–137].
\[lem:effective\] Let $X$ be a smooth projective variety and ${\pi^{(\operatorname{I})}}: X^{(\operatorname{I})} \to X$ a birational morphism such that $\phi^{(\operatorname{I})}: X^{(\operatorname{I})} \to Z^{(\operatorname{I})}$ is the Iitaka fibration of the line bundle ${\scr{L}}$ on $X$. Then, for any ${\pi^{(\operatorname{I})}}$-exceptional and effective divisor $E$, we have
$$\kappa\big(F^{(\operatorname{I})}, ({\pi^{(\operatorname{I})}}^*({\scr{L}}) \otimes {\scr{O}}_{X^{(\operatorname{I})}}(E))\big|_{F^{(\operatorname{I})}}\big) = 0,$$ where $F^{(\operatorname{I})}$ is a very general fibre of $\phi^{(\operatorname{I})}$.
The next lemma is the final technical background that we need before we can proceed to the proof of Theorem \[thm:vanishing\]. It relies on the so-called flattening lemma, due to Gruson and Raynaud, which we recall below.
Let $f: X\to Z$ be an algebraic fiber space of normal (quasi-)projective varieties. There exists an equidimensional fiber space $f' : X' \to Z'$ of normal varieties that is birationally equivalent to $f$ through birational morphisms $\sigma : X' \to X$ and $\tau: Z' \to Z$. We call $f'$ the *flattening* of $f$.
\[lem:flat\] Let $f: X\to Z$ be a fiber space of normal projective varieties with a flattening $f' : X' \to Z'$ as above. Let $A$ be an $f$-nef and effective ${\mathbb{Q}}$-divisor and assume that $A|_F\equiv 0$, where $F$ is the general fiber of $f$. There exists $A_{Z'}\in \operatorname{Div}(Z')_{{\mathbb{Q}}}$ such that $\sigma^*(A) \sim_{{\mathbb{Q}}} (f')^*(A_{Z'})$.
It suffices to show that $\sigma^*(A)$ is $f'$-numerically trivial. Aiming for a contradiction, assume that there exists a fiber $F_{z'}$ of $f'$ containing an irreducible contractible curve $C$ such that $\sigma^*(A)\cdot C \neq 0$. Let $A_1, \ldots, A_d$ be a collection of ample divisors in $X'$ such that $(A_1\cdot \ldots \cdot A_d)\cap F_{z'}$ defines the numerical cycle of $(C+\widetilde C)$, where $\widetilde C$ is an effective $1$-cycle in $F_{z'}$. Since $\sigma^*(A)|_{F'}\equiv 0$, where $F'$ is a general fiber of $f'$, we have
$$\label{eq:intersect}
\sigma^*(A) \cdot (C+ \widetilde C) =0.$$
On the other hand, $\sigma^*(A)$ is $f'$-nef. Combined with (\[eq:intersect\]), this implies that $\sigma^*(A)\cdot C = 0$, which contradicts our initial assumption.
*Proof of Theorem \[thm:vanishing\]. (Preparation).* Let ${\scr{L}}$ and ${\scr{B}}$ be two line bundles on $X$, with ${\scr{B}}$ being pseudo-effective, such that, for some $i\in {\mathbb{N}}$, there is a non-trivial morphism $${\scr{L}}\otimes {\scr{B}}\longrightarrow \Omega_X^{\otimes i}\log(D).$$
\[claim:CP\] The line bundle $\omega_X^i(D) \otimes {\scr{L}}^{-1}$ is pseudo-effective.
*Proof of Claim \[claim:CP\].* Let ${\scr{F}}$ be the saturation of the image of the non-trivial morphism $${\scr{L}}\otimes {\scr{B}}\longrightarrow \Omega^{\otimes i}_X\log(D)$$ and ${\scr{Q}}$ the torsion-free quotient resulting in the exact sequence $$\xymatrix{
0 \ar[r] & {\scr{F}}\ar[r] & \Omega^{\otimes i}_X\log(D) \ar[r] & {\scr{Q}}\ar[r] & 0.
}$$ After taking determinants we find that $$\label{eq:CP}
\omega_X^i(D) \otimes {\scr{F}}^{-1} \cong \det({\scr{Q}}).$$ Thanks to [@CP16 Thm. 1.3] the right-hand side of (\[eq:CP\]) is pseudo-effective and thus so is the left-hand side. This implies that $\omega^i_X(D)\otimes ({\scr{L}}\otimes {\scr{B}})^{-1} \in \overline{\operatorname{NE}}^1(X).$ But ${\scr{B}}$ is also assumed to be pseudo-effective and therefore $\omega^i_X(D)\otimes {\scr{L}}^{-1}\in \overline{\operatorname{NE}}^1(X)$. This finishes the proof of the claim.
Before we proceed further, notice that we may assume that $(i(K_X+D)+L)$ is not big, where ${\scr{L}}= {\scr{O}}_X(L)$. Otherwise the divisor $K_X+D$ is big, as it can be written as the sum of a big and a pseudo-effective divisor: $$K_X + D = \frac{1}{2i} \big( ( i\cdot(K_X+D) +L ) + ( i\cdot( K_X+D ) - L )\big).$$ We can also assume that $\kappa({\scr{L}}) \geq 1$.
The first step in proving the theorem consists of replacing $X$ by a birational model $Y$ where establishing the vanishing in Theorem \[thm:vanishing\] proves to be easier.
\[claim:KeyClaim\] There exists a birational morphism $\pi : Y\to X$ from a smooth projective variety $Y$ that can be equipped with a fiber space $f: Y \to Z$ over a smooth projective variety $Z$. Furthermore, $Y$ contains a reduced divisor $D_Y$, with $(Y,D_Y)$ log-smooth, and a divisor $L_Y$, satisfying the following properties.
1. \[item:1k\] $\dim(Z) = \kappa(i(K_X+D)+L)$,
2. \[item:2k\] ${\scr{O}}_Y(L_Y) \otimes \pi^*({\scr{B}}) \subseteq \Omega^{\otimes i}_Y\log(D_Y)$,
3. \[item:3k\] $\kappa(K_Y+D_Y) = \kappa(K_X+D)$,
4. \[item:4k\] $\kappa(L_Y) = \kappa(L)$,
5. \[item:5k\] $\kappa(F, (i(K_Y+D_Y) + L_Y)|_{F}) = 0$ and $\kappa(F, (K_Y+D_Y)|_F)=0$, where $F$ is a very general fiber of $f$,
6. \[item:6k\] $\big( i(K_Y+D_Y) - L_Y \big) \in \overline{\operatorname{NE}}^1(Y)$, and
7. \[item:7k\] $L_Y \sim_{{\mathbb{Q}}} f^*(L_Z)$, for some $L_Z\in \operatorname{Div}_{{\mathbb{Q}}}(Z)$.
*Proof of Claim \[claim:KeyClaim\].* Let $\psi : X \dashrightarrow Z$ be the rational mapping associated to the linear system $|m\cdot(i(K_X+D)+L) |$, with $m$ being sufficiently large so that $\dim(Z) = \kappa(i(K_X+D)+L)$. Note that as $\kappa(L)\geq 1$, we have $\dim(Z) \geq 1$. Let $\psi_1: X_1 \to Z_1$ be the Iitaka fibration of $(i(K_X+D)+L)$ resulting in the commutative diagram
$$\xymatrix{
X_1 \ar[d]_{\pi_1} \ar[rr]^{\psi_1} && Z_1 \ar@{-->}[d] \\
X \ar@{-->}[rr]^{\psi} && Z,
}$$ where $\pi_1: X_1 \to X$ is a birational morphism. Define $L_1: = \pi_1^*(L)$. Let $E_1$ and $E_2$ be two effective and exceptional divisors such that
$$K_{X_1} + \underbrace{\widetilde D + E_2}_{: = D_1} \sim \pi_1^*(K_X+D) + E_1,$$ where $\widetilde D$ is the birational transform of $D$. Note that Claim \[claim:CP\] implies that $\big( i\cdot (K_{X_1} + D_1 ) - L_1 \big)\in \overline{\operatorname{NE}}^1(X_1)$. Furthermore, by Lemma \[lem:effective\] we have $\kappa(F_1, (i(K_{X_1} +D_1) + L_1)|_{F_1}) = 0$, where $F_1$ is a very general fibre of $\psi_1$. On the other hand, we have
$$\kappa(F_1, L_1|_{F_1}) \leq \kappa( F_1 , (i(K_{X_1} + D_1) + L_1 )|_{F_1}) =0.$$ Therefore, we find that $\kappa(F_1, L_1|_{F_1}) =0$. As a result, and thanks to [@Mori87 Def–Thm. 1.11], the Iitaka fibration $\psi_2: X_2 \to Y$ of $L_1$ factors through the fiber space $\psi_1: X_1 \to Z_1$ via a birational morphism $\pi_2: X_2 \to X_1$ and a rational map $\nu: Z_1 \dashrightarrow Y$ (see Diagram \[eq:diag2\] below). Let $\widetilde \nu: Z_2 \to Y$ be a desingularization of $\nu$ through the birational morphism $\mu: Z_2 \to Z_1$. Finally, let $\psi_3: X_3 \to Z_2$ be a desingularization of the rational map $X_2 \dashrightarrow Z_2$, defined by the composition of $\pi_2$, $\psi_1$ and $\mu^{-1}$ (where it is defined), via the birational morphism $\pi_3: X_3 \to X_2$.
$$\label{eq:diag2}
\begin{gathered}
\xymatrix{
& X_3 \ar[d]_{\pi_3} \ar@/^11mm/[ddrrrr]^{\psi_3} \ar@/_6mm/[ddl]_{\pi} \\
& X_2 \ar[d]_{\pi_2} \ar[rr]^{\psi_2} && Y \\
X & X_1 \ar[l]^{\pi_1} \ar[rr]^{\psi_1} && Z_1 \ar@{-->}[u]^{\nu} && Z_2 \ar[ll]_{\mu} \ar[ull]_{\widetilde \nu}
}
\end{gathered}$$
By construction, there is an effective ${\mathbb{Q}}$-divisor $E\subset X_2$ and a very ample ${\mathbb{Q}}$-divisor $L_Y$ in $Y$ such that $$\pi_2^*(L_1) - E \sim_{{\mathbb{Q}}} \psi_2^*(L_Y).$$ Define $L_3: = \pi_3^*\big( \pi_2^*(L_1) - E \big)$. Let $E_3$ and $E_4$ be two effective exceptional divisors for which we have
$$K_{X_3} + \underbrace{\widetilde D_3 + E_3}_{:= D_3} \sim \pi_3^*\big( \pi_2^*(K_{X_1}+D_1) \big) + E_4,$$ where $\widetilde D_3$ is the birational transform of $D_1$.
We now claim that the two divisors $\big( i\cdot (K_{X_3} + D_3) \big)$ and $L_3$ together with the fiber space $\psi_3: X_3 \to Z_2$ satisfy the properties listed in Claim \[claim:KeyClaim\], with $X_3$, $D_3$, $L_3$, $Z_2$ and $\psi_3$ playing the roles of $Y$, $D_Y$, $L_Y$, $Z$ and $f$.
To see this, first note that $\kappa(L_3)= \kappa(L)$ and that
$$L_3 \sim \psi_3^* \big( \underbrace{\widetilde \nu^*(L_Y)}_{:= L_{Z_3}} \big),$$ thanks to the commutativity of Diagram \[eq:diag2\].
Next, to verify Property \[item:5k\], let $F_3$ be a very general fiber of $\psi_3$ and note that
$$0 \leq \kappa(F_3, (K_{X_3} + D_3)\big|_{F_3}) = \kappa(F_3, (i(K_{X_3} + D_3) + L_3)\big|_{F_3} ).$$ On the other hand, we have
$$\label{eq:zero}
\kappa\big(F_3, (i(K_{X_3} +D_3) +L_3)|_{F_3} \big) \leq \kappa \big( F_3, (i(K_{X_3} + D_3) + \pi_3^*(\pi_2^*L_1))
\big|_{F_3} \big).$$
The two equalities in \[item:5k\] now follow from the fact that the right-hand side of (\[eq:zero\]) is less than or equal to zero, cf. Lemma \[lem:effective\].
The pseudo-effectivity of $i\cdot (K_{X_3} + D_3)-L_3$ (Property \[item:6k\]) follows from the relation
$$i\cdot (K_{X_3} + D_3)-L_3 \sim \pi_3^* \big( \pi_2^*( i\cdot(K_{X_1}+D_1) - L_1 ) \big)
+ \big( i\cdot E_4 + \pi_3^*(E) \big)$$ and the fact that $\big(i\cdot (K_{X_1} + D_1)-L_1\big) \in \overline{\operatorname{NE}}^1(X_1)$. The remaining properties hold by construction. This concludes the proof of Claim \[claim:KeyClaim\].
To lighten the notation, from now on we will assume that $i=1$.
*Proof of Theorem \[thm:vanishing\]*. After fixing the dimension of $X$, our proof will be based on induction on $d: = \kappa(K_X+D+L)$, assuming that $\kappa(L)>0$. The next claim provides the base case.
\[claim:base\] If $d=1$, then $\kappa({\scr{L}}) = \kappa(\omega_X(D))$.
*Proof of Claim \[claim:base\].* Using \[item:5k\] we can see that the Iitaka fibration $\phi^{(I)}: Y^{(I)} \to Z^{(I)}$ of $(K_Y+D_Y+L_Y)$ factors through $f: Y\to Z$ via a birational morphism $\pi^{(I)} : Y^{(I)}\to Y$ and a finite morphism $\nu: Z\to Z^{(I)}$. As both $(f\circ \pi^{(I)})$ and $\phi^{(I)}$ are fiber spaces, the finite map $\nu$ must be trivial, that is, the two maps $\phi^{(I)}$ and $f$ coincide. In particular we have $(K_Y+D_Y+L_Y)\sim_{{\mathbb{Q}}} f^* (B_Z)$, for some very ample divisor $B_Z$ in $Z$. By using \[item:7k\] it then follows that $$\label{eq:curve}
(K_Y+D_Y) \sim_{{\mathbb{Q}}} \frac{1}{2} \big( f^*(B_Z - 2 \cdot L_Z) + f^*(B_Z) \big).$$ Now, as $(B_Z- 2\cdot L_Z) \in \overline{\operatorname{NE}}^1(Z)$ by \[item:6k\], we conclude that the right-hand side of (\[eq:curve\]) is the pullback of an ample divisor on $Z$. Therefore $\kappa(K_Y+D_Y) = \kappa(L_Y) = 1$, which establishes the claim.
*Inductive step.* We assume that Theorem \[thm:vanishing\] holds for any line bundle ${\scr{A}}$ on a smooth projective variety $W$ (having the same dimension as $X$) that satisfies the following two properties.
1. There is a reduced divisor $D_W$ such that $(W, D_W)$ is log-smooth and $({\scr{A}}\otimes {\scr{M}}) \subseteq \Omega^{\otimes i}_W\log(D_W)$, for some pseudo-effective line bundle ${\scr{M}}$.
2. $\kappa(W, \omega_W(D_W) \otimes {\scr{A}}) < d$.
Let $(Y,D_Y)$, $L_Y$ and $f: Y\to Z$ be as in the setting of Claim \[claim:KeyClaim\]. By the inductive step we may assume that $\dim(Z) = \kappa(K_Y+D_Y+L_Y)$. Furthermore, we can use Claim \[claim:base\] to exclude the possibility that $\kappa(K_Y+D_Y+L_Y)=1$.
We treat the case where the dimension of the fibers of $f$ is equal to $3$ (i.e. $\dim(Y) =5$ and $\kappa(K_Y+D_Y+L_Y) =2$). The case of lower dimensional fibers can be dealt with similarly.
Let $g: (Y, D_Y) \dashrightarrow (Y_n , D_{Y_n})$ be the birational map associated to a relative minimal model program for $(Y,D_Y)$ over $Z$, consisting of $n$ divisorial and flipping contractions
$$g_j: (Y_{j-1}, D_{Y_{j-1}}) \dashrightarrow (Y_{j}, D_{Y_{j}}),$$ cf. [@SecondAsterisque Chapt. 4]. Here we have set $(Y_0, D_{Y_0}):= (Y,D_Y)$. For each $1\leq j \leq n$, let $f_j: Y_{j} \to Z$ be the induced morphism resulting in the following diagram.
$$\xymatrix{
Y \ar@{-->}@/^9mm/[rrrrrr]^{g} \ar@{-->}[rr]^{g_1} \ar[drrr]_{f} && Y_1 \ar@{-->}[r]^{g_2} \ar[dr]_(.30){f_1} & ....
\ar@{-->}[r]^(0.38){g_{n-1}} & Y_{n-1} \ar@{-->}[rr]^{g_n} \ar[dl]^(.30){f_{n-1}} && Y_n \ar[dlll]^{f_n} \\
&&& Z.
}$$
\[claim:1\] $\kappa(Y_n, K_{Y_n} + D_{Y_n} + f_n^*L_Z ) = \kappa(Y, K_Y+D_Y+L_Y)$.
\[claim:2\] $\big( (K_{Y_n}+D_{Y_n}) - (f_n^*L_Z) \big) \in \overline{\operatorname{NE}}^1(Y_n).$
Let us for the moment assume that Claims \[claim:1\] and \[claim:2\] hold and proceed with the proof of Theorem \[thm:vanishing\]. Let $f_n': Y_{n}' \to Z'$ be a flattening of $f_n$ (see the discussion preceding Lemma \[lem:flat\]), with induced birational maps $\tau: Z'\to Z$ and $\sigma : Y_n ' \to Y_n$ leading to the following commutative diagram.
$$\xymatrix{
Y_n' \ar[rr]^{\sigma} \ar[d]_{f'_n} && Y_n \ar[d]^{f_n} \\
Z' \ar[rr]^{\tau} && Z.
}$$
Thanks to the solution of the relative log-abundance problem in $\dim(X)=3$ by Keel, Matsuki and McKernan [@MR2057020], using \[item:5k\], we have $(K_{Y_n} + D_{Y_n})|_{F_z}\equiv 0$, where $F_z$ is the general fiber of $f_n$. Therefore, by Lemma \[lem:flat\], there exists a ${\mathbb{Q}}$-divisor $A_{Z'}$ in $Z'$ such that $\sigma^*(K_{Y_n} + D_{Y_n}) \sim_{{\mathbb{Q}}} (f_n')^*(A_{Z'})$ so that
$$\sigma^*\big( (K_{Y_n} + D_{Y_n}) \pm f_n^*L_Z \big) \sim_{{\mathbb{Q}}} (f_n')^* (A_{Z'} \pm \tau^*L_Z).$$ By Claim \[claim:1\] it thus follows that $\kappa(Z', A_{Z'} + \tau^*(L_Z)) =
\kappa(Y_n, K_{Y_n}+ D_{Y_n}+ f_n^*L_Z) = \dim(Z)$. Moreover, by using Claim \[claim:2\], we find $(A_{Z'} - \tau^*(L_Z)) \in \overline{\operatorname{NE}}^1(Z')$. Therefore $A_{Z'}$ is big in $Z'$ and we have
$$\label{eq:ineq1}
\kappa(Z' , A_{Z'}) \geq \kappa(Z', \tau^*(L_Z)) = \kappa(Z, L_Z).$$
On the other hand, by using the negativity lemma, we have
$$\label{eq:ineq2}
\kappa(Z', A_{Z'}) = \kappa(Y_n , K_{Y_n} + D_{Y_n}) = \kappa(Y, K_Y+D_Y).$$
By combining (\[eq:ineq1\]) and (\[eq:ineq2\]) we reach the inequality
$$\kappa(Y, L_Y) \leq \kappa(Y, K_Y+D_Y).$$
We now turn to proving Claims \[claim:1\] and \[claim:2\].
*Proof of Claim \[claim:1\].* Let $(\widetilde Y, \widetilde D_Y)$ be a common log-smooth higher birational model for $(Y,D_Y)$ and $(Y_n, D_{Y_n})$, with birational morphisms $\mu: \widetilde Y \to Y$ and $\mu_n :\widetilde Y\to Y_n$. According to [@KM98 Lem. 3.38] there is an effective $\mu$-exceptional divisor $E_{\mu}$ and an effective $\mu_n$-exceptional divisor $E_{\mu_n}$ such that
$$\mu^*(K_Y+D_Y) + E_{\mu} \sim_{{\mathbb{Q}}} (\mu_n)^*(K_{Y_n}+ D_{Y_n}) + E_{\mu_n}.$$ Therefore, we have $$\mu^*\big( (K_Y+D_Y) + f^*L_Z \big) + E_{\mu} \sim_{{\mathbb{Q}}} (\mu_n)^*(K_{Y_n} + D_{Y_n} + f_n^*L_Z) + E_{\mu_n},$$ which establishes the claim.
*Proof of Claim \[claim:2\].* The proof will be based on induction on $n$. Assume that $$\big( (K_{Y_{n-1}} + D_{Y_{n-1}}) - f_{n-1}^*(L_Z) \big) \in \overline{\operatorname{NE}}^1(Y_{n-1}).$$ Now, the map $g_n: Y_{n-1}\dashrightarrow Y_n$ is either a divisorial contraction or a flip over $Z$. In the case of the latter the claim is easy to check, so let us assume that $g_n$ is a divisorial contraction and let $E_1$ and $E_2$ be two effective exceptional divisors such that the equivalence
$$\label{eq:sim}
K_{Y_{n-1}} + D_{Y_{n-1}} + E_1 \sim_{{\mathbb{Q}}} g_n^*(K_{Y_n} + D_{Y_n}) + E_2$$
holds. After subtracting $f_{n-1}^*(L_Z) = g_n^*(f_n^*L_Z)$ from both sides of (\[eq:sim\]) and by using the inductive hypothesis we find that
$$\big(g_n^* ( K_{Y_n} + D_{Y_n} - f_n^*L_Z) + E_2 \big) \in \overline{\operatorname{NE}}^1(Y_{n-1}),$$ which implies $\big( K_{Y_n} + D_{Y_n} - f_n^*(L_Z) \big)\in \overline{\operatorname{NE}}^1(Y_n)$. This finishes the proof of Claim \[claim:2\].
Theorem \[thm:vanishing\] is naturally related to a conjecture of Campana and Peternell, where the authors predict that over a smooth projective variety $X$, if $K_X\sim_{{\mathbb{Q}}} L + B$, where $L$ is effective and $B$ is pseudo-effective, then $\kappa(X)\geq \kappa(L)$, cf. [@CP11]. They also provide a proof to this conjecture when $B$ is numerically trivial [@CP11 Thm. 0.3]. (See also [@CKP12] for the generalization to the logarithmic setting.)
Concluding remarks and further questions {#sect:Section5-Future}
========================================
We recall that a quasi-projective variety $V$ of dimension $n$ is said to be *special* if, for every invertible subsheaf ${\scr{L}}\subseteq \Omega^p_X\log(D)$, the inequality $\kappa(X, {\scr{L}})< p$ holds, for all $1\leq p\leq n$. Here, by $(X,D)$ we denote a log-smooth compactification of $V$. We refer to [@Cam04] for an in-depth discussion of this notion. We note that while varieties of Kodaira dimension zero form an important class of special varieties [@Cam04 Thm 5.1], there are special varieties of every possible (but not maximal) Kodaira dimension.
A conjecture of Campana predicts that a smooth projective family of canonically polarized manifolds $f_U : U \to V$ parametrized by a special quasi-projective variety $V$ is isotrivial. One can naturally extend this conjecture to the following setting.
\[conj:iso\] Let $f_U: U \to V$ be a smooth projective family, where $V$ is equipped with a morphism $\mu: V\to P_h$ to the coarse moduli scheme $P_h$ associated with the moduli functor ${\scr{P}}_h(\cdot)$ of polarized projective manifolds with semi-ample canonical bundle and fixed Hilbert polynomial $h$. If $V$ is special, then $f_U$ is isotrivial.
By using the refinement of [@VZ02], due to Jabbusch and Kebekus [@MR2976311], and the main result of [@CP13], in the canonically polarized case, Conjecture \[conj:iso\] was established in [@taji16]. We invite the reader to also consult [@CKT16], Claudon [@Claudon15], [@CP16] and Schnell [@Sch17].
After a close inspection one can observe that the strategy of [@taji16] can be extended to establish \[conj:iso\] in its full generality. More precisely, the existence of the functor ${\scr{P}}_h$ with its associated algebraic coarse moduli scheme gives us a twofold advantage. It ensures the existence of “effective" Viehweg-Zuo subsheaves ${\scr{L}}\subseteq \Omega^{\otimes i}_X\log (D)$ whose Kodaira dimension verifies $\kappa({\scr{L}}) \geq \operatorname{Var}(f_U)$, cf. [@VZ02], and which are constructed at the level of moduli stacks, cf. [@MR2976311]. After imposing some additional orbifold structures naturally arising from the induced moduli map $\mu: V \to P_h$, these two properties allow us to essentially reduce the problem to the case where ${\scr{L}}$ is big, cf. [@taji16 Thm. 4.3].
\[thm:iso\] Conjecture \[conj:iso\] holds.
Campana has kindly informed the author that, extending their current result [@AC17], in a joint work with Amerik, they have established Theorem \[thm:iso\] in a more general context of projective families with orbifold base.
Using the results of [@Cam04], it is not difficult to trace a connection between the two Conjectures \[conj:iso\] and \[conj:kk\]; a solution to Conjecture \[conj:iso\] leads to a solution for Conjecture \[conj:kk\]. We record this observation in the following theorem.
\[thm:IsoToKK\] For any quasi-projective variety $V$ equipped with $\mu_V: V\to P_h$, induced by a family $f_U : U \to V$ of polarized manifolds, we have
$$\label{eq:end}
\operatorname{Var}(f_U) \leq \kappa(X,D),$$
$(X,D)$ being a log-smooth compactification of $V$.
Before proceeding to the proof of Theorem \[thm:IsoToKK\], let us recall the notion of the *core map* defined by Campana. Given a smooth quasi-projective variety $V$ that is not of log-general type, the core map is a rational map $c_X: X\dashrightarrow Z$ satisfying the following two key properties.
1. \[item:core1\] $c_X$ is almost holomorphic with special geometric fibers.
2. \[item:core2\] $c_X$ is birationally equivalent to a fiber space $c_{\widetilde X}: (\widetilde X, \widetilde D)
    \to (\widetilde Z, \Delta_{\widetilde Z})$, where $\Delta_{\widetilde Z}\in \operatorname{Div}_{{\mathbb{Q}}}(\widetilde Z)$ and the pair $(\widetilde Z, \Delta_{\widetilde Z})$ is a log-smooth orbifold base for $c_{\widetilde X}$ and is of log-general type.
Assume that $f_U$ is not isotrivial and $V$ is not of log-general type. Then, by Item \[item:core1\] and Theorem \[thm:iso\], the compactification $\overline{\mu}_V: X\to \overline{P_h}$ of $\mu_V$ factors through the core map $c_X: X \dashrightarrow Z$ with positive dimensional fibers. In particular we have $$\label{eq:ineqVar}
\operatorname{Var}(f_U) \leq \dim(Z).$$ The theorem then follows from Campana’s orbifold $C_{n,m}^{\rm{orb}}$ theorem applied to an orbifold fibration of log-general type, an example of which is $c_{\widetilde X}$. More precisely, by using \[item:core2\] and [@Cam04 Thm. 4.2] we can conclude that the inequality
$$\kappa(\widetilde X, \widetilde D) \geq \kappa(\widetilde Z, \Delta_{\widetilde Z})$$ holds, which, together with (\[eq:ineqVar\]) and the inequality $\kappa(X, D) \geq \kappa(\widetilde X, \widetilde D)$, establishes the theorem.
When the fibers are of general type, it is conceivable that one may be able to use the result of Birkar, Cascini, Hacon and McKernan [@BCHM10] on the existence of good minimal models for varieties of general type (and relative base point freeness theorem) to reduce to the case of maximal variation. More precisely, the arguments of [@VZ02 Lem. 2.8] combined with Hodge theoretic construction of [@PS15], or the ones in Section \[sect:Section3-VZ\] of the current paper, may allow for the construction of a Viehweg-Zuo subsheaf ${\scr{L}}$ over a new base where variation of the pulled back family is maximal. If so, in this case the discussion prior to Theorem \[thm:iso\] again applies and Theorem \[thm:iso\] and consequently Theorem \[thm:IsoToKK\] hold. But as pointed out from the outset, the main remaining difficulty is solving the isotriviality problem in the absence of a well-behaved functor, such as ${\scr{P}}_h$, detecting the variation in the family (after running the minimal model program, if necessary). At the moment, it is not clear to the author that one can expect Conjecture \[conj:iso\] to hold for all smooth projective families whose geometric fiber has semi-ample canonical bundle or—even more generally—admits a good minimal model.
Let $f_U: U\to V$ be a smooth projective family of manifolds admitting a good minimal model. If $V$ is special, (apart from the cases discussed above) is it true that $f_U$ is isotrivial?
[^1]: The original conjecture of Kebekus and Kovács was formulated for the case of canonically polarized fibers.
---
abstract: 'Three problems for a discrete analogue of the Helmholtz equation are studied analytically using the plane wave decomposition and the Sommerfeld integral approach. They are: 1) the problem with a point source on an entire plane; 2) the problem of diffraction by a Dirichlet half-line; 3) the problem of diffraction by a Dirichlet right angle. It is shown that the total field can be represented as an integral of an algebraic function over a contour drawn on some manifold. The latter is a torus. As a result, explicit solutions are obtained in terms of recursive relations (for the Green’s function), algebraic functions (for the half-line problem), or elliptic functions (for the right angle problem).'
author:
- 'A. V. Shanin, A. I. Korolkov'
bibliography:
- 'Bibliography.bib'
title: 'Sommerfeld–type integrals for discrete diffraction problems'
---
NOTATION {#notation .unnumbered}
========
-------------------------------------- ----------------------------------------------------------------------------------------------------------------
$\overline{\mathbb{C}}$ Riemann sphere
$K$ wavenumber parameter
${\mathbf{S}}$ two-dimensional discrete lattice
${\mathbf{S}}_2$ 2-sheet branched discrete lattice
${\mathbf{S}}_3$ 3-sheet branched discrete lattice
$u(m,n)$ wave field on a lattice ${\mathbf{S}}$
$U(\xi_1, \xi_2)$ Fourier transform of $u$
$(r , \phi)$ polar coordinates on the plane $(m,n)$
$\phi_{\rm in}$ angle of propagation of the incident wave
$x = e^{i \xi_1}$, $y = e^{i \xi_2}$ “algebraic wavenumbers”
$\tilde u(m,n)$ total field on a discrete branched surface
${\mathbf{C}}$ Circle $[0,2\pi]$ with $0$ and $2\pi$ glued to each other
$ D(\xi_1, \xi_2)$ dispersion function, defined by (\[eq0105\])
$\hat D(x, y)$ dispersion function, defined by (\[eq0105a\])
$\Xi(x)$ root of dispersion equation (\[eq0111\]), defined by (\[eq0111b\])
${\mathbf{R}}$ Riemann surface of $\Xi(x)$
${\mathbf{R}}_2$ 2-sheet covering of ${\mathbf{R}}$
${\mathbf{R}}_3$ 3-sheet covering of ${\mathbf{R}}$
${\mathbf{H}}$ dispersion surface, the set of all points $(x,y),x,y \in\overline{\mathbb{C}}$ such that (\[eq0111\]) is valid
${\mathbf{H}}_2$ 2-sheet covering of ${\mathbf{H}}$
${\mathbf{H}}_3$ 3-sheet covering of ${\mathbf{H}}$
$B_1 \dots B_4$ branch points of ${\mathbf{R}}$
$J_1 \dots J_4$ zero / infinity points on ${\mathbf{H}}$
$\sigma_1 \dots \sigma_4$ integration contours for the representations (\[eq0107b\]), (\[eq0107g\]), (\[eq0107h\]), (\[eq0107j\])
$\Gamma_j$ contours for the Sommerfeld integral
$(\alpha, \beta) \in {\mathbf{C}}^2$ coordinates on the torus
$\zeta$ mapping between ${\mathbf{H}}$ and ${\mathbf{R}}$
$\Psi$ analytic 1-form on ${\mathbf{H}}$ defined by (\[eq0301\])
$f_0(x) , \dots ,f_3(x)$ basis algebraic functions on ${\mathbf{R}}_2$
$t(p)$ elliptic integral (\[elfunF\])
$E(t, \omega_1, \omega_2)$ elliptic function (\[etafunct\])
-------------------------------------- ----------------------------------------------------------------------------------------------------------------
Introduction
============
At the beginning of the 20th century, Sommerfeld found a closed integral solution for the problem of diffraction by a half-plane [@Sommerfeld1954] by combining the plane wave decomposition integral with the reflection method. Later on, this plane wave decomposition integral with a particular contour of integration was named after Sommerfeld. The Sommerfeld integral approach has since been applied to a number of problems, such as the problem of diffraction by a wedge [@Sommerfeld1901; @MacDonald1902; @Babich2008] and some others [@Luneburg1997; @Hannay2003].
In this paper, we build an analogue of the Sommerfeld integral for discrete problems. The following problems on a 2D square lattice are considered in the paper:
- radiation of a point source on the entire plane (i. e., finding the Green’s function of the plane),
- diffraction by a Dirichlet half-line,
- diffraction by a Dirichlet right angle.
The discrete Green’s function problem has been studied in two different physical contexts: the discrete potential theory [@Duffin1953; @Thomassen1990; @Zemanian1988; @Atkinson1999] and the problem of the random walk [@Pearson1905; @McCrea1940; @Spitzer1964]. Both problems can be reduced to the calculation of the Green’s function for a discrete Laplace equation [@Ito1960]. The Green’s function in these works is represented in terms of a double Fourier integral. Depending on the position of the observation point, four different single integral representations can be introduced with the help of residue integration. In the current work we study the representation as an integral on a complex manifold of dimension 1. The latter is a torus.

The problem of diffraction by a half-line is well known in the context of fracture mechanics [@Slepyan1982; @Slepyan2002], where the Dirichlet half-line models a rigid constraint in a square lattice. The problem has been solved by several authors [@Eatwell1982; @Slepyan1982; @Sharma2015b] with the help of the Wiener-Hopf technique. The solution has been expressed in terms of elliptic integrals. Here we introduce an analogue of the Sommerfeld integral for this problem and obtain an expression for the solution in terms of algebraic functions.

We are not aware of any analytical results on the right angle diffraction problem for a lattice. Here we obtain a solution of this problem in terms of elliptic functions by using the Sommerfeld integral approach.
Discrete Green’s function on a plane
====================================
Problem formulation {#Sec:2.1}
-------------------
Let there exist a two-dimensional lattice, whose nodes are indexed by $m,n \in \mathbb{Z}$. This lattice is referred to as ${\mathbf{S}}$. Let a function $u(m,n)$ defined on ${\mathbf{S}}$ obey the equation $$u(m+1, n) + u(m-1,n) + u(m,n-1) + u(m,n+1) + (K^2 - 4) u(m,n) =
\delta_{m,0} \delta_{n,0},
\label{eq0101}$$ where $\delta_{m,n}$ is the Kronecker’s delta. Indeed the expression $$u(m+1, n) + u(m-1,n) + u(m,n-1) + u(m,n+1) - 4\, u(m,n)$$ is the discrete analogue of the continuous 2D Laplace operator $\Delta$. Our aim is to compute $u(m,n)$.
The wavenumber parameter $K$ is close to a positive real value, but has a small positive imaginary part mimicking attenuation on the lattice. The radiation condition (in the form of the limiting absorption principle) states that $u(m,n)$ should decay exponentially as $\sqrt{m^2 + n^2} \to \infty$.
Double integral representation. Dispersion equation {#Sec:2.2}
---------------------------------------------------
Apply a double Fourier transform to (\[eq0101\]). The transform is as follows: $$u(m,n)
\,
\longrightarrow
\,
U(\xi_1, \xi_2) = \sum_{m,n = -\infty}^{\infty}
u(m,n) \exp \{ - i (m \xi_1 + n \xi_2) \} .
\label{eq0102}$$ The result is $$D(\xi_1, \xi_2) U(\xi_1 , \xi_2) = 1,
\label{eq0102a}$$ where $$D(\xi_1 , \xi_2) \equiv
2 \cos \xi_1 + 2 \cos \xi_2 - 4 + K^2.
\label{eq0105}$$ The inverse Fourier transform is given by $$U(\xi_1 , \xi_2)
\,
\longrightarrow
\,
u(m, n) = \frac{1}{4 \pi^2}
\int \! \! \! \! \int_{-\pi} ^{\pi}
U(\xi_1,\xi_2) \exp \{ i (m \xi_1 + n \xi_2) \}\, d\xi_1 \, d\xi_2 ,
\label{eq0103}$$ and thus the following representation for $u(m,n)$ holds: $$u(m,n) = \frac{1}{4\pi^2}
\int \! \! \! \! \int_{-\pi} ^{\pi}
\frac{
\exp \{ i (m \xi_1 + n \xi_2) \}
}{D(\xi_1 , \xi_2)}
d\xi_1 \, d\xi_2 .
\label{eq0104}$$
Introduce the variables $$x = e^{i \xi_1} ,
\qquad
y = e^{i \xi_2},
\label{eq0110}$$ and the function $$\hat D(x , y) \equiv
x + x^{-1} + y + y^{-1} - 4 + K^2.
\label{eq0105a}$$ (Indeed, $D(\xi_1, \xi_2) = \hat D(e^{i \xi_1}, e^{i\xi_2})$.) The integral (\[eq0104\]) can be rewritten as $$u(m,n)
= - \frac{1}{4 \pi^2}
\int_\sigma \int_\sigma
\frac{x^m y^n}{\hat D(x , y)} \frac{dx}{x} \frac{dy}{y},
\label{eq0104a}$$ where contour $\sigma$ is the unit circle in the $x$-plane ($|x|=1$) passed in the positive direction (anti-clockwise). Expression (\[eq0104a\]) is the double integral representation for the Green’s function $u$.
The combination $$x^m y^n = e^{i (m \xi_1 + n \xi_2)}$$ plays the role of a plane wave on a lattice. If $x$ and $y$ obey the [*dispersion equation*]{} $$\hat D (x, y) = 0,
\label{eq0111}$$ then such a wave can travel along a lattice, being supported by a homogeneous equation $$u(m+1, n) + u(m-1,n) + u(m,n-1) + u(m,n+1) + (K^2 - 4) \, u(m,n) = 0.
\label{eq0101h}$$
Among general plane waves corresponding to any solution of (\[eq0111\]) we would like to select the subset of [*real waves*]{}. If $K$ is real, real waves are just waves with $|x| = |y| = 1$. They are plane waves in the usual understanding (in contrast with waves attenuating in some direction). Such waves can be characterized by a (real) wavenumber $(\xi_1, \xi_2)$, or by the propagation angle ${\theta}$ such that $$\xi_1 = \xi \cos {\theta},
\qquad
\xi_2 = \xi \sin {\theta},
\label{eq4101}$$ and $\xi = \xi({\theta})$ is real positive.
Angle ${\theta}$ takes values on the circle $[0, 2\pi]$ with $0$ and $2\pi$ glued to each other. Below we refer to this circle as ${\mathbf{C}}$.
It follows from (\[eq4101\]) that $${\rm Im} [\xi_1 / \xi_2] = 0.
\label{eq4102}$$ So, if $K$ is real then real waves correspond to solutions of dispersion equation $D(\xi_1 , \xi_2) =0$ obeying (\[eq4102\]). Note that not all such pairs $(\xi_1, \xi_2)$ correspond to the real waves. There are two branches of such pairs, both organized as ${\mathbf{C}}$, and only one branch corresponds to real $(\xi_1, \xi_2)$, i. e. to the real waves.
Mainly for clarity and convenience, we are going to introduce “real waves” for the case of complex $K$. In this case, there are no non-decaying plane waves, so there exists an ambiguity in the choice of the real waves. We solve this ambiguity by selecting the solutions of the dispersion equation with $${\rm Im} [\sin (\xi_1) / \sin (\xi_2)] = 0,
\label{eq4102a}$$ or, the same, the solutions of (\[eq0111\]) with $${\rm Im} \left[
\frac{x - x^{-1}}{y - y^{-1}}
\right] = 0.
\label{rewaves}$$ One can show that there is a branch of such pairs $(x, y)$ having the shape of a loop on the Riemann surface of $y(x)$ that tends to usual real waves as ${\rm Im}[K]\to 0$. Topologically, the real waves remain organized as ${\mathbf{C}}$.
The choice of the relation (\[rewaves\]) for the real waves is explained below. Namely, the saddle points of the integral representations of the field found from equation (\[eq:sadpoint\]) belong to the set described by (\[rewaves\]).
Single integral representations {#Sec:2.3}
-------------------------------
The integral (\[eq0104a\]) can be taken with respect to one of the variables by the residue integration. As the result, one can obtain a single integral representation. There are four cases, possibly intersecting: $$n \ge 0, \qquad
n \le 0, \qquad
m \ge 0, \qquad
m \le 0.$$ Each of these cases results in its own single integral representation formula.
[**Case $n \ge 0$**]{}:
Consider the integral (\[eq0104a\]). Fix $x \in \sigma$ and study the integral with respect to $y$. The $y$-plane is shown in Fig. \[fig03b\]. One can see that there are four possible singular points in this plane.
Two of them are the roots of equation (\[eq0111\]) considered with respect to $y$. These roots are $$y(x) = \Xi (x), \qquad y(x) = \Xi^{-1} (x),
\label{eq0111a}$$ where $$\Xi (x) = -\frac{K^2 - 4 + x + x^{-1}}{2}
+
\frac{\sqrt{(K^2 - 4 + x + x^{-1})^2 - 4}}{2} .
\label{eq0111b}$$
The value of the square root is chosen in such a way that $|\Xi (x)| < 1$. Note that $|\Xi (x)|$ cannot be equal to 1 if $|x| = 1$ since $K$ is not real. The points (\[eq0111a\]) are simple poles of the integrand of (\[eq0104a\]).
Note also that $$\Xi^{-1} (x) = -\frac{K^2 - 4 + x + x^{-1}}{2}
-
\frac{\sqrt{(K^2 - 4 + x + x^{-1})^2 - 4}}{2},
\label{eq0111p}$$ and, indeed, $y = \Xi(x)$ and $y = \Xi^{-1}(x)$ are two roots of the quadratic equation (\[eq0111\]).
Besides (\[eq0111a\]), there may be singularities of the integrand at two other points: $y = 0$ and $y= \infty$ (the latter is a certain point of the Riemann sphere $\overline{\mathbb{C}}$). The presence of singularities at these points depends on the value of $n$. If $n \ge 0$ (as in the case under consideration) then the integrand is regular at $y = 0$ and may have a pole at $y = \infty$.
Thus, $y = \Xi(x)$ is the only singularity of the integrand inside the contour $\sigma$. Apply the residue theorem. The result is $$u(m,n) = \frac{1}{2\pi i}
\int_\sigma
\frac{
x^m \, \Xi^n(x)
}{
\Xi(x) \,
{\partial}_{y} \hat D( x , \Xi(x) )
}
\frac{d x}{x} .
\label{eq0107}$$ As one can find by direct computation, $$y \, {\partial}_y \hat D (x, y) = y - y^{-1}.
\label{eq0107a}$$ Thus, (\[eq0107\]) can be rewritten as $$u(m,n) = \frac{1}{2\pi i}
\int_\sigma
x^m \, y^n
\frac{
d x
}{
x (y - y^{-1})
},
\qquad
y = \Xi(x).
\label{eq0107b}$$ This is a single integral representation of the field.
[**Case $n \le 0$**]{}:
In this case $y = 0$ is a singular point of the integrand of (\[eq0104a\]), but $y = \infty$ is a regular point (more rigorously, $y = \infty$ is a regular point of the differential form that is integrated in (\[eq0104a\])). This means that the integrand has no branching at $y = \infty$, and it decays no slower than $\sim y^{-2}$. For such an integrand, one can apply the residue theorem to the exterior of $\sigma$. The result is $$u(m,n) = -\frac{1}{2\pi i}
\int_\sigma
x^m \, y^n
\frac{
d x
}{
x (y - y^{-1})
},
\qquad
y = \Xi^{-1}(x).
\label{eq0107g}$$
[**Case $m \ge 0$**]{}:
The representation of the field is $$u(m,n) = \frac{1}{2\pi i}
\int_\sigma
x^m \, y^n
\frac{
d y
}{
y (x - x^{-1})
},
\qquad
x = \Xi(y) .
\label{eq0107h}$$
[**Case $m \le 0$**]{}:
The representation of the field is $$u(m,n) = -\frac{1}{2\pi i}
\int_\sigma
x^m \, y^n
\frac{
d y
}{
y (x - x^{-1})
},
\qquad
x = \Xi^{-1}(y).
\label{eq0107j}$$
Thus, we obtained four single integral representations: (\[eq0107b\]), (\[eq0107g\]), (\[eq0107h\]), (\[eq0107j\]).
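The branch selection in (\[eq0111b\]) and the representation (\[eq0107b\]) are easy to test numerically. The sketch below is illustrative only (the value of $K$ and the number of quadrature nodes are assumptions): it selects the root of the dispersion equation with $|\Xi(x)|<1$ on the unit circle and evaluates (\[eq0107b\]) for $n \ge 0$; the result can be compared with the value obtained from the double integral (\[eq0104a\]).

```python
# Minimal sketch (assumed K and grid size): evaluate the single integral
# representation valid for n >= 0 by parameterizing the unit circle x = e^{it}.
import numpy as np

K = 2.0 + 0.05j
N = 4096

def Xi(x):
    """Root y of x + 1/x + y + 1/y - 4 + K^2 = 0 with |y| < 1."""
    a = K**2 - 4.0 + x + 1.0 / x
    y1 = (-a + np.sqrt(a**2 - 4.0)) / 2.0
    y2 = (-a - np.sqrt(a**2 - 4.0)) / 2.0      # note that y1 * y2 = 1
    return np.where(np.abs(y1) < 1.0, y1, y2)

def u_single(m, n):
    """u(m, n) for n >= 0 from the contour integral over |x| = 1 (anti-clockwise)."""
    t = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    x = np.exp(1j * t)
    y = Xi(x)
    f = x**m * y**n / (x * (y - 1.0 / y))
    # dx = i x dt; the representation carries the prefactor 1 / (2 pi i)
    return np.sum(f * 1j * x) * (2.0 * np.pi / N) / (2.0 * np.pi * 1j)

print(u_single(2, 3))    # should reproduce the double-integral value at (m, n) = (2, 3)
```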
Field representation by integration on a manifold {#Sec:2.4}
-------------------------------------------------
Let us analyze the integral (\[eq0107b\]). Consider $x$ and $y$ as complex variables taking values on the Riemann sphere $\mathbb{\bar C}$. We recall that $\mathbb{\bar C}$ is the compactified complex plane, i. e. the plane to which the point at infinity is added. The use of the Riemann sphere is convenient when it is necessary to study functions having algebraic growth, or just algebraic functions, which is the case in (\[eq0107b\]).
Each point $(x, y)$ thus belongs to $\overline{\mathbb{C}} \times
\overline{\mathbb{C}}$. Let us describe the set of points $(x, y)$ such that equation (\[eq0111\]) is valid. It is easy to prove that this set is an analytic manifold of complex dimension 1 (so it has real dimension 2). We comment on this below. This manifold will be referred to as ${\mathbf{H}}$. Since (\[eq0111\]) is the dispersion relation for the lattice, one can call ${\mathbf{H}}$ the [*dispersion surface*]{}.
The manifold ${\mathbf{H}}$ can be easily built using the function $\Xi(x)$ defined by (\[eq0111b\]). This function has been defined only for $x \in \sigma$. Continue this function analytically onto the whole $\overline{\mathbb{C}}$. This function becomes double-valued, with some branch points. Indeed, ${\mathbf{H}}$ is the set of all points $(x , \Xi(x))$, $x \in \overline{\mathbb{C}}$.
We make here an obvious observation that the Riemann surface of $\Xi(x)$, which will be referred to as ${\mathbf{R}}$, is the projection of ${\mathbf{H}}$ onto $x$. Thus, topologically ${\mathbf{H}}$ coincides with the Riemann surface ${\mathbf{R}}$. We will denote the mapping between ${\mathbf{H}}$ and ${\mathbf{R}}$ by $\zeta$. More often, we will use the inverse mapping $$x \stackrel{\zeta^{-1}}{\longrightarrow} (x , \Xi(x))
,
\qquad
x \in {\mathbf{R}}.$$
Let us study the Riemann surface ${\mathbf{R}}$. Function $\Xi(x)$ has four branch points. They are the points where the argument of the square root in (\[eq0111b\]) is equal to zero. By solving the equation $$(K^2 - 4 + x + x^{-1})^2 - 4 =0$$ we find that the branch points are $x = \eta_{j,k}$, $j, k = 1,2$, where $$\eta_{1,1} = -\frac{d}{2} + \frac{\sqrt{d^2 -4}}{2},
\qquad d = K^2 - 2 ,
\label{eq0111c1}$$ $$\eta_{1,2} = -\frac{d}{2} + \frac{\sqrt{d^2 -4}}{2},
\qquad d = K^2 - 6 ,
\label{eq0111c2}$$ $$\eta_{2,1} = -\frac{d}{2} - \frac{\sqrt{d^2 -4}}{2},
\qquad d = K^2 - 2 ,
\label{eq0111c3}$$ $$\eta_{2,2} = -\frac{d}{2} - \frac{\sqrt{d^2 -4}}{2},
\qquad d = K^2 - 6 .
\label{eq0111c4}$$
Let us list some important properties of the branch points that can be checked directly or derived from elementary properties of quadratic equations (a numerical check is also sketched after the list). These properties are:
- The branch points are the points at which $\Xi(x) = \pm 1$.
- For $y = \Xi(x)$ $$\Upsilon(x) \equiv x (y - y^{-1}) = \sqrt{(x - \eta_{1,1})(x - \eta_{1,2})(x - \eta_{2,1})(x - \eta_{2,2})}.
\label{eq0111w}$$ Note that the left-hand side of (\[eq0111w\]) is the denominator of the integrand of (\[eq0107b\]).
- $
\eta_{1,1} \, \eta_{2,1} = 1,
\qquad
\eta_{1,2} \, \eta_{2,2} = 1.
$
- Exactly two of the branch points $(\eta_{1,1}, \eta_{1,2}, \eta_{2,1} , \eta_{2,2})$ are located inside the circle $|x|<1$. By choosing the branches of the square roots appropriately, we can arrange that these are the points $\eta_{2,1}$ and $\eta_{2,2}$.
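These properties are easy to check numerically. Below is a minimal sketch (the value of $K$ is a hypothetical example; which roots receive the labels $\eta_{1,\cdot}$ or $\eta_{2,\cdot}$ depends on the chosen square-root branch, so the sketch only verifies that exactly two of the four points lie inside the unit circle).

```python
import numpy as np

# Hypothetical parameter: dimensionless wavenumber with a small positive
# imaginary part (absorption).
K = 2.0 + 0.05j

# Branch points: roots of x^2 + d x + 1 = 0 for d = K^2 - 2 and d = K^2 - 6.
etas = []
for d in (K**2 - 2, K**2 - 6):
    s = np.sqrt(d**2 - 4)
    etas += [(-d + s) / 2, (-d - s) / 2]

# At every branch point the argument of the square root in Xi(x) vanishes.
for e in etas:
    assert abs((K**2 - 4 + e + 1 / e)**2 - 4) < 1e-12

# Roots of the same quadratic are reciprocal: pairwise products equal 1.
assert abs(etas[0] * etas[1] - 1) < 1e-12
assert abs(etas[2] * etas[3] - 1) < 1e-12

# Exactly two of the four branch points lie inside the unit circle.
assert sum(abs(e) < 1 for e in etas) == 2
print(sorted(abs(e) for e in etas))
```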
The scheme of ${\mathbf{R}}$ is shown in Fig. \[fig03\]. The two sheets of ${\mathbf{R}}$ are Riemann spheres. They are shown projected onto a plane, so that the infinity point of $\overline{\mathbb{C}}$ becomes infinitely remote.
The branch points are connected by cuts shown by bold curves. For definiteness, the branch cuts are drawn along the lines on which $|\Xi(x)|=1$. The shores of the cuts labeled by the same Roman numeral are connected with each other.
One of the sheets drawn in Fig. \[fig03\] is called [*physical*]{}, and the other is [*unphysical*]{}. The physical sheet is the one on which $|\Xi (x)| < 1$ for $|x| = 1$. Respectively, on the unphysical sheet $|\Xi (x)| > 1$ for $|x| = 1$. The integration in (\[eq0107b\]) is taken along contour $\sigma$ drawn on the physical sheet.
It is convenient to label the points of ${\mathbf{R}}$ by coordinates of corresponding points of ${\mathbf{H}}$, i. e. by the pairs $(x, \Xi(x))$.
There are several important points on this Riemann surface. First, there are the branch points $$B_1\,:\,(\eta_{2,1},1),
\qquad
B_2\,:\,(\eta_{2,2},-1),
\qquad
B_3\,:\,(\eta_{1,1},1),
\qquad
B_4\,:\,(\eta_{1,2},-1).$$ Second, there are the zero/infinity points, i. e. the points at which either $x$ or $y = \Xi(x)$ is zero or infinity. They are the points $$J_1 \, : \, (0,0),
\qquad
J_2 \, : \, (0,\infty),
\qquad
J_3 \, : \, (\infty, \infty),
\qquad
J_4 \, : \, (\infty,0).$$
Topologically, ${\mathbf{H}}$ is a torus (i. e. it has genus equal to 1). This can be easily understood, since ${\mathbf{H}}$ is obtained by taking two spheres, making two cuts, and connecting their shores. The scheme of making a torus out of two Riemann spheres is shown in Fig. \[fig03d\].
The statement that ${\mathbf{H}}$ is an analytic manifold means that in each (small enough) neighborhood of any point of ${\mathbf{H}}$ one can introduce a complex local variable, such that all transition mappings between overlapping local variables are biholomorphic. It is clear that such local variables can be:
- $x$ for all points except the branch points $B_1, \dots , B_4$ and the two infinities $J_3$ and $J_4$;
- $y$ for the branch points $B_1, \dots , B_4$;
- $\tau = 1/x$ for the infinities $J_3$ and $J_4$.
To gain some clarity, we introduce coordinates $(\alpha, \beta)$ on ${\mathbf{H}}$ showing that this is a torus. Both coordinates are real and take values in ${\mathbf{C}}$. The coordinate lines on ${\mathbf{H}}$ (projected on ${\mathbf{R}}$) are shown in Fig. \[fig11a\]. The explicit formulae for the coordinates are not important (for topological purposes the coordinates can be drawn just “by hand”), but we keep the following properties valid:
- Points $B_1$, $B_2$, $B_3$, $B_4$ have coordinates $(0, \pi)$, $(0,0)$, $(\pi, \pi)$, $(\pi, 0)$, respectively.
- Points $J_1$, $J_2$, $J_3$, $J_4$ have coordinates $(\pi/4, 0)$, $(7\pi/4,0)$, $(5\pi /4, 0)$, $(3\pi / 4, 0)$, respectively.
- Contour $\sigma$ corresponds to the line $\alpha = \pi/2$ passed in the negative direction.
- The cuts (bold lines), taken for $|y| = 1$, correspond to $\alpha = 0$ and $\alpha = \pi$.
- There is an important set of points on ${\mathbf{H}}$ where relation (\[rewaves\]) is fulfilled. A study of the explicit expressions for this set shows that it consists of two loops. One of the loops passes through the points $B_1$ and $B_3$. This is the [*real waves*]{} line discussed above. We force the coordinate line $\beta = \pi$ to coincide with this line. The other loop contains all the infinity points and the branch points $B_2$ and $B_4$. We make the coordinate line $\beta = 0$ coincide with this loop.
- On the line $\beta = \pi$ (the real waves line) we force the coordinate $\alpha$ to have values $$\alpha = \arctan \left(
\frac{y-y^{-1}}{x - x^{-1}}
\right) = \arctan \left(
\frac{\sin \xi_2}{\sin \xi_1}
\right).
\label{rewaves1}$$ Our $\arctan$ function takes values in ${\mathbf{C}}$ (not in $[-\pi/2, \pi/2]$). We assume that $\alpha = 0$ at $B_1$, and then use (\[rewaves1\]), taking the values on ${\mathbf{C}}$ by continuity.
Note that the variables $(\alpha, \beta)$ are real, and they are used for display purposes only. They are not related to the complex analytic structure on the torus (for which one needs a single complex variable). Some “proper” coordinates will be introduced on ${\mathbf{H}}$ by using the elliptic variable $t$ in Section \[wedge4\].
The introduction of the manifold ${\mathbf{H}}$ (dispersion surface) is needed mainly to obtain a field representation invariant under changes of variables. Besides ${\mathbf{H}}$ itself, we need a formalism of analytic differential forms on ${\mathbf{H}}$ and a way of integrating them. An analytic 1-form can be defined on the manifold ${\mathbf{H}}$ (see e.g. [@Gurvitz1968; @Shabat1992]) by introducing a formal expression $h(z) dz$, where $z$ is a local complex variable for some neighborhood, and $h(z)$ is an analytic function in this neighborhood. In intersecting neighborhoods the representations of a form can be different, say $h_1 (z_1) dz_1$ and $h_2 (z_2) dz_2$, but they should match in an obvious way: $$h_2 = h_1 \frac{d z_1}{d z_2}.$$
The 1-form can be analytic/meromorphic if the functions $h_j$ are analytic/meromorphic. In the same sense the form can have a zero or a pole of some order.
Analyticity of a 1-form is an important property: if a form is analytic in some domain of an analytic manifold and this form is integrated along some contour, the contour can be deformed within this domain, i. e. the usual Cauchy theorem for the complex plane can be generalized onto an analytic manifold. Computation of integrals by residues also remains the same.
Let us prove that the form $$\Psi = \frac{dx}{x (y - y^{-1})},
\label{eq0301}$$ which is a part of (\[eq0107g\]), is analytic everywhere on ${\mathbf{H}}$. The statement is trivial everywhere except the infinities and the branch points. Consider the infinities. At the points $J_1$ and $J_2$ it is easy to show that $(y-y^{-1}) \sim x^{-1}$ as $x \to 0$, thus the denominator is non-zero. At the points $J_3$ and $J_4$ one can show that $(y-y^{-1}) \sim x$ as $x \to \infty$, thus $ \Psi \sim x^{-2} dx $. A change to the variable $\tau = 1/x$ shows that the form is regular.
Finally, consider the branch points $B_1 , \dots , B_4$. As has been mentioned, one can take $y$ as a local variable at these points. An important observation is that, by the implicit function theorem, $$\frac{d y}{d x} = - \frac{{\partial}_x \hat D}{{\partial}_y \hat D}
\label{eq0115}$$ everywhere on ${\mathbf{H}}$. Thus, $$\frac{dx}{x(y - y^{-1})} = - \frac{dy}{y(x-x^{-1})}.
\label{eq0116}$$ The denominator of the right-hand side of (\[eq0116\]) is non-zero at the branch points, so the form is regular.
The representation (\[eq0107b\]) can be rewritten as a contour integral of the form $$\psi_{m,n} = \frac{i}{2\pi} x^m y^n \Psi
\label{eq0302}$$ along some contour drawn directly on ${\mathbf{H}}$. The contour is, indeed, the preimage of $\sigma$ shown in Fig. \[fig03\], i. e. $\zeta^{-1} (\sigma)$.
It can be easily shown that three other representations, (\[eq0107g\]), (\[eq0107h\]), (\[eq0107j\]), can be written as contour integrals of [*the same differential form*]{} $\psi_{m,n}$ on ${\mathbf{H}}$, but taken along some other contours. Namely, for the integrals (\[eq0107g\]), (\[eq0107h\]), (\[eq0107j\]), these contours projected onto ${\mathbf{R}}$ are shown in Fig. \[fig07\]. They are denoted by $\sigma_3$, $\sigma_2$, $\sigma_4$, respectively. The contour $\sigma$ is denoted by $\sigma_1$ for uniformity. The contours on ${\mathbf{H}}$ are $\zeta^{-1} (\sigma_j)$ for $j = 1,\dots ,4$.
The contours $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$ in the coordinates $(\alpha, \beta)$ correspond to coordinate lines $\alpha = \pi/2, \, 0, \, 3 \pi/2, \, \pi$, respectively. The direction of bypass of each contour is negative with respect to the variable $\beta$.
The representations (\[eq0107b\])–(\[eq0107j\]) can be written in the common form $$u(m,n) = \int_{\zeta^{-1}(\sigma_j)} w_{m,n} \Psi,
\label{eq0303}$$ where $$w_{m,n} = w_{m,n} (x,y) = x^m y^n
\label{eq0304}$$ is the “plane wave”. Note that the integration is held over a contour on ${\mathbf{H}}$, so $(x,y) \in {\mathbf{H}}$, and any such $w_{m,n}$ obey the homogeneous stencil equation (\[eq0101h\]). Representation (\[eq0303\]) can be considered as a generalized plane wave decomposition.
Below we don’t distinguish between ${\mathbf{H}}$ and ${\mathbf{R}}$ and write $\sigma_j$ instead of $\zeta^{-1} (\sigma_j)$ if it does not lead to an ambiguity.
The form $\psi_{m,n}$ is analytic everywhere on ${\mathbf{H}}$ only for $m= n= 0$. Depending on $m$ and $n$, this form can have poles at the infinity points. For each of the points $J_1 , \dots , J_4$ one can derive a condition providing regularity of that point. These conditions of regularity for the infinity points are as follows: $$\begin{aligned}
J_1 &:& m+n \ge 0,
\\
J_2 &:& m-n \ge 0,
\\
J_3 &:&-m-n \ge 0,
\\
J_4 &:&-m+n \ge 0.\end{aligned}$$
Note that the contours $\sigma_1$, $\sigma_2$, $\sigma_3$, $\sigma_4$ can be deformed into each other. Topologically, the relative positions of the contours $\sigma_j$ and the infinity points $J_1, \dots , J_4$ on the torus ${\mathbf{H}}$ are shown in Fig. \[fig08\]. One can see that shifting the contours in the positive $\alpha$ direction corresponds to moving the observation point in the $(m,n)$-plane in the counter-clockwise (positive) direction. The representations are converted into each other, and each time there is a region where at least two representations are valid simultaneously.
Representing the field in the form of an integral of a differential form over some compact Riemann surface puts the problem into the context of Abelian differentials and integrals. Some benefits can be gained from this. Namely, one can notice that, for example, (\[eq0107b\]) is a period of an elliptic integral of a general form. One can apply the theorem by Legendre stating that a general elliptic integral can be represented as a linear combination of four basic elliptic integrals with rational coefficients [@Bateman1955], p.297. Since the periods of the integrals are studied, the coefficients should be constant. Following the proof of the theorem, one can conclude that there should exist recursive relations between the values of $u(m,n)$ enabling one to express any $u(m,n)$ from several initial values computed by integration. Such a system of recursive relations is presented in Appendix A. These relations can be used for an efficient computation of the Green’s function.
Diffraction by a Dirichlet half-line
====================================
Problem formulation {#subsec:form}
-------------------
Consider the following scattering problem. Let the homogeneous Helmholtz equation (\[eq0101h\]) be satisfied by the field $u(m,n)$ for all $(m,n)$ except the half-line $n=0,m\geq 0$ (see Fig. \[fig12\]). On this line we impose the “Dirichlet boundary condition” $u = 0$.
![Geometry of the problem of diffraction by a half-line. The black circles show the position of the Dirichlet half-line[]{data-label="fig12"}](half_line_geometry.eps){width="30.00000%"}
Let $u$ be a sum of an incident and a scattered field: $$u(m,n)= u_{\rm in} (m,n) + u_{\rm sc} (m,n),
\label{uinusc}$$ where $$\label{uin}
u_{\rm in} (m,n) = x_{\rm in}^m \, y_{\rm in}^n.$$ The point $(x_{\rm in}, y_{\rm in})$ belongs to ${\mathbf{H}}$, i. e. $$\hat D(x_{\rm in}, y_{\rm in}) = 0.$$
We assume that the point $(x_{\rm in}, y_{\rm in})$ is taken on the line of “real waves” $\beta = \pi$. Introduce a real parameter $\phi_{\rm in}$, which has the meaning of the propagation angle of the incident wave and is linked with $(x_{\rm in}, y_{\rm in})$ by the relation $$\frac{y_{\rm in} - y_{\rm in}^{-1}}{x_{\rm in} - x_{\rm in}^{-1}}
= \tan \phi_{\rm in} .
\label{phiin}$$ Angle $\phi_{\rm in}$ is the angle of propagation of the incident wave, while the angle of incidence is, obviously, $\phi_{\rm in} + \pi$. Let $-\pi/ 2< \phi_{\rm in} < \pi/2$, and, thus, $$|x_{\rm in}| < 1.
\label{eq:incond}$$ Equation (\[phiin\]) with condition (\[eq:incond\]) defines two points, and only one of them belongs to the line $\beta = \pi$. Recall that this line is the branch of the set defined by (\[rewaves\]) tending to a part of the circle $|x| = 1$ as ${\rm Im}[K] \to 0$. By construction of the coordinates $(\alpha, \beta)$, the coordinate $\alpha$ of the point $(x_{\rm in}, y_{\rm in})$ is equal to $\phi_{\rm in}$.
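In the lossless limit ${\rm Im}[K] \to 0$ the incident point can be found numerically. Assuming the lattice dispersion relation $x + x^{-1} + y + y^{-1} + K^2 - 4 = 0$ (it appears implicitly in the symbol used in Appendix B), on the real-waves line one can write $x_{\rm in} = e^{i\xi_1}$, $y_{\rm in} = e^{i\xi_2}$ with real $\xi_1, \xi_2$, so that the dispersion relation becomes $2\cos\xi_1 + 2\cos\xi_2 = 4 - K^2$ and (\[phiin\]) becomes $\sin\xi_2 / \sin\xi_1 = \tan\phi_{\rm in}$, cf. (\[rewaves1\]). A minimal sketch with hypothetical values of $K$ and $\phi_{\rm in}$ is given below; note that the root finder returns one of the two solutions mentioned above, and the one lying on $\beta = \pi$ has to be selected separately.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical values; K is chosen so that real (propagating) waves exist,
# i.e. |4 - K^2| <= 4.
K = 1.0
phi_in = 0.3            # -pi/2 < phi_in < pi/2

def eqs(xi):
    xi1, xi2 = xi
    return [2 * np.cos(xi1) + 2 * np.cos(xi2) - (4 - K**2),  # dispersion relation
            np.sin(xi2) - np.tan(phi_in) * np.sin(xi1)]      # condition (phiin)

xi1, xi2 = fsolve(eqs, [1.0, 0.3])
x_in, y_in = np.exp(1j * xi1), np.exp(1j * xi2)

# Checks: the dispersion relation and the angle condition.
print(x_in + 1 / x_in + y_in + 1 / y_in + K**2 - 4)
print((y_in - 1 / y_in) / (x_in - 1 / x_in), np.tan(phi_in))
```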
The scattered field should obey the limiting absorption principle, i. e. decay as $\sqrt{m^2 + n^2} \to \infty$.
The problem formulated here can be solved using the Wiener–Hopf method [@Eatwell1982; @Slepyan1982; @Sharma2015b]. This solution can be found in Appendix B. Here, however, our aim is to develop the Sommerfeld integral approach for this problem.
Formulation on a branched surface {#SecBranch}
---------------------------------
Here, following A. Sommerfeld, we use the principle of reflections to get rid of the scatterer and, instead, to formulate a propagation problem on a coordinate plane with branching.
Parametrize the points $(m,n)$ by the coordinates $(r, \phi)$: $$m = r \cos \phi, \qquad n = r \sin \phi.
\label{eq0401}$$ Of course, $(r, \phi)$ take values in a discrete set.
Initially, $\phi$ belongs to ${\mathbf{C}}$. Allow $\phi$ to be $4\pi$-periodic, i. e. let $\phi$ and $\phi + 4 \pi$ mean the same, but let $\phi$ and $\phi + 2\pi$ correspond to different points. Taking the points $(r, \phi)$ with such $\phi$ allows one to construct a branched planar lattice. The scheme of this lattice is shown in Fig. \[fig\_lattice\]. The lattice is composed of two discrete sheets. The origin $O=(0,0)$ is common for both sheets. The nodes $n = 0$, $m > 0$ of the first sheet are linked with corresponding nodes $n = -1$, $m > 0$ of the second sheet. The nodes $n = 0$, $m > 0$ of the second sheet are linked with corresponding nodes $n = -1$, $m > 0$ of the first sheet. The resulting discrete branched surface is referred to as ${\mathbf{S}}_2$ hereafter.
Each point of ${\mathbf{S}}_2$ except $(0,0)$ has exactly four neighbors in the lattice. Thus, one can look for a function $\tilde u$ defined on ${\mathbf{S}}_2$ and obeying equation (\[eq0101h\]) on ${\mathbf{S}}_2 \setminus O$.
Let $u$ be the solution of the diffraction problem formulated in the Section \[subsec:form\]. Define the function $\tilde u$ on ${\mathbf{S}}_2$ by the following formulae: $$\tilde u(r, \phi) = u(r , \phi) \qquad \mbox{ for } 0 \le \phi \le 2\pi,
\label{eq1201a}$$ $$\tilde u(r, 4\pi - \phi) = -u(r , \phi) \qquad \mbox{ for } 2\pi \le \phi \le 4\pi.
\label{eq1201b}$$ We assume also that $\tilde u (0,0) =0$.
One can check that $\tilde u$ obeys equation (\[eq0101h\]) on ${\mathbf{S}}_2 \setminus O$. This statement is trivial for $\phi$ not equal to $0$ or $2\pi$ and follows from the symmetry of the field for $\phi = 0, 2\pi$.
The new field $\tilde u$ has two incident field contributions. They are the plane wave $x_{\rm in}^m \, y_{\rm in}^n$ on sheet 1, and the plane wave $-x_{\rm in}^m \, y_{\rm in}^{-n}$ on sheet 2. The second wave is the reflection of the first one in the “mirror” coinciding with the half-line $\phi = 0$. Note that $(x_{\rm in}, y_{\rm in}^{-1}) \in {\mathbf{H}}$.
We can now formulate the diffraction problem on the branched surface:
[*Find a field $\tilde u$ defined on ${\mathbf{S}}_2$, obeying equation (\[eq0101h\]) on ${\mathbf{S}}_2 \setminus O$, and equal to zero at the origin. The difference $\tilde u - u_{\rm in}$ with $$u_{\rm in}(m,n) = w_{m,n}(x_{\rm in}, y_{\rm in})
\quad \mbox{ for } \quad
0 \le \phi \le 2\pi,$$ $$u_{\rm in}(m,n) =-w_{m,n}(x_{\rm in}, y_{\rm in}^{-1})
\quad \mbox{ for } \quad
2\pi \le \phi \le 4\pi.$$ should decay exponentially as $r \to \infty$.* ]{}
The structure of the Sommerfeld integral for the half-plane problem {#Sec:3.3}
-------------------------------------------------------------------
Being inspired by the classic Sommerfeld integral (Appendix C), we are building an analog of the Sommerfeld integral on the surface ${\mathbf{H}}$ for finding the field $\tilde u$.
For this, consider an analytic manifold ${\mathbf{H}}_2$ that is a two-sheet covering of ${\mathbf{H}}$, such that the variable $\alpha$ takes values in $[0 , 4 \pi]$ (its period is doubled with respect to that on ${\mathbf{H}}$), while $\beta$ still takes values in ${\mathbf{C}}$. The manifold ${\mathbf{H}}_2$ can be imagined as two copies of ${\mathbf{H}}$, cut along the line $\alpha = 0$, put one above another, and connected in a single torus. We assume that all functions on ${\mathbf{H}}_2$ are $4\pi$-periodic with respect to $\alpha$ and $2\pi$-periodic with respect to $\beta$.
The covering ${\mathbf{H}}_2$ has eight infinity points. Beside the points $J_1, \dots , J_4$ keeping the old coordinates $(\alpha, \beta)$, they are points $$J'_1 \, : \, (9\pi/ 4, 0),
\qquad
J'_2 \, : \, (15\pi/ 4, 0),
\qquad
J'_3 \, : \, (13\pi/ 4, 0),
\qquad
J'_4 \, : \, (11\pi/ 4, 0).$$
The Sommerfeld integral for the field on ${\mathbf{S}}_2$ is an expression $$\tilde u(m,n) = \int_{\Gamma_j} w_{m,n}(p) \, A(p) \Psi,
\label{ZomPlane}$$ where $\Psi$ is defined by (\[eq0301\]), $p = (x,y)$ is a point on ${\mathbf{H}}_2$, $A(p)$ is the [*Sommerfeld transformant*]{} of the field that is a function meromorphic on ${\mathbf{H}}_2$, and thus, possibly, double-valued on ${\mathbf{H}}$. We explain below that $A(p)$ should have two poles on the “real waves” line: with $\alpha = \phi_{\rm in} + 2\pi$ and $\alpha = 4\pi - \phi_{\rm in}$, corresponding to the incident plane waves.
An important part of the Sommerfeld method is the choice of the contour of integration in (\[ZomPlane\]). By analogy with the continuous case, we need to construct a family of contours depending on the angle of observation $\phi$, i. e. the contours $\Gamma = \Gamma(\phi)$. We consider a discrete family of contours, i. e. there is a set $\Gamma_j$ with $j$ cyclically depending on $\phi$. In the continuous case (Appendix C), the family $\Gamma (\phi)$ is formally continuous, but it also can be reduced to its finite subset.
Let such a family of contours $\Gamma_j$ be constructed. Then for each point of ${\mathbf{S}}_2$ and each contour $\Gamma_j$ it is possible to say whether the field at the point is described by the integral (\[ZomPlane\]) with this contour $\Gamma_j$ or not. We will say that the point is described by the contour if the answer is “yes”.
Formally, the family $\Gamma_j$ should have the following properties:
1. For each point $P$ of ${\mathbf{S}}_2 \setminus O$ there should be a contour $\Gamma_j$ describing the point $P$ and its four neighbors in the lattice.
2. If a point $P \in {\mathbf{S}}_2 \setminus O$ with coordinates $m,n$ is described by two contours $\Gamma_j$ and $\Gamma_{j'}$ then $\Gamma_{j'}$ should be transformable into $\Gamma_j$ without crossing the singularities of the integrand of (\[ZomPlane\]) taken for given $m,n$.
The second condition states that the field is described consistently, while the first condition states that the field obeys $(\ref{eq0101h})$.
Note that in the previous section, a system of contours is built for the plane wave decomposition (\[eq0303\]). The contours are $\Gamma_1^\sigma = \sigma_1$, $\Gamma_2^\sigma = \sigma_4$, $\Gamma_3^\sigma = \sigma_3$, $\Gamma_4^\sigma = \sigma_2$. They are cyclically changed as $\phi$ increases. However, we cannot use this (or similar) family now, since the transformant $A(p)$ has poles on the line $\beta = \pi$ corresponding to the incident plane waves, and thus the contours $\sigma_j$ cannot be transformed one into another without hitting these poles.
That is why, we need a new family of contours $\Gamma_j$ that do not cross the “real waves” line $\beta = \pi$. We still assume that the contours should be obtained one from another by carrying along the $\alpha$-axis.
Introduce contour $\Gamma$ on ${\mathbf{H}}_2$ as shown in Fig. \[fig14a\]. In the figure, the manifold ${\mathbf{H}}_2$ is represented by the rectangle $0 \le \alpha \le 4\pi$, $-\pi/2 \le \beta \le 3\pi /2$. Contour $\Gamma$ encircles the infinity points $J_4$, $J_3$, $J_2$, $J_1'$.
The family $\Gamma_j$ is set as follows. For $j = 1 \dots 8$ declare that the set of points $(r, \phi)$ with $$(j-1)\pi /2 < \phi < (j+1)\pi /2$$ is described by the contour $\Gamma_j = \Gamma + (j - 1)\pi /2$. The origin is described by all contours.
Here we introduced the notation $\Gamma + \delta$, where $\delta$ is some angle, that should be understood as follows. Contour $\Gamma$ can be considered as a set of points $(\alpha, \beta)$ on ${\mathbf{H}}_2$. Contour $\Gamma + \delta$ is $\Gamma$ shifted in the $\alpha$ direction by $\delta$, i. e. changed by the transformation $$(\alpha , \beta) \to (\alpha+ \delta, \beta).$$ The direction of $\Gamma + \delta$ is kept the same as for $\Gamma$.
Each of the contours $\Gamma + \pi l/2$ has exactly four infinity points inside.
The check that the family $\Gamma_j$ obeys condition 1 for the integration contour is trivial. To ensure the validity of condition 2, one generally has to impose an additional requirement on $A$. Let $A(p)$ have no singularities except the poles at $(\phi_{\rm in} + 2\pi, \pi)$ and $(4\pi-\phi_{\rm in} , \pi)$. Let, for example, the point $(r, \phi)$ be described by $\Gamma_1$ and $\Gamma_2$, i. e. $$\pi/2 \le \phi \le \pi.
\label{range}$$ Let us show that the contour $\Gamma_1$ can be safely transformed into $\Gamma_2$. For this, we need to show that the integrand is non-singular at the infinity points $J_4$ and $J_4'$. Function $A$ is analytic there by our assumption. The form $\Psi$ is analytic everywhere on ${\mathbf{H}}_2$. Consider the function $w_{m,n}$. Infinities $J_4$ and $J_4'$ correspond to $x \to \infty$, $y \to 0$, and the range (\[range\]) means that $m \le 0$, $n \ge 0$. Under these conditions, $w_{m,n} = x^m y^n$ is analytic.
Any other pair of neighboring contours $\Gamma_j$ is checked the same way. Finally, the set of contours $\Gamma_j$ obeys both conditions imposed on the Sommerfeld integral contours.
Properties of the Sommerfeld integral {#Sec:3.4}
-------------------------------------
Let $0 \le \phi \le \pi$, so the contour $\Gamma_1 = \Gamma$ can be used in (\[ZomPlane\]). Consider the far field, i. e. build the asymptotics of $u(mN, nN)$ as $N \to \infty$ for fixed $m,n$.
As usual, such an asymptotics can be built by applying the saddle-point method (we follow [@Martin2006]). Let us find the saddle points on ${\mathbf{H}}_2$. Represent $w_{mN, nN} (x, y)$ as $$w_{mN, nN} (x, y) =
\exp \{
i N (m \xi_1 + n \xi_2 )
\}
=
\exp \{
N (m \log(x) + n \log(\Xi(x)))
\}
.$$ A saddle point $x_*$ corresponds to $$\frac{d}{dx} \left(
m \log(x) + n \log(\Xi(x))
\right) = 0,$$ i. e. $$\frac{d\Xi(x_*)}{dx} = - \frac{m}{n}\frac{\Xi(x_*)}{x_*}
\label{eq:sadpoint}$$ This equation can be solved explicitly, since $\Xi(x)$ is given by (\[eq0111b\]). As a result, we get a multivalued expression $x_* = x_* (m/n)$. We do not write this formula out here due to its unwieldiness.
Using (\[eq0116\]), one can rewrite equation (\[eq:sadpoint\]) as $$\frac{y_* - y_*^{-1}}{x_* - x_*^{-1}} = \frac{n}{m}, \qquad y_* = \Xi(x_*).
\label{eq3100}$$ Indeed, $$\frac{n}{m} = \tan \phi,$$ and the set of the points corresponding to the solutions of (\[eq3100\]) for real $\phi$ is the line of “real waves” $\beta = \pi$, introduced above.
Note that formally the points on the line $\beta = 0$ also satisfy the equation (\[eq3100\]), but such saddle points yield asymptotic terms that are exponentially small.
There are two values of $x_*(m/n)$ on ${\mathbf{H}}$ belonging to the “real waves” line. They are the points $(\pm \arctan(n/m) , \pi)$. One of them corresponds to the wave going [*from*]{} the origin, and the other corresponds to the wave coming [*to*]{} the origin. Indeed, on ${\mathbf{H}}_2$ there are two values $x_*(m/n)$ corresponding to the waves going from the origin. Denote these points of ${\mathbf{H}}_2$ by $p'(m/n)$ and $p''(m/n)$. They have $(\alpha, \beta)$-coordinates $(\arctan(n/m), \pi)$ and $(\arctan(n/m) + 2\pi, \pi)$.
Deform contour $\Gamma$ as it is shown in Fig. \[fig14a\]. Namely, the deformed contour consists of two saddle-point loops $\sigma' (m/n)$ and $\sigma'' (m/n)$ passing through the aforementioned saddle points, and the loop $\sigma_{\rm pol}$ encircling the poles other than the infinities. Only the poles located between $\sigma' (m/n)$ and $\sigma'' (m/n)$ fall inside $\sigma_{\rm pol}$. On the “real waves” line, the poles that fall between the contours $\sigma' (m/n)$ and $\sigma'' (m/n)$ should have coordinate $\alpha$ falling in the range $$\phi < \alpha < \phi+2\pi.$$
The standard procedure of the saddle-point integration gives the cylindrical wave. The directivity of this wave is proportional to $A(p'(m/n)) - A(p''(m/n))$, and this combination seems typical for a Sommerfeld integral. The polar terms provide the incident plane waves (the initial wave $w_{m,n} (x_{\rm in}, y_{\rm in})$ and its mirror image) in the regions of their geometrical visibility. The initial plane wave corresponding to the pole $\phi_{\rm in} + 2\pi$ is visible in the range $$\phi_{\rm in} < \phi < \phi_{\rm in} + 2\pi,$$ while the reflected wave corresponding to the pole $4\pi - \phi_{\rm in}$ is visible in the range $$2\pi - \phi_{\rm in} < \phi < 4\pi - \phi_{\rm in} .$$
Functional problem for the transformant $A(p)$ and its solution {#Sec:3.5}
---------------------------------------------------------------
Let us formulate the functional problem for $A(p)$:
[*The transformant $A(p)$ should be meromorphic on ${\mathbf{H}}_2$, that is a two-sheet covering of ${\mathbf{H}}$ introduced above. $A(p)$ can only have simple poles at two points of ${\mathbf{H}}_2$, corresponding to the incident waves (the initial wave and its mirror reflection). They are the points $(\phi_{\rm in} + 2\pi, \pi)$ and $(4\pi - \phi_{\rm in} , \pi)$ in the $(\alpha, \beta)$-coordinates. The residues of the form $A \Psi$ at these points should be equal to $-(2\pi i)^{-1}$ and $(2\pi i)^{-1}$, respectively.* ]{}
It should be clear from the consideration above that these conditions are sufficient to make the Sommerfeld integral (\[ZomPlane\]) describe a solution of the diffraction problem formulated above for ${\mathbf{S}}_2$.
Let us reformulate the problem for $A(p)$ in more usual terms. Consider $x$ as the main variable. As we mentioned above, the torus ${\mathbf{H}}$ is the preimage of the Riemann surface ${\mathbf{R}}$ shown in Fig. \[fig03\]. A usual scheme [@Alekseev2004] of this surface is shown in Fig. \[fig13b\]. The horizontal lines are copies of the complex plane of the variable $x$. The circles are the branch points. The vertical lines are the connections between sheets due to the branching. This scheme should be completed by the scheme of the cuts drawn in the complex $x$-plane shown in Fig. \[fig03\].
![Scheme of Riemann surface ${\mathbf{R}}$[]{data-label="fig13b"}](fig13b.eps){width="50.00000%"}
One can see that ${\mathbf{R}}$ is the covering over $\overline{\mathbb{C}}$ with branching. In the same way we can say that ${\mathbf{H}}_2$ is the preimage of some Riemann surface ${\mathbf{R}}_2$ over $\overline{\mathbb{C}}$: $
{\mathbf{R}}_2 = \zeta ({\mathbf{H}}_2)$. The scheme of ${\mathbf{R}}_2$ over $\overline{\mathbb{C}}$ is shown in Fig. \[fig14b\]. The coverings can be described by the diagram $${\mathbf{R}}_2 \to {\mathbf{R}}\to \overline{\mathbb{C}},$$ where the second mapping is $\zeta$. The first mapping has no branch points.
![Scheme of Riemann surface ${\mathbf{R}}_2$[]{data-label="fig14b"}](fig14b.eps){width="50.00000%"}
Consider the points of ${\mathbf{H}}_2$ with $x = x_{\rm in}$. There are 4 such points: $$p_1 :\, (\phi_{\rm in} + 2\pi , \pi),
\qquad
p_2 :\, (\phi_{\rm in} , \pi),
\qquad
p_3 :\, (2\pi- \phi_{\rm in} , \pi)
\qquad
p_4 :\, (4\pi - \phi_{\rm in} , \pi),$$ in the $(\alpha, \beta)$-coordinates.
Two of these points, $p_1$ and $p_4$ correspond to the incident plane waves mentioned in the condition of the functional problem. Indeed, point $p_1$ has $y = y_{\rm in}$, while $p_4$ has $y = y_{\rm in}^{-1}$. The function $A$ should have poles at these points. The prescribed residues are $${\rm Res}[A \Psi , p_1] = - {\rm Res}[A \Psi , p_4] = -(2\pi i)^{-1}.
\label{system1}$$ According to the functional problem, $A$ should be regular at $p_2$ and $p_3$, otherwise the field would have the plane wave terms other than the incident components listed in the problem formulation. Thus, $${\rm Res}[A \Psi , p_2] = {\rm Res}[A \Psi , p_3] = 0.
\label{system2}$$
The problem for $A(p)$, $p \in {\mathbf{H}}$, is thus reformulated as a problem of finding a 4-valued function $A(x)$ on $\overline{\mathbb{C}}$ having given branch points $\eta_{j,k}$, single-valued on a prescribed Riemann surface ${\mathbf{R}}_2$, and having given poles at $x_{\rm in}$ with prescribed residues on each sheet. This is a problem of the theory of functions. It is easy to show that $A(x)$ should be an [*algebraic*]{} function. To build this function, below we guess four basis functions having all possible symmetries on ${\mathbf{R}}_2$ and construct $A(x)$ explicitly using these basis functions. In general, however, and in particular for the right-angle problem studied in the next section of the paper, similar problems are more complicated and require the application of more advanced methods.
An elementary symmetrization argument shows that [*any*]{} function meromorphic on ${\mathbf{R}}_2$, e.g. $A(x)$, can be written as a linear combination of four functions having different types of symmetry with respect to the substitutions of sheets of ${\mathbf{R}}_2$. These four functions are $$f_0 = 1,
\qquad
f_1 (x) = \sqrt{(x-\eta_{1,1})(x-\eta_{1,2})(x-\eta_{2,1})(x-\eta_{2,2})},
\label{eq1205}$$ $$f_2 (x) = \sqrt{(x-\eta_{2,1})(x-\eta_{2,2})},
\qquad
f_3 (x) = \sqrt{(x-\eta_{1,1})(x-\eta_{1,2})}
\label{eq1206}$$ (note that these functions are single-valued on ${\mathbf{R}}_2$). The coefficients of the linear combinations are rational functions of $x$.
Thus, $$A(x) =
R_0(x) + R_1(x) f_1(x) + R_2(x) f_2(x) + R_3(x) f_3(x).
\label{eq1207}$$ Our aim is to find the functions $R_j (x)$.
Let the residues of $R_j (x)$ at $x = x_{\rm in}$ be equal to $c_j$. Using these coefficients, find the residues of the form $A \Psi$ at the points $p_j$ on ${\mathbf{H}}_2$: $$\begin{aligned}
{\rm Res}[A \Psi , p_1] &=& (c_0 + c_1 g_1 + c_2 g_2 + c_3 g_3) / g_1,
\\
{\rm Res}[A \Psi , p_2] &=& (c_0 + c_1 g_1 - c_2 g_2 - c_3 g_3) / g_1,
\\
{\rm Res}[A \Psi , p_3] &=& -(c_0 - c_1 g_1 - c_2 g_2 + c_3 g_3) / g_1,
\\
{\rm Res}[A \Psi , p_4] &=& -(c_0 - c_1 g_1 + c_2 g_2 - c_3 g_3) / g_1,\end{aligned}$$ where $$g_1 = f_1 (p_1),
\quad
g_2 = f_2 (p_1),
\quad
g_3 = f_3 (p_1).$$ Taking the value $f_j (p_1)$ means that one should take the value $f_j (x_{\rm in})$ on the sheet of ${\mathbf{R}}_2$ where the point $p_1$ is located.
Solving equations (\[system1\]), (\[system2\]), obtain $$c_0 = -\frac{1}{4\pi i} f_1 (p_1),
\qquad
c_2 = -\frac{1}{4\pi i} f_3 (p_1),
\qquad
c_1 = c_3 = 0.
\label{eq1208}$$
Finally, we can construct $A(x)$ taking the simplest rational functions having prescribed residues at $x= x_{\rm in}$: $$R_j = \frac{c_j}{x - x_{\rm in}},$$ and $$A(p) =-\frac{ f_1 (p_1) + f_3(p_1) \, f_2(p) }{4\pi i (x - x_{\rm in})}
.
\label{eq1209}$$ One can check that this transformant obeys all conditions imposed on $A$.
Note that the transformant $A(p)$ has no singularities at infinity. The reason for this is that the Dirichlet problem is studied, and the reflected plane wave has the sign opposite to the incident plane wave. Thus, the sum of the residues of the transformant corresponding to the plane waves is equal to zero, and there is no need for additional poles at the infinity points.
Analysis of the solution obtained by the Sommerfeld integral {#Sec:3.6}
------------------------------------------------------------
Let us check directly that the Sommerfeld solution (\[ZomPlane\]) with the transformant (\[eq1209\]) coincides with the Wiener–Hopf solution (\[WHsol\]) (the details are given in Appendix B). For definiteness, let us study the case $0 \le \phi < \pi$, i. e. consider the Sommerfeld integral with contour $\Gamma$. Deforming contour $\Gamma$ as shown in Fig. \[fig14a\], obtain: $$\tilde u(m,n) = u_{\rm in}(m,n) + \int_{\sigma' + \sigma''}w_{m,n}A(p)\Psi.$$ Taking into account the symmetry of function $A$ under the shift $\alpha \to \alpha + 2\pi$, obtain for the second term $$-\frac{1}{4\pi i}\int_{\sigma'}\frac{x^{m}y^{n}}{x(y - y^{-1})}\left(\frac{f_1(p_1) + f_3(p_1)f_2(x)}{x-x_{\rm in}} - \frac{f_1(p_1) - f_3(p_1)f_2(x)}{x-x_{\rm in}} \right)dx =$$ $$= -\frac{1}{2\pi i}\int_{\sigma'}\frac{x^{m}y^{n}}{x(y - y^{-1})}\frac{f_3(p_1)f_2(x)}{x-x_{\rm in}}dx.$$ Here we used explicit expressions for $A,\Psi$ and $w_{m,n}$. Taking into account (\[eq0111w\]), obtain (\[WHsol\]).
While representation (\[WHsol\]) seems to be simpler, the Sommerfeld integral representation (\[ZomPlane\]) may be more suitable for computations, since it is an integral around several poles, possibly located at the infinity points $J_1', J_2, J_3, J_4$. Thus, the integral can be calculated using the residue theorem. This essentially means that we put our problem into the context of generating functions [@Lando2003].
Let us for example calculate the integral on the half-line $n=0, m>0$ where the Dirichlet boundary conditions should be satisfied. In this case it has poles at the points $J_3$ and $J_4$. The poles are of order $m$. Straightforward calculations show that $${\rm Res}[w_{m,n}A\Psi,J_3] = - {\rm Res}[w_{m,n}A\Psi,J_4] = -\frac{1}{(m-1)!}\lim\limits_{\tau \to 0}\frac{d^{m-1}}{d\tau^{m-1}}\left[\frac{f_1(x_{\rm in})+f_3(x_{\rm in})f_2(\tau^{-1})}{4\pi i (\tau^{-1}- x_{\rm in})(y-y^{-1})\tau}\right]$$ and $$\tilde u(m,0) = 0, \quad m>0,$$ i. e. the Dirichlet boundary conditions are satisfied.
Values of $\tilde u$ on the whole grid can be calculated in a similar way. If $m > 0, n>0, m>n$ the Sommerfeld integral has poles at $J_3, J_4$ of orders $m-n$ and $m+n$, respectively: $$\tilde u(m,n) = -\frac{1}{(m+n-1)!}\frac{d^{m+n-1}}{d\tau^{m+n-1}}\left[\frac{\tau^{n}y^n(f_1(x_{\rm in})+f_3(x_{\rm in})f_2(\tau^{-1}))}{2 (\tau^{-1}- x_{\rm in})(y-y^{-1})\tau}\right](\tau=0)$$ $$\label{Resc1}
- \frac{1}{(m-n-1)!}\frac{d^{m-n-1}}{d\tau^{m-n-1}}\left[\frac{\tau^{-n}y^{-n}(f_1(x_{\rm in})+f_3(x_{\rm in})f_2(\tau^{-1}))}{2 (\tau^{-1}- x_{\rm in})(y-y^{-1})\tau}\right](\tau=0).$$ If $n>0, n>m, m>-n$ the Sommerfeld integral has poles at $J_2, J_3$ of orders $n-m$ and $m+n$, respectively: $$\tilde u(m,n) = -\frac{1}{(m+n-1)!}\frac{d^{m+n-1}}{d\tau^{m+n-1}}\left[\frac{\tau^{n}y^n(f_1(x_{\rm in})+f_3(x_{\rm in})f_2(\tau^{-1}))}{2(\tau^{-1}- x_{\rm in})(y-y^{-1})\tau}\right](\tau=0)$$ $$\label{Resc2}
- \frac{1}{(n-m-1)!}\frac{d^{n-m-1}}{dx^{n-m-1}}\left[\frac{x^{n}y^{n}(f_1(x_{\rm in})+f_3(x_{\rm in})f_2(x))}{2 (x- x_{\rm in})(y-y^{-1})x}\right](x=0).$$ Following this procedure one can obtain explicit expressions for the field in the remaining nodes.
Thus, using formulae (\[Resc1\]),(\[Resc2\]) one can represent the field in each node $(m,n)$ as an explicit algebraic function of variables $(K, x_{\rm in})$.
Let us check that the homogeneous equation (\[eq0101h\]) is satisfied at node $(-1,0)$. For this, calculate values $\tilde u(m,n)$ explicitly at the nodes $(-2,0)$, $(-1,1)$, $(-1,-1)$, $(-1,0)$. We have: $$\tilde u(-2,0) = \frac{f_1(x_{\rm in})(1- (K^2 - 4)x_{\rm in})}{x^2_{\rm in}},$$ $$\tilde u(-1,1) = \frac{-2f_1(x_{\rm in})-2f_3(x_{\rm in})+(\eta_{2,1}+\eta_{2,2})f_3(x_{\rm in})}{4x^2_{\rm in}},$$ $$\tilde u(-1,-1) = -\frac{2f_1(x_{\rm in})-2f_3(x_{\rm in})+(\eta_{2,1}+\eta_{2,2})f_3(x_{\rm in})}{4x^2_{\rm in}},$$ $$\tilde u(-1,0) = \frac{f_1(x_{\rm in})}{x_{\rm in}}.$$ Substituting the latter in (\[eq0101h\]) and taking into account that $\tilde u(0,0) =0$, we see that the homogeneous Helmholtz equation is satisfied.
Diffraction by a Dirichlet right angle
=====================================
Problem formulation {#wedge1}
-------------------
Let the homogeneous discrete Helmholtz equation (\[eq0101h\]) be satisfied everywhere except the domain $n \geq 0,m\geq 0$, which is the scatterer in this case. The Dirichlet conditions are set on the boundary of the scatterer: $$\label{bound_condw}
u(m,0) = 0,\quad m\geq0, \qquad u(0,n) = 0,\quad n\geq0.$$
This corresponds to the classical 2D problem of diffraction by an angle (or by a wedge in 3D). The geometry of the problem is shown in Fig. \[fig12b\].
![Geometry of the problem of diffraction by an angle. Black circles show the position of the Dirichlet angle[]{data-label="fig12b"}](wedge_geometry_2.eps){width="30.00000%"}
The total field is a sum of the incident field and the scattered field (see (\[uinusc\])). The incident field is the plane wave (\[uin\]). As before, we take the incident plane wave on the “real waves line”, i. e. (\[phiin\]) is valid. The angle of propagation of the incident wave $\phi_{\rm in}$ obeys $$0 < \phi_{\rm in} < \pi / 2,$$ thus $$|x_{\rm in}| < 1 ,\qquad |y_{\rm in}| < 1.$$ The scattered field should obey the radiation condition, i. e. it should decay at infinity.
Formulation on a branched surface {#wedge2}
---------------------------------
Using the same consideration as in Section \[SecBranch\], let us construct a branched lattice ${\mathbf{S}}_3$ with three sheets. Use the polar coordinates $(r, \phi)$ defined as (\[eq0401\]).
The lattice ${\mathbf{S}}_3$ is the set of all integer points $(m,n)$ written in the polar coordinates $(r, \phi)$ introduced above with $0 \le \phi < 6 \pi$, implying the $6\pi$-periodicity in $\phi$ of all functions. Obviously, ${\mathbf{S}}_3$ is a 3-sheet covering of $\mathbb{Z} \times \mathbb{Z}$. The origin is common for all sheets.
Consider a solution $u(m,n)$ of the diffraction problem formulated above. Define the function $\tilde u(r, \phi)$ on ${\mathbf{S}}_3$ by taking $$\tilde u (r, \phi) = u (r, \phi) ,
\qquad \mbox{ for }
\pi / 2 \le \phi \le 2\pi,$$ $$\tilde u (r, \phi) = - u (r, 4\pi - \phi) ,
\qquad \mbox{ for }
2 \pi \le \phi \le 7\pi / 2,$$ $$\tilde u (r, \phi) = u (r, \phi - 3\pi) ,
\qquad \mbox{ for }
7\pi / 2 \le \phi \le 5 \pi ,$$ $$\tilde u (r, \phi) = - u (r, 7\pi - \phi ) ,
\qquad \mbox{ for }
5 \pi \le \phi \le 13 \pi /2 .$$ One can check directly that field $\tilde u$ satisfies equation (\[eq0101h\]) on ${\mathbf{S}}_3 \setminus O$, i. e. the boundaries can be discarded in the formulation of the problem.
The new field $\tilde u$ has four incident field contributions. Define $$u_{\rm in} = w_{m,n} (x_{\rm in} , y_{\rm in}) ,
\qquad
\mbox{ for }
\pi / 2 \le \phi \le 2\pi,$$ $$u_{\rm in} = - w_{m,n} (x_{\rm in} , y_{\rm in}^{-1}) ,
\qquad
\mbox{ for }
2\pi \le \phi \le 7\pi/2,$$ $$u_{\rm in} = w_{m,n} (x_{\rm in}^{-1} , y_{\rm in}^{-1}) ,
\qquad
\mbox{ for }
7\pi/2 \le \phi \le 5\pi,$$ $$u_{\rm in} = -w_{m,n} (x_{\rm in}^{-1} , y_{\rm in}) ,
\qquad
\mbox{ for }
5\pi \le \phi \le 13\pi/ 2.$$ (The incident field is discontinuous, but this does not matter, since we are not going to construct the scattered field separately.) The difference $\tilde u - u_{\rm in}$ should decay as $\sqrt{m^2 + n^2 }\to \infty$.
We can now formulate the diffraction problem on the branched surface:
[*Find a field $\tilde u$ defined on ${\mathbf{S}}_3$, obeying equation (\[eq0101h\]) everywhere except the origin, with $\tilde u - u_{\rm in}$ exponentially decaying as $r \to \infty$.* ]{}
The Sommerfeld integral and the problem for the transformant {#wedge3}
------------------------------------------------------------
Analogously to the half-line diffraction problem, introduce an abstract analytic manifold ${\mathbf{H}}_3$ that is a 3-sheet covering of ${\mathbf{H}}$ without branching. To build this manifold, take coordinates $(\alpha, \beta)$ introduced for ${\mathbf{H}}$ and allow $\alpha$ to take values in $[0, 6 \pi]$. This means that we take three copies of ${\mathbf{H}}$ having $\alpha \in {\mathbf{C}}$, all cut into “tubes” along the coordinate lines $\alpha = 0$, and attach them one to another, forming a “ring”. The schemes of ${\mathbf{H}}$, ${\mathbf{H}}_2$ and ${\mathbf{H}}_3$ with respect to coordinates $(\alpha, \beta)$ are shown schematically in Fig. \[figgHs\].
![Schemes of manifolds ${\mathbf{H}}$, ${\mathbf{H}}_2$, ${\mathbf{H}}_3$ in the coordinates $(\alpha, \beta)$[]{data-label="figgHs"}](figgHsa.eps){width="80.00000%"}
The covering ${\mathbf{H}}_3$ has a total of $12$ infinity points. They are the preimages of $x = 0$ and $x = \infty$ under the mapping ${\mathbf{H}}_3 \to \overline{\mathbb{C}}$.
We seek the solution $\tilde u$ in the form (\[ZomPlane\]) with the Sommerfeld transformant $A(p)$ single-valued and meromorphic on ${\mathbf{H}}_3$. Contours $\Gamma_j$ are selected in the same way as for the half-line problem: they are $\Gamma_j = \Gamma + (j - 1)\pi /2$, $j = 1\dots 12$. Contour $\Gamma$ is the same as shown in Fig. \[fig14a\].
The functional problem for $A(p)$ is as follows:
*The transformant $A(p)$ should be meromorphic on ${\mathbf{H}}_3$ that is a three-sheet covering of ${\mathbf{H}}$. $A(p)$ can only have simple poles at four points of ${\mathbf{H}}_3$, corresponding to the incident plane wave and its mirror reflections. These poles have $(\alpha, \beta)$-coordinates $(\phi_{\rm in} + 2\pi, \pi)$, $(4 \pi - \phi_{\rm in}, \pi)$, $(\phi_{\rm in} + 5\pi, \pi)$, $(\pi - \phi_{\rm in}, \pi)$.*
The residues of the form $A \Psi$ at these points should be equal to $-(2\pi i)^{-1}$, $(2\pi i)^{-1}$, $-(2\pi i)^{-1}$, $(2\pi i)^{-1}$ respectively.
Solution of the functional problem for the Sommerfeld transformant {#wedge4}
------------------------------------------------------------------
Unfortunately, we were not able to build a simple representation like (\[eq1207\]) for $A(p)$, since we cannot guess basic algebraic functions on ${\mathbf{H}}_3$ having all six types of symmetry. Here we construct $A(p)$ as an infinite series using the general theory of elliptic functions [@Gurvitz1968; @Akhiezer1980; @Bateman1955].
Let $p$ denote a point on ${\mathbf{H}}$. Introduce a complex function $t(p)$ by the relation $$t(p) = \int^{p}_{B_1} \frac{dx}{x(y-y^{-1})} = \int^{p}_{B_1} \Psi.
\label{elfunF}$$ We are going to use both the function $t(p)$ and its inverse $p(t)$. The lower limit of the integral can be arbitrary, and we take it equal to $B_1$ for convenience.
Of course, the value of $t$ depends not only on $p$ but also on the path of integration. This ambiguity is taken care of as follows. It is known that the mapping $p \to t$ maps the torus ${\mathbf{H}}$ to a parallelogram ${\mathbf{P}}$ in the $t$-plane with sides $\omega_1$, $\omega_2$. These values are called the periods and are defined by the relations $$\omega_1 = \int_\sigma \frac{dx}{x(y-y^{-1})},\quad \omega_2 = \int_\kappa \frac{dx}{x(y-y^{-1})}.$$ Contour $\sigma$ has been used above (this is the line $\alpha = \pi/2$ passed in the negative $\beta$-direction), and contour $\kappa$ is the line $\beta = 0$ passed in the positive $\alpha$-direction. Thus, function $t(p)$ is single-valued on ${\mathbf{H}}$ cut along $\sigma$ and $\kappa$.
We should make a remark that one can use the complex plane of $t$ as the coordinate plane for the torus instead of $(\alpha, \beta)$. Roughly speaking, the $\alpha$-direction is $\omega_2$, while the $\beta$-direction is $-\omega_1$.
Consider a function $$\label{etafunct}
E(t,\omega_1,\omega_2) = \frac{1}{t} +
{\sum^{\infty}_{k = -\infty}}{\sum^{\infty}_{l=-\infty}}'
\left(\frac{1}{t - \omega_1 k - \omega_2 l} + \frac{1}{\omega_1 k + \omega_2 l} + \frac{t}{(\omega_1 k + \omega_2 l)^2}\right).$$ The term $k=0,l=0$ is excluded from the sum. This is denoted by “prime” notation after the sum symbol. One can show [@Akhiezer1980] that function $E$ is a meromorphic function of variable $t$ having poles at the points of the grid $ t = \omega_1 k + \omega_2 l$ with principal parts $$\frac{1}{t-(\omega_1 k + \omega_2 l)}.$$
Function $E$ satisfies the following relations: $$\label{etaperiod}
E(t+\omega_j,\omega_1,\omega_2) - E(t,\omega_1,\omega_2) = d_j,\quad j=1,2,$$ where $d_j$ are some constants.
Function $A(p)$ can be built using (\[etafunct\]). Let us study $\tilde A(t) = A\left(p(t)\right)$ as a function of the variable $t$. Note that the transition from ${\mathbf{H}}$ to its covering ${\mathbf{H}}_3$ looks in the $t$ coordinate like tripling the period $\omega_2$ (note that the $\alpha$-period is tripled). Namely, for $A(p)$ to be single-valued on ${\mathbf{H}}_3$, $\tilde A(t)$ should be a doubly periodic function with the periods $(\omega_1,3\omega_2)$. Besides, in each elementary parallelogram the function $\tilde A(t)$ should have four poles corresponding to the incident waves.
The positions of poles on the $t$-plane are defined as follows. Let $t_0$ be $t(x_{\rm in})$ with the integral in (\[elfunF\]) taken along the shortest path along the “real waves” line. Then, one can establish the following correspondence: $$\phi_{\rm in} + 2\pi \longleftrightarrow t_0 + \omega_2,$$ $$4\pi - \phi_{\rm in} \longleftrightarrow 2\omega_2 - t_0,$$ $$\phi_{\rm in} + 5\pi \longleftrightarrow t_0 + 5\omega_2/2,$$ $$\pi - \phi_{\rm in} \longleftrightarrow \omega_2/2-t_0,$$ Thus, one can construct $\tilde A(t)$ as $$A\left(x(t)\right) = 2\pi i \big[-E(t-(t_0 + \omega_2),\omega_1,3\omega_2)+E(t-(2\omega_2-t_0),\omega_1,3\omega_2)-$$ $$\label{Awedge}
E(t-(t_0 + 5\omega_2/2),\omega_1,3\omega_2)+E(t-(\omega_2/2-t_0),\omega_1,3\omega_2)\big].$$ This function is doubly periodic with the periods $(\omega_1, 3\omega_2)$, since it contains two terms with $E$ and two terms with $-E$. As a result, the constants $d_j$ introduced in (\[etaperiod\]) are canceled. Thus, the integral (\[ZomPlane\]) with (\[Awedge\]) provides the solution for the right-angled wedge problem.
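The cancellation of the constants $d_j$ can also be checked numerically with a truncated lattice sum for $E$. The sketch below uses hypothetical values of $\omega_1$, $\omega_2$ and $t_0$ (in the actual problem they are given by the period integrals above and by $t_0 = t(x_{\rm in})$); up to the truncation error of the sum, $\tilde A$ is periodic with the periods $\omega_1$ and $3\omega_2$, but not with the period $\omega_2$.

```python
import numpy as np

# Hypothetical periods and pole position; in the actual problem omega1, omega2
# are the period integrals introduced above and t0 = t(x_in).
omega1, omega2 = 2.0 + 0.3j, 0.4 + 1.5j
t0 = 0.17 + 0.05j
M = 120                                  # truncation of the lattice sum

def E(t, w1, w2):
    k = np.arange(-M, M + 1)
    kk, ll = np.meshgrid(k, k)
    w = (w1 * kk + w2 * ll)[(kk != 0) | (ll != 0)]    # lattice without the origin
    return 1 / t + np.sum(1 / (t - w) + 1 / w + t / w**2)

def A_tilde(t):                          # the combination (Awedge)
    return 2j * np.pi * (-E(t - (t0 + omega2), omega1, 3 * omega2)
                         + E(t - (2 * omega2 - t0), omega1, 3 * omega2)
                         - E(t - (t0 + 5 * omega2 / 2), omega1, 3 * omega2)
                         + E(t - (omega2 / 2 - t0), omega1, 3 * omega2))

t = 0.3 + 0.2j
print(abs(A_tilde(t + omega1) - A_tilde(t)))      # small (omega1 is a period)
print(abs(A_tilde(t + 3 * omega2) - A_tilde(t)))  # small (3*omega2 is a period)
print(abs(A_tilde(t + omega2) - A_tilde(t)))      # not small: omega2 alone is not a period
```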
Admittedly, the function $A(x) = \tilde A(t(x))$ with $\tilde A$ defined by (\[Awedge\]) is expressed in a rather inefficient way. It is easy to prove that $A(x)$ should be an algebraic function that can be expressed explicitly and should contain only square and cubic radicals. However, finding the explicit form of such an elementary function is not an elementary problem, and we prefer to leave the answer in elliptic functions, which are the most informative in this case.
While the obtained result seems cumbersome and impractical, it can still be used for calculations. Indeed, the integral (\[ZomPlane\]) is still a pole integral that can be calculated using the residue theorem, and formulae similar to (\[Resc1\]) and (\[Resc2\]) can be obtained. For this, one needs to compute the values of some elliptic functions at several specific points. The latter problem is well known [@Bateman1955], and it can be solved very efficiently using the theory of $\theta$-functions.
Conclusion {#concl}
==========
The main ideas of the paper are as follows:
- We made an elementary observation that the representation (\[eq0107b\]) for the Green’s function for a discrete plane is an elliptic integral. Thus, an application of the Legendre’s theorem yields recursive relations, say, (\[eq0120\]), (\[eq0505\]). Such recursive relations can be useful for practical computation of the Green’s function, since they require numerical integration only for a few first values of the field. We are not studying here the practical aspects of application of these recursive relations (basically, their stability), leaving this subject for another study.
- We make an observation that the plane wave decomposition (\[eq0107b\]) can be considered as an integral of an analytic differential 1-form over a contour on an analytic manifold ${\mathbf{H}}$. The integral is rewritten as (\[eq0303\]). The manifold ${\mathbf{H}}$ is the set of all plane waves that can travel along the discrete plane, i. e. it is the dispersion diagram of the system. Topologically, ${\mathbf{H}}$ is a torus.
- We develop the Sommerfeld integral formalism for a Dirichlet half-line diffraction problem. Indeed, we follow the classical continuous consideration, and keep the analogy as close as possible. Using the principle of reflections, we formulate the discrete diffraction problem on a branched discrete surface with two sheets. Then we construct an analogue of the Sommerfeld integral. For this, we keep the Ansatz with the differential form (\[eq0303\]) (now it reads as (\[ZomPlane\])). By analogy with the continuous case, we look for the Sommerfeld transformant $A(p)$ that is single-valued on a two-sheet covering of ${\mathbf{H}}$. The latter is denoted by ${\mathbf{H}}_2$. The Sommerfeld contour is chosen in a way as close as possible to the continuous case. We formulate and solve a functional problem for $A(p)$. The latter is the classical problem of finding an algebraic function on a given Riemann surface having given singularities.
- We make an observation that, unlike in the continuous case, the Sommerfeld integral has certain computational advantages [*without*]{} a transformation into the combination of the saddle-point contours and the plane wave residual components. Namely, the Sommerfeld integral can be taken by computing its residues at infinity points. This yields explicit representations (\[Resc1\]) and (\[Resc2\]), so numerical integration is not needed at all for the diffraction problem. Surprisingly, the solution of the diffraction problem seems simpler than that of the problem for the Green’s function.
- We then develop the Sommerfeld formalism for another problem to which the reflection principle can be applied. This is the problem of diffraction by a Dirichlet right angle. By analogy, we write down the Sommerfeld integral and look for the transformant $A(p)$. The transformant is an algebraic function single-valued on a 3-sheet covering of ${\mathbf{H}}$ named ${\mathbf{H}}_3$. Unfortunately, the problem for the transformant is more complicated in this case, and we have found its solution only in terms of elliptic functions, see (\[Awedge\]).
Appendix A. Computation of the Green’s function using the recursive relations {#appenidx-a.-computation-of-the-greens-function-using-the-recursive-relations .unnumbered}
=============================================================================
We study here the problem for the Green’s function (\[eq0101\]). Our aim is to find a way to tabulate function $u(m,n)$ for some set of values $m,n$. It is clear that $$\label{sym_green}
u(m,n) = u(-m,n) = u(m,-n) = u(-m,-n).$$ Thus, one should tabulate $u(m,n)$ only for non-negative $m,n$.
Let it be necessary to tabulate all $u(m,n)$ with $$|m| + |n| \le N.$$ A naive approach requires $\sim N^2$ computations of the integral. However, here we show that it is enough to compute only two integrals and then use “cheap” recursive relations.
We proceed in two steps. In the first (easy) step we assume all $u(m,0)$ to be known and show that all other values $u(m,n)$ can be found by recursive relations. In the second (more complicated) step we show that all $u(m,0)$ can be found from two initial values.
[**Step 1.**]{} Compute the values of $u(m,n)$ row by row. Each row is a set of values with $m\ge0$, $n\ge 0$, $m+ n = \mbox{const}$, i. e. the rows are diagonals.
Let all values with $|m| + |n| \le M$ be already computed, and it is necessary to compute the values with $|m| + |n| = M + 1$. Then use (\[eq0101\]) rewritten as a recursive relation: $$u(M+1 - n , n) + u(M+2 - n , n-1) =
\qquad \qquad \qquad \qquad \qquad
\label{eq0120}$$ $$- u(M+1 - n , n-2) - u(M - n , n-1) + (4 - K^2) u(M+1 - n, n-1).$$ Note that all values on the right have index sums $\le M$, thus they have been computed previously. Taken for $n = 1,2,\dots, M+1$, this is a recursive relation for the values on the left-hand side. Thus, if the values $u(m,0)$ are known, one can compute all other values by (\[eq0120\]).
[**Step 2.**]{} Let $n = 0$. Rewrite the representation (\[eq0107b\]) for $u(m,0)$ in the form $$u(m,0) = \frac{1}{2\pi i}
\int_\sigma \frac{x^m dx}{z(x)},
\qquad
z(x) = \sqrt{(x^2 + (K^2 - 4)x + 1)^2 - 4x^2}.
\label{eq0501}$$ Being inspired by the proof of Legendre’s theorem for the Abelian integrals [@Bateman1955], derive a recursive formula for $u(m,0)$. Introduce the constants $a_0, \dots, a_3$ as follows: $$a_0 = 1 ,
\qquad
a_1 = 2 (K^2 - 4),
\qquad
a_2 = (K^2 - 4)^2 - 2,
\qquad
a_3 = 2(K^2 - 4).
\label{eq0502}$$ Using these constants one can write $$z^2(x) = x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0.
\label{eq0503}$$ Note that $$\frac{d}{dx} (x^m z) = \left( m+ 2 \right) \frac{x^{m+3}}{z(x)}
+
\left( m+ 3/2 \right) a_3 \frac{x^{m+2}}{z(x)}
+\qquad \qquad \qquad$$ $$\left( m+ 1 \right) a_2 \frac{x^{m+1}}{z(x)}
+
\left( m+ 1/2 \right) a_1 \frac{x^{m}}{z(x)}
+
m a_0 \frac{x^{m-1}}{z(x)}.
\label{eq0504}$$
Substituting this identity into (\[eq0501\]) and taking into account that contour of integration $\sigma$ is closed, get $$- \left( m+ 2 \right) u(m+3,0)
=
\left( m+ 3/2 \right) a_3\, u(m+2,0)
+ \qquad \qquad \qquad$$ $$\left( m+ 1 \right) a_2\, u(m+1 ,0)
+
\left( m+ 1/2 \right) a_1 \, u(m,0)
+
m \,a_0 \, u(m-1,0) .
\label{eq0505}$$ This is the recursive relation connecting five values of $u(m,0)$. Write down the latter for $m=0$ and use (\[sym\_green\]). Obtain: $$\label{rec_1}
-2u(3,0) = 3/2a_3u(2,0)+a_2u(1,0)+1/2a_1u(0,0).$$ Then, write down (\[eq0101\]) for $m=0$, $n=0$, taking into account (\[sym\_green\]) and the symmetry relation $u(m,n)=u(n,m)$. Obtain: $$\label{rec_3}
4u(1,0) = 1 - u(0,0)(K^2-4).$$ Using (\[rec\_1\]) and (\[rec\_3\]) one can express $u(3,0)$ and $u(1,0)$ in terms of $u(2,0)$ and $u(0,0)$. Then, using (\[eq0505\]), one can express any $u(m,0)$ in terms of $u(2,0)$ and $u(0,0)$. Thus, knowing only the two values $u(2,0)$ and $u(0,0)$ is enough to compute all other $u(m,0)$ without integration.
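To illustrate Step 2 numerically, one can use the following minimal sketch (the value of $K$ is a hypothetical example). It tabulates $u(m,0)$ by quadrature of (\[eq0501\]) over the unit circle, using a branch of $z(x)$ that is continuous on $|x|=1$ but fixed only up to an overall constant factor, and then checks the homogeneous recursion (\[eq0505\]), which is insensitive to that factor.

```python
import numpy as np

K = 2.0 + 0.05j                                    # hypothetical wavenumber
b = K**2 - 4
a0, a1, a2, a3 = 1.0, 2 * b, b**2 - 2, 2 * b       # coefficients (eq0502)

# Roots of x^2 + d x + 1 = 0, split into the one inside and the one outside |x| = 1.
def split_roots(d):
    r = np.roots([1.0, d, 1.0])
    return min(r, key=abs), max(r, key=abs)

(eta21, eta11), (eta22, eta12) = split_roots(K**2 - 2), split_roots(K**2 - 6)

# A branch of z(x) continuous on |x| = 1 (defined up to a constant factor,
# which drops out of the homogeneous recursion below).
def z_of_x(x):
    return (x * np.sqrt(1 - eta21 / x) * np.sqrt(1 - eta22 / x)
              * np.sqrt(1 - x / eta11) * np.sqrt(1 - x / eta12))

N = 8192
x = np.exp(2j * np.pi * np.arange(N) / N)          # quadrature nodes on sigma
zx = z_of_x(x)

def u(m):                                          # quadrature of (eq0501)
    return np.mean(x**m / zx * x)                  # (1/2 pi i) * contour integral

vals = [u(m) for m in range(7)]

# Check the recursion (eq0505) for several m: lhs and rhs should agree.
for m in range(1, 4):
    lhs = -(m + 2) * vals[m + 3]
    rhs = ((m + 1.5) * a3 * vals[m + 2] + (m + 1) * a2 * vals[m + 1]
           + (m + 0.5) * a1 * vals[m] + m * a0 * vals[m - 1])
    print(m, abs(lhs - rhs))
```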
We should note that similar results were obtained for the discrete Laplace equation in [@McCrea1940; @Duffin1953; @Spitzer1964; @Atkinson1999; @BalthVanDerPol2008].
Appendix B. Wiener-Hopf solution for a discrete half-line problem {#appendix-b.-wiener-hopf-solution-for-a-discrete-half-line-problem .unnumbered}
=================================================================
Following [@Sharma2015b], let us find the solution of the half-line diffraction problem using the Wiener–Hopf approach. First, let us symmetrize the problem. Namely, represent the incident field (\[uin\]) as a sum: $$u_{\rm in}(m,n) = \frac{1}{2}\left(u_{\rm in}(m,n)+u_{\rm r}(m,n)\right) + \frac{1}{2}\left(u_{\rm in}(m,n)-u_{\rm r}(m,n)\right)
\equiv u_{\rm in,s}(m,n)+u_{\rm in,a}(m,n),$$ where $$u_{\rm in}(m,n) = x_{\rm in}^m \, y_{\rm in}^n,
\qquad
u_{\rm r}(m,n) = x_{\rm in}^m \, y_{\rm in}^{-n}.$$ Then, study the equation (\[eq0101h\]) separately for the symmetrical part $u_{\rm sc,s}(m,n)$ and the anti-symmetrical part $u_{\rm sc,a}(m,n)$ of the scattered field. Trivially, the anti-symmetrical scattered field is zero: $$u_{\rm sc, a}(m,n) = 0.$$ Thus the solution of the symmetrical problem coincides with the solution of the original problem: $$u_{\rm sc,s}(m,n) \equiv u_{\rm sc}(m,n).$$ Without loss of generality assume that $n \geq 0$.
Introduce the direct and inverse bilateral $\mathcal{Z}$-transforms as follows: $$F(z) = \mathcal{Z} \{f_n\} = \sum_{n=-\infty}^{\infty}f_nz^{-n},$$ $$f_n = \mathcal{Z}^{-1} \{F(z)\} = \frac{1}{2\pi{i}}\int_\sigma F(z)z^{n-1}dz,$$ where $\sigma$ is a contour going along the unit circle $|z| = 1$ in the positive direction.
Apply the $\mathcal{Z}$-transform to $u_{\rm sc}(m,0)$ and take into account the Dirichlet boundary condition $$u_{\rm sc}(m,0) = - u_{\rm in} (m,0) , \qquad m \ge 0.$$ The result is $$\label{WH1}
U_{\rm sc}(z) = -U_{\rm in}(z) + G_-(z),$$ where $$U_{\rm in}(z) = \sum_{m=0}^{\infty}u_{\rm in} ( m,0 ) \, z^{-m} = \frac{z}{z-x_{\rm in}}$$ and $$G_- (z) = \sum_{m=-\infty}^{-1} u_{\rm sc}(m,0) \, z^{-m},$$ which is an unknown function analytic inside the unit circle. Note that the function $U_{\rm in}(z)$ is analytic outside the unit circle.
Consider the combination $$w(m) = \frac{1}{2}(u_{\rm sc}(m+1,0)+u_{\rm sc}(m-1,0))
+ u_{\rm sc}(m, 1) + (K^2/2-2) u_{\rm sc}(m,0)
\label{defw}$$ for $m \in \mathbb{Z}$. Due to symmetry $u(m,1) = u(m,-1)$ and to equation (\[eq0101h\]) taken for $n =0$, $m < 0$, $$w(m)=0\quad \mbox{ for } \quad m<0,$$ Note that the scattered field in $n>0$ consists of plane wave decaying as $n \to \infty$, thus $$\mathcal{Z}\{ u_{\rm sc}(\cdot,n) \}(z) =
\Xi^n(z) \,
\mathcal{Z}\{ u_{\rm sc}(\cdot,0) \}(z).$$ Using this relation, one can compute $\mathcal{Z}\{ u_{\rm sc}(\cdot,1)\}$. Taking the $\mathcal{Z}$-transform of (\[defw\]), we obtain $$\label{WH2}
W_+(z) = \frac{\sqrt{(K^2 - 4 + z + z^{-1})^2 -4}}{2} \, U_{\rm sc}(z),$$ where $$W_+(z) = \sum_{m=0}^{\infty}w(m)\, z^{-m}$$ is analytic outside the unit circle. Combining (\[WH1\]) and (\[WH2\]) we obtain the following Wiener–Hopf equation: $$\frac{2W_+(z)}{z \sqrt{(K^2 - 4 + z + z^{-1})^2 -4}} = -\frac{1}{z-x_{\rm in}} + \frac{G_-(z)}{z}.$$
This equation can be easily solved. For this, note that the symbol can be factorized as follows: $$z \sqrt{(K^2 - 4 + z + z^{-1})^2 -4} = \Upsilon(z) =
\sqrt{(z - \eta_{2,1})(z - \eta_{2,2})}
\sqrt{(z - \eta_{1,1})(z - \eta_{1,2})} .$$ The first factor is analytic outside the unit circle, while the second factor is analytic inside the unit circle.
Finally, the solution is as follows: $$W_+(z) = - \frac{\sqrt{(z-\eta_{2,1})(z-\eta_{2,2})}\sqrt{(x_{\rm in}-\eta_{1,1})(x_{\rm in}-\eta_{1,2})}}{2(z-x_{\rm in})}.$$ The scattered field is given by the following integral $$\label{WHsol}
u_{\rm sc}(m,n)=-\frac{1}{2\pi {i}}\int_{\sigma}\frac{z^m y^{|n|}\sqrt{(x_{\rm in}-\eta_{1,1})(x_{\rm in}-\eta_{1,2})}}{\sqrt{(z-\eta_{1,1})(z-\eta_{1,2})}(z-x_{\rm in})}dz.$$
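For a rough numerical evaluation of (\[WHsol\]) one can discretize the unit circle. The sketch below (illustrative only, not from the original work) restricts to the boundary row $n=0$, where the factor $y^{|n|}$ equals $1$ and the branch of $y$ need not be specified; the branch points $\eta_{1,1},\eta_{1,2}$ and the incident parameter $x_{\rm in}$ are placeholders, assumed (e.g., by a small absorption) to lie off the unit circle:

```python
# Crude trapezoid-rule evaluation of the contour integral (WHsol) on n = 0.
# eta11, eta12, x_in are placeholder values of the branch points and the
# incident-wave parameter; the principal branch of the square root is used,
# which glosses over the branch choice required in an actual computation.
import cmath
import numpy as np

eta11, eta12 = 0.55 + 0.05j, 1.9 - 0.1j   # placeholder branch points
x_in = 0.7 * cmath.exp(0.3j)              # placeholder incident parameter

def u_sc_row0(m, num=4000):
    theta = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    z = np.exp(1j * theta)                # points on the contour |z| = 1
    amp = cmath.sqrt((x_in - eta11) * (x_in - eta12))
    integrand = z**m * amp / (np.sqrt((z - eta11) * (z - eta12)) * (z - x_in))
    # dz = i z dtheta; the factor i cancels with the prefactor -1/(2 pi i),
    # leaving -(dtheta / (2 pi)) * sum(integrand * z) = -sum(...) / num.
    return -np.sum(integrand * z) / num
```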
Appendix C. Sommerfeld integral as an integral on the dispersion manifold {#appendix-c.-sommerfeld-integral-as-an-integral-on-the-dispersion-manifold .unnumbered}
=========================================================================
Let us build an analogy between the Sommerfeld integral for the continuous half-line diffraction problem and the Sommerfeld integral for the discrete problem.
Consider the $(x_1, x_2)$-plane, on which the Helmholtz equation $$\Delta U(x_1,x_2) + k^2_0 U(x_1,x_2) = 0$$ is satisfied everywhere except on the half-line $$x_2=0, \qquad x_1 > 0,$$ where the Dirichlet boundary condition is imposed: $$U_{\rm sc}(x_1,0) = - U_{\rm in}(x_1,0).$$ Here $U_{\rm in}$ is the incident wave: $$U_{\rm in} = \exp\{i k_0 (\cos \theta_{\rm in} \, x_1+
\sin \theta_{\rm in} \, x_2) \}.$$ The radiation and Meixner conditions should be satisfied by the field in the usual way.
In the continuous case, all possible plane waves have the form $$\exp \{
i k_0 (\cos \theta \, x_1 + \sin \theta\, x_2)
\}
=
\exp \{
i k_0 r \cos ( \phi - \theta )
\}$$ for the polar coordinates $$x_1 = r \cos\phi, \qquad x_2 = r \sin \phi.$$ Thus, the set of all plane waves is the set of all (possibly complex) angles of propagation $\theta$. This set is the strip $$0 \le {\rm Re}[\theta] \le 2\pi$$ with the edges ${\rm Re}[\theta] = 0$ and $2\pi$ attached to each other. Thus, topologically, the dispersion diagram is a tube. This tube plays the role of the manifold ${\mathbf{H}}$ in the continuous case. The “real waves” line is indeed the set ${\rm Im}[\theta] = 0$.
The Sommerfeld integral has the following form: $$\label{ZomCont}
U(r, \phi) = \int_{\Gamma + \phi} A(\theta)
\exp\{ik_0 r \cos(\theta-\phi)\} d\theta.$$ The contour of integration $\Gamma = \Gamma_1 + \Gamma_2$ is shown in Fig. \[fig11\].
![Contours $\Gamma_1$ and $\Gamma_2$[]{data-label="fig11"}](Zommerfeld_contours_cont.eps){width="40.00000%"}
Function $A(\theta)$ is the Sommerfeld transformant of the field: $$A(\theta) = \frac{1}{4\pi}
\left(
\frac{\exp\{i\theta/2\}}{\exp\{i\theta/2\}-\exp\{i(\theta_{\rm in}+ \pi)/2\}}
-
\frac{\exp\{i\theta/2\}}{\exp\{i\theta/2\}-\exp\{i(3\pi -\theta_{\rm in})/2\}}
\right).$$ It is double-valued on ${\mathbf{H}}$. Thus, one can define a two-sheet covering of ${\mathbf{H}}$ named ${\mathbf{H}}_2$, on which $A$ is single-valued. Such a covering is the strip $$0 \le {\rm Re}[\theta] \le 4 \pi$$ with the edges attached to each other. This covering is analogous to ${\mathbf{H}}_2$ for the discrete case.
The integral (\[ZomCont\]) is analogous to (\[ZomPlane\]). The exponential function plays the role of $w_{m,n}$, and $d\theta$ plays the role of $\Psi$.
The transformant $A(\theta)$ has poles corresponding to the incident and the reflected wave. The poles belong to the “real waves” line ${\rm Im}[\theta]=0$. The contour $\Gamma + \phi$ is chosen in such a way that it does not hit the poles as $\phi$ changes.
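For concreteness (this computation is not in the original text, but follows directly from the displayed transformant), the poles are located where the denominators vanish, i.e., at $$\theta = \theta_{\rm in}+\pi+4\pi k \qquad \mbox{or} \qquad \theta = 3\pi-\theta_{\rm in}+4\pi k, \qquad k \in \mathbb{Z},$$ so for real $\theta_{\rm in}$ each of the two geometrical waves contributes exactly one pole in the strip $0 \le {\rm Re}[\theta] \le 4\pi$, and both poles indeed lie on the line ${\rm Im}[\theta]=0$.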
---
abstract: |
The idea that the cohomology of finite groups might be fruitfully approached via the cohomology of ambient semisimple algebraic groups was first shown to be viable in the papers [@CPS75] and [@CPSK77]. The second paper introduced, through a limiting process, the notion of generic cohomology, as an intermediary between finite Chevalley group and algebraic group cohomology.
The present paper shows that, for irreducible modules as coefficients, the limits can be eliminated in all but finitely many cases. These exceptional cases depend only on the root system and cohomological degree. In fact, we show that, for sufficiently large $r$, depending only on the root system and $m$, and not on the prime $p$ or the irreducible module $L$, there are isomorphisms ${\text{\rm H}}^m(G(p^r),L)\cong{\text{\rm H}}^m(G(p^r), L')\cong {\text{\rm H}}^m_{\text{\rm gen}}(G,L')\cong {\text{\rm H}}^m(G,L')$, where the subscript “gen" refers to generic cohomology and $L'$ is a constructibly determined irreducible “shift" of the (arbitrary) irreducible module $L$ for the finite Chevalley group $G(p^r)$. By a famous theorem of Steinberg, both $L$ and $L'$ extend to irreducible modules for the ambient algebraic group $G$ with $p^r$-restricted highest weights. This leads to the notion of a module or weight being “shifted $m$-generic," and thus to the title of this paper. Our approach is based on questions raised by the third author in [@SteSL3], which we answer here in the cohomology cases. We obtain many additional results, often with formulations in the more general context of ${{\text{\rm Ext}}}^m_{G(q)}$ with irreducible coefficients.
address:
- |
Department of Mathematics\
University of Virginia\
Charlottesville, VA 22903
- |
Department of Mathematics\
University of Virginia\
Charlottesville, VA 22903
- |
New College, Oxford\
Oxford, UK
author:
- 'Brian J. Parshall'
- 'Leonard L. Scott'
- 'David I. Stewart'
bibliography:
- 'PSbib.bib'
title: Shifted generic cohomology
---
[^1]
Introduction
============
Let $G$ be a simply connected, semisimple algebraic group defined and split over the prime field ${\mathbb F}_p$ of positive characteristic $p$. Write $k=\bar{\mathbb F}_p$. For a power $q=p^r$, let $G(q)$ be the subgroup of ${\mathbb F}_q$-rational points in $G$. Thus, $G(q)$ is a finite Chevalley group. Let $M$ be a finite dimensional rational $G$-module and let $m$ be a non-negative integer. In [@CPSK77], the first two authors of this paper, together with Ed Cline and Wilberd van der Kallen, defined the notion of the generic $m$-cohomology $${\text{\rm H}}^m_{\text{\rm gen}}(G,M):=\underset{\underset{q}\leftarrow}\lim\,{\text{\rm H}}^m(G(q),M)$$ of $M$. The limit is, in fact, a [*stable*]{} limit for any given $M$. Moreover, ${\text{\rm H}}^m_{\text{\rm gen}}(G,M)\cong {\text{\rm H}}^m(G, M^{[e_0]})$, where $M^{[e_0]}$ denotes the twist of $M$ through some $e_0$th power of the Frobenius endomorphism of $G$. Although the non-negative integer $e_0$ may be chosen independently of $p$ and $M$, it can also be chosen as a function $e_0(M)$ of $M$. Unfortunately, given a rational $G$-module $M$ for which one wants to compute ${\text{\rm H}}^m(G(q),M)$, it is frequently necessary to take $e_0(M)>0$. This problem has been noted by others [@Georgia §1]. Worse, it may be necessary to enlarge $q$ in order to obtain ${\text{\rm H}}^m(G(q),M)\cong {\text{\rm H}}_{\text{\rm gen}}^m(G,M)$. The problem is exacerbated if one is interested in calculations for an infinite family of modules $M$, such as the irreducible $G$-modules. By a famous result of Steinberg, all irreducible $kG(q)$-modules are, up to isomorphism, the restrictions to $G(q)$ of the irreducible rational $G$-modules whose highest weights are $q$-restricted.
We propose here a remedy to this situation. Observe that, for any $q$-restricted dominant weight $\lambda$ and non-negative integer $e$, there is a unique $q$-restricted dominant weight $\lambda'$ with $L(\lambda)^{[e]}|_{G(q)}
=L(\lambda')|_{G(q)}$. Write $\lambda'=\lambda^{[e]_q}$ and $L(\lambda')=L(\lambda)^{[e]_q}$. We shall refer to any weight $\lambda'$ of this form as a [*$q$-shift of* ]{}$\lambda$. The main result, Theorem \[shiftedGeneric\], in this paper shows that, for $r\gg 0$[^2], and any $q$-restricted dominant weight $\lambda$ $$\label{main}{\text{\rm H}}^m(G(q), L(\lambda))\cong{\text{\rm H}}^m_{\text{\rm gen}}(G, L(\lambda'))\cong {\text{\rm H}}^m(G,L(\lambda')),$$ for some $q$-restricted weight $\lambda'=\lambda^{[e]_q}$ with $e=e(\lambda)=e(\lambda,q)\geq 0$. Similar results hold for ${{\text{\rm Ext}}}^m_{G(q)}(L(\mu),L(\lambda))$ with $\lambda,\mu$ both $q$-restricted, though with some conditions on $\mu$. The first isomorphism in (\[main\]) may be viewed as saying that $L(\lambda)$ is “shifted $m$-generic at $q$"; see the end of this introduction.[^3] The map $\lambda\mapsto\lambda^{[e]_q}$ defines an action of the cyclic group ${\mathbb Z}/r{\mathbb Z}$ on the set $X^+_r$ of $q$-restricted weights, and $\lambda'$ in (\[main\]) is a “distinguished" member in the orbit of $\lambda$ under this action, chosen to optimize the positions of zero terms in its $p$-adic expansion.
The origin of these results goes back to ${{\text{\rm Ext}}}^m$-questions raised by the third author in [@SteSL3 §3], where the $q$-shift $\lambda^{[e]_q}$ of $\lambda$ was denoted $\lambda^{\{e\}}$, and $L(\lambda^{\{e\}})$ was called a $q$-wrap of $L(\lambda)$. While raised for general $m\geq 0$, these questions arose in part from observations for $m=1,2$, namely, from noting a parallel between the $2$-cohomology result [@SteSL3 Thm. 2] and a $1$-cohomology result in [@BNP06 Thm. 5.5], which also had an ${{\text{\rm Ext}}}^1$-analog [@BNP06 Thm. 5.6]—the conclusions of all these results involve what we now call $q$-shifted weights in their formulation.[^4] Essentially, our main Theorem \[shiftedGeneric\] provides a strong answer to [@SteSL3 Question 3.8] in the cohomology cases, in addition to interpreting it in terms of generic cohomology. Also, Theorem \[thm6.2\](c) proves a similar result for ${{\text{\rm Ext}}}^m_{G(q)}(L(\mu),L(\lambda))$ when $p$ is sufficiently large, and with no requirement on $r$, but with $\lambda$ and $\mu$ required to have a zero digit in common (i. e., $\lambda_i=\mu_i=0$ for some $i<r$, using the terminology below). Remark \[needATwist\] gives an example showing this result is near best possible, especially when $\lambda=\mu$, and that the original [@SteSL3 Question 3.8] must be reformulated. Such a reformulation is given in Question \[conjecture\].
Our investigation yields many other useful results. We mention a few. First, any dominant weight $\lambda$ has a $p$-adic expansion $\lambda=\lambda_0+p\lambda_1 + p^2\lambda_2
+\cdots$, where each $\lambda_i$ is $p$-restricted. We call the pairs $(i,\lambda_i)$ *digits* of $\lambda$, and we say a digit is 0 if $\lambda_i=0$. Theorem \[digitsForG(q)\] states that, given $m\geq 0$, there is a constant $d$, depending only on $\Phi$ and $m$, such that, for any prime $p$, any power $q=p^r$, and any pair of $q$-restricted weights $\lambda,\mu$, if ${{\text{\rm Ext}}}^m_{G(q)}(L(\mu),L(\lambda))\not=0$, then $\lambda$ and $\mu$ differ by at most $d$ digits. Thus, in the cohomology case, if ${\text{\rm H}}^m(G(q), L(\lambda))\not=0$, then $\lambda$ has at most $d$ nonzero digits. Versions of these results hold for both rational $G$-cohomology and ${{\text{\rm Ext}}}^m$-groups; see Theorem \[digits\]. These digit bounding results were inspired by [@SteSL3 Question 3.10], which we answer completely.
Second, combining the main Theorem \[shiftedGeneric\] with the large prime cohomology results [@BNP01 Thm. 7.5] gives a new proof[^5] that there is a bound on $\dim{\text{\rm H}}^m(G(q),L(\lambda))$, for $q$-restricted $\lambda$, depending only on $\Phi$ and $m$, and not on $p$ or $r$. In fact, after throwing away finitely many values of $q$, Theorem \[summary\] shows that $\dim{\text{\rm H}}^m(G(q),L(\lambda))$ is bounded by the maximum dimension of the spaces ${\text{\rm H}}^m(G,L(\mu))$, with $p$ and $\mu\in X^+$ allowed to vary (with only $m$ and $\Phi$ fixed). The latter maximum has been shown to be finite in [@PS11 Thm. 7.1]. Indeed, apart from finitely many exceptional $q$, the finite group cohomology ${\text{\rm H}}^m(G(q),L(\lambda))$ identifies with a rational cohomology group ${\text{\rm H}}^m(G,L(\mu))$, for an explicitly determined dominant weight $\mu$ (which depends on $\lambda$).
Though the main focus of this paper is on results which hold for all primes $p$, we collect several results in Section 6, most formulated in the ${{\text{\rm Ext}}}^m_{G(q)}$-context, which are valid in the special case when $p$ is modestly large. One such result is Theorem \[thm6.2\](c) discussed above. This theorem, given in a “shifted generic" framework, leads also to a fairly definitive treatment of generic cohomology for large primes in Theorem \[lastcor\] and the Appendix.
A key ingredient in this work is the elegant filtration, due to Bendel-Nakano-Pillen, of the induced module ${\mathcal G}_q(k)
:={\operatorname{ind}}_{G(q)}^Gk$; see [@BNP11] and the other references at the start of Section 4. This result is, in our view, the centerpiece of a large collection of results and ideas of these authors, focused on using the induction functor ${\operatorname{ind}}_{G(q)}^G$ in concert with truncation to smaller categories of rational $G$-modules. The filtration of ${\mathcal G}_q(k)$ is described in Theorem \[filtration\] below, and we derive some consequences of it in Section 4.
Also, the specific theorems and ideas establishing generic cohomology, as originally formulated in [@CPSK77], play an important role in Section 5, both directly and as a background motivation for exploring digit bounding.
Finally, to explain the title of this paper, a finite dimensional, rational $G$-module $M$ may be called “$m$-generic at $q$" if ${\text{\rm H}}^m(G(q),M)\cong {\text{\rm H}}^m_{\text{\rm gen}}(G,M)$.[^6] A natural generalization of this notion is to say that $M$ is “shifted" $m$-generic at $q$ if there exists a module $M'$ which is $m$-generic at $q$ and such that $M'|_{G(q)}\cong M^{[e]}|_{G(q)}$ for some $e\geq 0$. Thus, ${\text{\rm H}}^m(G(q),M)\cong {\text{\rm H}}^m(G(q),M')\cong{\text{\rm H}}^m_{\text{\rm gen}}(G,M')$. Our paper shows that many modules may be fruitfully regarded as shifted $m$-generic at $q$, when it is unreasonable or false that they are $m$-generic at $q$. The digit bounding results discussed above, which mesh especially well with the generic cohomology theory, provide the main tool for finding such modules in non-trivial cases, and this is the strategy for the proof of Theorem \[shiftedGeneric\]. In fact, Theorem \[shiftedGeneric\] shows that often one can obtain the additional isomorphism ${\text{\rm H}}^m_{\text{\rm gen}}(G,M')\cong{\text{\rm H}}^m(G,M')$, an attractive property for computations.
We thank Chris Bendel, Dan Nakano, and Cornelius Pillen for remarks on an early draft of this paper, and for supplying several references to the literature.
Some preliminaries
==================
Fix an irreducible root system $\Phi$ with positive (resp., simple) roots $\Phi^+$ (resp., $\Pi$) selected.[^7] Let $\alpha_0\in\Phi^+$ be the maximal short root, and let $h=(\rho,\alpha_0^\vee)+1$ be the Coxeter number of $\Phi$ (where $\rho$ is the half sum of the positive roots). Write $X$ for the full weight lattice of $\Phi$, and let $X^+\subset X$ be the set of dominant weights determined by $\Pi$.
Now fix a prime $p$. For a positive integer $b$, let $X^+_b:=\{\lambda\in X^+\,|\, (\lambda,\alpha^\vee)<p^b, \forall \alpha\in\Pi\,\}$ be the set of $p^b$-restricted dominant weights. At times it is useful to regard the $0$ weight as (the only) $p^0$-restricted dominant weight.
Let $G$ be a simple, simply connected algebraic group, defined and split over a prime field ${\mathbb F}_p$ and having root system $\Phi$. Fix a maximal split torus $T$, and let $B\supset T$ be the Borel subgroup determined by the negative roots $-\Phi^+$. For $\lambda\in X^+$, $L(\lambda)$ denotes the irreducible rational $G$-module of highest weight $\lambda$. If $F:G\to G$ is the Frobenius morphism, then, for any positive integer $b$, let $G_b={\operatorname{Ker}}(F^b)$ be the (scheme theoretic) kernel of $F^b$. Thus, $G_b$ is a normal, closed (infinitesimal) subgroup of $G$. Similar notations are used for other closed subgroups of $G$.
The representation and cohomology theory for linear algebraic groups (especially semisimple groups and their important subgroups) is extensively developed in Jantzen’s book [@Jantzen], with which we assume the reader is familiar. We generally follow his notation (with some small modifications).[^8] If $M$ is a rational $G$-module and $b$ is a non-negative integer, write $M^{[b]}$ for the rational $G$-module obtained by making $g\in G$ act through $F^b(g)$ on $M$. If $M$ already has the form $M\cong N^{[r]}$ for some $r\geq b$, write $M^{[-b]}:=N^{[r-b]}$.
Let ${\operatorname{ind}}_B^G$ be the induction functor from the category of rational $B$-modules to rational $G$-modules. (See §4 for a brief discussion of induction in general.) Given $\lambda\in X$, we denote the corresponding one-dimensional rational $B$-module also by $\lambda$, and write ${\text{\rm H}}^0(\lambda)$ for ${\operatorname{ind}}_B^G\lambda$. Then ${\text{\rm H}}^0(\lambda)\not=0$ if and only if $\lambda\in X^+$; when $\lambda\in X^+$, ${\text{\rm H}}^0(\lambda)$ has irreducible socle $L(\lambda)$ of highest weight $\lambda$, and formal character ${\operatorname{ch}}{\text{\rm H}}^0(\lambda)$ given by Weyl’s character formula at the dominant weight $\lambda$. In most circumstances, especially when regarding ${\text{\rm H}}^0(\lambda)$ as a co-standard (i.e., a dual Weyl) module in the highest weight category of rational $G$-modules, we denote ${\text{\rm H}}^0(\lambda)$ by $\nabla(\lambda)$.[^9] Given $\lambda\in X$, let $\lambda^*:=-w_0(\lambda)$, where $w_0$ is the longest element in the Weyl group $W$ of $\Phi$. If $\lambda\in X^+$, then $\lambda^*\in X^+$ is just the image of $\lambda$ under the opposition involution. For $\lambda\in X^+$, put $\Delta(\lambda)=
\nabla(\lambda^*)^*$, the dual of $\nabla(\lambda^*)$. In other words, $\Delta(\lambda)$ is the Weyl module for $G$ of highest weight $\lambda$. Of course, $L(\lambda)^*=L(\lambda^*)$.
For $i\geq 0$, let $R^i{\operatorname{ind}}_B^G$ be the $i$th derived functor of ${\operatorname{ind}}_B^G$. Then $R^i{\operatorname{ind}}_B^G=0$ for $i>|\Phi^+|$.
We will need another notion of the magnitude of a weight. If $b$ is a nonnegative integer, $\lambda\in X$ is called *$b$-small* if $|(\lambda,\alpha^\vee)|\leq b$ for all $\alpha\in\Phi^+$. If $\lambda\in X^+$, $\lambda$ is $b$-small if and only if $(\lambda,\alpha^\vee_0)\leq b$. We say a (rational) $G$-module is *$b$-small* provided all of its weights are $b$-small. Equivalently, it is $b$-small provided its maximal weights (in the dominance order) are $b$-small. In particular, if $\lambda\in X^+$ is $b$-small, then any highest weight module $M$ with highest weight $\lambda$, e. g., $L(\lambda)$, $\nabla(\lambda)$, or $\Delta(\lambda)$, is also $b$-small. We make some elementary remarks about smallness.
\[LenLem\]Let $\nu$ be any dominant weight and let $b,b',r,u$ be non-negative integers.
1. If $b>0$, assume $u\geq [\log_p b]+1$, where $[\,\,]$ denotes the greatest integer function. Then $b\leq p^u-1$, and, if $\nu$ is $b$-small, $\nu$ is $p^u$-restricted.
2. If $\nu$ is $p^r$-restricted, then $\nu$ is $(h-1)(p^r-1)$-small.
3. Let $M$ and $N$ be two highest weight modules for $G$ with highest weights $\nu$, $\mu$. If $\mu$ and $\nu$ are $b$-small and $b'$-small, respectively, the tensor product $M\otimes N$ is $(b+b')$-small.
4. If $\lambda,\mu\in X^+_b$ are both $p^b$-restricted, then all composition factors of $L(\lambda)\otimes L(\mu)$ are $p^{b+[\log_p(h-1)]+2}$-restricted. If, in fact, $p\geq 2h-2$ all the composition factors of $L(\lambda)\otimes L(\mu)$ are $p^{b+1}$-restricted.
First we prove (a). The case $b=0$ is clear, so assume $b>0$. If $b\geq p^u$, then $\log_pb\geq u\geq[\log_pb]+1$, which is false. Thus, $b\leq p^u-1$. If $\nu$ is $b$-small, then $(\nu,\alpha^\vee)\leq b$ for each $\alpha\in\Phi^+$, so $\nu$ is $p^u$-restricted. Hence, (a) holds.
For (b) we note $(\nu,\alpha_0^\vee)\leq ((p^r-1)\rho,\alpha_0^\vee)\leq (p^r-1)(h-1)$. This proves (b) since $\nu\in X^+$ is dominant.
For (c), the highest weight of $M\otimes N$ is $(b+b')$-small. Since any other weight of $M\otimes N$ is obtained by subtracting positive roots, the statement follows.
To prove (d), note that (b) implies that $\lambda$ and $\mu$ are $(h-1)(p^b-1)$-small. Thus by (c) all composition factors of $L(\lambda)\otimes L(\mu)$ are $2(h-1)(p^b-1)$-small. Then by (a), all composition factors of $L(\lambda)\otimes L(\mu)$ are $p^e$-restricted, where $$\begin{aligned}
e&=[\log_p(2(h-1)(p^b-1))]+1\\
&\leq [\log_p2+\log_p(p^b-1)]+[\log_p(h-1)]+2\\
&\leq b+[\log_p(h-1)]+2.\end{aligned}$$ The case $p\geq 2h-2$ follows similarly.
Bounding weights
================
Let $U=R_u(B)$ be the unipotent radical of $B$, and let $u$ be the Lie algebra of $U$.
\[Lem0\]For any non-negative integer $m$, the $T$-weights in the ordinary cohomology space ${\text{\rm H}}^m(u,k)$ are $3m$-small (and they are sums of positive roots).
The $T$-weights in ${\text{\rm H}}^m(u,k)$ are included among the $T$-weights of the exterior power $\bigwedge^m(u^*)$ appearing in the Koszul complex computing ${\text{\rm H}}^\bullet(u,k)$. Hence, they are sums of $m$ positive roots. Since each positive root is $3$-small, these weights are $3m$-small.
Recall that, for $r\geq 1$, $U_r$ is the Frobenius kernel of $F^r|_U$.
\[Lem1\] For any non-negative integer $m$, the $T$-weights of ${\text{\rm H}}^m(U_1,k)$ are $3mp$-small.
By [@Jantzen I.9.20] and [@FP86 (1.2)(b)] there are spectral sequences $$\begin{cases}
p=2: E_2^{i,j}:=S^i(u^*)^{[1]}\otimes {\text{\rm H}}^j(u,k) \implies {\text{\rm H}}^{i+j}(U_1,k);\\
p\not=2: E_2^{2i,j}:=S^i(u^*)^{[1]}\otimes {\text{\rm H}}^j(u,k)\implies {\text{\rm H}}^{2i+j}(U_1,k).\end{cases}$$ Suppose that $p=2$. A weight $\nu$ in $ {\text{\rm H}}^m(U_1,k)$ is a weight in $E^{i,j}_2$ for some $i+j=m$. Using Lemma \[Lem0\], the largest value of $(\nu,\alpha_0^\vee)$ clearly occurs when $i=m$ and $j=0$. Since the weights of $S^m(u^*)^{[1]}$ are given as sums $\sum_{k=1}^mp\alpha_k$ with each $\alpha_k$ a positive root, the weight $\nu$ is $3mp$-small. Similarly, when $p>2$, the weight $\nu$ is $3(m/2)p$-small, so certainly $3mp$-small also.
\[Lem2\] For any $r>0$ and non-negative integer $m$, the $T$-weights of ${\text{\rm H}}^m(U_r,k)$ are $3mp^r$-small.
We use the Lyndon–Hochschild–Serre spectral sequence $$E_2^{i,j}:={\text{\rm H}}^i(U_r/U_1,{\text{\rm H}}^j(U_1,k))\implies {\text{\rm H}}^{i+j}(U_r,k).$$ The $E_2^{i,j}$-term has the same weights for $T$ as the $T$-module $${\text{\rm H}}^i(U_r/U_1,k)\otimes {\text{\rm H}}^j(U_1,k)\cong
{\text{\rm H}}^i(U_{r-1},k)^{[1]}\otimes {\text{\rm H}}^j(U_1,k).$$ The weights on the left hand tensor factor are $p(3ip^{r-1})=
3ip^r$-small. On the right hand side, the weights are $3jp$-small. Adding these together, the worst case occurs for $i=m$, and the lemma follows.
The following is immediate:
\[Cor1\] Suppose the weight $\lambda$ is $b$-small. Then the weights of ${\text{\rm H}}^m(B_r,\lambda)\cong
({\text{\rm H}}^m(U_r,k)\otimes\lambda)^{T_r}$ are $(3mp^r+b)$-small. Moreover, the weights of ${\text{\rm H}}^m(B_r,\lambda)^{[-r]}$ are $(3m +[b/p^r])$-small, where $[\,\,\,]$ denotes the greatest integer function.
\[MsmallToHsmall\] Let $m$ be a non-negative integer, and let $r,b$ be positive integers. Let $M$ be a $b$-small $G$-module. Then the $G$-module ${\text{\rm H}}^m(G_r,M)^{[-r]}$ is $(3m+[b/p^r])$-small.
In particular, if $M$ is $p^r$-restricted, then ${\text{\rm H}}^m(G_r,M)^{[-r]}$ is $(3m +[(h-1)(p^r-1)/p^r])\leq (3m+h-2)$-small.
We will show the statement holds when $M$ is an induced module with $b$-small highest weight $\lambda$; thence we deduce that the statement holds for $L(\lambda)$, so the statement follows for all $M$, since it holds for its composition factors.
By [@Jantzen II.12.2], there is a first quadrant spectral sequence $$E_2^{i,j}:=R^i{\operatorname{ind}}_B^G{\text{\rm H}}^j(B_r,\lambda)^{[-r]}\implies {\text{\rm H}}^{i+j}(G_r,{\text{\rm H}}^0(\lambda))^{[-r]}.$$ Any weight in ${\text{\rm H}}^m(G_r,{\text{\rm H}}^0(\lambda))^{[-r]}$ is a weight of $E_2^{i,j}$ for some $i,j$ with $i+j=m$, hence a weight of ${\text{\rm H}}^i(\mu)$ for some $\mu\in {\text{\rm H}}^j(B_r,\lambda)^{[-r]}$. So it suffices to show that any weight of ${\text{\rm H}}^i(\mu)$ for $\mu\in {\text{\rm H}}^j(B_r,\lambda)^{[-r]}$ is $(3m+[b/p^r])$-small.
By Corollary \[Cor1\], a weight $\mu$ of ${\text{\rm H}}^j(B_r,\lambda)^{[-r]}$ is $(3j+[b/p^r])$-small; hence it is also $b':=(3m+[b/p^r])$-small. Choose $w\in W$ so that $w\cdot\mu\in X^+-\rho$. If $w\cdot\mu$ is not in $X^+$ then $R^i{\text{\rm Ind}}_B^G\mu=0$. Hence we may assume that $w\cdot\mu\in X^+$. Now, $w\cdot\mu=w(\mu)+w\rho-\rho\leq w(\mu)$. Since $w\cdot\mu\in X^+$, $w\cdot\mu$ is $b'$-small if and only if $(w\cdot\mu,\alpha_0^\vee)\leq b'$. But $$(w\cdot\mu,\alpha_0^\vee)\leq(w(\mu),\alpha_0^\vee)=(\mu,w^{-1}\alpha_0^\vee)\leq b'.$$
Now if $L(\nu)$ is a composition factor of $R^i{\operatorname{ind}}_B^G \mu$, the strong linkage principle [@Jantzen II.6.13] implies $\nu\uparrow w\cdot\mu$ and is in particular $b'$-small.
Thus we have proved the statement in the case $M={\text{\rm H}}^0(\lambda)$.
For the general case, we apply induction on $m$. We have a short exact sequence $0\to L(\lambda)\to {\text{\rm H}}^0(\lambda)\to N\to 0$ where the $G$-module $N$ has composition factors whose high weights are less than $\lambda$ in the dominance order and are therefore $b$-small. Associated to this sequence is a long exact sequence of which part is $${\text{\rm H}}^{m-1}(G_r,N)^{[-r]}\to {\text{\rm H}}^m(G_r,L(\lambda))^{[-r]}\to {\text{\rm H}}^m(G_r,{\text{\rm H}}^0(\lambda))^{[-r]}$$ so that any $G$-composition factor of the middle term must be a $G$-composition factor of one of the outer terms. Now, the composition factors of the rightmost term are $(3m +[b/p^r])$-small by the discussion above. Since $N$ has composition factors with high weights less than $\lambda$ in the dominance order, these weights are $(3(m-1) +[b/p^r])$-small by induction, and are in particular $(3m+[b/p^r])$-small. Thus the weights of the middle term are also $(3m +[b/p^r])$-small.
This proves the statement in the case $M=L(\lambda)$. The case for all $b$-small modules $M$ now follows since it is true for each of its composition factors.
For the last statement, we use Lemma \[LenLem\](b).
The following corollary for general $m$ follows from the previous theorem. For $m=1$, it is proved in [@BNP04-Frob Prop. 5.2].
\[extForGr\]Let $\lambda,\mu\in X^+_r$. For any $r\geq 1$, the weights of ${{\text{\rm Ext}}}^m_{G_r}(L(\lambda),L(\mu))^{[-r]}$ are $(3m+2h-3)$-small. If $m=1$, the weights of ${{\text{\rm Ext}}}^1_{G_r}(L(\lambda),L(\mu))^{[-r]}$ are $(h-1)$-small.
\(a) The result [@BNP04-Frob Prop 5.2] quoted above also gives, for $m>1$, an integer $b$ such that the weights in ${{\text{\rm Ext}}}^m_{G_r}(L(\lambda),L(\mu))^{[-r]}$ are $b$-small. However, $b$ is multiplicative in $h$ and $m$, and so is weaker than Corollary \[extForGr\] for large $m$ and $h$ (for instance if $m,h\geq 4$). For $p>h$ and $m\geq 2$ note that our bound coincides with that given in [@BNP04-Frob Prop. 5.3] using the improvements from (c) below.
\(b) It is interesting to ask when ${{\text{\rm Ext}}}^m_{G_r}(L(\lambda),L(\mu))^{[-r]}$ for $\lambda,\mu\in X^+_r$ has a good filtration. Even for $r=1$, there are examples due to Peter Sin (for instance, see [@Sin94 Lem. 4.6]) showing that for small $p$ this question can have a negative answer. Obviously, if $p\geq 3m+3h-4$ (or $p\geq 2h-2$ in case $m=1$), then ${{\text{\rm Ext}}}^m_{G_r}(L(\lambda),L(\mu))^{[-r]}$ has highest weights in the lowest $p$-alcove, so it trivially has a good filtration.[^10]
\(c) The reader may check that many of the results in this section can be improved under certain mild conditions. For instance, if $\Phi$ is not of type $G_2$, its roots are all $2$-small. In this instance, wherever we have ‘$3m$’ it can be replaced with ‘$2m$’. In addition, if $p>2$ the last sentence of the proof of Lemma \[Lem2\] shows that one can replace $m$ with $[m/2]$. The same statement follows for most formulas in the remainder of the paper; however, we will not elaborate further in individual cases.
Relating $G(q)$-cohomology to $G$-cohomology
============================================
Inspired by work [@BNP01], [@BNP04], [@BNP02], [@BNP06], and [@BNP11], Theorem \[existsAnF\] establishes an important procedure for describing $G(q)$-cohomology in terms of $G$-cohomology. This result will be used in §5 to prove the digit bounding results mentioned in the Introduction.
Before stating the theorem, we review some elementary results. The coordinate algebra $k[G]$ of $G$ is a left $G$-module ($f\mapsto g\cdot f, x\mapsto f(xg)$, $x,g\in G$, $f\in k[G]$) and a right $G$-module ($f\mapsto f\cdot g, x\mapsto f(gx)$, $x,g\in G$, $f\in k[G]$). Given a closed subgroup $H$ of $G$ and a rational $H$-module $V$, the induced module ${\operatorname{ind}}_H^G(V):={\text{\rm Map}}_H(G,V)$ consists of all morphisms $f:G\to V$ (i. e., morphisms of the algebraic variety $G$ into the underlying variety of a finite dimensional subspace of $V$), which are $H$-equivariant in the sense that $f(h.g)=h.f(g)$ for all $g\in G$, $h\in H$. If $x\in G$ and $f\in{\operatorname{ind}}_H^GV$, then $x\cdot f\in
{\operatorname{ind}}_H^GV$, making ${\operatorname{ind}}_H^GV$ into a rational $G$-module (characterized by the property that ${\operatorname{ind}}_H^G$ is the right adjoint of the restriction functor ${\operatorname{res}}^G_H:G$–mod $\to$ $H$–mod). If $G/H$ is an affine variety (e. g., if $H$ is a finite subgroup), ${\operatorname{ind}}_H^G$ is an exact functor [@CPS77 Thm. 4.3], which formally takes injective $H$-modules to injective $G$-modules. Thus, ${\text{\rm H}}^\bullet(H,V)\cong{\text{\rm H}}^\bullet(G,{\operatorname{ind}}_H^GV)$ for any rational $H$-module $V$.
Let $q=p^r$ for some prime integer $p$ and positive integer $r$, and let $m$ be a fixed non-negative integer which will serve as the cohomological degree. As in §2, let $G$ be the simple, simply connected algebraic group defined and split over ${\mathbb F}_p$ with root system $\Phi$.
\[Kop\] ([@Kop84]) Consider the coordinate algebra $k[G]$ of $G$ as a rational $(G\times G)$-module with action $$((g,h)\cdot f)(x)=f(h^{-1}xg),\,\,g,h,x\in G,
f\in k[G].$$ Then $k[G]$ has an increasing $G\times G$-stable filtration $0\subset {{\mathfrak F}}'_0\subset{{\mathfrak F}}'_1\subset \cdots$ in which, for $i\geq 1$, ${{\mathfrak F}}'_i/{{\mathfrak F}}'_{i-1}\cong
\nabla(\gamma_i)\otimes \nabla(\gamma^*_i)$, $\gamma_i\in X^+$, and $\cup_i{{\mathfrak F}}'_i=k[G]$. Each dominant weight $\gamma\in X^+$ appears exactly once in the sequence $\gamma_0,\gamma_1,\gamma_2, \cdots$.
\[filtration\] ([@BNP11 Prop. 2.2 & proof]) (a) The induced module ${\operatorname{ind}}_{G(q)}^G(k)$ is isomorphic to the pull-back of the $G\times G$-module $k[G]$ above through the map $G\to G\times G$, $g\mapsto (g,F^r(g))$.
\(b) In this way, ${\operatorname{ind}}_{G(q)}^G k$ inherits an increasing $G$-stable filtration $0\subset {{\mathfrak F}}_0\subset {{\mathfrak F}}_1\subset \cdots$ with $\bigcup {{\mathfrak F}}_i
={\operatorname{ind}}_{G(q)}^Gk$, in which, for $i\geq 1$, ${{\mathfrak F}}_i/{{\mathfrak F}}_{i-1}\cong\nabla(\gamma_i)\otimes\nabla(\gamma_i^*)^{[r]}$. Moreover, each dominant weight $\gamma\in X^+$ appears exactly once in the sequence $\gamma_0,\gamma_1,\cdots$.
Following [@BNP11], put ${{\mathcal G}}_r(k):={\operatorname{ind}}_{G(q)}^Gk$, with $q=p^r$. The filtration ${{\mathfrak F}}_\bullet$ of the rational $G$-module ${{\mathcal G}}_r(k)$ arises from the increasing $G\times G$-module filtration ${{\mathfrak F}}'_\bullet$ of $k[G]$ with sections $\nabla(\gamma)\otimes\nabla(\gamma^*)$. Since these latter modules are all co-standard modules for $G\times G$, their order in ${{\mathfrak F}}'_\bullet$ can be manipulated, using the fact that $$\label{vanishing}
{{\text{\rm Ext}}}^1_{G\times G}(\nabla(\gamma)\otimes\nabla(\gamma^*),\nabla(\mu)\otimes\nabla(\mu^*))=0,\quad
{\text{\rm unless $\mu<\gamma$ (and $\mu^*<\gamma^*$).}}$$ Thus, for any non-negative integer $b$, there is a (finite dimensional) $G$-submodule ${{\mathcal G}}_{r,b}(k)$ of ${{\mathcal G}}_r(k)$ which has an increasing $G$-stable filtration with sections precisely the $\nabla(\gamma)\otimes
\nabla(\gamma^*)^{[r]}$ and $(\gamma,\alpha_0^\vee)\leq b$, and each such $\gamma$ appearing with multiplicity 1. Such a submodule may be constructed from a corresponding (unique) $G\times G$–submodule of $k[G]$ with corresponding sections $\nabla(\gamma)\otimes\nabla(\gamma^*)$. With this construction, the quotient ${{\mathcal G}}_r(k)/{{\mathcal G}}_{r,b}(k)$ has a $G$-stable filtration with sections $\nabla(\gamma)\otimes\nabla(\gamma^*)^{[r]}$, $(\gamma,\alpha_0^\vee)>b$. More precisely, using (\[vanishing\]) again and setting ${{\mathcal G}}_{r,-1}(k)=0$, we have the following result.
\[filtrationlemma\] For $b\geq 0$, $${{\mathcal G}}_{r,b}(k)/{{\mathcal G}}_{r,b-1}(k)\cong\bigoplus_{\lambda\in X^+, (\lambda,\alpha^\vee_0)=b}\nabla(\lambda)\otimes
\nabla(\lambda^*)^{[r]}.$$ Also, $\bigcup_{b\geq 0}{{\mathcal G}}_{r,b}(k) ={{\mathcal G}}_r(k).$
We will usually abbreviate ${{\mathcal G}}_r(k)$ to ${{\mathcal G}}_r$ and ${{\mathcal G}}_{r,b}(k)$ to ${{\mathcal G}}_{r,b}$ for $b\geq -1$. We remark that ${{\mathcal G}}_r(k)$ is in some sense already an abbreviation, since it depends on the characteristic $p$ of $k$.
For $\lambda,\mu\in X_r^+$, consider ${{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))$. Because the induction functor ${\operatorname{ind}}_{G(q)}^G$ is exact from the category of $kG(q)$-modules to the category of rational $G$-modules, $$\label{induction} {{\text{\rm Ext}}}^m_{G(q)}(L(\lambda)\otimes L(\mu^*),k)\cong {{\text{\rm Ext}}}^m_G(L(\lambda)\otimes L(\mu^*),{{\mathcal G}}_r),$$ where $q$ is $p^r$.
The following result provides an extension beyond the $m=1$ case treated in [@BNP02 Thm. 2.2]. A similar result in the cohomology case is given with a bound on $p$ (namely, $p\geq (2m+1)(h-1)$) for any $m$ by [@BNP01 proof of Cor. 7.4]. Our result, for ${{\text{\rm Ext}}}^m$, does not require any condition on $p$.
\[existsAnF\] Let $b\geq 6m+6h-8$, independently of $p$ and $r$, or, more generally, $$b\geq b(\Phi,m,p^r):=\left[\frac{3m+3h-4}{1-1/p^r}\right],$$ when $p$ and $r$ are given. Then, for any $\lambda,\mu\in X^+_r$, we have $$\label{decomposition}
{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong{{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu)\otimes {{\mathcal G}}_{r,b}).$$ (Recall $q=p^r$.) In addition, $$\label{vanishing}
{{\text{\rm Ext}}}^n_G(L(\lambda),L(\mu)\otimes\nabla(\nu)\otimes\nabla(\nu^*)^{[r]})=0,\quad\forall n\leq m, \forall\nu\in X^+
\,\,{\text{\rm satisfying}}\, (\nu,\alpha_0^\vee)>b.$$
It suffices to show that $$\label{vanishingresult} b\geq b(\Phi,m,p^r)\implies {{\text{\rm Ext}}}^n_G(L(\lambda),L(\mu)\otimes
{{\mathcal G}}_r/{{\mathcal G}}_{r,b})=0,\,\forall n\leq m.$$ Suppose that (\[vanishingresult\]) fails. Then for some $\nu$ with $(\nu,\alpha^\vee_0)>b$ and some non-negative integer $n\leq m$, we must have ${{\text{\rm Ext}}}^n_G(L(\lambda),L(\mu)\otimes\nabla(\nu)\otimes\nabla(\nu^*)^{[r]})\not=0$. (That is, (\[vanishing\]) fails.) For some composition factor $L(\xi)\cong L(\xi_0)\otimes L(\xi')^{[r]}$ ($\xi_0\in X^+_r$, $\xi'\in
X^+$) of $\nabla(\nu)$, we obtain $$\label{first}{{\text{\rm Ext}}}^n_G(\Delta(\nu)^{[r]}\otimes L(\lambda), L(\mu)\otimes L(\xi_0)\otimes L(\xi')^{[r]})\not=0$$ by rearranging terms. Here we use the fact that $\nabla(\nu^*)^*\cong\Delta(\nu)$, the Weyl module of highest weight $\nu$. To compute the left-hand side of (\[first\]) we use a Lyndon-Hochschild-Serre spectral sequence involving the normal subgroup $G_r$. The $E_2$-page is given by $$\label{spectralsequence} E_2^{i,j}={{\text{\rm Ext}}}_G^i(\Delta(\nu),{{\text{\rm Ext}}}_{G_r}^j(L(\lambda)\otimes L(\mu^*), L(\xi_0))^{[-r]}\otimes
L(\xi')),\quad i+j=n.$$
Using Lemma \[LenLem\](b)(c), we see that the module $L(\lambda^*)\otimes L(\mu)\otimes L(\xi_0)$ is $3(h-1)(p^r-1)$-small; thus by Theorem \[MsmallToHsmall\] we have ${{\text{\rm Ext}}}^j_{G_r}(L(\lambda)\otimes L(\mu^*),L(\xi_0))^{[-r]}={{\text{\rm Ext}}}^j_{G_r}(k,L(\lambda^*)\otimes
L(\mu)\otimes L(\xi_0))^{[-r]}$ is $(3j+3h-4)$-small, and, in particular, is $(3m+3h-4)$-small.
Put $x=(\nu,\alpha_0^\vee)>b$. Then, as $L(\xi)$ is a composition factor of $\nabla(\nu)$, $\xi=\xi_0+p^r\xi'$ is $x$-small. Clearly $p^r\xi'$ is $x$-small also, thus $\xi'$ is $[x/p^r]$-small. So, the composition factors of ${{\text{\rm Ext}}}^j_{G_r}(L(\lambda)\otimes L(\mu^*),L(\xi_0))^{[-r]}\otimes L(\xi')$ are $([x/p^r]+3m+3h-4)$-small. Recall the fact, for general $\nu,\omega\in X^+$, that ${{\text{\rm Ext}}}^\bullet_G(\Delta(\nu),L(\omega))\not=0$ implies that $\nu\leq \omega$. For $\nu$ as above and $L(\omega)$ a composition factor of ${{\text{\rm Ext}}}^j_{G_r}(L(\lambda)\otimes L(\mu^*),L(\xi_0))^{[-r]}\otimes L(\xi')$, it follows that $\nu\leq \omega$. Hence, $x=(\nu,\alpha_0^\vee)\leq (\omega,\alpha_0^\vee)\leq 3m +3h-4+[x/p^r]$. Rearranging this gives $$x\leq\left[\frac{3m+3h-4}{1-1/p^r}\right]\leq b,$$ a contradiction. This proves (\[decomposition\]) and (\[vanishing\]).
For the remaining part of the theorem, just note that the smallest value of $p^r$ is $2$. Hence, the largest value of $b(\Phi,m,p^r)$ is $6m+6h-8$.
Digits and cohomology
=====================
Any $\lambda\in X^+$ has a $p$-adic expansion $\lambda=\lambda_0+p\lambda_1+\dots+p^r\lambda_r+\dots$ where each $\lambda_i$ is $p$-restricted. We refer to each pair $(i,\lambda_i)$ as a digit of $\lambda$. We say the $i$th digit of $\lambda$ is $0$ if $\lambda_i=0$. Clearly $\lambda$ has finitely many nonzero digits. Let also $\mu\in X^+$. We say $\lambda$ and $\mu$ agree on a digit if there is a natural number $i$ with $\lambda_i=\mu_i$. We say $\lambda$ and $\mu$ differ in $n$ digits if $|\{i:\lambda_i\neq\mu_i\}|=n$.
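As an illustration of this bookkeeping (a hypothetical sketch, not part of the paper; a weight is represented by its coordinate vector with respect to the fundamental weights), the digits of a weight and the number of digits in which two weights differ can be computed as follows:

```python
# Illustrative digit bookkeeping for dominant weights given by their
# coordinates in terms of fundamental weights.

def digits(lam, p, r):
    """Return the p-adic digits (lambda_0, ..., lambda_{r-1}) of a weight;
    each digit is a p-restricted coordinate vector."""
    out, lam = [], list(lam)
    for _ in range(r):
        out.append(tuple(c % p for c in lam))
        lam = [c // p for c in lam]
    return out

def num_differing_digits(lam, mu, p, r):
    return sum(1 for a, b in zip(digits(lam, p, r), digits(mu, p, r)) if a != b)

# Example (rank 2, p = 2, r = 3): the weight with coordinates (5, 1) has
# digits (1,1), (0,0), (1,0), so its digit in position 1 is zero.
assert digits((5, 1), 2, 3) == [(1, 1), (0, 0), (1, 0)]
assert num_differing_digits((5, 1), (1, 1), 2, 3) == 1
```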
The theorem below requires the following result.
\[tensors\]Let $\lambda,\mu\in X_r^+$ and let $M$ be a finite dimensional rational $G$-module whose (dominant) weights are $(p^r-1)$-small. Then $${\text{\rm Hom}}_{G_r}(L(\lambda),L(\mu)\otimes M)= {\text{\rm Hom}}_G(L(\lambda),L(\mu)\otimes M).$$ Hence, the left-hand side has trivial $G$-structure.
We can now prove the following “digit bounding" theorem. It both answers the open question [@SteSL3 Question 3.10] in a strong way, and paves the way for the rest of this section.
\[digits\] Given an irreducible root system $\Phi$ and a non-negative integer $m$, there is a constant $\delta=\delta(\Phi,m)$, so that if $\lambda,\mu, \nu\in X^+$, and $\nu$ is $(3m+2h-2)$-small, then
1. $${{\text{\rm Ext}}}_G^n(L(\lambda),L(\nu)\otimes L(\mu))\neq 0$$ for some $n\leq m$ implies $\lambda$ and $\mu$ differ in at most $\delta$ digits; and
2. $${{\text{\rm Ext}}}_G^m(L(\lambda),L(\mu))\neq 0$$ implies $\lambda$ and $\mu$ differ in at most $\delta-\phi$ digits,
where $\phi=[\log_p(3m+2h-2)]+1$.
We prove both statements together by induction on $m$. Set $b:= 3m+2h -2$ and $u:=\phi$. Thus, $u=[\log_p b]+1$. Let $\lambda^\circ
:=\lambda_0+\cdots + p^{u-1}\lambda_{u-1}$, so that $\lambda=\lambda^\circ + p^u\lambda'$, for $\lambda'\in X^+$; write $\mu$ similarly. By Lemma \[LenLem\], the $b$-small weight $\nu$ is $p^u$-restricted. In fact, $b\leq p^u-1$.
The case $m=0$ follows easily from Lemma \[tensors\], with $\delta(\Phi,0):=\phi$:
For statement (i), we have $${\text{\rm H}}:={\text{\rm Hom}}_G(L(\lambda),L(\mu)\otimes L(\nu))={\text{\rm Hom}}_G(L(\lambda'),{\text{\rm Hom}}_{G_u}(L(\lambda^\circ),L(\mu^\circ)\otimes L(\nu))^{[-u]}\otimes L(\mu'))$$ with ${\text{\rm H}}$ assumed to be nonzero. As $\nu$ is $(p^u-1)$-small, Lemma \[tensors\] implies that the module ${\text{\rm Hom}}_{G_u}(L(\lambda^\circ),L(\mu^\circ)\otimes L(\nu))\cong {\text{\rm Hom}}_{G}(L(\lambda^\circ),L(\mu^\circ)\otimes L(\nu))$ and so has a trivial $G$-structure. Thus, $${\text{\rm H}}={\text{\rm Hom}}_G(L(\lambda'),L(\mu'))\otimes {\text{\rm Hom}}_{G}(L(\lambda^\circ),L(\mu^\circ)\otimes L(\nu)).$$ If this expression is nonzero, then $\lambda'=\mu'$ and $\lambda^\circ$, $\mu^\circ$ can differ in at most all of their $u=\phi$ places, so that $\lambda$ and $\mu$ can differ in at most $\phi$ places. Statement (ii) is trivial. This completes the $m=0$ case.
Assume we have found $\delta(\Phi,i)$ for all $i<m$ such that the theorem holds when $i$ plays the role of $m$ and $\delta(\Phi,i)$ plays the role of $\delta(\Phi,m)$. We claim that the theorem holds at $m$ if we set $\delta=\delta(\Phi,m):=2\phi+\max_{i<m}\delta(\Phi,i)$.
Suppose otherwise. Let $b=(3m+2h-2)$. Then either (i) fails with $n=m$, namely, $$\label{failureofi}\begin{cases}
{{\text{\rm Ext}}}^m_G(L(\lambda),L(\nu)\otimes L(\mu))\not=0,\,\,{\text{\rm for some $\lambda,\mu,\nu\in X^+$,}}\\
{\text{\rm with $\nu$ $b$-small and $\lambda$ and $\mu$ differing in more than $\delta$-digits,}}\end{cases}$$ or (ii) fails, namely, $$\label{failureofii}
{{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu))\not=0,\,\,{\text{\rm $\lambda$ and $\mu$ differing in more than $\delta-\phi$ digits.}}$$ Let $\lambda,\mu,\nu\in X^+_s$ be such a counterexample with $s$ minimal (where in (\[failureofii\]), we take $\nu=0$ and, in (\[failureofi\]), $\nu$ is $b$-small). We continue to write $\lambda=\lambda^\circ+p^u \lambda'$, where $\lambda^\circ\in X^+_{u}$, $\lambda'\in X^+$. Similarly for $\mu$.
We investigate $\lambda$ and $\mu$ using the Lyndon–Hochschild–Serre spectral sequence for the normal (infinitesimal) group $G_{u}\triangleleft G$. First, suppose that $$E_2^{m-i,i}:={{\text{\rm Ext}}}^{m-i}_{G}(L(\lambda'),{{\text{\rm Ext}}}_{G_u}^{i}(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))^{[-u]}\otimes L(\mu'))\not=0,$$ for some positive integer $0<i\leq m$. Then ${{\text{\rm Ext}}}^{m-i}_G(L(\lambda'), L(\tau)\otimes L(\mu'))\not=0$ for some composition factor $L(\tau)$ of ${{\text{\rm Ext}}}_{G_u}^{i}(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))^{[-u]}$. By Lemma \[LenLem\](b),(c), all composition factors of $L(\nu)\otimes L(\mu^\circ)\otimes L(\lambda^\circ)^*$ are $(p^u-1)(2h-2)+b\leq (p^u-1)(2h-1)$-small. (Note that $\nu$ is $(p^u-1)$-small.) Thus, by Theorem \[MsmallToHsmall\], $\tau$ is $(3i+2h-2)\leq (3(m-1)+2h-2)$-small.
By induction, $\mu'$ differs from $\lambda'$ in at most $\delta(\Phi,m-i)$ digits. (Apply (i) with $n=m-i$ and $m-1$ playing the role of $m$.) So the number of digits where $\lambda$ differs from $\mu$ is at most $\phi+\delta(\Phi,m-i)\leq \delta-\phi$ digits. This is a contradiction to (\[failureofi\]) or to (\[failureofii\]) in the $\nu=0$ case, as $\lambda$ was assumed to differ from $\mu$ by more than $\delta-\phi$ digits. Hence, we may assume that the terms $E_2^{m-i,i}=0$, for all positive integers $i\leq m$.
By assumption, ${{\text{\rm Ext}}}^m_G(L(\lambda),L(\nu)\otimes L(\mu))\neq 0$, so $$E_2^{m,0}\cong {{\text{\rm Ext}}}_G^{m}(L(\lambda'),{\text{\rm Hom}}_{G_u}(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))^{[-u]}\otimes L(\mu'))\neq 0.$$ Now by Lemma \[tensors\], $${\text{\rm Hom}}_{G_u}(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))\cong {\text{\rm Hom}}_G(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))$$ has trivial $G$-structure, and $$E_2^{m,0}\cong {{\text{\rm Ext}}}_G^{m}(L(\lambda'), L(\mu')^{\oplus t})\cong ({{\text{\rm Ext}}}_G^{m}(L(\lambda'), L(\mu')))^{\oplus t}\neq 0,$$ where $t=\dim {\text{\rm Hom}}_G(L(\lambda^\circ),L(\nu)\otimes L(\mu^\circ))$ and $\lambda',\mu'\in X_{s-u}^+$.
By minimality of $s$ we have that $\lambda'$ and $\mu'$ differ in at most $\delta-\phi$ places. But $\mu^\circ$ and $\lambda^\circ$ differ in at most all their $u=\phi$ digits. So $\lambda$ and $\mu$ differ in at most $\delta$ places.
This is a contradiction when $\nu\neq 0$. So we may assume $\nu=0$ and $$E_2^{m,0}\cong {{\text{\rm Ext}}}_G^{m}(L(\lambda'),{\text{\rm Hom}}_{G_u}(L(\lambda^\circ), L(\mu^\circ))^{[-u]}\otimes L(\mu'))\neq 0.$$ The non-vanishing forces $\lambda^\circ=\mu^\circ$. Since $u=\phi>0$, we have, by minimality of $s$, that $${{\text{\rm Ext}}}_G^{m}(L(\lambda'),L(\mu'))\neq 0 \implies \lambda' {\text{\rm and $\mu'$ differ in at most $\delta-u$ places}}.$$ But $\lambda$ and $\mu$ agree on their first $u$ places, so $\lambda$ and $\mu$ differ in at most $\delta-u=\delta-\phi$ places.
This is a contradiction, and completes the proof of the theorem.
\[special\] The proof of the theorem implies that if ${{\text{\rm Ext}}}^1_G(L(\lambda),L(\mu))\neq 0$, then $\lambda$ and $\mu$ differ in at most $2+2[\log_p(h-1)]$ digits; indeed, following the proof carefully, one sees these digits can be found in a substring of length $2+2[\log_p(h-1)]$:
Let $r=[\log_p(h-1)]+1$. Write $\lambda=\lambda^\circ+p^r\lambda'=\lambda^\circ+p^r\lambda'^\circ+p^{2r}\lambda''$, with $\lambda^\circ,\lambda'^\circ\in X_r^+$, $\lambda',\lambda''\in X^+$, and take a similar expression for $\mu$. If ${{\text{\rm Ext}}}^1_G(L(\lambda),L(\mu))\neq 0$ then either: $${{\text{\rm Ext}}}^1_G(L(\lambda'),{\text{\rm Hom}}_{G_r}(L(\lambda^\circ),L(\mu^\circ))^{[-r]}\otimes L(\mu'))\neq 0,$$ which implies $\lambda^\circ=\mu^\circ$ (and we are done by induction, say on the maximum number of digits of $\lambda$ and $\mu$). Or, the space ${\text{\rm Hom}}_G(L(\lambda'),{{\text{\rm Ext}}}^1_{G_r}(L(\lambda^\circ),L(\mu^\circ))^{[-r]}\otimes L(\mu'))\neq 0$. Now the weights of ${{\text{\rm Ext}}}^1_{G_r}(L(\lambda^\circ),L(\mu^\circ))^{[-r]}$ are $(h-1)$-small by Corollary \[extForGr\]. By Lemma \[LenLem\](a), $h-1\leq p^r-1$. By Lemma \[tensors\], the $G$-structure on ${\text{\rm Hom}}_{G_r}(L(\lambda'^\circ),L(\tau)\otimes L(\mu'^\circ))$ is trivial for a composition factor $L(\tau)$ of ${{\text{\rm Ext}}}^1_{G_r}(L(\lambda^\circ),L(\mu^\circ))^{[-r]}$. Hence, we must have ${\text{\rm Hom}}_G(L(\lambda''),L(\mu''))\neq 0$ and we can identify $\lambda''$ and $\mu''$. Thus, $\lambda$ and $\mu$ differ only in their first $2r$ digits, as required.
\[digitsForG(q)\] For every $m\geq 0$ and irreducible root system $\Phi$, choose any constant $m'$ so that $3m'+2h-2\geq 6m+6h-8$, and let $d=d(m)=d(\Phi,m)$ be an integer $\geq \delta(\Phi,m')$. Then for all prime powers $q=p^r$, and all $\lambda,\mu\in X_r^+$, $${{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\not=0$$
Let $b=6m+6h-8$. By Theorem \[existsAnF\], $$E={{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong {{\text{\rm Ext}}}^m_{G}(L(\lambda),L(\mu)\otimes {{\mathcal G}}_{r,b}).$$ Also, ${{\mathcal G}}_{r,b}$ has composition factors $L(\zeta)\otimes L(\zeta')^{[r]}$ with $\zeta, \zeta'$ in the set $\Xi$ of $b$-small weights.
Then we have ${{\text{\rm Ext}}}^m_{G}(L(\lambda)\otimes L(\zeta')^{[r]},L(\mu)\otimes L(\zeta))\neq 0$ for some $\zeta,\zeta'\in \Xi$. By our choice of $m'$, $m'\geq m$ and the weight $\zeta$ is $(3m'+2h-2)$-small. The result now follows from Theorem \[digits\].
Next, define some notation in order to quote the main result of [@CPSK77]:
1. Let $t$ be the torsion exponent of the index of connection $[X:{\mathbb Z}\Phi]$.
2. For a weight $\lambda$, define $\bar\lambda=t\lambda$. Also let $t(\lambda)$ be the order of $\lambda$ in the abelian group $X/{\mathbb Z}\Phi$. Let $t_p(\lambda)$ be the $p$-part of $t(\lambda)$. Of course one has $t_p(\lambda)\leq t(\lambda)\leq [X:{\mathbb Z}\Phi]$.
3. Let $c(\mu)$ for $\mu$ in the root lattice be the maximal coefficient in an expression of $\mu$ as a sum of simple roots; for $\mu=2\rho$ these values can be read off from [@Bourb82 Planches I–IX].
4. Let $c=c(\tilde\alpha)$ where $\tilde\alpha$ is the highest long root. One can also find the value of $c$ from \[[*loc. cit.*]{}\].
5. For any $r\in {{\mathbb Z}}$ and any prime $p$, define $e(r)=[(r-1)/(p-1)]$; clearly $e(r)\leq r-1$.
6. For any $r\in {{\mathbb Z}}$ and any prime $p$, let $f(r)=[\log_p(|r|+1)]+2$; clearly $f(r)\leq [\log_2(|r|+1)]+2$.
Then the main result of [@CPSK77] is as follows:
\[CPSK\] Let $V$ be a finite dimensional rational $G$-module and let $m$ be a non-negative integer. Let $e$, $f$ be non-negative integers with $e\geq e(ctm)$, $f\geq f(c(\bar\lambda))$ for every weight $\lambda$ of $T$ in $V$. If $p\neq 2$ assume also $e\geq e(ct_p(\lambda)(m-1))+1$.
Then for $q=p^{e+f}$, the restriction map ${\text{\rm H}}^n(G,V^{[e]})\to {\text{\rm H}}^n(G(q),V)$ is an isomorphism for $n\leq m$ and an injection for $n=m+1$.
We alert the reader that this result will be applied by first determining $e_0,f_0$ satisfying the inequalities required of $e$ and $f$ above, respectively, and then checking $e\geq e_0$ and $f\geq f_0$, respectively, for the actual values of $e$ and $f$ which arise in our applications.
\[adjustment\] (a) As pointed out in [@CPSK77 Remark 6.7(c)], it is not necessary to check the numerical conditions in the theorem for each weight $\lambda$ of $V$. It is sufficient to check these conditions for the highest weights of the composition factors of $V$.
\(b) Let $e, f$ be as in Theorem \[CPSK\]. For any $e'\geq e$, twisting induces a semilinear isomorphism ${\text{\rm H}}^m(G,V^{[e]})\overset\sim\to {\text{\rm H}}^m(G,V^{[e']})$. In fact, the theorem implies there are isomorphisms $$\begin{CD}
{\text{\rm H}}^m(G(p^{e'+f}),V^{[e]}) @<\sim<< {\text{\rm H}}^m(G, V^{[e]}) @>\sim>> {\text{\rm H}}^m(G(p^{e+f}),V^{[e]}),\end{CD}$$ where on the left-hand side we write $e'+f=e+f'$, $f'=f+e'-e$. Now consider the diagram (of semilinear maps) $$\begin{CD}
{\text{\rm H}}^m(G, V^{[e']}) @>\sim>> {\text{\rm H}}^m(G(p^{e'+f}), V^{[e']})\\
@AA^\text{twisting} A
\\
{\text{\rm H}}^m(G,V^{[e]}) @>\sim>> {\text{\rm H}}^m(G(p^{e+f}),V^{[e]})\end{CD}$$ The left-hand vertical map is an injection [@CPS83], so since the two cohomology groups on the right-hand side have the same dimension (notice $V^{[e']}\cong V^{[e]}\cong V$ as $G(q)$-modules for any $q$), it must be an isomorphism. Compare [@CPSK77 Cor. 3.8].
In particular, ${\text{\rm H}}^m(G,V^{[e]})\cong {\text{\rm H}}_{\text{\rm gen}}^m(G,V)$ (essentially, by the definition of generic cohomology).
\[coarseCPSK\]
1. If $\lambda\in X^+_{r'}$, then $f(c(\bar\lambda))\leq r'+[\log_p(tc(2\rho)/2)]+2$. In particular, setting $f_0=f_0(\Phi)=\log_2(tc(2\rho)/2)+2$, we have $f(c(\bar\lambda))\leq r'+f_0$.
2. Set $e_0=e_0(\Phi,m)=ctm$. Then, for $\lambda\in X^+$, $e_0\geq e(ctm)$ and, for all $p\neq 2$, $e_0\geq e(ct_p(\lambda)(m-1))+1$.
\(a) Note that $c(\bar\lambda)= c(t\lambda)
\leq c(t(p^{r'}-1)\rho)
=t(p^{r'}-1)c(2\rho)/2$ and $c(\bar\lambda)+1\leq tp^{r'}c(2\rho)/2$. Thus, $f(c(\bar\lambda))=[\log_p(c(\bar\lambda)+1)]+2\leq r'+[\log_p(tc(2\rho)/2)]+2$. The second statement is clear.
\(b) Note that $t_p(\lambda)\leq t$. We leave the details to the reader.
As an overview for the proof of the following theorem, which becomes quite technical, let us outline the basic strategy. We show that if there is a non-trivial $m$-extension between two irreducible modules $L(\lambda)$ and $L(\mu)$ with $q=p^r$-restricted highest weights, one can insist that $r$ is so big that Theorem \[digitsForG(q)\] implies that the digits of the weights of the two irreducible modules must agree on a large contiguous string of zero digits. Since the cohomology for a finite Chevalley group is insensitive to twisting (as noted above), one can replace the modules with Frobenius twists. The resulting modules are still simple; so, wrapping the resulting non-$p^r$-restricted factors around to the beginning, we may assume the modules have $p^r$-restricted highest weights $\lambda'$ and $\mu'$, respectively. In particular, we can arrange that a large string of zero digits occurs at the end of $\lambda'$ and $\mu'$. This forces $\lambda'$ and $\mu'$ to be bounded away from $q$. The result is that we can apply Theorem \[CPSK\] above.
We recall some notation from the introduction. Let $q=p^r$ be a $p$-power. For $e\geq 0$ and $\lambda\in X$, there is a unique $\lambda'\in X^+_r$ such that $\lambda'|_{T(q)}=p^e\lambda|_{T(q)}$. Denote $\lambda'$ by $\lambda^{[e]_q}$.
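In terms of digits, the $q$-shift is simply a cyclic rotation: the digits of $\lambda^{[e]_q}$ are the $p$-adic digits of $\lambda$ rotated by $e$ positions among the first $r$ places. The following sketch (illustrative only; it reuses the digits() helper from the earlier illustrative snippet) makes this explicit:

```python
# Illustrative q-shift lambda -> lambda^{[e]_q} on p^r-restricted weights,
# realized as a cyclic rotation of the first r p-adic digits.
# Assumes the digits() helper defined in the earlier snippet.

def q_shift(lam, p, r, e):
    digs = digits(lam, p, r)
    k = e % r
    rot = digs[-k:] + digs[:-k] if k else digs      # rotate digits by e
    coords = [0] * len(lam)
    for i, d in enumerate(rot):
        for j, c in enumerate(d):
            coords[j] += c * p**i
    return tuple(coords)

# For p = 2, r = 3: (5, 1) has digits (1,1), (0,0), (1,0); shifting by e = 1
# rotates them to (1,0), (1,1), (0,0), i.e. the weight (3, 2).
assert q_shift((5, 1), 2, 3, 1) == (3, 2)
```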
\[shiftedGeneric\] Let $\Phi$ be an irreducible root system and let $m\geq 0$ be given.
\(a) There exists a non-negative integer $r_0=r_0(\Phi,m)$ such that, for all $r\geq r_0$ and $q=p^r$ for any prime $p$, if $\lambda\in
X^+_r$, then, for some $e\geq 0$, there are isomorphisms $${\text{\rm H}}^n(G(q),L(\lambda))\cong{\text{\rm H}}^n(G(q), L(\lambda)^{[e]})\cong {\text{\rm H}}^n(G,L(\lambda')),\quad n\leq m,$$ where $\lambda'=\lambda^{[e]_q}$. (The first isomorphism is semilinear.) In addition, these isomorphisms can be factored as $${\text{\rm H}}^n(G(q),L(\lambda))\cong{\text{\rm H}}^n(G(q),L(\lambda'))\cong {\text{\rm H}}^n_\text{{\text{\rm gen}}}(G,L(\lambda'))\cong {\text{\rm H}}^n(G,L(\lambda')).$$ Also, for any $\ell\geq 0$, the restriction maps $${\text{\rm H}}^n(G(p^{r+\ell}),L(\lambda'))
\to {\text{\rm H}}^n(G(q),L(\lambda'))\quad n\leq m$$ are isomorphisms.
\(b) More generally, given a non-negative integer $\epsilon$, there is a constant $r_0= r_0(\Phi,m,\epsilon)
\geq \epsilon$ such that, for all $r\geq r_0$, if $\lambda\in X^+_r$ and $\mu\in X^+_\epsilon$, there exists an $e\geq 0$ and a semilinear isomorphism $${{\text{\rm Ext}}}^n_{G(q)}(L(\mu),L(\lambda))\cong {{\text{\rm Ext}}}^n_G(L(\mu'),L(\lambda')), \quad n\leq m,$$ where $\lambda'=\lambda^{[e]_q}$, $\mu'=\mu^{[e]_q}$. In addition,
$$\begin{aligned}
{{\text{\rm Ext}}}^n_{G(q)}(L(\mu),L(\lambda)) &\cong {{\text{\rm Ext}}}^n_{G(q)}(L(\mu)^{[e]},L(\lambda)^{[e]})
\cong {{\text{\rm Ext}}}^n_{G(q)}(L(\mu'),L(\lambda'))\\
&\cong {{\text{\rm Ext}}}^n_{G,{\text{\rm gen}}}(L(\mu'),L(\lambda'))
\cong {{\text{\rm Ext}}}^n_{G}(L(\mu'),L(\lambda'))\\ \end{aligned}$$
for all $n\leq m$. Also, for any $\ell\geq 0$, the restriction map $${{\text{\rm Ext}}}^n_{G(p^{r+\ell})}(L(\mu'),L(\lambda'))
\to {{\text{\rm Ext}}}^n_{G(q)}(L(\mu'),L(\lambda'))$$ is an isomorphism.
Clearly, (a) is a special case of (b), so it suffices to prove (b).
By Theorem \[digitsForG(q)\], there is a constant $d=d(\Phi,n)$ so that, given $\lambda,\mu\in X^+_r$, $\lambda$ and $\mu$ differ in at most $d$ digits if ${{\text{\rm Ext}}}_{G(q)}^n(L(\mu),L(\lambda))\not=0$. We may take $d(\Phi,n)\geq \delta(\Phi,n)$, where the latter constant satisfies Theorem \[digits\], so that $\lambda$ and $\mu$ differ in at most $d$ digits if ${{\text{\rm Ext}}}_G^n(L(\mu),L(\lambda))\not=0$.[^11] Of course, we may also take $d\geq\epsilon$. If $e'>0$, these comments apply equally well to $p^{e'}\lambda$ and $p^{e'}\mu$. So $\lambda$ and $\mu$, whose digits correspond naturally to those of $p^{e'}\lambda$ and $p^{e'}\mu$, differ in at most $d$ digits if ${{\text{\rm Ext}}}^n_G(L(\mu)^{[e']}, L(\lambda)^{[e']})\not=0$. Therefore, if $\lambda$ and $\mu$ differ in more than $d=d(\Phi,n)$ digits, the claims of (b) hold.
In the same spirit, $\lambda$ and $\mu$ differ in at most $d$ digits if ${{\text{\rm Ext}}}^n_{G(q')}(L(\mu),L(\lambda))
\not=0$ for some larger power $q'$ of $p$. Otherwise, the relevant ${{\text{\rm Ext}}}$-groups all vanish, and the isomorphisms of (b) hold with $\lambda=\lambda'$, $\mu=\mu'$, and $e=0$.
Put $d'=\max_{n\leq m} d(\Phi,n)$. By the discussion above, we can assume that $\lambda$ and $\mu$ differ by at most $d'$ digits. Recall also the constants $e_0=e_0(\Phi,m),f_0=f_0(\Phi)$ from Lemma \[coarseCPSK\]. Set $$r_0:=r_0(\Phi,m,\epsilon)=(d'+1)(e_0+f_0+g+1)+\epsilon-1,$$ where $g=g(\Phi):=[\log_2(h-1)]+2\geq [\log_p2(h-1)]+1$. We claim that (b) holds for $r_0$. The hypothesis of (b) guarantees that $\mu\in X^+_\epsilon$, and $r\geq r_0$. Observe $r_0\geq\epsilon$, so that $\mu\in X_\epsilon^+$ is $p^r$-restricted. Also, $\lambda\in X^+_r$ under the hypothesis of (b). In particular, every composition factor $L(\tau)$ of $L(\mu^*)\otimes L(\lambda)$ is $p^{r+g}$-restricted by Lemma \[LenLem\].
For the remainder of the proof, we may assume that $\lambda$ and $\mu$ differ by at most $d'$ digits. By the “digits" of $\lambda$ and $\mu$, we will mean just the first $r$ digits, the remainder being zero. By hypothesis, $\mu\in X^+_\epsilon$, so all of its digits after the first $\epsilon$ digits are zero digits. We claim that $\lambda$ and $\mu$ have a common string of at least $(r-\epsilon+1)/(d'+1)-1$ zero digits. To see this, let $x$ denote the longest string (common to both $\lambda$ and $\mu$) of zero digits after the first $\epsilon$ digits. Our claim is equivalent (by arithmetic) to the assertion that $$\label{equivalent}
(x+1)(d'+1)\geq r-\epsilon+1.$$ To see (\[equivalent\]), call a digit position which is not zero for one of $\lambda$ or $\mu$ an exceptional position. Thus, every digit position past the first $\epsilon$ positions is either one of the (at most) $d'$ exceptional digits, or occurs in a common string in $\lambda$ and $\mu$ of at most $x$ zero digits, either right after an exceptional position, or right before the first exceptional position (after the $\epsilon-1$ position). So, after the first $\epsilon-1$ positions, there are at most $d'+1$ strings of common zero digits, each of length at most $x$. Hence, $(d'+1)x + d'\geq r-\epsilon$. That is, $(d'+1)(x+1)\geq r-\epsilon+1$. This proves the inequality (\[equivalent\]) and, thus, the claim.
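For a purely illustrative example of this counting, suppose (hypothetically) that $d'=2$, $\epsilon=1$ and $r=12$. The $r-\epsilon=11$ digit positions past the first $\epsilon$ positions are then covered by at most $d'=2$ exceptional positions together with at most $d'+1=3$ common strings of zero digits, each of length at most $x$, so that $3x+2\geq 11$ and hence $x\geq 3=(r-\epsilon+1)/(d'+1)-1$, exactly as claimed.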
Also, $x\geq (r-\epsilon+1)/(d'+1)-1\geq f_0+e_0+g$ since, by hypothesis, $r\geq r_0=(d'+1)(f_0+e_0+g+1)+\epsilon-1$.
We can take Frobenius twists $L(\lambda)^{[s]}$ and $L(\mu)^{[s]}$, with $s$ a non-negative integer, so that, up to the $r$th digit, the last $e_0+f_0+g$ digits of $\lambda^\circ:=\lambda^{[s]_q}$ and $\mu^\circ:=\mu^{[s]_q}$ are zero. In particular, $\lambda^\circ$ and $\mu^\circ$ both belong to $X^+_{r-e_0-f_0-g}$. We are going to use for $e=e(\lambda,\mu)$ in (b) the integer $s+e_0$. We have $${{\text{\rm Ext}}}^n_{G(q)}(L(\mu^\circ),L(\lambda^\circ))\cong {\text{\rm H}}^n(G(q),L(\mu^\circ)^*\otimes L(\lambda^\circ)),$$ and, by Lemma \[LenLem\] again, the composition factors of $M=L(\mu^\circ)^*\otimes L(\lambda^\circ)$ are in $X^+_{r-e_0-f_0-g+g}=X^+_{r-e_0-f_0}$. Let $L(\nu)$ be a composition factor of $M$. Then at least the last $e_0+f_0$ digits of $\nu$ are zero. Now, using Lemma \[coarseCPSK\], the weights of $M$ satisfy the hypotheses of Theorem \[CPSK\], and, thus, we have ${\text{\rm H}}^n(G(q),M)\cong {\text{\rm H}}^n(G,M^{[e]})$ for all $n\leq m$. The same isomorphism holds if $q$ is replaced by any larger power of $p$, so ${\text{\rm H}}^n(G, M^{[e_0]})\cong{\text{\rm H}}^n_{\text{\rm gen}}(G,M^{[e_0]})$.
Observe that $L(\lambda^{[e]_q})=L(\lambda^\circ)^{[e_0]}$, with a similar equation using $\mu$. From the definition of $M$ above, $${\text{\rm H}}^n(G,M^{[e_0]})\cong {{\text{\rm Ext}}}_G^n(L(\lambda^\circ)^{[e_0]},L(\mu^\circ
)^{[e_0]})\cong{{\text{\rm Ext}}}^n_G(L(\lambda^{[e]_q}), L(\mu^{[e]_q})),\quad\forall n, 0\leq n\leq m,$$ and similar isomorphisms hold for $G(q)$-cohomology and ${{\text{\rm Ext}}}^n_{G(q)}$-groups. We now have most of the isomorphisms needed in (b), with the remaining ones obtained from group automorphisms on $G(q)$.
This completes the proof.
\[needATwist\] [@BNP06 Thm. 5.6] shows that when $m=1$, $r\geq 3$, $p^r\geq h$, then, with $e=[(r-1)/2]$, we have ${{\text{\rm Ext}}}^1_{G(q)}(L(\lambda),L(\mu))\cong {{\text{\rm Ext}}}^1_{G}(L(\lambda)^{[e]_q},L(\mu)^{[e]_q})$.
It is tempting to think, as suggested in [@SteSL3 Question 3.8], that one might have similar behavior for higher values of $m$ for some integer $e\geq 0$ under reasonable conditions. Unfortunately, for $p$ sufficiently large, this is never true:
In [@SteUnb Thm. 1] the third author gives an example of a module $L_{r-1}$ for $SL_2$ over $\bar F_p$ with $p>2$ with the property that $\dim{{\text{\rm Ext}}}^2_G(L_{r-1},L_{r-1})=r-1$. Specifically, $L_r$ is $L(1)\otimes L(1)^{[1]}\otimes\dots \otimes L(1)^{[r]}$. So $L_{r-1}$ is $q$-restricted, where $q=p^r$; it is self-dual, since this is true for all simple $SL_2$-modules; it also has the property that $L_{r-1}^{[e]_q}=L_{r-1}$ for any $e\in{{\mathbb N}}$. Note that as $L(2)$ is isomorphic to the adjoint module for $p>2$, we have $\dim {\text{\rm H}}^2(G(q),L(2)^{[i]})\geq 1$ for any $i\geq 0$. Set $D=\dim{{\text{\rm Ext}}}^2_{G(q)}(L_{r-1},L_{r-1})$. We show $D>\dim{{\text{\rm Ext}}}^2_G(L_{r-1},L_{r-1})=r-1$.
We have $$\begin{aligned}
D&=\dim{{\text{\rm Ext}}}^2_{G(q)}(k,L_{r-1}\otimes L_{r-1})\\
&=\dim{\text{\rm H}}^2(G(q), L_{r-1}\otimes L_{r-1})\\
&=\dim{\text{\rm H}}^2(G(q), (L(1)\otimes L(1))\otimes (L(1)\otimes L(1))^{[1]}\otimes\dots\otimes (L(1)\otimes L(1))^{[r-1]})\\
&=\dim{\text{\rm H}}^2(G(q), (L(2)\oplus L(0))\otimes (L(2)\oplus L(0))^{[1]}\otimes \dots\otimes(L(2)\oplus L(0))^{[r-1]})\\
&\geq \dim{\text{\rm H}}^2(G(q), L(2)\oplus L(2)^{[1]}\oplus\dots\oplus L(2)^{[r-1]})\\
&\geq r>r-1,\end{aligned}$$ as required.
Indeed, [@SteUnb Rem. 1.2] gives a recipe for cooking up such examples for simple algebraic groups having any root system—one simply requires $p$ large compared to $h$.
Essentially the problem as found above can be described by saying that ${{\text{\rm Ext}}}^2_G(L_{r-1},L_{r-1})$ is not rationally stable. One does indeed have $\dim {{\text{\rm Ext}}}^2_G(L_{r-1}^{[1]},L_{r-1}^{[1]})=D$.
Motivated by the above example, we ask the following question, a modification of [@SteSL3 Question 3.8]:
\[conjecture\] Let $e_0=e_0(\Phi,m):=ctm$. Does there exist a constant $r_0=r_0(\Phi,m)$, such that for all $r\geq r_0$, the following holds:
For $q=p^r$, if $\lambda,\mu\in X^+_r$, then there exists a non-negative integer $e=e(\lambda,\mu)$ such that
$$\begin{aligned}
{{\text{\rm Ext}}}^n_{G(q)}(L(\lambda),L(\mu)) &\cong {{\text{\rm Ext}}}^n_{G(q)}(L(\lambda)^{[e]},L(\mu)^{[e]})
\cong {{\text{\rm Ext}}}^n_{G(q)}(L(\lambda^{[e]_q}),L(\mu^{[e]_q}))\\
&\cong {{\text{\rm Ext}}}^n_{G,\text{\rm gen}}(L(\lambda^{[e]_q}),L(\mu^{[e]_q}))
\cong {{\text{\rm Ext}}}^n_{G}(L(\lambda^{[e]_q})^{[e_0]},L(\mu^{[e]_q})^{[e_0]})\\ \end{aligned}$$
for $n\leq m$?
We make the simple observation that in the theorems above one needs the weights $\lambda$ and $\mu$ to be $p^r$-restricted, so that these weights determine simple modules for $G(q)$. For instance, with the notation of the previous remark, again with $G=SL_2$ and $p>2$, ${\text{\rm H}}^0(G,L_{2n-1})=0$, but $${\text{\rm H}}^0(G(p),L_{2n-1})\cong{\text{\rm H}}^0(G(p),(L(2)\oplus L(0))^{\otimes n})\not=k,$$ with a similar phenomenon occurring for larger values of $q$. So even the $0$-degree cohomology of $G$ will not agree with that of $G(q)$ on general simple $G$-modules.
Using results of [@BNP01], we draw the following striking corollary from the main result of this section (and paper). Let $W_p=W\ltimes p{\mathbb Z}\Phi$ be the affine Weyl group and $\widetilde W_p=W\ltimes p{\mathbb Z}X$ be the extended affine Weyl group for $G$. Both groups act on $X$ by the “dot" action: $w\cdot\lambda= w(\lambda+\rho)-\rho$.
\[summary\] For a given non-negative integer $m$ and irreducible root system $\Phi$, there is, for all but finitely many prime powers $q=p^r$, an isomorphism $$\label{iso}{\text{\rm H}}^m(G(q), L(\mu))\cong {\text{\rm H}}^m(G,L(\mu')), \quad \mu\in X_r^+,$$ for some constructively given dominant weight $\mu'$. If $r$ is sufficiently large, we can take $\mu'\in X^+_r$ to be a $q$-shift of $\mu$. If $p$ is sufficiently large, and if $\mu$ is $\widetilde W_p$-conjugate to 0, then we can take $\mu'=\mu$.
In particular, there is a bound $C=C(\Phi,m)$ such that $\dim {\text{\rm H}}^m(G(q), L(\mu))\leq C$ for all values of $q$ and $q$-restricted weights $\mu$.
We first prove the assertions in the first paragraph. It suffices to treat the case $m>0$. If $r$ is sufficiently large, then (\[iso\]) holds by Theorem \[shiftedGeneric\] for some $q$-shift $\mu'\in X^+_r$ of $\mu$. On the other hand, suppose that $p\geq (4m+1)(h-1)$. We can assume that $\mu$ is $p$-regular, otherwise [@BNP01 Cor. 7.4] tells us that ${\text{\rm H}}^m(G(q),L(\mu))\cong{\text{\rm H}}^m(G,L(\mu))=0$, and there is nothing to prove.
Suppose $\mu=u\cdot \nu$ for some $\nu\in X^+_r$ satisfying $(\nu,\alpha_0^\vee)\leq 2m(h-1)$, and some $u\in \widetilde W_p$. Then [@BNP01 Thm. 7.5] states that $${\text{\rm H}}^m(G(q),L(\mu))\cong{\text{\rm H}}^m(G,L(u\cdot 0+p^r\nu)),$$ so that (\[iso\]) holds in this case. If $\mu$ does not have the form $\mu=u\cdot \nu$ as above, set $\mu'=\mu$. The first paragraph of the proof of [@BNP01 Thm. 7.5] shows that ${\text{\rm H}}^m(G(q),L(\mu))=0$. Also, ${\text{\rm H}}^m(G,L(\mu'))={\text{\rm H}}^m(G,L(\mu))=0$, by the Linkage Principle, since $\mu$ is not $W_p$-linked to 0.
It remains to prove the statement in the second paragraph. We have just established that there is a number $q_0$ such that for all prime powers $q=p^r\geq q_0$, for any $\mu\in X^+_r$, there exists a $\mu'\in X^+$ such that ${\text{\rm H}}^m(G(q), L(\mu))\cong {\text{\rm H}}^m(G,L(\mu'))$. By [@PS11 Thm. 7.1], the numbers $\dim {\text{\rm H}}^m(G,L(\mu'))$ are bounded by a constant $c=c(\Phi,m)$ depending only on $m$ and $\Phi$. Let $c'=\max\{\dim {\text{\rm H}}^m(G(q), L(\mu))\}$, the maximum taken over all prime powers $q=p^r<q_0$ and weights $\mu\in X_r^+$; clearly $c'$ is finite. Then $\dim {\text{\rm H}}^m(G(q), L(\mu))\leq \max\{c',c\}$.
Explicit bounds on how large $r$ must be exist; see Theorem \[shiftedGeneric\]. The explicit bound on $p$ in the proof can be improved using Theorem \[thm6.2\](c). The constructive dependence of $\mu'$ on $\mu$ merely involves the combinatorics of weights and roots.
We can give the following corollary, addressed more thoroughly in §6 below; see Theorem \[thm6.2\](c) and Theorem \[lastcor\].
If $p$ is sufficiently large, depending on $\Phi$ and the non-negative integer $m$, every weight of the form $p\tau$, $\tau\in X^+$, is $m$-generic at $q$, for any power $q$ of $p$ for which $p\tau$ is $q$-restricted. In addition, if $\mu\in X^+$ is $q$-restricted, and has a zero digit in its $p$-adic expansion $\mu=\mu_0+p\mu_1+\cdots+p^{r-1}\mu_{r-1}$ ($p^r=q$), then $\mu$ is shifted $m$-generic at $q$. Moreover, in the first case, $${\text{\rm H}}^m(G(q),L(p\tau))\cong{\text{\rm H}}^m(G(q),L(\tau))\cong {\text{\rm H}}^m_{{\text{\rm gen}}}(G,L(\tau))\cong {\text{\rm H}}_{\text{\rm gen}}^m(G,L(p\tau)),$$ and, in the second case, $${\text{\rm H}}^m(G(q),L(\mu'))\cong{\text{\rm H}}^m(G(q),L(\mu))\cong {\text{\rm H}}^m_{{\text{\rm gen}}}(G,L(\mu))\cong {\text{\rm H}}_{\text{\rm gen}}^m(G,L(\mu')).$$
For the first part, observe that $p\tau$ is $\widetilde W_p$-conjugate to 0. So the first part follows from Theorem \[summary\]. For the second part, simply observe that if $\mu$ has a zero digit (among its first $r$ digits), there is some $q$-shift $\mu'$ of $\mu$ with $\mu'=p\tau$ for some dominant $\tau$.
Large prime results
===================
In this section, we give some “large prime" results. Much work has been done on this topic, see [@BNP01] and [@BNP02], as well as earlier papers [@HHA84] and [@FP83].
The following result for $p\geq 3h-3$ is given in [@BNP02 Cor. 2.4].
\[Ext1digits\] Assume $p\geq h$ and let $\lambda,\mu\in X^+_r$.
\(a) Suppose ${{\text{\rm Ext}}}^1_G(L(\lambda),
L(\mu))\not=0$. Then $\lambda$ and $\mu$ differ in at most two digits, which must be adjacent.
\(b) Let $q=p^r$. If ${{\text{\rm Ext}}}^1_{G(q)}(L(\lambda),L(\mu))\not=0$, then $\lambda$ and $\mu$ differ in at most two digits, which must be either adjacent, or the first and the last digits.
For (a), we just need to apply Remark \[special\] since $2+[\log_p(h-1)]=2$ in this case. Part (b) follows from (a) and [@BNP06 Thm. 5.6]. (Note that both (a) and (b) are trivial unless $r\geq 3$.)
The following result combines $q$-shifting with the themes of [@BNP01 Thms. 7.5, 7.6]. There is some overlap with the latter theorem, which is an analogue for $G(q)$ and all $m$ of Andersen’s well-known $m=1$ formula for ${{\text{\rm Ext}}}^1_G$ [@HHA84 Thm. 5.6]. However, we make no assumption, unlike [@BNP01 Thm. 7.6], that $\mu$ be sufficiently far from the walls of its alcoves. For the $m=1$ case and $r\geq 3$, [@BNP06 Thm. 5.6] gives a much sharper formula, as noted in Remark \[needATwist\].
\[thm6.2\] (a) Assume that $p\geq 6m+7h-9$. Then for $q=p^r$ (any $r$), and $\lambda,\mu\in X^+_r$,
$$\label{thm6.2a}{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong\bigoplus_{\nu}{{\text{\rm Ext}}}^m_G(L(\lambda)\otimes L(\nu),L(\mu)\otimes L(\nu)^{[r]}),$$
where $\nu\in X^+$ runs over the dominant weights in the closure of the lowest $p$-alcove.
\(b) Assume that $p> 12m +13h-16$, and $\lambda,\mu\in X^+_r$ with $\lambda$ having a zero digit. Then $\lambda, \mu$ can be replaced, maintaining the dimension of the left-hand side of (\[thm6.2a\]), by suitable simultaneous $q$-shifts $\lambda',\mu'$ so that the sum on the right-hand side of (\[thm6.2a\]) collapses to a single summand, $$\label{collapse}
{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda'),L(\mu'))\cong
{{\text{\rm Ext}}}^m_G(L(\lambda')\otimes L(\nu),L(\mu')\otimes L(\nu^*)^{[r]})$$ for some (constructively determined) $\nu$ in the lowest $p$-alcove. Also, $\lambda'$ can be chosen to be any $q$-shift whose first digit is zero (possibly with different weights $\nu$ for different choices of $\lambda')$. In addition, $L(\lambda')\otimes L(\nu)$ and $L(\mu')\otimes L(\nu^*)^{[r]}$ are irreducible in this case.
\(c) Moreover, again with $p> 12m + 13h -16$ as in (b), assume that $\lambda,\mu\in X^+_r$ have a common 0 digit (among the first $r$ digits, with $q=p^r$). Then simultaneous[^12] $q$-shifts $\lambda',\mu'$ may be chosen so that $$\label{collapse2}
{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong{{\text{\rm Ext}}}^m_{G(q)}(L(\lambda'),L(\mu'))\cong{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda'),L(\mu'))
\cong {{\text{\rm Ext}}}^m_G(L(\lambda'),L(\mu')).$$ These isomorphisms all hold, for any simultaneous $q$-shifts $\lambda',\mu'$ of $\lambda,\mu$ for which $\lambda',\mu'$ both have a zero first digit.
We first prove (a). Let $p\geq 6m+7h-9$. Put $b:=6m+6h-8$. Then if $\nabla(\nu)\otimes\nabla(\nu^*)^{[r]}$ is a section in ${{\mathcal G}}_{r,b}$, we have $$\label{smallpoint} (\nu+\rho,\alpha_0^\vee)\leq b + (\rho,\alpha_0^\vee)=
b+ h-1=6m+7h-9\leq p.$$ Therefore, ${{\mathcal G}}_{r,b}$ is completely reducible as a rational $G$-module with summands $L(\nu)\otimes L(\nu^*)^{[r]}$, in which $\nu\in X^+$ is in the closure of the lowest $p$-alcove.[^13] (Of course, $\nabla(\nu)=L(\nu)$ for $\nu$ in the closure of the lowest $p$-alcove.) For $p>6m+7h-9$, there are dominant weights $\nu$ in the closure of the lowest $p$-alcove which do not satisfy $(\nu,\alpha_0^\vee)\leq b.$ For such $\nu$, (\[vanishing\]) gives that ${{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu)\otimes L(\nu)\otimes L(\nu^*)^{[r]})=0$. Thus, (a) follows.
To prove (b), assume that $p>12m + 13h-16= 2b + h$. Choose $e$ with $0\leq e<r$, so that $\lambda':=\lambda^{[e]_q}$ has its first digit equal to zero. Put $\mu':=\mu^{[e]_q}$. Then the left-hand isomorphism in (\[collapse\]) holds. In addition, the isomorphism (\[thm6.2a\]) holds.
There is an expression like (\[thm6.2a\]) with $\lambda,\mu$ replaced by $\lambda',\mu'$. If one of the terms indexed by $\nu$ on the right-hand side of (\[thm6.2a\]) (for $\lambda',\mu'$) is nonzero, then $\mu'$ is $\widetilde W_p$-conjugate to $\nu$. To see this, first note that $\lambda'=p\lambda^\dagger$ for some $\lambda^\dagger\in X^+$, so that $L(\lambda')\otimes L(\nu)$ is irreducible by the Steinberg tensor product theorem. Similarly, $L(\mu')\otimes L(\nu^*)^{[r]}$ is irreducible since $\mu'\in X^+_r$. If ${{\text{\rm Ext}}}^m_G(L(\lambda')\otimes L(\nu),L(\mu')\otimes L(\nu^*)^{[r]})\not=0$, then $\nu+p\lambda^\dagger$ is $W_p$-conjugate to $\mu'+p^r\nu^*$. Therefore, $\nu$ and $\mu'$ are $\widetilde W_p$-conjugate. Now it is only necessary to show that any two dominant weights that are $b$-small and $\widetilde W_p$-conjugate are equal. Briefly, suppose that $\nu,\nu'$ are $\widetilde W_p$-dot conjugate dominant weights in the lowest $p$-alcove. Write $\nu'=w\cdot\nu+ p\tau$, for $w\in W$, $\tau\in X$. If $\tau=0$, then $\nu=\nu'$ because both weights are dominant. Hence, we may assume $\tau\not=0$. Thus, for $\alpha\in\Pi$, $|(\nu'-w\cdot\nu,\alpha^\vee)|
\leq |(\nu'+\rho,\alpha^\vee)| + |(w(\nu+\rho),\alpha^\vee)| \leq 2b+h$ by the first part of (\[smallpoint\]). But $2b+ h< p$. Choose $\alpha\in\Pi$ such that $(\tau,\alpha^\vee)\not=0$. It follows then that $|(\tau,\alpha^\vee)|<1$, an evident contradiction. This completes the proof of (b).[^14]$^,$ [^15]
Finally, to prove (c), we continue the proof of (b), taking $\mu'$ to also have a zero first digit. Put $\mu'=p\mu^\dagger$, for some $\mu^\dagger\in X^+$. So $\mu'$, and thus $\nu$ above, must be $\widetilde W_p$-conjugate to 0. Therefore $\nu=0$, giving all the isomorphisms in (\[collapse2\]) except the ones involving ${{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}$. We observe that the $\widetilde W_p$-conjugacy of $\nu$ to $0$ still holds passing from $r$ to $r+e_0$ for any integer $e_0\geq 0$. Thus, ${{\text{\rm Ext}}}^m_{G(q)}(L(\lambda'),L(\mu'))
\cong {{\text{\rm Ext}}}^m_{G}(L(\lambda'),L(\mu'))$ and ${{\text{\rm Ext}}}^m_{G(p^{r+e_0})}(L(\lambda'),L(\mu'))
\cong{{\text{\rm Ext}}}^m_G(L(\lambda'), L(\mu'))$, so $${{\text{\rm Ext}}}^m_G(L(\lambda'),L(\mu'))\cong{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda'),L(\mu')).$$ This proves (c).
\[rem6.3\] (a) In the situation of (b), suppose it is $\lambda$ itself that has first digit 0, and put $\lambda'=\lambda$ and $\mu'=\mu$. Assume that ${{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu))\not=0$. Then the conclusion of (c) holds. To see this, note that the non-vanishing implies $\lambda$ and $\mu$ are $W_p$-conjugate, and thus $\widetilde W_p$-conjugate. Now, we can use the proofs of (b) and (c).
\(b) The requirements in (b) and (c) on the existence of a simultaneous zero cannot be dropped. An example is provided in Remark \[needATwist\].
\(c) The sum on the right-hand side of (\[thm6.2a\]) always involves a bounded number of summands, independent of $p$, namely, those in which $(\nu,\alpha_0^\vee)\leq 6m+6h-8$.
Taking $\nu=0$ in (\[thm6.2a\]) yields the following result.
\[cor6.4\] If $p\geq 6m+7h-9$, then restriction ${{\text{\rm Ext}}}^m_{G}(L(\lambda),L(\mu))\to {{\text{\rm Ext}}}^m_{G(q)}( L(\lambda),
L(\mu))$ is an injection for every $\lambda,\mu\in X^+_r$.
Suppose that $\lambda$ and $\mu$ are $q$-restricted weights. Let us call a pair $(\lambda',\mu')$ a $q$-shift of $(\lambda,\mu)$ if it is obtained by a simultaneous $q$-shift $\lambda'=\lambda^{[e]_q}$, $\mu'=\mu^{[e]_q}$. Also, we say the pair $(\lambda,\mu)$ of $q$-restricted weights is $m$-generic at $q$ if $${{\text{\rm Ext}}}^m_{G(q)}(L(\lambda),L(\mu))\cong{{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu)).$$ Similarly, we say the pair $(\lambda,\mu)$ is shifted $m$-generic at $q$ if $(\lambda',\mu')$ is $m$-generic at $q$ for a $q$-shift $(\lambda',\mu')$ of $(\lambda,\mu)$. Theorem \[thm6.2\](c) asserts that for $p$ large, pairs $(\lambda,\mu)$ having a common zero digit are shifted $m$-generic at $q$. We improve this in the case that a zero digit occurs as the last digit. At the same time, we give an improvement, in the large prime case, to the limiting procedure of [@CPSK77] in the result below. Part (a) is already implicit in [@CPSK77] with a different bound, but (b) and (c) are new and at least theoretically interesting, since they give the best possible value for the increase required in $q$ to obtain stability; see the examples in Remark \[lastremark\] which follows.
\[lastcor\] Assume that $p>12m + 13h-16$, and let $\lambda,\mu\in X^+_r$. Let $q=p^r$.
\(a) The semilinear map $${{\text{\rm Ext}}}^m_G(L(\lambda)^{[1]},L(\mu)^{[1]})\to{{\text{\rm Ext}}}^m_G(L(\lambda)^{[e]},L(\mu)^{[e]})$$ is an isomorphism for every $e\geq 1$. In particular, $${{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda),L(\mu))\cong {{\text{\rm Ext}}}^m_G(L(\lambda)^{[1]},L(\mu)^{[1]}).$$
\(b) Put $q'=p^{r+1}$. Then the $q'$-shifts $\lambda'=\lambda^{[1]_{q'}}$, and $\mu'=\mu^{[1]_{q'}}$ satisfy $${{\text{\rm Ext}}}^m_{G(q')}(L(\lambda),L(\mu))\cong{{\text{\rm Ext}}}^m_{G(q')}(L(\lambda'),L(\mu'))\cong{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda'),L(\mu'))\cong{{\text{\rm Ext}}}^m_G(L(\lambda'),L(\mu')).$$ In addition, for any $s\geq 1$, the map $${{\text{\rm Ext}}}^m_{G(p^{r+s})}(L(\lambda'),L(\mu'))\to{{\text{\rm Ext}}}^m_{G(q')}(L(\lambda'),L(\mu'))$$ is an isomorphism as is $${{\text{\rm Ext}}}^m_{G(p^{r+s})}(L(\lambda),L(\mu))\to{{\text{\rm Ext}}}^m_{G(q')}(L(\lambda),L(\mu)).$$ In particular, the pair $(\lambda,\mu)$ is always $m$-generic at $q'=p^{r+1}$.
\(c) Let $M,N$ be finite dimensional rational $G$-modules whose composition factors are all $p^r$-restricted. Then, if $s\geq 1$, the natural restriction map $${{\text{\rm Ext}}}^n_{G,{\text{\rm gen}}}(M,N)\to {{\text{\rm Ext}}}^n_{G(p^{r+s})}(M,N)$$ is an isomorphism for $n\leq m$ and an injection for $n=m+1$.
We begin by remarking that (c) follows from (a) and (b): Note that Corollary \[cor6.4\] applies with $m$ replaced by $m+1$, checking the required condition on $p$. Applied to $(p\lambda,p\mu)$ and assuming (a), Corollary \[cor6.4\] gives an injection $${{\text{\rm Ext}}}^{m+1}_{G,{\text{\rm gen}}}(L(p\lambda),L(p\mu))\to{{\text{\rm Ext}}}^{m+1}_{G(q')}(L(p\lambda),L(p\mu)).$$ It follows that there is a corresponding injection with $(p\lambda,p\mu)$ replaced by $(\lambda,\mu)$. Now (c) follows from this latter injection and the last isomorphism in (b), valid also with $m$ replaced by any smaller integer. (This is a well-known 5-lemma argument needing only the injectivity for the degree $m+1$ maps.)
So it remains to prove (a) and (b). The first display in (b) follows from Theorem \[thm6.2\](c). We get a similar string of isomorphisms with $q'$ replaced by $p^{r+s}$. Note that $\lambda^{[1]_{p^{r+s}}}
=\lambda^{[1]_{q'}}=\lambda'$, with a similar equation for $\mu'$. (We use here the fact that $\lambda,\mu$ are $q$-restricted.) Now consider the following commutative diagram, where the vertical maps are restriction maps: $$\begin{CD}
{{\text{\rm Ext}}}^m_{G(p^{r+s})}(L(\lambda),L(\mu)) @>\sim>> {{\text{\rm Ext}}}^m_{G(p^{r+s})}(L(\lambda'),L(\mu'))@>\sim>>
{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda),L(\mu))\\
@VVV @VVV @|\\
{{\text{\rm Ext}}}^m_{G(q')}(L(\lambda),L(\mu)) @>\sim>> {{\text{\rm Ext}}}^m_{G(q')}(L(\lambda'),L(\mu'))@>\sim>>{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda),L(\mu))
\end{CD}$$ It follows easily that the two vertical maps (on the left) are isomorphisms, as required. This proves (b).
To prove (a), note that $\dim{{\text{\rm Ext}}}^m_G(L(\lambda)^{[1]},L(\mu)^{[1]})=\dim{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda'),L(\mu'))$ (by the first isomorphism in (b)) which equals $\dim{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda),L(\mu))$ by definition. However, the map $${{\text{\rm Ext}}}^m_G(L(\lambda)^{[1]}, L(\mu)^{[1]})\to{{\text{\rm Ext}}}^m_{G,{\text{\rm gen}}}(L(\lambda),L(\mu))$$ is injective by [@Dect]. So, by dimension considerations, it must be an isomorphism. Part (a) follows easily.
\[lastremark\] For an example of a pair $(\lambda,\mu)$ of $q$-restricted weights that is not $m$-generic or shifted $m$-generic at $q$ with $m=2$, see Remark \[needATwist\]. Even when $p$ is large, there are examples for fixed $\Phi$ (of type $A_1$) and fixed $m$ ($=2$) for arbitrarily large $r$. Of course, these examples have no zero digits in common.
[**Appendix**]{}
In this brief appendix, we consider the large prime generic cohomology results of [@FP86 §3] from the point of view of Theorem \[lastcor\] and the other results of this paper.
Suppose $q=p$ is prime and let $\mu\in X^+_1$. Assume $p> 12m + 13h- 16=2b+h$, as in Theorem \[thm6.2\](b). Taking $\lambda=0$, we thus get $$\label{appendixequal}
{\text{\rm H}}^m(G(p), L(\mu))\cong{{\text{\rm Ext}}}^m_G(L(\nu),L(\mu)\otimes L(\nu^*)^{[1]}),$$ for some $b$-small $\nu\in X^+$.
Necessarily $\mu+p\nu^*$ lies in the Jantzen region. If $\mu+p\nu^*\not\in W_p\cdot\mu$, then (\[appendixequal\]) vanishes. Otherwise, take $p$ larger, if necessary, so that the Lusztig character formula holds for $G$. Then the dimension of (\[appendixequal\]) equals the coefficient of a Kazhdan-Lusztig polynomial, since $L(\nu)\cong \Delta(\nu)$; see [@CPS93 §3].[^16] The Lusztig character formula is known to hold when $p\gg h$ [@AJS94], and is conjectured to be true for $p\geq h$.
In [@FP86 Prop. 3.2], it was observed that ${\text{\rm H}}^m(G(p), L(\mu))\cong H^m_{\text{\rm gen}}(G,L(\mu))$ if $p$ is sufficiently large, depending on the highest weight $\mu$ and the integer $m$. More precisely, in this case, taking $\mu$ to lie in the closure of the lowest $p$-alcove, [@FP86 Thm. 3.3, Cor. 3.4] gives a dimension formula, valid when $p$ is large, $$\label{formula}
\dim{\text{\rm H}}^m(G(p),L(\mu))=\begin{cases} 0, & m\ {\text{\rm odd}},\\
\sum_{w\in W} \det(w){\mathfrak p}_{m/2}(w\cdot\mu), & m\ {\text{\rm even}}.\end{cases}$$ Here $\mathfrak p$ is the Kostant partition function. It is interesting to rederive this result in our present context, and compare it with (\[appendixequal\]).
In addition to assuming that $p> 2b+h$, also assume that $p\geq(\mu,\alpha_0^\vee)
+h-1$. (The last condition just says $\mu$ lies in the closure of the lowest $p$-alcove, so that $L(\mu)\cong\nabla(\mu)$.) Theorem \[lastcor\](a) shows that ${\text{\rm H}}^m_{\text{\rm gen}}(G,L(\mu))\cong{\text{\rm H}}^m(G,L(\mu)^{[1]})\cong{\text{\rm H}}^m(G,\nabla(\mu)^{[1]})$. A formula for the dimension of the latter module is derived in [@CPS09 Prop. 4.2], which is precisely that given in the right-hand side of (\[formula\]).
Under the same conditions on $p$ as in the previous paragraph, but perhaps enlarging $p$ further, we claim there is an identification of ${\text{\rm H}}^m_{\text{\rm gen}}(G,L(\mu))$ with ${\text{\rm H}}^m(G(p^r),L(\mu))$, for all positive integers $r$. For $r\geq 2$, this claim follows from Theorem \[lastcor\](c). For the case $r=1$, we also require $\mu$ to be $b$-small, a condition on $p$ when $\mu$ is fixed.
Return now to (\[appendixequal\]) in the case of a $b$-small $\mu\in X^+$. By the argument for Theorem \[thm6.2\](b), there is no other $b$-small dominant weight $\widetilde W_p$-conjugate to $\mu$. Thus, we can assume $\nu=\mu$. Next, consider ${\text{\rm H}}^m_{\text{\rm gen}}(G,L(\mu))\cong{\text{\rm H}}^m(G,L(\mu)^{[1]})$ by Theorem \[lastcor\](a). If this generic cohomology is non-zero, then $\mu$ belongs to the root lattice. If translation to the principal block is applied to $L(\mu)\otimes L(\mu^*)^{[1]}$, using [@CPS09 Lemma 3.1], we obtain an irreducible module $L(\tau)\otimes L(\mu^*)^{[1]}$ with $\tau$ in the lowest $p$-alcove, and with highest weight also in $W_p\cdot 0$. Therefore, $\tau=0$. Clearly, translation to the principal block takes $L(\mu)$ to $L(0)$. Thus, $$\begin{aligned} {\text{\rm H}}^m(G(p),L(\mu))& \cong{{\text{\rm Ext}}}^m_G(L(\mu),L(\mu)\otimes L(\mu^*)^{[1]})\\
&\cong{{\text{\rm Ext}}}^m_G(L(0),L(\mu^*)^{[1]})\\
& \cong {\text{\rm H}}^m(G,L(\mu^*)^{[1]})\\
& \cong {\text{\rm H}}^m(G, L(\mu)^{[1]})\\ &\cong {\text{\rm H}}^m_{\text{\rm gen}}(G, L(\mu))\end{aligned}$$ in this case. If ${\text{\rm H}}^m_{\text{\rm gen}}(G,L(\mu))=0$, we claim that $H^m(G(p),L(\mu))=0$ also. Otherwise, $${{\text{\rm Ext}}}^m_G(L(\mu),L(\mu)\otimes L(\mu^*)^{[1]})\not=0.$$ Thus, $\mu$ and $\mu+p\mu^*$ belong to the same $W_p$-orbit, forcing $\mu^*$ to belong to the root lattice, again giving (by translation arguments) ${\text{\rm H}}^m(G(p),L(\mu))={\text{\rm H}}^m_{\text{\rm gen}}(G,L(\mu))=0$, a contradiction. This completes the proof of the claimed identification.
Finally, observe the answer we obtained for $\dim{\text{\rm H}}^m(G(p),L(\mu))$, for our rederivation of (\[formula\]), is, when $\mu$ is $b$-small and lies in the root lattice, a Kazhdan-Lusztig polynomial coefficient. (The dimension is independent of $p$ and has the form $\dim{{\text{\rm Ext}}}^m_G(\Delta(0),L(p\mu))$ with $p\mu$ in the Jantzen region.) In this case, the Kazhdan-Lusztig polynomial coefficient that gives the right-hand side of (\[appendixequal\]) is associated to ${{\text{\rm Ext}}}^m_G(\Delta(0),L(p\mu^*))$, which is the same coefficient. (Apply a graph automorphism.) When $p\mu$ is not in the root lattice, $\mu+p\mu^*$ is not in $W_p\cdot\mu$, and so the right-hand side of (\[appendixequal\]) is zero, as is $\dim{{\text{\rm Ext}}}^m_G(\Delta(0), L(p\mu))$. Consequently, the combinatorial determination of (\[appendixequal\]) is, for $b$-small $\mu$, the same as that for (\[formula\]), if Kazhdan-Lusztig polynomial coefficients are used in both cases.
But the determination of the dimension of (\[appendixequal\]) as a Kazhdan-Lusztig polynomial coefficient or zero applies for all restricted $\mu$, if $p\gg0$, not just for those that are $b$-small or lie in the closure of the bottom $p$-alcove. Thus, in some sense, the discussion above may be viewed as giving a generalization of the determination in [@FP86] of ${\text{\rm H}}^m(G(p),L(\mu))$ for $p$ sufficiently large, depending on $m$ and $\mu$.
[^1]: Research supported in part by the National Science Foundation
[^2]: The lower bound on $r$ here depends only on the root system $\Phi$ of $G$ and the cohomological degree $m$, and not on $p$ or $\lambda$. Moreover, this bound can be recursively determined.
[^3]: We ask in Question \[conjecture\] below if the ${{\text{\rm Ext}}}$-analog of the first isomorphism holds for all $q$-restricted $\lambda$ and $\mu$, though we know conditions on $\mu$ are needed for the second.
[^4]: As far as we know, the [@BNP06 Thm. 5.6] is the first use of $q$-shifted weights in a general homological theorem. However, this shifting (or wrapping) behavior for $SL_2$ had been observed much earlier: see [@AJL83 Cor. 4.5].
[^5]: The first proof that such a bound exists is unpublished, part of joint work in progress on homological bounds for finite groups of Lie type by the three authors of this paper together with C. Bendel, D. Nakano, and C. Pillen.
[^6]: In practice, this often happens when the stable limit defining the generic cohomology has already occurred at $q$; however, we do not make this part of the definition.
[^7]: The assumption that $\Phi$ is irreducible is largely a convenience. The reader can easily extend to the general case, i.e., when the group $G$ below is only assumed to be semisimple.
[^8]: The reader should keep in mind that ${{\text{\rm Ext}}}^m_G(L(\lambda),L(\mu))\cong {{\text{\rm Ext}}}^m_G(L(\mu),L(\lambda))$ and a similar statement holds for $G(q)$. Often we write the $L(\mu)$ on the left, because $\mu$ sometimes plays a special role (with restrictions of some kind), and taking $\mu=0$ gives ${\text{\rm H}}^m(G,L(\lambda))$. But we are not always consistent, as in some places where it is more convenient to have $L(\mu)$ on the right.
[^9]: Similarly, $\nabla_\zeta(\lambda)$ will denote the standard module of highest weight $\lambda$ in a category of modules for a quantum enveloping algebra $U_\zeta$; see Lemma \[filtrationlemma\].
[^10]: The first two authors expect to prove in a later paper that a good filtration of ${{\text{\rm Ext}}}^m_{G_r}(L(\lambda),L(\mu))^{[-r]}$ always exists for restricted regular weights when $r=1$, $p\geq 2h-2$ and the Lusztig character formula holds for all irreducible modules with restricted highest weights.
[^11]: For convenience, we are quoting Theorems \[digitsForG(q)\] and \[digits\] with $\lambda$ and $\mu$ in reverse order. This does not cause any problem.
[^12]: See the discussion above Theorem \[lastcor\] for the precise definition of a simultaneous $q$-shift.
[^13]: For larger $p$, $\nu$ is actually in the interior of the lowest $p$-alcove. In particular, this applies to parts (b) and (c) of the theorem.
[^14]: Apart from the use of $q$-shifts and different numerical bounds, the argument in this paragraph parallels that of [@BNP01 Thm. 7.5].
[^15]: Conceptually, the lowest $p$-alcove $C:=\{\lambda\in {\mathbb R}\otimes X\,|\, 0\leq(\lambda+\rho,\alpha_0^\vee)<p\}$ is the union of closed simplices conjugate under the finite subgroup $N$ of $\widetilde W_p$ stabilizing $C$. The group $N$ acts transitively and regularly on the interiors of these simplices. With our assumptions on the sizes of the various $(\nu,\alpha_0^\vee)$, the relevant dominant $\nu$ all belong to the interior of the “lowest" simplex—the one containing 0.
[^16]: As noted in [@BNP01 Thm. 7.5] and its proof, which also give the above isomorphism for large $p$, the right-hand side can be converted to a cohomology group (of an irreducible module) via a translation taking $\nu$ to $0$. Thus, the relevant Kazhdan-Lusztig polynomial has the form $P_{w_0,w_0w}$, where $l(w_0w)=l(w_0) + l(w)$, $w\in W_p$.
---
abstract: 'We consider the problem of modeling the dependence among many time series. We build high dimensional time-varying copula models by combining pair-copula constructions (PCC) with stochastic autoregressive copula (SCAR) models to capture dependence that changes over time. We show how the estimation of this highly complex model can be broken down into the estimation of a sequence of bivariate SCAR models, which can be achieved by using the method of simulated maximum likelihood. Further, by restricting the conditional dependence parameter on higher cascades of the PCC to be constant, we can greatly reduce the number of parameters to be estimated without losing much flexibility. We study the performance of our estimation method by a large scale Monte Carlo simulation. An application to a large dataset of stock returns of all constituents of the Dax 30 illustrates the usefulness of the proposed model class.'
address:
- 'Georges Lemaitre Centre for Earth and Climate Science, Earth and Life Institute, Université catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium.'
- 'Department of Mathematical Statistics, Technische Univerität München, Germany.'
- 'Department of Social and Economic Statistics, University of Cologne, 50937 Cologne, Germany.'
author:
- Carlos Almeida
- Claudia Czado
- Hans Manner
- 'Carlos Almeida, Claudia Czado and Hans Manner'
bibliography:
- 'MVSCAR\_bibliography.bib'
title: 'Modeling high dimensional time-varying dependence using D-vine SCAR models'
---
Stock return dependence, time-varying copula, D-vines, efficient importance sampling, sequential estimation
*JEL Classification:* C15, C51, C58
Introduction
============
The modeling of multivariate distributions is an important task for risk management and asset allocation problems. Since modeling the conditional mean of financial assets is rather difficult, if not impossible, much research has focused on modeling conditional volatilities and dependencies. The literature on multivariate GARCH [@BLR06] and stochastic volatility models [@HRS94; @YM06] offers many approaches to extend univariate volatility models to multivariate settings. However, usually the resulting multivariate model makes the assumption of (conditional) multivariate normality. Multivariate models based on copulas offer a popular alternative in that non-elliptical multivariate distributions can be constructed in a tractable and flexible way. The advantage of using copulas to construct multivariate volatility models is that one is free with the choice of the marginal model, i.e. the univariate volatility model, and that it is possible to capture possibly asymmetric dependencies in the tails of different pairs of the distributions. In particular, lower tail dependence often needs to be accounted for when measuring financial risks. Among many others, @P09 or @CLV04, and references therein, give an overview of copula based models in financial applications.
Two major drawbacks of most of the early applications of copula based models are that most studies focus on bivariate copulas only, limiting the potential for real world applications, and that the dependence parameter is assumed to be time-constant. This is in contrast to the empirically observed time-varying correlations. Each of these issues individually has been addressed in the literature in recent years. Larger dimensional copulas other than Gaussian or Student copulas have become available through the introduction of hierarchical Archimedean copulas by @ST10 and @OOS09, factor copula models by @OP11, or the class of pair copula constructions by @ACFB09. In particular the latter class, also called vine copula constructions, has become extremely popular because of its flexibility and because of the possibility of estimating the large number of parameters sequentially. Examples of financial applications of vine copula models are, e.g., @CHV08, @dissmann-etal, or @BrechmannCzado2011. Copulas with time-varying parameters have been introduced by @P06a to model changing exchange rate dependencies. Since then a number of studies have proposed different ways to specify time-varying copulas. For example, @DE04 test for structural breaks in copula parameters, @GHS09 use a sequence of breakpoint tests to identify intervals of constant dependence, @GT11 and @stoeber2011c use a regime-switching model for changing dependencies, @HR10 treat the copula parameter as a smooth function of time and estimate it by local maximum likelihood, whereas @HM10 and @AC10 proposed a model where the copula parameter is a transformation of a latent Gaussian autoregressive process of order one. An overview and comparison of time-varying copula models is given in @MR10. Only very few papers allow for time-varying parameters in larger dimensions. @CHV08 estimate a regime-switching vine copula and @HV08 allow the parameter of a vine copula to be driven by a variation of the DCC model by @E02.
The contribution of this paper is to extend the stochastic autoregressive copula (SCAR) model by @HM10 and @AC10 to practically relevant dimensions using D-vines. We discuss how the proposed model can be estimated sequentially using simulated maximum likelihood estimation. We also address how a number of conditional copulas can be restricted to be time-constant or independence copulas without restricting the flexibility of the model too much. Furthermore, we perform a large scale Monte Carlo study investigating the behavior of the proposed estimator and find that it shows an acceptable performance. In our empirical study we apply our model to the returns of 29 constituents of the German DAX 30 and find that the model performs quite well.
The remainder of the paper is structured as follows. The next section introduces copulas in general, SCAR models, D-vines copulas and shows how they can be combined to obtain the flexible class of D-vine SCAR models. Section \[Sec:Estimation\] treats the estimation of the proposed model, Section \[Sec:Simulations\] presents the results of our simulation study, Section \[Sec:Application\] contains the empirical application and Section \[Sec:Conclusion\] gives conclusions and outlines further research.
D-vine based SCAR models {#Sec:DVineSCAR}
========================
We are interested in modeling the joint (conditional) distribution of a d-dimensional time series $\bmy_t=(y_{1,t},...,y_{d,t})$ for $t=1,...,T$. We assume that each variable $y_{i,t}$ for $i=1,...,d$ follows an ARMA(p,q)-GARCH(1,1) process, i.e. $$\label{ARMA-GARCH}
y_{i,t} = \beta_{i,0} + \sum_{j=1}^p \beta_{i,j} y_{i,t-j} + \sum_{k=1}^q \delta_{i,k} \sigma_{i,t-k} \varepsilon_{i,t-k} + \sigma_{i,t} \varepsilon_{i,t}$$ with $$\sigma^2_{i,t}=\alpha_{i,0} + \alpha_{i,1} \varepsilon^2_{i,t-1} + \gamma_i \sigma^2_{i,t-1}.$$ The usual stationarity conditions are assumed to hold. Denote the joint distribution of the standardized innovations $\varepsilon_{i,t}$ by $G(\varepsilon_{1,t},...,\varepsilon_{d,t})$ and let their marginal distributions be $F_1(\varepsilon_{1,t}),...,F_d(\varepsilon_{d,t})$, respectively. Then by Sklar’s theorem there exists a copula $C$ such that $$\label{full-model}
G(\varepsilon_{1,t},...,\varepsilon_{d,t})=C(F_1(\varepsilon_{1,t}),...,F_d(\varepsilon_{d,t})).$$
Since all the marginal behavior is captured by the marginal distributions, the copula captures the complete contemporaneous dependence of the distribution. Let $u_{i,t}=F_i(\varepsilon_{i,t})$ be the innovations transformed to $U(0,1)$ random variables and define $\bmu_t:=(u_{1,t},...,u_{d,t})$.
In the remainder of this section we first show how time-varying dependence can be incorporated in bivariate copula models. Then we discuss how these models can be extended to arbitrary dimensions, for which we need the notion of vines.
Bivariate SCAR Copula Models
----------------------------
For now, consider the bivariate time series process $(u_{i,t}, u_{j,t})$ for $t=1,...,T$. We assume that its distribution is given by $$\label{biscarden}
(u_{i,t}, u_{j,t}) \sim C(\cdot,\cdot;\theta^{ij}_t)$$ with $\theta^{ij}_t \in \Theta$ the time-varying parameter of the copula $C$. In order to be able to compare copula parameters that have different domains, the copula can equivalently be parameterized in terms of Kendall’s $\tau \in (-1,1)$. This follows from the fact that for all copulas we consider there exists a one-to-one relationship between the copula parameter and Kendall’s $\tau$, which we express by $\theta_t^{ij}=r(\tau_t^{ij})$. We assume that $\tau^{ij}_t$ is driven by the latent Gaussian AR(1) process $\lambda_t^{ij}$ given by $$\lambda^{ij}_t=\mu_{ij}+\phi_{ij} (\lambda^{ij}_{t-1}-\mu_{ij})+\sigma_{ij} z_{ij,t},$$ where $z_{ij,t}$ are independent standard normal innovations. We further assume $|\phi_{ij}|<1$ for stationarity and $\sigma_{ij}>0$ for identification. Due to the fact that $\lambda^{ij}_t$ takes values on the real line we apply the inverse Fisher transform to map it into $(-1,1)$, the domain of $\tau^{ij}_t$: $$\label{eq:Inv_Fisher}
\tau^{ij}_t=\frac{\exp(2 \lambda^{ij}_t)-1}{\exp(2 \lambda^{ij}_t)+1}=:
\psi(\lambda^{ij}_t).$$
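To make the dynamics concrete, the following short sketch simulates one path of a bivariate SCAR model under the additional assumption of a Gaussian pair copula, for which the relation $r(\cdot)$ between Kendall's $\tau$ and the copula parameter is $\rho=\sin(\pi\tau/2)$; the function name, the parameter values and the use of Python are purely illustrative and not part of the model specification above.

```python
import numpy as np
from scipy.stats import norm

def simulate_bivariate_scar(T, mu, phi, sigma, seed=1):
    """Simulate (u_i, u_j) from a Gaussian-copula SCAR model (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lam = np.empty(T)
    # draw lambda_1 from the stationary distribution of the latent AR(1) process
    lam[0] = mu + sigma / np.sqrt(1.0 - phi**2) * rng.standard_normal()
    for t in range(1, T):
        lam[t] = mu + phi * (lam[t - 1] - mu) + sigma * rng.standard_normal()
    tau = np.tanh(lam)              # inverse Fisher transform psi(lambda_t)
    rho = np.sin(np.pi * tau / 2)   # Kendall's tau -> Gaussian copula parameter
    # draw from the Gaussian copula with time-varying correlation rho_t
    z1 = rng.standard_normal(T)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(T)
    return norm.cdf(z1), norm.cdf(z2), tau

u_i, u_j, tau_true = simulate_bivariate_scar(T=1000, mu=0.7, phi=0.95, sigma=0.1)
```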
D-vine Distributions and Copulas
--------------------------------
While copulas are recognized as a very powerful tool to construct multivariate distributions, in the past only the class of bivariate copulas (e.g. [@J97] and [@Nelsen]) was flexible enough to accommodate asymmetric and/or tail dependence without placing unrealistic restrictions on the dependence structure. Recently, pair copula constructions (PCC) have been found to be very useful to construct flexible multivariate copulas. Here a multivariate copula is built up with bivariate copula terms modeling unconditional and conditional dependencies. The first such construction was proposed in @Joe96. It was subsequently significantly extended to more general settings in @BC02, @Bedford2 and @vinebook. They called the resulting distributions regular (R) vines and explored them for the case of Gaussian pair copulas. The backbone is a graphical representation in the form of a sequence of linked trees identifying the indices which make up the multivariate copula. In particular, they proved that the corresponding pair copula densities make up a valid multivariate copula density. Further properties, estimation, model selection methods and their use in complex modeling situations can be found in @kuro:joe:2010.
@ACFB09 recognized the potential of this construction for statistical inference and developed a sequential estimation (SE) procedure, whose estimates can be used as starting values for maximum likelihood estimation (MLE). @BC02 identified two interesting subclasses of regular vines called D-vines and canonical (C)-vines. In the case of D-vines the sequence of vine trees consists of paths, while for C-vines they are starlike with a central node. This shows that C-vines are more useful for data situations where the importance of the variables can be ordered. This is not the case for the application we will present later; therefore we concentrate on D-vines. However, we would like to note that multivariate SCAR models can also be constructed based on C-vines and more generally on R-vines.
Notably, C- and D-vines can be introduced from first principles (e.g. [@cza:2010]). For this let $(X_1,...,X_d)$ be a set of variables with joint distribution $F$ and density $f$, respectively. Consider the recursive decomposition $$\begin{aligned}
f(x_1, \ldots,x_d)& = & \prod_{k=2}^d f(x_k|x_1,\ldots,x_{k-1}) \times
f(x_1). \label{decomp}\end{aligned}$$ Here $F(\cdot|\cdot)$ and later $f(\cdot|\cdot)$ denote conditional cdf’s and densities, respectively. As a second ingredient we utilize Sklar’s theorem for dimension $d=2$ to express the conditional density of $X_1$ given $X_2=x_2$ as $$\label{sklarcon}
f(x_1|x_2) = c_{1 2 }(F_1(x_1), F_2(x_2)) \times f_1(x_1),$$ where $ c_{1 2 }$ denotes an arbitrary bivariate copula density. For distinct indices $i,j,i_1,\cdots,i_k$ with $ i < j $ and $i_1<\cdots
<i_k$ we now introduce the abbreviation $$\label{abbrev}
c_{i,j|D}:=c_{i,j|D}(F(x_i|{{\bf x}}_D), F(x_j|{{\bf x}}_D)),$$ where $D:=\{i_1,\cdots,i_k\}$ and ${{\bf x}}_D:=(x_{i_1},
\ldots, x_{i_k})$. Using (\[sklarcon\]) for the conditional distribution of $(X_1,X_k)$ given $X_2=x_2,\ldots X_{k-1}=x_{k-1}$ we can express $f(x_k|x_1,\cdots,x_{k-1})$ recursively as $$\begin{aligned}
f(x_k|x_{1},\ldots,x_{k-1}) & = & c_{1,k|2:k-1}
\times f(x_{k}|x_2,\ldots, x_{k-1})\nonumber\\
& = & [\prod_{s=1}^{k-2}c_{s,k|s+1:k-1}]
\times c_{(k-1),k} \times f_k(x_k) \label{condichte},\end{aligned}$$ where $r:s:=(r,r+1,\ldots,s)$ for integers $r$ and $s$ with $r<s$. Using (\[condichte\]) in (\[decomp\]) and $s=i, k=i+j$ it follows that $$\begin{aligned}
f(x_1,\ldots,x_d)
& = &[ \prod_{j=1}^{d-1} \prod_{i=1}^{d-j}
c_{i,i+j|i+1:i+j-1}] \cdot[\prod_{k=1}^d
f_k(x_k)] \label{dvine}\end{aligned}$$ If the marginal distribution of $X_k$ are uniform for all $k=1,\cdots,d$, then we call the corresponding density in (\[dvine\]) a D-vine copula density and the corresponding distribution function a D-vine copula.
For illustration we consider a five dimensional D-vine, its density then given by $$\begin{aligned}
f(x_1,\cdots, x_5) & = & [\prod_{k=1}^5 f_k(x_k)] \cdot
c_{12} \cdot c_{23} \cdot c_{34} \nonumber \\
&\times &
c_{45}
\cdot c_{13|2}
\cdot c_{24|3} \cdot c_{35|4}
\cdot c_{14|23} \cdot c_{25|34} \cdot c_{15|234},
\label{dvine5}\end{aligned}$$ with the corresponding vine tree representation identifying the utilized indices given in Figure \[dvinetree\]. In particular the indices in Tree $T_1$ indicate the unconditional pair copulas, while Trees $T_2$ to $T_4$ correspond to conditional pair copulas, where the set of conditioning variables has size 1 to 3, respectively.
![A D-vine tree representation for $d=5$. []{data-label="dvinetree"}](dvine)
If $c_{i,i+j|i+1:i+j-1}$ models the dependence between the rv’s $F(X_i|{{\bf x}}_{i+1:i+j-1})$ and $F(X_{i+j}|{{\bf x}}_{i+1:i+j-1})$ we implicitly assume that the copula density $c_{i,i+j|i+1:i+j-1}(\cdot,\cdot)$ does not depend on the conditioning variables ${{\bf x}}_{i+1:i+j-1}$ other than through the arguments $F(X_i|{{\bf x}}_{{i+1}:
{i+j-1}})$ and $F(X_{i+j}|{{\bf x}}_{{i+1}:{i+j-1}})$. This is a common assumption and @haff-etal call this a simplified vine. They showed that this restriction is not severe by examining several examples.
In the D-vine representation given in (\[dvine\]) we also need a fast recursive way to compute conditional cdf’s which enter as arguments. For this @Joe96 showed that for $v \in D$ and $D_{-v}:=D
\setminus v$ $$\label{condDist}
F(x_j|{{\bf x}}_D) =
\frac{\partial\,C_{j,v|D_{-v}}(F(x_j|{{\bf x}}_{D_{-v}}),
F(x_v|{{\bf x}}_{D_{-v}}))}{\partial F(x_v|{{\bf x}}_{D_{-v}})}.$$ For the special case of $D=\{v\}$ it follows that $$F(x_j|x_v) =
\frac{\partial\,C_{j,v}(F(x_j),
F(x_v))}{\partial F(x_v)}.$$ In the case of uniform margins $u_j=F_j(x_j)$, for a parametric copula cdf $C_{jv}(u_j,u_v)=
C_{jv}(u_j,u_v;{\mbox{\boldmath$\theta$}}_{jv})$ this further simplifies to $$h(u_j|u_v,{\mbox{\boldmath$\theta$}}_{jv}):=\frac{\partial\,C_{j,v }(u_j,
u_v ;{\mbox{\boldmath$\theta$}}_{jv})}{\partial u_v}.
\label{hfunc}$$ With this notation we can express $F(x_j|{{\bf x}}_{D})$ as $$\begin{aligned}
F(x_j|{{\bf x}}_{D})
&=& h(F(x_j|{{\bf x}}_{D_{-v}})|F(x_v|{{\bf x}}_{D_{-v}}), {\mbox{\boldmath$\theta$}}_{jv|D_{-v}}).\end{aligned}$$ This allows the recursive determination of the likelihood corresponding to (\[dvine\]). Furthermore, the inverse of the $h$-functions is used to facilitate sampling from D- and C-vines (see for example [@ACFB09] and [@kurowicka-csda-2007]). They are also used for sampling from the more general R-vine model (see [@stoeberczado2011]).
D-vine based multivariate SCAR models
-------------------------------------
We now combine bivariate SCAR models and D-vines to formulate a multivariate D-vine SCAR model. For this we use a bivariate SCAR copula model as the pair copula model in a D-vine copula. This gives rise to the following definition of a D-vine SCAR copula density $$\label{mscar-den}
c(u_1,\cdots,u_d; \bm \theta_t):=
\prod_{j=1}^{d-1} \prod_{i=1}^{d-j}
c(F(u_i|\bm u_{i+1:i+j-1}),
F(u_{i+j}|\bm u_{i+1:i+j-1});\bm \theta^{l(i,j)}_t),$$ where $l(i,j):=i,i+j|i+1:i+j-1$ and $\bm \theta_t:=
\{\bm \theta^{l(i,j)}_t ; j=1,\cdots, d-1,
i=1,\cdots,d-j\}$ is the time-varying copula parameter vector. Here $c(\cdot,\cdot;\bm \theta^{l(i,j)}_t)$ is the bivariate copula density corresponding to the bivariate SCAR copula given in (\[biscarden\]), where $\bm \theta^{l(i,j)}_t$ satisfies $$\label{mscartheta}
\bm \theta^{l(i,j)}_t= r(\tau^{l(i,j)}_t)
=r(\psi(\lambda^{l(i,j)}_t))$$ for the latent Gaussian AR(1) process $\lambda^{l(i,j)}_t$ with $$\lambda^{l(i,j)}_t=\mu_{l(i,j)}+
\phi_{l(i,j)} (\lambda^{l(i,j)}_{t-1}
-\mu_{l(i,j)})+\sigma_{l(i,j)} z^{l(i,j)}_t.$$ Here $z_{t}^{l(i,j)}$ are independent standard normal innovations for $j=1,\cdots, d-1;
i=1,\cdots,d-j$. As for the bivariate case, we assume $|\phi_{l(i,j)}| <1$ and $\sigma_{l(i,j)} >0$ for stationarity and identification, respectively. The bivariate copula family corresponding to $l(i,j)$ can be chosen arbitrarily and independently of any other index $l(r,s)$.
The copula in (\[mscar-den\]) can be used in (\[full-model\]) to specify the joint distribution of the innovations in (\[ARMA-GARCH\]).
Parameter estimation in D-vine SCAR models {#Sec:Estimation}
==========================================
We are interested in estimating the parameters of both the marginal models and the stochastic copula models. The joint density of our model is given by the product of the marginal and the copula densities $$g(\varepsilon_{1,t},...,\varepsilon_{d,t})=c(F_i(\varepsilon_{1,t}),...,F_d(\varepsilon_{d,t}))\cdot
f_i(\varepsilon_{1,t})\cdot ... \cdot f_d(\varepsilon_{d,t}),$$ where $g$, $c$ and $f$ denote the densities of the joint distribution, the copula and the marginal distributions, respectively. Taking logarithms, we can see that the joint log-likelihood is the sum the marginal and the copula log-likelihood function. For estimation we ultilize a two-step approach common in copula based models. In this approach first the marginal parameters are estimated separately and standardized residuals are formed. These are transformed using either a parametric (see [@Joe2005]) or nonparametric probability integral transformation (see [@Genest1995]) to get an independent sample from a multivariate copula. These transformations do not change the dependence structure among the standardized residuals. This approach allows us to perform the estimation of the marginal and copula parameters separately. If the marginal models are chosen carefully, as we will do, then a parametric probability transformation is a good approximation to the true copula data $u_{i,t}=F_i(\varepsilon_{i,t})$. Problems only occur if the marginal models are grossly misspecified (see [@Kim2007]).
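A minimal sketch of the nonparametric (rank-based) probability integral transformation used in this two-step approach is given below; the variable and function names are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(residuals):
    """Rank-based PIT: map each column of standardized residuals to (0,1).

    residuals: array of shape (T, d); returns copula data of the same shape.
    """
    T = residuals.shape[0]
    return np.apply_along_axis(rankdata, 0, residuals) / (T + 1.0)
```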
Furthermore, we saw above that the density of a D-vine copula is the product of bivariate (conditional) copulas. Therefore, instead of estimating all copula parameters of our model in one step, which is computationally infeasible due to the large number of parameters, we are able to estimate the copula parameters sequentially.
In Section \[bi\_SCAR\_est\] the estimation of bivariate SCAR copula models by simulated maximum likelihood (SML) using efficient importance sampling (EIS) is reviewed, Section \[vine\_est\] presents the sequential estimation of vine copula models and in Section \[MV\_SCAR\_est\] we discuss how the sequential estimation of D-vine SCAR copula models can be achieved.
Estimation of bivariate SCAR copula models {#bi_SCAR_est}
------------------------------------------
For the moment, we are interested in estimating the copula parameter vector $\bm \omega:=(\mu, \phi, \sigma)$. For notational convenience we decided to drop the indices $i$ and $j$ whenever no ambiguity arises. Denote $\bm u_{i}=\{u_{i,t}\}_{t=1}^T$, $\bm u_{j}=\{u_{j,t}\}_{t=1}^T$ and $\bm
\Lambda=\{\lambda_{t}\}_{t=1}^T$ and let $f(\bm u_{i},\bm u_{j},\bm \Lambda;\bm \omega)$ be the joint density of the observable variables $(\bm u_i,\bm
u_j)$ and the latent process $\bm\Lambda$. Then the likelihood function of the parameter vector $\bm \omega$ can be obtained by integrating the latent process $\bm \Lambda$ out of the joint likelihood,
$$\begin{aligned}
L(\bm \omega;\bm u_{i},\bm u_{j})=\int f(\bm u_{i},\bm u_{j},\bm \Lambda;\bm \omega)d \bm\Lambda.\end{aligned}$$
We can alternatively write this as a product of conditional densities $$L(\bm \omega;\bm u_{i},\bm u_{j})=\int \prod_{t=1}^T f(u_{i,t}, u_{j,t}, \lambda_t|\lambda_{t-1},\bm \omega) d \bm \Lambda.$$
This is a T-dimensional integral that cannot be solved by analytical or numerical means. It can, however, be solved efficiently by Monte Carlo integration using a technique called efficient importance sampling introduced by @RZ07. The idea is to make use of an auxiliary sampler $m(\lambda_t;\lambda_{t-1},\bma_t)$ that utilizes the information on the latent process contained in the observable data. Note that it depends on the auxiliary parameter vector $\bma_t=(a_{1,t},a_{2,t})$. Multiplying and dividing by $m(\cdot)$, the likelihood can then be rewritten as $$\label{eq:EIS_LL_0}
L(\bm \omega;\bm u_{i},\bm u_{j})=\int \prod_{t=1}^T \left[\frac{f(u_{i,t}, u_{j,t}, \lambda_t|\lambda_{t-1},\bm \omega)}
{m(\lambda_t;\lambda_{t-1},\bma_t)}\right] \prod_{t=1}^T m(\lambda_t;\lambda_{t-1},\bma_t) d\bm \Lambda.$$ Drawing $N$ trajectories $\tilde{\bm \Lambda}^{(i)}$ from the importance sampler[^1] the likelihood can be estimated by $$\label{eq:EIS_LL}
\tilde{L}(\bm \omega;\bm u_{i},\bm u_{j})=\frac{1}{N}\sum_{i=1}^N \left(\prod_{t=1}^T \left[\frac{f(u_{i,t}, u_{j,t},
\tilde{\lambda}_t^{(i)}|\tilde{\lambda}_{t-1}^{(i)},\bm \omega)}{m(\tilde{\lambda}_t^{(i)};\tilde{\lambda}_{t-1}^{(i)},\bma_t)}\right]\right).$$ This leaves the exact choice of the importance sampler $m(\cdot)$ to be determined, which ideally should provide a good match between the numerator and the denominator of (\[eq:EIS\_LL\]) in order to minimize the variance of the likelihood function. It is chosen to be $$m(\lambda_t;\lambda_{t-1}, \bma_t) = \frac{k(\lambda_t, \lambda_{t-1};\bma_t)}{\chi(\lambda_{t-1};\bma_t)},$$ where $$\chi(\lambda_{t-1};\bma_t) = \int k(\lambda_t, \lambda_{t-1};\bma_t) d \lambda_t$$ is the normalizing constant of the auxiliary density kernel $k(\cdot)$. Furthermore, the choice $$k(\lambda_t, \lambda_{t-1};\bma_t) = p(\lambda_t| \lambda_{t-1},\bm \omega) \zeta(\lambda_t,\bma_t),$$ with $p(\lambda_t|\lambda_{t-1},\bm \omega)$ the conditional density of $\lambda_t$ given $\lambda_{t-1}$ and $\zeta(\lambda_t,\bma_t)= \exp(a_{1,t}\lambda_t + a_{2,t}\lambda_{t}^2)$ turns out to simplify the problem considerably. Noting that $f(u_{i,t}, u_{j,t}, \lambda_t|\lambda_{t-1},\bm \omega)=
c(u_{i,t}, u_{j,t}; \lambda_t) p(\lambda_t| \lambda_{t-1}, \bm \omega)$, the likelihood expression (\[eq:EIS\_LL\_0\]) can be rewritten as $$\label{EIS_LL2}
L(\bm \omega;\bm u_{i},\bm u_{j})=\int \prod_{t=1}^T \left[\frac{c(u_{i,t},
u_{j,t}; \lambda_t)\chi(\lambda_{t};\bma_{t+1})}{\exp(a_{1,t}\lambda_t +
a_{2,t}\lambda_{t}^2)}\right] \prod_{t=1}^T m(\lambda_t;\lambda_{t-1},\bma_t) d\bm\Lambda,$$ where we have used the fact that $\chi(\cdot)$ can be transferred back one period, because it does not depend on $\lambda_t$. Defining $\chi(\lambda_T;\bma_{T+1}) \equiv 1$ and given a set of trajectories $\tilde{\Lambda}^{(i)}$ for $i=1,\ldots,N$, minimizing the sampling variance of the quotient in the likelihood function is equivalent to solving the following linear least squares problem for each period $t=T,\ldots,1$, $$\label{eq:LS_problem}
\log c(u_{i,t}, u_{j,t}; \tilde{\lambda}_t^{(i)}) + \log \chi(\tilde{\lambda}_t^{(i)};\bma_{t+1}) = c_t + a_{1,t} \tilde{\lambda}_t^{(i)} + a_{2,t} [\tilde{\lambda}_t^{(i)}]^2 + \eta_t^{(i)}.$$ This problem can be solved by OLS with $c_t$ the regression intercept and $\eta_t^{(i)}$ the error term. Then the procedure works as follows. First, draw $N$ trajectories $\tilde{\Lambda}^{(i)}$ from $p(\lambda_t| \lambda_{t-1}, \omega)$ and estimate the auxiliary parameters $\hat{\bma}_t$ for $t=T,\ldots,1$ by solving (\[eq:LS\_problem\]). Next, draw $N$ trajectories $\tilde{\Lambda}^{(i)}$ from the importance sampler $m(\lambda_t;\lambda_{t-1}, \hat{\bma}_t)$ and re-estimate the auxiliary parameters $\{\hat{\bma}_t\}_{t=1}^T$. Iterate this procedure until convergence of $\{\hat{\bma}_t\}_{t=1}^T$ and use $N$ draws from the importance sampler to estimate the likelihood function (\[eq:EIS\_LL\]). This likelihood function can then be maximized to obtain parameter estimates $\hat{\bm \omega}$. Note that throughout the same random numbers have to be used in order to ensure convergence of $\{\hat{\bma}_t\}_{t=1}^T$ and smoothness of the likelihood function.
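The following sketch illustrates the structure of the simulated likelihood in (\[eq:EIS\_LL\]) for a Gaussian pair copula in the special case $\bma_t=0$, i.e. with the natural transition density $p(\lambda_t|\lambda_{t-1},\bm\omega)$ as importance sampler, which is also the starting point of the EIS iteration; the full EIS refinement via the period-wise OLS regressions is only indicated in a comment, and all names and tuning values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gauss_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho)."""
    x, y = norm.ppf(u), norm.ppf(v)
    return (np.exp(-(rho**2 * (x**2 + y**2) - 2.0 * rho * x * y)
                   / (2.0 * (1.0 - rho**2))) / np.sqrt(1.0 - rho**2))

def loglik_natural_sampler(omega, u_i, u_j, N=500, seed=42):
    """Monte Carlo estimate of the log-likelihood with the natural sampler.

    This is the special case a_t = 0: the trajectories are drawn from
    p(lambda_t | lambda_{t-1}, omega) itself, so the importance weight of a
    trajectory reduces to the product of copula densities along the path.
    The EIS refinement would re-estimate (a_{1,t}, a_{2,t}) by the period-wise
    OLS regressions described above and redraw from m(.) until convergence.
    """
    mu, phi, sigma = omega
    T = len(u_i)
    rng = np.random.default_rng(seed)   # common random numbers across evaluations
    lam = np.empty((N, T))
    lam[:, 0] = mu + sigma / np.sqrt(1.0 - phi**2) * rng.standard_normal(N)
    for t in range(1, T):
        lam[:, t] = mu + phi * (lam[:, t - 1] - mu) + sigma * rng.standard_normal(N)
    rho = np.sin(np.pi * np.tanh(lam) / 2.0)   # Gaussian pair copula: r(psi(lambda_t))
    logw = np.log(gauss_copula_density(u_i[None, :], u_j[None, :], rho)).sum(axis=1)
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))   # log of the averaged weights
```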
Although the parameter vector $\bm \omega$ driving the latent process is of some interest, ultimately one wishes to get estimates of the latent process $\bm \Lambda$ and transformations thereof. In particular, we are interested in estimating $\tau_t=\psi(\lambda_t)$ for $t=1,\ldots,T$, where $\psi(\cdot)$ denotes the inverse Fisher transform given in (\[eq:Inv\_Fisher\]). Smoothed estimates of $\psi(\lambda_t)$ given the entire history of the observable information $\bm u_i$ and $\bm u_j$ can be computed as $$\label{eq:EIS_psi}
E[\psi(\lambda_t)|\bm u_i,\bm u_j]=\frac{\int \psi(\lambda_t) f(\bm u_{i},\bm u_{j},\bm \Lambda;\bm \omega)d\bm \Lambda}{\int f(\bm u_{i},\bm u_{j},
\bm \Lambda;\omega)d\bm\Lambda}.$$ Note that the denominator in (\[eq:EIS\_psi\]) corresponds to the likelihood function and both integrals can be estimated using draws from the importance sampler $m(\lambda_t;\lambda_{t-1}, \hat{\bma}_t)$. Filtered estimates of $\psi(\lambda_t)$ given information until time $t-1$ can be computed in a similar way and details are given in @LR03.
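Given the final importance-sampling draws and log-weights from the sketch above (`fw` and `logw`), a self-normalised estimate of the smoothed path in (\[eq:EIS\_psi\]) takes only a few lines; here the inverse Fisher transform is again assumed to be $\tanh$.

``` r
## Smoothed E[psi(lambda_t) | u_i, u_j] for all t, reusing fw$lam and logw above.
w <- exp(logw - max(logw)); w <- w / sum(w)
psi_smoothed <- as.vector(t(w) %*% tanh(fw$lam))   # length-T path of the copula parameter
```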
Sequential estimation of D-vine copula parameters {#vine_est}
-------------------------------------------------
The form of the D-vine density given in (\[dvine\]) allows for a sequential parameter estimation approach that starts at the first tree and proceeds to the last tree. This was first proposed by @ACFB09 for D-vines and shown in detail for C-vines in @schepsmeier. First, the parameters of the pair copulas in the first tree are estimated, e.g., by maximum likelihood. For the copula parameters in the second tree, one first transforms the data with the $h$ function in (\[hfunc\]), i.e., the appropriate conditional cdf evaluated at the estimated first-tree parameters, to obtain the pseudo observations needed in the second tree. Using these pseudo observations, the parameters in the second tree are estimated, the pseudo data are transformed again using the $h$ function, and so on.
For example, suppose we want to estimate the parameters of the copula $c_{{13|2}}$. First transform the observations $\{u_{1,t},u_{2,t},u_{3,t}, t=1,\cdots,n\}$ to $u_{1|2,t}:=h(u_{1,t}|u_{2,t},\hat{\theta}_{12})$ and $u_{3|2,t}:=h(u_{3,t}|u_{2,t},\hat{\theta}_{23})$, where $\hat{\theta}_{12}$ and $\hat{\theta}_{23}$ are the estimated parameters in the first tree. Now estimate $\theta_{13|2}$ based on $\{u_{1|2,t},u_{3|2,t}; t=1,\cdots,n\}$. Continue sequentially until the copula parameters of all trees are estimated. For trees $T_i$ with $i \geq 2$ recursive applications of the $h$ functions are needed. Asymptotic normality of the SE has been established by @haff. However, the asymptotic covariance of the parameter estimates is very complex and one has to resort to bootstrapping to estimate the standard errors. SE is often used in high-dimensional problems, e.g. @mendes, @heinen:2008, @brechmann-etal and @BrechmannCzado2011. A joint MLE of all parameters in (\[dvine\]) requires high-dimensional optimization. Therefore SE estimates are often used as starting values as, e.g., in @ACFB09 and @schepsmeier.[^2]
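As an illustration of the transformation step, the following sketch computes the second-tree pseudo observations for the example above, assuming (purely for illustration) that the first-tree pair copulas are Gaussian with estimated correlations `rho12_hat` and `rho23_hat`; the $h$ function of the Gaussian copula has the closed form used below.

``` r
## h function of the bivariate Gaussian copula: h(u1 | u2; rho) = P(U1 <= u1 | U2 = u2).
h_gauss <- function(u1, u2, rho) {
  pnorm((qnorm(u1) - rho * qnorm(u2)) / sqrt(1 - rho^2))
}
u1_given_2 <- h_gauss(u1, u2, rho12_hat)   # pseudo observations u_{1|2,t}
u3_given_2 <- h_gauss(u3, u2, rho23_hat)   # pseudo observations u_{3|2,t}
## theta_{13|2} is then estimated from the sample {(u1_given_2[t], u3_given_2[t])}.
```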
A final issue to be discussed is model selection for D-vine copula models. For non Gaussian pair copulas, permutations of the ordering of the variables give different D-vine copulas. In fact, there are $d!/2$ different D-vine copulas when a common bivariate copula is used as pair copula type. Often the bivariate Clayton, Gumbel, Gauss, t, Joe and Frank copula families are utilized as choices for pair copula terms. However, in this study we restrict the attention to the Gauss, Gumbel, Clayton and rotated versions thereof.
Estimation of D-vine SCAR models {#MV_SCAR_est}
--------------------------------
In principle, estimation of the D-vine SCAR model given in (\[mscar-den\]) works the same way as for static D-vine copulas. There are, however, two important differences. First of all, given that the bivariate SCAR models in the first tree have been estimated, it is not possible to apply the $h$ function given in (\[hfunc\]) directly to obtain the pseudo observations that are needed to obtain the parameters on the second tree. The reason is that one only obtains parameter estimates of the hyper-parameters $(\mu, \phi, \sigma)$, but not of the latent (time-varying) copula parameters $\theta_t$. We do, however, have $N$ simulated trajectories $\tilde{\theta}^{(i)}_t$ from the importance sampler. With these we can calculate the pseudo observations by $$u_{j|v,t}=\frac{1}{N}\sum_{i=1}^N h(u_{j,t}|u_{v,t}, \tilde{\theta}_t^{(i)}),$$ where we suppress the dependence of $\tilde{\theta}$ on the variable indices $j$ and $v$ for notational reasons.[^3] The second difference to the static D-vine copula model is that one-step estimation by MLE is computationally not feasible for the time-varying model, because each bivariate likelihood function needs to be computed by simulation.
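A sketch of this averaging step, again written for a Gaussian pair copula with `h_gauss` from above: `theta_traj` stands for the $N \times T$ matrix of simulated latent copula parameters (on the correlation scale) returned by the importance sampler for the pair $(j,v)$; the variable names are illustrative.

``` r
## Pseudo observations u_{j|v,t} obtained by averaging h over the N latent trajectories.
u_j_given_v <- sapply(seq_len(ncol(theta_traj)), function(t) {
  mean(h_gauss(u_j[t], u_v[t], theta_traj[, t]))
})
```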
For a $d$-dimensional dataset the D-vine SCAR copula has $3d(d-1)/2$ parameters to be estimated, three for each of the $d(d-1)/2$ pair copulas. Fortunately, we can reduce the number of parameters to be estimated by placing a number of restrictions. Similar to the tail properties of D-vines studied in @joe2010, the choice of time-varying pair copulas in the first tree propagates to the whole distribution; in particular, all pairs of variables have an induced time-varying Kendall’s tau. We expect estimation errors to increase for parameters in higher trees because of the sequential nature of the estimation procedure. Therefore we allow for D-vine SCAR copula models where the pair copulas are time-varying only in lower trees, while the pair copulas are time-constant for higher trees. Such models will also be investigated in our simulation study.
A second useful restriction is to allow for the possibility of truncating the D-vine copula, which means that we set all pair copulas beyond a certain tree equal to the independence copula. This is empirically justified, since the dependence in the lower trees seems to capture most of the overall dependence in the data and the conditional dependence in higher trees is hardly visible. Note that this also allows the estimation of our model in arbitrarily large dimensions, as we will only need to estimate (bivariate) models up to a certain dimension and can truncate the model thereafter. For static models this has been followed by @brechmann-etal and includes tests at which level to truncate.
In order to decide which copula family to use and whether to use time-varying, time-constant or independence copulas at certain levels we compare the Bayesian Information Criterion (BIC) for all competing models. We decided for this information criterion, because it favors parsimonious models. Given the high flexibility of the D-vine SCAR model and the difficulty to estimate the parameters at higher level, we believe that parsimony is crucial.
Computational issues
--------------------
Estimation of a bivariate SCAR model by simulated maximum likelihood programmed in MATLAB can take up to several minutes on a standard computer. As we consider a Monte Carlo simulation and an application with 29 variables, this is much too slow. However, the problem at hand lends itself to parallel computing. On each tree one has to estimate a large number of bivariate models independently of each other. Therefore, given a sufficient number of processing cores we can estimate all models on one tree at the same time and proceed to the next tree once all models are estimated. Depending on the dimension of the problem, this can lead to immense increases in computing speed. The computations for the Monte Carlo study in the next section and our empirical application were performed on a Linux cluster computer with 32 processing cores (Quad Core AMD Opteron, 2.6 GHz). The most demanding computational task, the estimation of the log-likelihood function by EIS, is implemented in C++, which resulted in our code being about 20-30 times faster compared to MATLAB code. The maximization of the likelihood and the parallel computation within levels is implemented in R (version 2.12.1) by using the `optim` function and the multicore library.
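A sketch of the within-tree parallelisation in R is given below. The `multicore` interface used in the paper is nowadays provided by the `parallel` package; `pair_list` (the list of data pairs on the current tree), `start_values`, and the reuse of `eis_loglik` from the earlier sketch are illustrative assumptions rather than the authors' actual code.

``` r
## Fit all pair copulas of one tree in parallel, one core per bivariate model.
library(parallel)
tree_fits <- mclapply(pair_list, function(p) {
  optim(start_values,
        function(omega) -eis_loglik(omega, p$u1, p$u2),
        method = "Nelder-Mead")
}, mc.cores = 32)
```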
Simulation study {#Sec:Simulations}
================
To investigate the performance of the sequential estimation utilizing EIS for D-vine SCAR copula models we conducted an extensive simulation study.[^4] For this we chose a four dimensional setup. We simulated 1000 four dimensional SCAR D-vine copula data sets of length 1000 under several scenarios. Then we estimated the relative bias and relative MSE for the parameters of the latent AR(1) processes corresponding to each of the six bivariate copula terms of the D-vine copula. Standard errors are also estimated and given in parentheses in the tables.
We expect that the stationary latent AR(1) signal-to-noise ratio given by $sn:=\frac{\mu}{\sigma (1-\phi^2)^{-1/2}}$ and the stationary variance $avar:= \frac{\sigma^2}{1-\phi^2}$ will influence the performance. Therefore we include these quantities as well. The copula families we consider are the Gaussian, Clayton and Gumbel copulas.
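For reference, the two latent-process settings used in the tables translate into the following signal-to-noise ratios and stationary variances (a small R check of the quantities just defined):

``` r
sn   <- function(mu, phi, sigma) mu * sqrt(1 - phi^2) / sigma   # = mu / sd(lambda_t)
avar <- function(phi, sigma) sigma^2 / (1 - phi^2)              # stationary variance
round(c(sn(.50, .95, .15), avar(.95, .15)), 2)   # 1.04 0.23
round(c(sn(.50, .95, .05), avar(.95, .05)), 2)   # 3.12 0.03
```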
In the following we present the small sample performance results for scenarios involving a single bivariate copula family for all pair copula terms and a common time-varying parameter structure for all pair copula terms (Section \[Sec:sim1\]), a single bivariate copula family for all pair copula terms and a common time-varying parameter structure for only pair copula terms in the first tree (Section \[Sec:sim2\]), and different time-varying structures for all or only the first tree pair copulas and/or different copula families (Section \[Sec:sim3\]). In Section \[Sec:sim4\] we draw conclusions from our simulation study relevant for the subsequent application of our model.
Common copula families and common time-varying structure for all pair copulas {#Sec:sim1}
-----------------------------------------------------------------------------
The results in Table \[sim-common\] show satisfactory results for all latent AR(1) parameters of unconditional pair copula terms, i.e. the terms with indices $12, 23, 34$. The mean parameters $\mu$ and the standard error parameters $\sigma$ of the pair copulas in trees 2 to 3 are also well estimated in scenarios where the asymptotic AR(1) signal-to-noise ratio $sn$ is large. The estimation of the persistence parameter $\phi$ is less effected by the value of the signal-to-noise ratio. By the sequential nature of the estimation procedure we expect the performance to deteriorate for parameters with increasing conditioning set. This behavior is also visible. In particular, the estimation on the second tree is still reasonably precise for most cases, but on the third level the results become significantly worse. Comparing the results across different copula families one can generally state that for the Gaussian copula the results are best, closely followed by the Gumbel copula. For the Clayton copula the estimation is most imprecise both in terms of relative bias and MSE.
--------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
Index   true values   rel. bias (se)   rel. MSE (se)   sn   avar
$\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$
12 .50 .95 .15 .0298 .0071 -.0116 .0008 -.0761 .0033 .0503 .0041 .0008 .0001 .0166 .0009 1.04 0.23
23 .50 .95 .15 .0361 .0075 -.0123 .0009 -.0743 .0035 .0565 .0048 .0009 .0001 .0179 .0011 1.04 0.23
34 .50 .95 .15 .0335 .0077 -.0108 .0008 -.0800 .0034 .0603 .0059 .0008 .0001 .0176 .0008 1.04 0.23
13$|$2 .50 .95 .15 -.1036 .0051 -.0061 .0006 -.2112 .0036 .0362 .0016 .0004 .0000 .0573 .0016 1.04 0.23
24$|$3 .50 .95 .15 -.0997 .0054 -.0064 .0006 -.2154 .0034 .0385 .0016 .0003 .0000 .0581 .0016 1.04 0.23
14$|$23 .50 .95 .15 -.4090 .0042 -.0215 .0011 -.3230 .0050 .1846 .0035 .0017 .0001 .1286 .0031 1.04 0.23
12 .50 .95 .15 .0580 .0084 -.0092 .0011 -.0372 .0040 .0682 .0103 .0012 .0001 .0162 .0011 1.04 0.23
23 .50 .95 .15 .0743 .0085 -.0096 .0011 -.0384 .0041 .0723 .0095 .0013 .0002 .0172 .0014 1.04 0.23
34 .50 .95 .15 .0592 .0084 -.0086 .0010 -.0394 .0040 .0684 .0094 .0011 .0001 .0165 .0012 1.04 0.23
13$|$2 .50 .95 .15 -.1915 .0047 -.0110 .0008 -.2585 .0040 .0572 .0022 .0007 .0001 .0818 .0022 1.04 0.23
24$|$3 .50 .95 .15 -.1956 .0048 -.0115 .0007 -.2615 .0038 .0596 .0022 .0006 .0000 .0816 .0021 1.04 0.23
14$|$23 .50 .95 .15 -.5676 .0036 -.0379 .0019 -.4283 .0059 .3338 .0039 .0048 .0007 .2148 .0046 1.04 0.23
12 .50 .95 .15 .0210 .0063 -.0036 .0006 -.0220 .0036 .0393 .0024 .0004 .0001 .0133 .0007 1.04 0.23
23 .50 .95 .15 .0258 .0064 -.0045 .0006 -.0238 .0039 .0417 .0025 .0004 .0001 .0154 .0015 1.04 0.23
34 .50 .95 .15 .0142 .0064 -.0038 .0006 -.0267 .0037 .0409 .0032 .0004 .0000 .0143 .0006 1.04 0.23
13$|$2 .50 .95 .15 -.1196 .0053 -.0045 .0006 -.2187 .0040 .0424 .0024 .0004 .0000 .0635 .0018 1.04 0.23
24$|$3 .50 .95 .15 -.1168 .0052 -.0050 .0006 -.2217 .0038 .0405 .0018 .0004 .0000 .0633 .0018 1.04 0.23
14$|$23 .50 .95 .15 -.4553 .0040 -.0220 .0012 -.3703 .0052 .2234 .0036 .0020 .0002 .1643 .0037 1.04 0.23
12 .50 .95 .05 -.0023 .0022 -.0177 .0017 .0410 .0112 .0050 .0003 .0033 .0005 .1325 .0097 3.12 0.03
23 .50 .95 .05 .0014 .0023 -.0169 .0020 .0459 .0107 .0055 .0003 .0043 .0021 .1227 .0081 3.12 0.03
34 .50 .95 .05 .0029 .0023 -.0164 .0016 .0506 .0109 .0057 .0003 .0030 .0005 .1274 .0105 3.12 0.03
13$|$2 .50 .95 .05 .0057 .0023 -.0192 .0022 .0704 .0111 .0058 .0003 .0053 .0031 .1354 .0081 3.12 0.03
24$|$3 .50 .95 .05 .0013 .0022 -.0214 .0020 .0692 .0123 .0053 .0003 .0047 .0010 .1636 .0144 3.12 0.03
14$|$23 .50 .95 .05 -.0740 .0023 -.0310 .0029 .0417 .0138 .0109 .0006 .0100 .0036 .2024 .0154 3.12 0.03
12 .50 .95 .05 -.0028 .0022 -.0163 .0016 .0465 .0105 .0049 .0003 .0028 .0005 .1161 .0086 3.12 0.03
23 .50 .95 .05 .0011 .0022 -.0155 .0014 .0586 .0104 .0052 .0002 .0021 .0003 .1170 .0091 3.12 0.03
34 .50 .95 .05 .0030 .0023 -.0141 .0012 .0531 .0098 .0054 .0003 .0017 .0002 .1029 .0067 3.12 0.03
13$|$2 .50 .95 .05 -.0365 .0021 -.0162 .0013 -.0321 .0099 .0060 .0003 .0021 .0002 .1037 .0061 3.12 0.03
24$|$3 .50 .95 .05 -.0415 .0021 -.0157 .0014 -.0528 .0103 .0061 .0003 .0024 .0004 .1134 .0069 3.12 0.03
14$|$23 .50 .95 .05 -.2252 .0019 -.0727 .0043 .1145 .0175 .0544 .0009 .0249 .0043 .3317 .0199 3.12 0.03
12 .50 .95 .05 -.0021 .0023 -.0227 .0021 .0929 .0139 .0053 .0003 .0050 .0009 .2073 .0182 3.12 0.03
23 .50 .95 .05 .0015 .0024 -.0237 .0026 .1005 .0136 .0060 .0004 .0075 .0024 .2022 .0198 3.12 0.03
34 .50 .95 .05 .0020 .0025 -.0222 .0022 .1045 .0137 .0063 .0006 .0054 .0013 .2043 .0170 3.12 0.03
13$|$2 .50 .95 .05 -.0084 .0024 -.0190 .0017 .0459 .0127 .0062 .0004 .0032 .0005 .1695 .0131 3.12 0.03
24$|$3 .50 .95 .05 -.0131 .0023 -.0205 .0021 .0332 .0137 .0056 .0003 .0048 .0011 .1940 .0177 3.12 0.03
14$|$23 .50 .95 .05 -.1324 .0022 -.0245 .0021 -.0670 .0137 .0227 .0008 .0053 .0013 .1980 .0145 3.12 0.03
--------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
: Estimated relative bias and relative MSE with estimated standard errors for four dimensional D-vine SCAR copulas with common bivariate copula family and common time-varying structure for all pair copulas.[]{data-label="sim-common"}
Common copula families and common time-varying structure for pair copula terms in the first tree only {#Sec:sim2}
-----------------------------------------------------------------------------------------------------
The small sample performance for scenarios where only the copula parameter for pair copulas in the first tree is time-varying, reported in Table \[sim-common-first\], is quite satisfactory for all models except for the mean parameters $\mu$ in the third tree. This was to be expected by the results from the previous section. Note, however, that mean dependence, measured by the parameter $\mu$, in the second and third tree is lower than in the above scenario. We can therefore conclude that the precision in estimating $\mu$ decreases when the degree of dependence decreases. When the overall dependence is comparable, as is the case for the pair 13$|$2 in the second tree, we see that the parameter $\mu$ is estimated much more precisely than for the time-varying case. Furthermore, note that the average relative bias on the higher trees is positive for the copulas with lower $\mu$ parameter.
--------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
Index   true values   rel. bias (se)   rel. MSE (se)   sn   avar
$\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$
12 .50 .95 .15 .0395 .0083 -.0109 .0009 -.0827 .0035 .0689 .0075 .0009 .0001 .0192 .0011 1.04 0.23
23 .50 .95 .15 .0291 .0070 -.0127 .0009 -.0721 .0033 .0493 .0034 .0010 .0001 .0158 .0007 1.04 0.23
34 .50 .95 .15 .0263 .0078 -.0113 .0008 -.0763 .0034 .0598 .0055 .0008 .0001 .0171 .0009 1.04 0.23
13$|$2 .50 .00 .00 -.0156 .0012 - - - - .0016 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0416 .0022 - - - - .0067 .0003 - - - - Inf 0
14$|$23 .20 .00 .00 .2185 .0039 - - - - .0624 .0017 - - - - Inf 0
12 .50 .95 .15 .0781 .0088 -.0077 .0011 -.0403 .0041 .0760 .0095 .0011 .0002 .0166 .0011 1.04 0.23
23 .50 .95 .15 .0712 .0088 -.0110 .0012 -.0326 .0041 .0751 .0115 .0014 .0002 .0162 .0012 1.04 0.23
34 .50 .95 .15 .0544 .0091 -.0091 .0011 -.0347 .0045 .0777 .0103 .0012 .0002 .0192 .0014 1.04 0.23
13$|$2 .50 .00 .00 -.0152 .0012 - - - - .0015 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0424 .0023 - - - - .0067 .0003 - - - - Inf 0
14$|$23 .20 .00 .00 .2237 .0043 - - - - .0671 .0020 - - - - Inf 0
12 .50 .95 .15 .0282 .0063 -.0043 .0007 -.0259 .0038 .0404 .0022 .0005 .0001 .0153 .0007 1.04 0.23
23 .50 .95 .15 .0243 .0065 -.0055 .0007 -.0193 .0040 .0421 .0023 .0005 .0001 .0161 .0010 1.04 0.23
34 .50 .95 .15 .0114 .0069 -.0039 .0006 -.0251 .0039 .0473 .0055 .0004 .0000 .0157 .0013 1.04 0.23
13$|$2 .50 .00 .00 -.0070 .0011 - - - - .0013 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0500 .0022 - - - - .0074 .0003 - - - - Inf 0
14$|$23 .20 .00 .00 .2346 .0038 - - - - .0696 .0018 - - - - Inf 0
12 .50 .95 .05 -.0012 .0025 -.0168 .0020 .0465 .0112 .0064 .0007 .0045 .0024 .1331 .0105 3.12 0.03
23 .50 .95 .05 .0032 .0023 -.0179 .0017 .0575 .0108 .0054 .0003 .0034 .0009 .1266 .0087 3.12 0.03
34 .50 .95 .05 .0037 .0022 -.0160 .0018 .0455 .0106 .0052 .0002 .0036 .0014 .1191 .0079 3.12 0.03
13$|$2 .50 .00 .00 .0059 .0012 - - - - .0015 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0295 .0021 - - - - .0053 .0002 - - - - Inf 0
14$|$23 .20 .00 .00 .2502 .0033 - - - - .0740 .0017 - - - - Inf 0
12 .50 .95 .05 -.0014 .0024 -.0191 .0023 .0631 .0107 .0060 .0008 .0060 .0023 .1233 .0132 3.12 0.03
23 .50 .95 .05 .0020 .0022 -.0155 .0013 .0573 .0099 .0051 .0002 .0019 .0002 .1061 .0068 3.12 0.03
34 .50 .95 .05 .0041 .0022 -.0135 .0012 .0461 .0096 .0052 .0002 .0016 .0002 .0994 .0070 3.12 0.03
13$|$2 .50 .00 .00 .0061 .0012 - - - - .0015 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0284 .0021 - - - - .0052 .0002 - - - - Inf 0
14$|$23 .20 .00 .00 .2482 .0034 - - - - .0740 .0018 - - - - Inf 0
12 .50 .95 .05 -.0010 .0026 -.0211 .0022 .0902 .0136 .0069 .0007 .0056 .0023 .2006 .0163 3.12 0.03
23 .50 .95 .05 .0036 .0023 -.0247 .0026 .1116 .0135 .0054 .0002 .0079 .0031 .2035 .0144 3.12 0.03
34 .50 .95 .05 .0044 .0024 -.0223 .0023 .0964 .0133 .0058 .0004 .0061 .0020 .1935 .0161 3.12 0.03
13$|$2 .50 .00 .00 .0042 .0012 - - - - .0014 .0001 - - - - Inf 0
24$|$3 .30 .00 .00 .0262 .0021 - - - - .0051 .0002 - - - - Inf 0
14$|$23 .20 .00 .00 .2472 .0034 - - - - .0730 .0018 - - - - Inf 0
--------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
: Estimated relative bias and relative MSE with estimated standard errors for four dimensional D-vine SCAR copulas with common bivariate copula family and common time-varying structure for all first tree pair copulas.[]{data-label="sim-common-first"}
Different time-varying structure and mixed or common pair copula families {#Sec:sim3}
-------------------------------------------------------------------------
Since many scenarios are possible in this setup, we restrict ourselves to a three dimensional setup assuming all pair copulas time-varying with common pair copula family or different pair copula families. In particular, we vary the persistence parameter, which is now lower for one copula on the first tree and the copula on the second tree. Again we report estimated relative bias and relative MSE together with the signal-to-noise and stationary variance of the latent AR(1) process in Table \[sim-mixed\]. As before, the results are based on 1000 simulated data sets of length 1000.
The results in Table \[sim-mixed\] show only a moderate influence of different time-varying structures or different pair copula families. It is notable that lower persistence leads to more precise estimation of $\mu$ and $\phi$ and less precision for $\sigma$, but that effect can mainly be explained by the changed signal-to-noise ratio. Again parameters of the higher tree are less well estimated compared to the ones in the first tree.
-------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
Index   true values   rel. bias (se)   rel. MSE (se)   sn   avar
$\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$   $\mu$ $\phi$ $\sigma$
12 .50 .85 .15 -.0064 .0023 .0035 .0015 -.0864 .0045 .0057 .0003 .0023 .0001 .0280 .0013 1.76 0.08
23 .50 .95 .15 .0403 .0073 -.0117 .0009 -.0774 .0035 .0567 .0050 .0009 .0001 .0190 .0010 1.04 0.23
13$|$2 .50 .85 .15 -.0379 .0021 -.0022 .0020 -.1801 .0049 .0059 .0003 .0041 .0003 .0577 .0020 1.76 0.08
12 .50 .85 .15 -.0026 .0023 -.0014 .0014 -.0541 .0044 .0053 .0002 .0021 .0001 .0221 .0010 1.76 0.08
23 .50 .95 .15 .0696 .0075 -.0091 .0010 -.0335 .0041 .0608 .0079 .0011 .0002 .0179 .0015 1.04 0.23
13$|$2 .50 .85 .15 -.1193 .0018 -.0067 .0022 -.2900 .0049 .0176 .0005 .0048 .0003 .1085 .0029 1.76 0.08
12 .50 .85 .15 -.0034 .0024 -.0053 .0017 -.0410 .0054 .0058 .0003 .0030 .0002 .0312 .0014 1.76 0.08
23 .50 .95 .15 .0245 .0061 -.0038 .0006 -.0254 .0037 .0393 .0021 .0004 .0001 .0145 .0008 1.04 0.23
13$|$2 .50 .85 .15 -.0588 .0020 .0036 .0020 -.2254 .0054 .0078 .0003 .0043 .0003 .0806 .0026 1.76 0.08
12 .50 .85 .15 -.0018 .0023 -.0013 .0014 -.0545 .0043 .0053 .0002 .0021 .0001 .0222 .0010 1.76 0.08
23 .50 .95 .15 .0290 .0064 -.0031 .0006 -.0270 .0037 .0430 .0025 .0004 .0001 .0146 .0007 1.04 0.23
13$|$2 .50 .85 .15 -.0372 .0020 -.0038 .0022 -.1854 .0050 .0057 .0002 .0049 .0005 .0606 .0020 1.76 0.08
-------- ------- -------- ---------- -------- ------- -------- ------- -------- ------- ------- ------- ------- ------- ------- ------- ------ ------
: Estimated relative bias and relative MSE together with estimated standard errors for selected scenarios of three-dimensional SCAR D-vine copulas assuming different time-varying structures and common or different pair copula families.[]{data-label="sim-mixed"}
Conclusions from the simulation study {#Sec:sim4}
-------------------------------------
Overall we saw that the estimation becomes worse on higher trees and that this effect is stronger when we allow for time variation. The relative imprecision also increases with lower overall dependence. For our application we can conclude that it may be sensible to restrict the time variation to the first few trees and restrict the copula parameter to be constant beyond. Furthermore, placing the pairs with the largest dependence on the first tree as is commonly done for D-vine copula models is expected to provide the most precise estimates for the time variation in dependence, which is what we are mainly interested in here.
Application {#Sec:Application}
===========
In this section we provide an empirical illustration of the D-vine SCAR model. The dataset we consider are daily returns from 29 stocks listed in the DAX30 index during the period from the 1$^{st}$ of January 2007 to the 14$^{th}$ of June 2011, giving a total of 1124 observations. A list of the included companies is given in the Appendix. We decided for this dataset to find a balance between demonstrating the possibility of high dimensional modeling and the ability to still present the main results. Nevertheless, one could in principle consider much larger dimensions, which we leave for future research.
The conditional mean of the returns was modeled using an ARMA(1,1) model for each stock separately. The Ljung-Box statistics for the residuals revealed no significant remaining autocorrelation. For the conditional variance we considered GARCH(1,1) models with Student t innovations. Although one may argue that GARCH models that allow for the leverage effect, such as the GJR-GARCH, are appropriate for many individual stocks, preliminary results suggested that the results for the dependence modeling are not affected by this choice. Results for the marginal models are not reported here for brevity but are available upon request.
Next, we estimated the D-vine SCAR model on the transformed standardized residuals. The ordering of the variables was chosen by maximizing the overall pairwise dependence measured by Kendall’s tau: first, choose the pair of variables with the highest empirical Kendall’s $\tau$; second, among the remaining variables connect the one with the highest pairwise Kendall’s $\tau$ to one of the previously chosen variables, and proceed in a similar fashion until all variables are connected (a sketch of this heuristic is given below). This is the common strategy for D-vine copulas and is also motivated by our simulation results. In particular, we expect to capture the overall time variation of the dependence as well as possible with this choice, as it turns out that time variation is most relevant on the first tree. For each bivariate (conditional) copula model we then face two important choices, namely whether dependence is time-varying or static, and which copula family to use. We automatically select the model by first estimating time-varying and constant copulas from the following families: Gumbel (G), survival Gumbel (SG), Clayton (C), survival Clayton (SC), Normal (N) and the independence copula (I)[^5]. We then select the best fitting copula family from these 11 candidate models by the Bayesian information criterion (BIC). Given the size and complexity of our model, as well as the difficulty of estimating parameters precisely on higher trees, we decided to rely on the BIC to find more parsimonious model specifications and to minimize the estimation error. Additionally, we considered the restriction of allowing potential time variation only on a limited (small) number of trees. We considered this restriction for 1 to 12 trees, but we report only a subset of these models since the results are identical or at least very similar for many of those cases. Specifically, it turned out that making this restriction beyond the 6$^{th}$ tree is irrelevant, since there is no evidence of time variation on higher trees. A further possible restriction is to truncate the vine beyond a certain tree, meaning that all conditional copulas beyond it are set to the independence copula. In the current application we did not make this restriction because the independence copula is included in the set of admissible models and the automatic selection by the BIC in practice leads to a truncation. Nevertheless, for large dimensional applications truncation of the vine should definitely be considered.
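The following is a minimal sketch of the ordering heuristic just described, under the assumption that a new variable may be attached at either end of the current D-vine path (ties and implementation details may differ from the procedure actually used); `U` denotes the $T \times 29$ matrix of copula data.

``` r
## Greedy ordering of the variables for the first D-vine tree.
tau <- abs(cor(U, method = "kendall")); diag(tau) <- 0
path <- as.vector(which(tau == max(tau), arr.ind = TRUE)[1, ])   # strongest pair
while (length(path) < ncol(U)) {
  free  <- setdiff(seq_len(ncol(U)), path)
  left  <- free[which.max(tau[path[1], free])]
  right <- free[which.max(tau[path[length(path)], free])]
  if (tau[path[1], left] >= tau[path[length(path)], right]) {
    path <- c(left, path)
  } else {
    path <- c(path, right)
  }
}
path   # ordering of the 29 stocks along the first tree
```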
--------------- -------- -------- -------- -------- -------- --------
M0 M1 M2 M3 M4 M6
time-varying - 1 1-2 1-3 1-4 1-6
time-constant 1-27 2-27 3-27 4-27 5-27 7-27
up to tree 1 -11491 -12177 -12177 -12177 -12177 -12177
up to tree 2 -13657 -14313 -14346 -14346 -14346 -14346
up to tree 3 -14893 -15482 -15517 -15525 -15525 -15525
up to tree 4 -15653 -16207 -16239 -16265 -16265 -16265
up to tree 5 -16068 -16538 -16563 -16583 -16584 -16584
up to tree 6 -16326 -16774 -16787 -16796 -16802 -16804
up to tree 7 -16466 -16882 -16892 -16892 -16909 -16914
up to tree 20 -17358 -17621 -17599 -17580 -17585 -17594
total -17366 -17626 -17604 -17587 -17593 -17603
--------------- -------- -------- -------- -------- -------- --------
: Partial BIC (up to trees 1-7 and 20) and total BIC values for models with no tree (M0), the first tree (M1), the first two (M2), first three (M3), first four (M4) and first six trees (M6) time-varying.[]{data-label="tab:daxbic"}
Tree & time-varying & & time-constant & & & & & & \# par\
& N & SG & I & N & C & G & SC & SG &\
M0 (no time-varying pair copulas):
1 & - & - & 0 & 7 & 0 & 4 & 0 & 17 & 28\
2 & - & - & 0 & 10 & 0 & 12 & 0 & 5 & 27\
3 & - & - & 2 & 14 & 0 & 7 & 0 & 3 & 24\
4 & - & - & 2 & 15 & 0 & 4 & 1 & 3 & 23\
5 & - & - & 4 & 15 & 0 & 4 & 0 & 1 & 20\
M1 (time variation allowed in tree 1):
1 & 23 & 4 & 0 & 0 & 0 & 0 & 0 & 1 & 82\
2 & - & - & 0 & 4 & 0 & 8 & 0 & 15 & 27\
3 & - & - & 1 & 9 & 0 & 5 & 1 & 10 & 25\
4 & - & - & 2 & 13 & 0 & 7 & 0 & 3 & 23\
5 & - & - & 5 & 16 & 0 & 2 & 1 & 0 & 19\
M2 (time variation allowed in trees 1-2):
1 & 23 & 4 & 0 & 0 & 0 & 0 & 0 & 1 & 82\
2 & 1 & 2 & 0 & 4 & 0 & 8 & 0 & 12 & 33\
3 & - & - & 1 & 8 & 0 & 5 & 1 & 11 & 25\
4 & - & - & 2 & 14 & 0 & 6 & 0 & 3 & 23\
5 & - & - & 5 & 15 & 0 & 2 & 1 & 1 & 19\
M3 (time variation allowed in trees 1-3):
1 & 23 & 4 & 0 & 0 & 0 & 0 & 0 & 1 & 82\
2 & 1 & 2 & 0 & 4 & 0 & 8 & 0 & 12 & 33\
3 & 3 & 0 & 1 & 7 & 0 & 5 & 1 & 9 & 31\
4 & - & - & 2 & 10 & 0 & 7 & 0 & 6 & 23\
5 & - & - & 5 & 14 & 1 & 2 & 1 & 1 & 19\
M4 (time variation allowed in trees 1-4):
1 & 23 & 4 & 0 & 0 & 0 & 0 & 0 & 1 & 82\
2 & 1 & 2 & 0 & 4 & 0 & 8 & 0 & 12 & 33\
3 & 3 & 0 & 1 & 7 & 0 & 5 & 1 & 9 & 31\
4 & 1 & 0 & 2 & 9 & 0 & 7 & 0 & 6 & 25\
5 & - & - & 5 & 14 & 1 & 2 & 1 & 1 & 19\
M6 (time variation allowed in trees 1-6):
1 & 23 & 4 & 0 & 0 & 0 & 0 & 0 & 1 & 82\
2 & 1 & 2 & 0 & 4 & 0 & 8 & 0 & 12 & 33\
3 & 3 & 0 & 1 & 7 & 0 & 5 & 1 & 9 & 31\
4 & 1 & 0 & 2 & 9 & 0 & 7 & 0 & 6 & 25\
5 & 0 & 0 & 5 & 14 & 1 & 2 & 1 & 1 & 19\
6 & 1 & 0 & 11 & 6 & 1 & 1 & 0 & 3 & 14\
7 & - & - & 8 & 4 & 1 & 3 & 0 & 6 & 14\

: Number of pair copulas of each family selected per tree (time-varying and time-constant columns) and number of estimated parameters per tree, for models M0-M4 and M6 (only trees 1-5, respectively 1-7, are shown).[]{data-label="tab:daxcopula"}
Tables \[tab:daxbic\] and \[tab:daxcopula\] report the estimation results when allowing time variation in no tree, in the first one to four trees, and in the first six trees, respectively. For trees 5 and 7-28 no time-varying dependence was found. Table \[tab:daxcopula\] reports the number of time-varying copulas found on each tree and how often each copula family was selected. In addition, the number of parameters estimated in each tree is given. In Table \[tab:daxbic\] partial and total BIC values are reported. We note that time variation is very important when modeling the dependence in the first tree. Here 27 out of 28 copulas are chosen to be time-varying. Furthermore, the vast majority of the time-varying copulas is selected to be the Gaussian copula, which is in line with the findings of @HM10. An explanation for this finding is given in @MS11, who show that the Gaussian copula with random correlations has much larger dependence in the tails than the static Gaussian copula. This is in line with the stylized fact that financial returns are characterized by tail dependence. Beyond the first tree, however, only very few pairs have time-varying dependence, justifying the restrictions. Among the copulas with constant parameters all families are chosen, but Gaussian, survival Gumbel and independence copulas are selected more often than the other families. Especially on higher trees the independence copula dominates, indicating that a truncation after a certain level (say 15-20) would come almost without any cost in terms of model fit. The overall fit of the models, measured by the BIC for the total model, turns out to be best when allowing time variation only on the first tree. This can be explained by the strong penalty on additional parameters imposed by the BIC and by the better fit on the higher trees.
Finally, we are interested in estimating the path of the pairwise dependence parameters. Smoothed estimates for the time-varying Kendall’s $\tau$ based on the model with time-variation only on the first tree are presented in Figures \[fig:tv\_tau1\] and \[fig:tv\_tau2\] for the pairwise dependence of the five companies that have the largest weight in the DAX index. The companies are (with their corresponding node in the D-vine in parenthesis) Allianz (13), Bayer (9), E.ON (12), SAP (27) and Siemens (19). This choice covers the interesting situations that we have companies that are neighboring in the vine, i.e. their dependence is modeled directly, that companies are close in the vine and that they lie quite far apart from each other. In the latter situation the implied pairwise time-varying dependence is computed conditional on the dependence between all variables in between. Note that in this case the dependence parameter at each point in time has to be computed using Monte Carlo simulation, for which we used 400 Monte Carlo replications. The dynamics of the dependence parameter are clearly visible for all situations and we have strong evidence of dependence changing over time. For example, the dependence parameters have decreased strongly in 2009 for most pairs. It also stands out that the dependence parameters involving the company Bayer have much less pronounced movements in dependence, which is mostly negative or close to independence.
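For the Gaussian pair copulas, the smoothed correlation path from the earlier sketch maps directly to the Kendall's $\tau$ paths plotted in the figures; for pairs lying far apart in the vine, the implied pairwise $\tau$ instead has to be obtained by Monte Carlo simulation, as noted above.

``` r
## Kendall's tau path implied by a Gaussian pair copula with correlation path psi_smoothed.
tau_path <- (2 / pi) * asin(psi_smoothed)
```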
Conclusions and further research {#Sec:Conclusion}
================================
In light of the recent financial crisis, including the discussion of understanding systemic risk, the need to understand time-varying effects not only within individual financial products but also among groups of financial variables has been increasing. The D-vine SCAR models developed here aim to fill this demand. From a statistician’s point of view such a model is demanding. First, a very flexible multivariate dependency model is required, such as the class of vine copulas, and secondly an appropriate model for the time dependency of the copula parameters has to be found. For parameter interpretability we chose to follow a parameter-driven instead of an observation-driven approach and thus allowed for a stochastic autoregressive structure for the copula parameters to model time-varying dependence among a large group of variables.
While this approach leads to a relatively straightforward model formulation, the development of efficient estimation procedures is much more difficult. Especially in high dimensions maximum likelihood is infeasible, since it would require maximization over integrals of dimension equal to the data length. In the application the data length was 1125. These integrals occur since we need to integrate over the latent variable process to express the joint likelihood. This problem already occurs when we consider bivariate SCAR models. One solution to this is to use efficient importance sampling [@RZ07]. In addition, the pair copula construction approach of @ACFB09 for multivariate copulas allows one to express the likelihood in bivariate copula terms together with a sequential formulation over the vine tree structure. This makes it feasible to develop and implement efficient importance sampling for the D-vine SCAR model. In a simulation study we validated our estimation procedure, and the application to joint dependency modeling of 29 stocks in the DAX showed that time-varying dependence structures can be observed. One interesting feature of this application is that nonnormal pair copulas with constant parameters were replaced by normal pair copulas when time-varying copula parameters are allowed. This was observed when the strength of the dependence was moderate to large.
In this paper we followed some first approaches to model selection. We first restricted ourselves to a known dependency structure given by a D-vine, but allowed the copula family of each pair copula to be chosen among a prespecified class of copula families, in addition to the choice of whether a pair copula has time-varying parameters or not. For this we used the BIC; however, more sophisticated criteria might be necessary. We also restricted the use of time-varying pair copula parameters to a prespecified number of top trees. Here the approach of truncated vines as developed in @brechmann-etal might be a good starting point to choose this number in a data-driven manner. As already mentioned, it is feasible to extend the class of D-vine SCAR models to include R-vines as copula models. More research is also needed to investigate the effects of time-varying copula models on economic quantities such as portfolio returns and value at risk. Finally, the model uncertainty introduced by treating the marginal parameter estimates as true values in the two-step approach has to be assessed. However, the simulation results of @Kim2007 might remain valid for a copula model with time-varying parameters.
Stocks in the DAX and their ordering
====================================
[^1]: A good choice for $N$ is about 100.
[^2]: From a practical perspective, the recent R package [CDVine]{} of @CDVine provides easy to use random number generation, and both SE and MLE fitting algorithms for C- and D-vines.
[^3]: Alternatively, we could calculate the pseudo realizations using the smoothed estimates of the latent dependence parameter using (\[eq:EIS\_psi\]). However, averaging over the nonlinear transformation $h$ seems more reasonable than applying the transformation to the (weighted) average.
[^4]: For brevity we only present a part of the results and the outcomes for further parameter and model constellations are available from the authors upon request.
[^5]: Obviously, for the independence copula no parameter needs to be estimated.
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We investigate the field-angle-dependent zero-energy density of states for YNi$_2$B$_2$C using realistic Fermi surfaces obtained by band calculations. Both the 17th and 18th bands are taken into account. For calculating the oscillating density of states, we adopt the Kramer-Pesch approximation, which is found to improve the accuracy of the oscillation amplitude. We show that the superconducting gap structure determined by analyzing STM experiments is consistent with thermal transport and heat capacity measurements.'
address:
- '$^{1}$Department of Physics, University of Tokyo, Tokyo 113-0033, Japan'
- '$^{2}$CCSE, Japan Atomic Energy Agency, 6-9-3 Higashi-Ueno, Tokyo 110-0015, Japan'
- '$^{3}$CREST(JST), 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan'
- '$^{4}$Department of Basic Science, University of Tokyo, Tokyo 153-8902, Japan'
- '$^{5}$CNR-INFM, CASTI Regional Lab, I-67010 Coppito (L’Aqulia), Italy'
- '$^{6}$Department of Physics, Kobe University, Nada, Kobe 657-8501, Japan'
author:
- 'Yuki Nagai$^{1}$, Nobuhiko Hayashi$^{2,3}$, Yusuke Kato$^{1,4}$, Kunihiko Yamauchi$^{5}$ and Hisatomo Harima$^{6}$'
title: 'Field angle dependence of the zero-energy density of states in unconventional superconductors: analysis of the borocarbide superconductor YNi$_2$B$_2$C'
---
The discovery of the nonmagnetic borocarbide superconductor YNi$_2$B$_2$C [@Cava] has attracted considerable attention because of the growing evidence for highly anisotropic superconducting gap and high superconducting transition temperature 15.5K. In recent years, Maki [*et al*]{}. [@Maki] theoretically suggested that the gap symmetry of this material is $s$+$g$ wave and the gap function has zero points (point nodes) in the momentum space. Motivated by this prediction, field-angle dependence of the heat capacity [@Park] and the thermal conductivity [@Izawa] have been measured on YNi$_{2}$B$_{2}$C. The gap symmetry can be deduced from their oscillating behavior. Those experimental results [@Park; @Izawa] were considered to be consistent with the $s$+$g$-wave gap. However, the present authors [@NagaiJ] recently found that the local density of states (LDOS) around a vortex calculated for the $s$+$g$-wave gap on an isotropic Fermi surface (FS) is not consistent with measurements by scanning tunneling microscopy and spectroscopy (STM/STS) [@Nishimori]. Therefore, we calculated the LDOS around a vortex with the use of a realistic FS of the 17th band obtained by a band calculation [@NagaiY]. We also investigated the density of states (DOS) under zero field and the field-angle dependence of the zero-energy DOS (ZEDOS). Consequently, we proposed alternative gap structure for YNi$_{2}$B$_{2}$C and succeeded in reproducing those experimental observations consistently [@NagaiY]. In this paper, we investigate the field-angle-dependent ZEDOS taking the 18th band into account in addition to the 17th band considered previously. While the so-called Doppler-shift (DS) method was previously utilized in Ref. [@NagaiY], we adopt here a more reliable method [@NagaiLett] for calculating the ZEDOS.
The DOS is the basis for analyzing physical quantities such as the specific heat and the thermal conductivity [@Vorontsov]. For example, the specific heat $C/T$ is proportional to the ZEDOS in the zero temperature limit $T \rightarrow 0$. The DOS is obtained from the regular Green function $g$ within the quasiclassical theory of superconductivity, which is represented by a parametrization with $a$ and $b$ as $g = - (1-a b)/(1+ a b)$. They follow the Riccati equations ($\hbar=1$) [@Schopohl]: $$\begin{aligned}
\Vec{v}_{\rm F} \cdot \Vec{\nabla} a + 2 \tilde{\omega}_n a + a \Delta^{\ast} a - \Delta &=& 0,
\label{eq:ar}\\
\Vec{v}_{\rm F} \cdot \Vec{\nabla} b - 2 \tilde{\omega}_n b - b \Delta b + \Delta^{\ast} &=& 0.
\label{eq:br}\end{aligned}$$ Here, $\Vec{v}_{\rm F}$ is the Fermi velocity, and $i \tilde{\omega}_n = i \omega_n + (e/c) \Vec{v}_{\rm F} \cdot \Vec{A}$ with the Matsubara frequency $\omega_n$ and the vector potential $\Vec{A}$.
To analyze the field-angle-dependent experiments quantitatively, we have developed a method on the basis of the Kramer-Pesch approximation (KPA) [@NagaiLett]. This method enables us to take account of the vortex-core contribution which is neglected in the DS method. By virtue of it, one can achieve quantitative accuracy. We consider a single vortex situated at the origin of the coordinates. To take account of the contributions of a vortex core, in the KPA we expand the Riccati equations (\[eq:ar\]) and (\[eq:br\]) up to first order in the impact parameter $y$ around a vortex and the energy $\omega_n$ [@NagaiJ] ($i\omega_n \to E+i\eta$ and $y$ is the coordinate along the direction perpendicular to $\Vec{v}_{\rm F}$). By this expansion, one can obtain an analytic solution of the Riccati equations around a vortex. We then find the expression for the angular-resolved DOS [@NagaiLett] $$N(E, \alpha_{\rm M}, \theta_{\rm M})
=
\frac{v_{\rm F0} \eta}
{2 \pi^2 \xi_0}
\Biggl{\langle}
\int
\frac{ d S_{\rm F} }
{ |\Vec{v}_{\rm F}| }
\frac{
\lambda
\bigl[ \cosh
(x/\xi_0)
\bigr]^{\frac{-2 \lambda}{\pi h}}
}{(E-E_y)^2 + \eta^2}
\Biggl{\rangle}_{\rm SP}.
\label{eq:dos}$$ Here, the azimuthal (polar) angle of the magnetic field $\Vec{H}$ is $\alpha_{\rm M}$ ($\theta_{\rm M}$) in a spherical coordinate frame fixed to crystal axes, and $d S_{\rm F}$ is an area element on the FS \[e.g., $d S_{\rm F}=k_{\rm F}^2 \sin\theta d\phi d\theta$ for a spherical FS in the spherical coordinates $(k,\phi,\theta)$, and $d S_{\rm F}=k_{{\rm F}ab} d\phi dk_c$ for a cylindrical FS in the cylindrical coordinates $(k_{ab},\phi,k_c)$\]. In the cylindrical coordinate frame $(r,\alpha,z)$ with ${\hat z} \parallel \Vec{ H}$ in the real space, the pair potential is $\Delta \equiv \Delta_0 \Lambda(\kv_{\rm F}) \tanh(r/\xi_0) \exp(i\alpha)$ around a vortex, $\Delta_0$ is the maximum pair amplitude in the bulk, $\lambda = |\Lambda|$, and $\langle \cdots \rangle_{\rm SP}
\equiv \int_0^{r_a} r dr \int_0^{2 \pi} \cdots d \alpha/ (\pi r_a^2)$ is the real-space average around a vortex, where $r_a/\xi_0 = \sqrt{H_{c2}/H}$ \[$H_{c2} \equiv \Phi_0 / (\pi \xi_0)$, $\Phi_0 = \pi r_a^2 H$\]. $x=r\cos(\alpha -\theta_v)$, $y=r\sin(\alpha -\theta_v)$, and $E_y = \Delta_0 \lambda^2 y/(\xi_0 h)$. $\theta_v(\kv_{\rm F},\alpha_{\rm M}, \theta_{\rm M})$ is the angle of $\Vec{v}_{\rm F \perp}$ in the plane of $z=0$, where $\alpha$ and $\theta_v$ are measured from a common axis [@NagaiJ; @NagaiY]. $\Vec{v}_{\rm F \perp}$ is the vector component of $\Vec{v}_{\rm F} (\kv_{\rm F})$ projected onto the plane normal to ${\hat {\Vec{H}}}=(\alpha_{\rm M}, \theta_{\rm M})$. $|\Vec{v}_{\rm F \perp}(\kv_{\rm F},\alpha_{\rm M}, \theta_{\rm M})|
\equiv v_{\rm F0}(\alpha_{\rm M}, \theta_{\rm M}) h(\kv_{\rm F},\alpha_{\rm M}, \theta_{\rm M})$ and $v_{\rm F0}$ is the FS average of $|\Vec{v}_{\rm F \perp}|$ [@NagaiY]. $\xi_0$ is defined as $\xi_0 = v_{{\rm F}0}/(\pi \Delta_0)$. We consider here a clean SC in the type-II limit. The impurity effect can be incorporated through the smearing factor $\eta$.
We calculate the angular dependence of the ZEDOS ($E=0$ in Eq. (\[eq:dos\])) for YNi$_{2}$B$_{2}$C using the band structure calculated by Yamauchi [*et al*]{}. [@Yamauchi]. When integrating Eq. (\[eq:dos\]), the band structure is reflected in $d S_{\rm F}$ and $\Vec{v}_{\rm F}$ ($\Vec{v}_{\rm F \perp}$, $v_{\rm F0}$, and $h$ are obtained from $\Vec{v}_{\rm F}$). In YNi$_{2}$B$_{2}$C there are three bands crossing the Fermi level, called the 17th, 18th, and 19th bands. The electrons of the 17th band predominantly contribute to the superconductivity, since the DOS at the Fermi level is 48.64, 7.88, and 0.38 states/Ry for the 17th, 18th, and 19th bands, respectively [@Yamauchi]. We consider here the FSs of the 17th and 18th bands and neglect the 19th band because the DOS on the 19th FS is substantially smaller. We use an anisotropic $s$-wave gap structure obtained in our preceding paper (Eq. (27) in Ref. [@NagaiY]) for the 17th FS. This gap structure was determined by analyzing STM observations [@Nishimori]. As for the 18th FS, we assume an isotropic gap with an amplitude equal to the maximum gap on the 17th FS, according to angle-resolved photoemission measurements [@Baba].
![\[fig:1\]Angular dependence of the ZEDOS for YNi$_{2}$B$_{2}$C: (a) with use of the 17th band (17thfi.eps), (b) that of the 18th band (18thfi.eps), and (c) that of the 17th and 18th bands (1718thfis.eps). The magnetic field tilts from the $c$ axis by polar angle $\theta_{M}=\pi/2$. $\eta = 0.05 \Delta_{0}$ and $r_{a} = 7 \xi_{0}$.](17thfi.eps){width="14pc"}
We rotate the applied magnetic field in the basal plane perpendicular to the $c$ axis ($\theta_{\rm M}=\pi/2$) and investigate the azimuthal angle $\alpha_{\rm M}$ dependence of the ZEDOS. First, we show the partial ZEDOS on the 17th FS only. As seen in Fig. \[fig:1\](a), the oscillation amplitude is of the order 9 %. Previously we have calculated the ZEDOS by the DS method under the same condition and obtained the amplitude of the order 30 % [@NagaiY], which was too large in comparison to experimental results [@Park; @Izawa]. The KPA adopted in this paper yields substantial improvement in the amplitude.
Second, we show the partial ZEDOS on the 18th FS only. As seen in Fig. \[fig:1\](b), the cusp-like minima appear. The origin of the cusp-like structure is different from that of the 17th-FS case, since the gap is isotropic on the 18th FS while it is anisotropic on the 17th FS. The cusp-like minima are due to the FS anisotropy on the 18th FS. According to the band calculation (Fig. 2(b) in Ref. [@Yamauchi]), the shape of the 18th FS is like a square as viewed from the $k_{c}$ axis in the momentum space. The sides of this square are parallel to the $k_{a}$ or $k_{b}$ axis. Therefore, the region where the Fermi velocity is parallel to the $k_{a}$ or $k_{b}$ axis has a high proportion of the area on the 18th FS. Now, the quasiparticles with the Fermi velocity parallel to the magnetic field do not contribute to the ZEDOS (e.g., see Ref. [@NagaiLett]). As a result, when the magnetic field is rotated near the directions parallel to those axes, the change in the ZEDOS is drastic, leading to cusp-like minima. Udagawa [*et al*]{}. [@Udagawa] previously obtained cusp-like minima in the field-angle-dependent ZEDOS by using a square-like FS model. The origin of their cusp-like minima is the same as that of the present result for the 18th FS. It should be noted that the DOS ratio $r$ of the 18th FS to the 17th FS is $r = N_{\rm F 18}/N_{\rm F 17} = 7.88/48.64 \sim 0.16$ [@Yamauchi]. The contribution of the 18th FS is not dominant, and therefore the cusp-like minima observed in the experiments [@Park; @Izawa] probably cannot be attributed to the square shape of the FS only.
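A toy calculation (not the band-structure computation of this paper) illustrates how a square-like FS alone can generate such cusps: in the simplest Doppler-shift-like picture only quasiparticles with $\Vec{v}_{\rm F}$ not parallel to $\Vec{H}$ contribute, so the ZEDOS roughly follows the FS average of $|\sin(\theta_v-\alpha_{\rm M})|$, which develops cusp-like minima when the field passes the directions along which most Fermi velocities point. The superellipse exponent and the uniform arc-length weighting below are illustrative choices.

``` r
## Toy model: square-like 2D Fermi surface (superellipse), field rotated in the plane.
p    <- 8                                     # large p -> square-like FS (illustrative)
phi  <- seq(0, 2 * pi, length.out = 4001)
rfs  <- (abs(cos(phi))^p + abs(sin(phi))^p)^(-1 / p)   # FS radius vs azimuth
x <- rfs * cos(phi); y <- rfs * sin(phi)
dx <- diff(x); dy <- diff(y)
phi_v <- atan2(-dx, dy)                       # Fermi-velocity (outward normal) direction
ds    <- sqrt(dx^2 + dy^2)                    # arc-length weight of each FS segment
alpha <- seq(0, pi / 2, length.out = 91)      # in-plane field angle
zedos <- sapply(alpha, function(a) sum(ds * abs(sin(phi_v - a))) / sum(ds))
## plot(alpha * 180 / pi, zedos, type = "l")  # cusp-like minima at 0 and 90 degrees
```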
Finally, we show the total ZEDOS on both the 17th and 18th FSs. As seen in Fig. \[fig:1\](c), the oscillation amplitude is of the order 8 % and the overall behavior is almost similar to that for the 17th FS shown in Fig. \[fig:1\](a). In the thermal conductivity measurements in YNi$_{2}$B$_{2}$C, the oscillation amplitude in the field 1 T and 0.5 T is of the order 2 % [@Izawa] and 4.5 % [@Kamada], respectively, at the same temperature 0.43 K. In the heat capacity measurements [@Park], the oscillation amplitude in 1 T at 2 K is of the order 4.7 %. The KPA is an approximation appropriate in the limit of the low temperature and low magnetic field. Therefore, in terms of the order of the oscillation amplitude, our result could be considered to be consistent with that of those experiments. As for the directions of the minima, the result is also consistent with the experiments. Those mean that the assumed gap structure determined by analyzing STM experiments (Eq. (27) in Ref. [@NagaiY]) is consistent with the heat capacity [@Park] and the thermal transport [@Izawa] observations. So far we have found here that inclusion of the 18th-FS contribution does not drastically alter our previous conclusion [@NagaiY], and the use of the KPA improves the value of the amplitude compared to our result [@NagaiY] by the DS method. We should note, however, that the sharpness of the cusp is rather weak in Fig. \[fig:1\](c) in comparison to the experimental observations [@Park; @Izawa]. This point may be resolved by considering carefully the $k_{c}$ dependence of the gap structure. The gap structure used here was determined to explain the azimuthal dependence of the observations in the basal plane [@NagaiY]. A modification of the gap structure would be necessary to explain the observed polar-angle dependence [@Izawa]. This is left for future studies. Here, it should be noted that such a detailed discussion is almost impossible till the KPA [@NagaiLett] is developed as a new method with high precision.
In conclusion, we calculated the field-angle-dependent ZEDOS on the FSs from 17th and 18th band of YNi$_2$B$_2$C. We adopted the gap structure [@NagaiY] determined by comparison with STM observations [@Nishimori]. The oscillation amplitude was found to be of the order 8 %. This amplitude does not deviate so much from the heat capacity [@Park] and the thermal transport [@Izawa; @Kamada] measurements. As for the directions of the minima, the result is also consistent with those experiments. This suggests that the assumed gap structure on the 17th FS is essentially appropriate for the in-plane properties. Further modification of the gap structure is expected in terms of the $k_c$ dependence. The use of the KPA improved the value of the amplitude compared to our previous result [@NagaiY] by the DS method. The KPA [@NagaiLett] appears to be an efficient method with high precision for analyzing experimental data.
This research was supported by Grant-in-Aid for JSPS Fellows (204840), and was also partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research on Priority Areas, 20029007.
References {#references .unnumbered}
==========
[99]{} Cava R J, Takagi H, Zandbergen H W, Krajewski J J, Peck Jr. W F, Siegrist T, Batlogg B, van Dover R B, Felder R J, Mizuhashi K, Lee J O, Eisaki H and Uchida S 1994 [*Nature*]{} [**367**]{} 252 Maki K, Thalmeier P and Won H 2002 [*Phys. Rev.*]{} B [**65**]{} 140502(R) Park T, Salamon M B, Choi E M, Kim H J and Lee S I 2003 [*Phys. Rev. Lett.*]{} [**90**]{} 177001 Izawa K, Kamata K, Nakajima Y, Matsuda Y, Watanabe T, Nohara M, Takagi H, Thalmeier P and Maki K 2002 [*Phys. Rev. Lett.*]{} [**89**]{} 137006 Nagai Y, Ueno Y, Kato Y and Hayashi N 2006 [*J. Phys. Soc. Japan*]{} [**75**]{} 104701 Nishimori H, Uchiyama K, Kaneko S, Tokura A, Takeya H, Hirata K and Nishida N 2004 [*J. Phys. Soc. Japan*]{} [**73**]{} 3247 Nagai Y, Kato Y, Hayashi N, Yamauchi K and Harima H 2007 [*Phys. Rev.*]{} B [**76**]{} 214514 Nagai Y and Hayashi N [*Preprint*]{} arXiv:0802.1579 Vorontsov A B and Vekhter I 2006 [*Phys. Rev. Lett.*]{} [**96**]{} 237001; 2007 [*Phys. Rev.*]{} B [**75**]{} 224501; 2007 [*Phys. Rev.*]{} B [**75**]{} 224502; Vekhter I and Vorontsov A 2008 [*Physica*]{} B [**403**]{} 958 Schopohl N and Maki K 1995 [*Phys. Rev.*]{} B [**52**]{} 490; Schopohl N arXiv:cond-mat/9804064 Yamauchi K, Katayama-Yoshida H, Yanase A and Harima H 2004 [*Physica*]{} C [**412-414**]{} 225 Baba T 2007 [*Doctor thesis*]{}, University of Tokyo Udagawa M, Yanase Y and Ogata M 2005 [*Phys. Rev.*]{} B [**71**]{} 024511 Kamata K 2003 [*Master thesis*]{}, University of Tokyo
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We present observational evidence for “propeller” effects in two X–ray pulsars, GX 1+4 and GRO J1744–28. Both sources were monitored regularly by the [*Rossi X–ray Timing Explorer*]{} (RXTE) throughout a decaying period in the X–ray brightness. Quite remarkably, strong X–ray pulsation became unmeasurable when total X–ray flux had dropped below a certain threshold. Such a phenomenon is a clear indication of the propeller effects which take place when pulsar magnetosphere grows beyond the co-rotation radius as a result of the decrease in mass accretion rate and centrifugal force prevents accreting matter from reaching the magnetic poles. The entire process should simply reverse as the accretion rate increases. Indeed, steady X–ray pulsation was reestablished as the sources emerged from the non-pulsating faint state. These data allow us to directly derive the surface polar magnetic field strength for both pulsars: $3.1\times 10^{13}$ G for GX 1+4 and $2.4\times 10^{11}$ G for GRO J1744-28. The results are likely to be accurate to within a factor of 2, with the total uncertainty dominated by the uncertainty in estimating the distances to the sources. Possible mechanisms for the persistent emission observed in the faint state are discussed in light of the extreme magnetic properties of the sources.'
author:
- Wei Cui
title: 'Evidence for “Propeller” Effects In X-ray Pulsars GX 1+4 And GRO J1744-28'
---
Introduction
============
In accreting X–ray pulsars, the strong magnetic field disrupts accretion flow at several hundred neutron star radii and funnels material onto the magnetic poles (e.g., [@pringle1972]; [@lamb1973]). X–ray emission mostly comes from the “hot spots” formed around one or both poles. Such emission is beamed ([@basko1976]) and thus produces X–ray pulsation by periodically passing through the line of sight as the neutron star rotates, if the magnetic and rotation axes of the neutron star are misaligned.
The strength of the magnetic field in X–ray pulsars can be illustrated by the size of the magnetosphere, which co-rotates with the neutron star. In the process of mass accretion, the ram pressure of the flow is exerted on the magnetosphere and is balanced by the magnetic pressure. Therefore, the magnetospheric radius is determined not only by field strength but also by mass accretion rate. In a bright state, the accretion rate is high, so the magnetosphere is usually small compared to the co-rotation radius at which the angular velocity of Keplerian motion is equal to that of the neutron star. Material is continuously channeled to magnetic poles, so the X–ray emission from “hot spots” pulsates persistently. As the accretion rate decreases, the ram pressure decreases, and thus the magnetosphere expands. As the magnetosphere grows beyond the co-rotation radius, centrifugal force prevents material from entering magnetosphere, and thus accretion onto magnetic poles ceases ([@pringle1972]; [@lamb1973]; [@illarionov1975]). Consequently, no X–ray pulsation is expected. This is commonly known as the “propeller” effect, because accreting matter is likely to be ejected in the presence of a strong magnetic field.
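As a rough quantitative illustration of this condition (not the detailed derivation carried out later in this paper), one can equate the standard spherical Alfven radius to the co-rotation radius; the sketch below does this for fiducial neutron-star parameters, with the threshold luminosities and the order-unity numerical factors to be understood as illustrative assumptions.

``` r
## Propeller threshold, back of the envelope (cgs units).
## r_m = (mu^4 / (2 G M Mdot^2))^(1/7) with dipole moment mu = B * R^3;
## r_co = (G M P^2 / 4 pi^2)^(1/3); accretion luminosity L = G M Mdot / R.
G <- 6.674e-8; Msun <- 1.989e33
M <- 1.4 * Msun; R <- 1e6                     # fiducial 1.4 Msun, 10 km neutron star
r_co <- function(P) (G * M * P^2 / (4 * pi^2))^(1 / 3)
B_threshold <- function(P, L) {               # B such that r_m = r_co at luminosity L
  Mdot <- L * R / (G * M)
  r_co(P)^(7 / 4) * (2 * G * M * Mdot^2)^(1 / 4) / R^3
}
B_threshold(P = 120,   L = 2e36)   # slow pulsar, GX 1+4-like numbers: ~3e13 G
B_threshold(P = 0.467, L = 5e37)   # fast pulsar, GRO J1744-28-like numbers: ~2e11 G
```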
Although the propeller effects were predicted theoretically in the 1970’s, there has been no direct observational evidence for them. If the “standard” pulsar theory is on the right track, such effects should be observable. A positive detection would not only confirm our understanding of X–ray pulsars but would also allow a direct determination of magnetic field strength in these systems. In this [*Letter*]{}, we report the discovery of such effects in two X–ray pulsars, GX 1+4 and GRO J1744-28, based on the RXTE observations.
GX 1+4 is a 2 minute pulsar with a low-mass M giant companion. It is highly variable in X-rays. The observed X-ray flux can vary by 2 orders of magnitude on a time scale of months. It has the hardest X-ray spectrum among known X-ray pulsars. Secular spin-ups and spin-downs have been observed at nearly equal rates (see [@chakrabarty1997b], and references therein). It has been, therefore, postulated that GX 1+4 is near spin equilibrium. Then, according to the “standard” model ([@ghosh1979]), the observed spin-down rate would require a surface magnetic field of a few $\times 10^{13}$ G. Hints for such an unusually strong magnetic field are also provided by the observed hard X–ray spectrum.
GRO J1744-28 is a transient X-ray pulsar with a period of 0.467 s and is thought to have a low-mass companion. It is the only known X–ray pulsar that also produces X–ray bursts and only the second source, after the Rapid Burster, that displays type II bursts ([@lewin1996]). The Rapid Burster apparently has only a weak magnetic field. By analogy, GRO J1744-28 may also be a weakly magnetized system, albeit with a field strong enough for it to be an X–ray pulsar.
Observations and Analysis
=========================
Both GX 1+4 and GRO J1744-28 were monitored regularly by [*RXTE*]{} in 1996. For this study, we only use data from the [*Proportional Counter Array*]{} (PCA). The PCA consists of five nearly identical large area proportional counter units (PCUs). It has a total collecting area of about 6500 $cm^2$, but for a specific observation some PCUs (up to two) may be turned off for safety reasons. It covers an energy range from 2-60 keV with a moderate energy resolution ($\sim 18\%$ at 6 keV). Mechanical collimators are used to limit the field of view to a $1^{\circ}$ FWHM.
X–ray Light Curves
------------------
Starting on 1996 February 17, GX 1+4 was observed roughly monthly by the PCA, for $\sim$10 ks per observation. Initially, it was in a relatively bright state, $\sim$100 mCrab, with a large pulse fraction ($\sim$60%). The X–ray brightness decayed steadily after that measurement, with the pulse fraction remaining roughly constant. By 1996 September 25, it appeared only as a $\sim$2 mCrab source. Importantly, the X–ray pulsation could no longer be detected with standard techniques such as fast Fourier transforms and epoch folding (at a detection threshold of $3\sigma$). For better coverage of the source during this interesting period, we chose to observe it weekly for $\sim$5 ks per observation. GX 1+4 remained in this faint state ([@chakrabarty1996]; [@cui1996]) until around 1996 November 29, when it brightened up and the pulsation was again detected ([@wilson1997]). Unfortunately, no PCA coverage was possible for more than 2 months around this time because the Sun was too close to the source. Figure 1 summarizes the X–ray flux history. Pulse-period folded light curves were obtained from the Standard2 data with a 16 s time resolution. From the folded light curves, we then computed the average pulse fraction, which is defined as the ratio of the difference between the maximum and minimum count rates to the maximum rate. The results are also shown in Fig. 1. Note that no pulsation is detected above 3$\sigma$ in the faint state, so for each observation the light curve is folded at the period that maximizes the residual.
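The pulse-fraction measurement described above is straightforward to reproduce. The following Python sketch is illustrative only (it is not the analysis code used here, and the light curve, period, and bin count are made up): it folds a binned light curve at a trial period and evaluates (maximum minus minimum)/maximum of the folded profile. An actual period search would scan trial periods and keep the one that maximizes the pulsed signal.

```python
import numpy as np

def fold_light_curve(times, rates, period, n_bins=8):
    """Fold a binned light curve at a trial period and return the mean
    count rate in each phase bin (a minimal epoch-folding sketch)."""
    phases = (times % period) / period
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phases, edges) - 1
    return np.array([rates[idx == k].mean() for k in range(n_bins)])

def pulse_fraction(profile):
    """Average pulse fraction as defined in the text: (max - min) / max."""
    return (profile.max() - profile.min()) / profile.max()

# Illustrative only: a fake 2-minute pulsation sampled every 16 s.
t = np.arange(0.0, 10000.0, 16.0)
r = 100.0 + 30.0 * np.cos(2.0 * np.pi * t / 120.0) + np.random.normal(0.0, 2.0, t.size)
profile = fold_light_curve(t, r, period=120.0)
print("pulse fraction =", pulse_fraction(profile))
```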
An extensive monitoring campaign on GRO J1744-28 started shortly after its discovery ([@giles1996]). At the beginning, GRO J1744-28 was observed weekly. It appeared as a $\sim$2 Crab persistent source with giant type II bursts (at more than 10 Crab). The sampling rate increased quickly to almost once per day when the source became fainter toward the end of April. By 1996 May 12, GRO J1744-28 entered a period when the X–ray pulsation was only detected intermittently with significance no more than $6\sigma$. The persistent emission flux was about 50 mCrab. Such a faint state lasted for about 7 weeks. For this study, we have selected 13 observations to cover the entire period. In the same way as in Figure 1, the measured X–ray flux and average pulse fraction are shown in Figure 2 for GRO J1744-28 (with the detected X–ray bursts removed). Here, the pulse fraction was derived from the Binned-mode data with a 16-ms time resolution, except for the last observation where the Binned-mode data were not available and the Event-mode data with higher time resolution were used instead.
Spectral Characteristics
------------------------
To derive the total X–ray flux, we modeled the observed spectra with a cut-off power law, which is typical of accreting X–ray pulsars ([@white1983]). For both sources, a Gaussian component is needed to mimic an iron $K_{\alpha}$ line at $\sim$6.4 keV. Such a model fits the observed X–ray spectra of GX 1+4 reasonably well, with reduced $\chi^2$ values in the range of 1-3, except for the first two observations, for which the object was relatively bright. Similarly, for GRO J1744-28, the spectra for the faint state can be fit fairly well by this simple model, although none of the results are formally acceptable in terms of reduced $\chi^2$ values for the data for which the source was relatively bright. By examining the residuals, we found that the significant deviation mostly resides at the very low energy end of the PCA range, where the PCA calibration is less certain. Therefore, the flux determination should not be very sensitive to such deviation for these hard X–ray sources. Note that we did not follow the usual practice of adding systematic uncertainties to the data in the spectral analysis, because it is still not clear how to correctly estimate the magnitude of those uncertainties at different energies. The uncertainty in the flux measurement (as shown in Figures 1 and 2) was derived by comparing the results from different PCUs.
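For concreteness, the sketch below writes out one common parameterization of such a model, a power law with a high-energy exponential cutoff plus a Gaussian iron line. The exact functional form used in the fits and the assumed line width are not specified in the text, so they are assumptions here, and interstellar absorption (which was included in the actual fits) is omitted for brevity.

```python
import numpy as np

def cutoff_powerlaw_plus_line(E, K, gamma, E_cut, E_fold, A_fe, E_fe=6.4, sigma_fe=0.3):
    """One standard form of a cut-off power law plus a Gaussian Fe K-alpha line.
    E is in keV; the parameterization and the 0.3 keV line width are assumptions,
    not values taken from the fits described in the text."""
    powerlaw = K * E ** (-gamma)
    cutoff = np.where(E > E_cut, np.exp(-(E - E_cut) / E_fold), 1.0)
    line = A_fe * np.exp(-0.5 * ((E - E_fe) / sigma_fe) ** 2)
    return powerlaw * cutoff + line

# Example evaluation over the PCA band with illustrative parameter values.
E = np.linspace(2.0, 60.0, 200)
model = cutoff_powerlaw_plus_line(E, K=0.1, gamma=1.0, E_cut=10.0, E_fold=30.0, A_fe=0.01)
```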
The observed spectrum varies significantly for GX 1+4. The photon index varies in the range of 0.4-1.7, and the inferred hydrogen column density in the range of 4 to 15$\times 10^{22}\mbox{ }cm^{-2}$, although the latter is poorly constrained in the faint state. The spectral cutoff occurs at 6-17 keV, with an e-folding energy in the range 11-55 keV. As GX 1+4 approaches the faint state, the spectrum becomes harder; eventually the spectral cutoff becomes unmeasurable. An opposite trend is observed for GRO J1744-28; its spectrum changes only mildly. When it was relatively bright, the photon index was steady (1.3-1.4) and the column density was in the range 3-5$\times 10^{22}\mbox{ }cm^{-2}$. The spectrum is cut off at 16-18 keV, with an e-folding energy in the range 11-19 keV. However, as it enters the faint state, the spectrum turns significantly softer: the photon index varies from 1.5 to 2.5 and the cutoff energy moves down to 5-8 keV with the e-folding energy in the range 15-26 keV. It is interesting to note that the harder spectrum completely recovers when the source brightens up again (as typified by the last observation in Fig. 2).
Magnetic Field
--------------
The absence of the X–ray pulsation when the pulsars were in a low brightness state is a clear indication of propeller effects. For pulsars, the co-rotation radius, $r_{co}$, is derived by equating the co-rotation velocity to the Keplerian velocity, i.e., $\Omega r_{co}=(GM/r_{co})^{1/2}$, where $\Omega$ is the angular velocity of the neutron star and $M$ is its mass. Therefore, $$r_{co} = 1.7\times 10^8\mbox{ }P^{2/3} \left(\frac{M}{1.4M_{\odot}}\right)^{1/3}\mbox{ }cm$$ where P is the neutron star spin period.
In the presence of an accretion disk, it is still uncertain how to determine the outer boundary of the magnetosphere in a pulsar (e.g., [@ghosh1979]; [@arons1993]; [@ostriker1995]; [@wang1996]). To a good approximation, here we define the magnetospheric radius, $r_m$, as the radius at which the magnetic pressure balances the ram pressure of a spherical accretion flow. Assuming a dipole field at large distance from the neutron star, we have ([@lamb1973]) $$r_m = 2.7\times 10^8\mbox{ } \left(\frac{L_x}{10^{37}\mbox{ }erg\mbox{ }s^{-1}}\right)^{-2/7} \left(\frac{M}{1.4M_{\odot}}\right)^{1/7} \left(\frac{B}{10^{12}\mbox{ }G}\right)^{4/7} \left(\frac{R}{10^6\mbox{ }cm}\right)^{10/7}\mbox{ }cm,$$ where $L_x$ is the bolometric X-ray luminosity, B is the surface polar magnetic field strength, and R is the neutron star radius.
The mass accretion onto the pulsar magnetic poles ceases when $r_{co}=r_m$. From equations (1) and (2), the magnetic field strength is given by $$B=4.8\times 10^{10}\mbox{ }P^{7/6} \left(\frac{F_x}{1.0\times 10^{-9}\mbox{ }erg\mbox{ }cm^{-2}\mbox{ }s^{-1}}\right)^{1/2} \left(\frac{d}{1\mbox{ }kpc}\right) \left(\frac{M}{1.4M_{\odot}}\right)^{1/3} \left(\frac{R}{10^6\mbox{ }cm}\right)^{-5/2}\mbox{ }G$$ where $F_x$ is the minimum bolometric X-ray flux at which X–ray pulsation is still detectable, and d is the distance to the source.
For GX 1+4 and GRO J1744-28, the observed 2-60 keV X-ray fluxes are $\sim$0.16 and $2.34 \times 10^{-9}\mbox{ }erg\mbox{ }cm^{-2}\mbox{ }s^{-1}$, respectively, when the X–ray pulsation becomes unmeasurable. Correction to the measured 2-60 keV flux at both high and low energies needs to be made in order to derive the bolometric X–ray flux. Because of the spectral cutoff, little flux is expected above 60 keV for GRO J1744-28. At the low energy end ($<$ 2 keV), extending the power law (with a photon index of about 1.5) does not add much flux either ($<$ 18%), and the correction for absorption amounts to less than 30%. Therefore, the uncertainty lies primarily in the distance measurement, which can vary by up to a factor of 2 ([@giles1996]). Assuming a distance of 8 kpc ([@giles1996]), $B\simeq 2.4\times 10^{11}$ G. Similarly, for GX 1+4, the bolometric correction below 2 keV is small. At high energies, no spectral cutoff is observed in the faint state, so it must be beyond the PCA passband. At the beginning of the faint state, the observed photon index is $\sim$1.7, so it is highly unlikely that the bolometric flux is more than a factor of 2 higher than what is measured (requiring a cutoff energy of $\sim$300 keV). However, the spectrum seems to harden in the faint state, and the photon index can drop as low as 0.5. Even so, a spectral cutoff at $\sim$100 keV would be required for a 100% bolometric correction above 60 keV. Assuming a distance of 6 kpc (which can vary by no more than a factor of 2; [@chakrabarty1997a]) and using the measured 2-60 keV flux in equation (3), we have $B\gtrsim 3.1\times 10^{13}$ G for GX 1+4. In conclusion, the derived magnetic-field values are likely to be accurate to within a factor of two for both sources, and the total uncertainty is dominated by the uncertainty in estimating the distances.
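For reference, equation (3) can be evaluated directly. The short sketch below simply reproduces the field estimates quoted above from the threshold fluxes, adopted distances, and the fiducial neutron-star parameters in equation (3); the 120 s period used for GX 1+4 is our rounding of the quoted 2 minute period.

```python
def dipole_field(P, Fx9, d_kpc, M=1.4, R6=1.0):
    """Surface polar field in Gauss from equation (3).
    P: spin period in s; Fx9: threshold bolometric flux in units of
    1e-9 erg/cm^2/s; d_kpc: distance in kpc; M in solar masses; R6 in 1e6 cm."""
    return (4.8e10 * P ** (7.0 / 6.0) * Fx9 ** 0.5 * d_kpc
            * (M / 1.4) ** (1.0 / 3.0) * R6 ** (-2.5))

print(dipole_field(120.0, 0.16, 6.0))   # ~3.1e13 G for GX 1+4
print(dipole_field(0.467, 2.34, 8.0))   # ~2.4e11 G for GRO J1744-28
```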
Discussion
==========
What is the origin of the persistent emission in the faint state? The different (nearly opposite) spectral characteristics strongly suggest different emission mechanisms for GX 1+4 and GRO J1744-28. GX 1+4 has an M-giant companion star, and a relatively dense, slow stellar wind is expected in the system ([@chakrabarty1997a]). In fact, the comparable spin-up and spin-down rates observed seem to suggest the presence of a retrograde accretion disk during the spin-down period ([@chakrabarty1997b]), which is only possible in a wind-fed system (as opposed to a Roche-lobe overflow system). Because GX 1+4 has a strong magnetic field, accreting matter may not be able to penetrate the field lines very much and is likely to be flung away at a very high velocity. The velocity can be roughly estimated as $v \simeq (2GM/r_{co})^{1/2}$ ([@illarionov1975]), which is $\sim$2000 km/s. As the material plows through the dense stellar wind at such a high velocity, a shock is bound to form. The observed emission in the faint state may simply be synchrotron radiation from relativistic particles accelerated by the shock through mechanisms like Fermi acceleration ([@fermi1949]). The nonthermal nature of such emission is consistent with the disappearance of the spectral cutoff in the faint state. This emission mechanism is thought to be responsible for the unpulsed X–ray emission observed from the Be binary pulsar system PSR B1259-63 near periastron ([@grove1995]).
For GRO J1744-28, the magnetic field is very weak for an X–ray pulsar. When the propeller effects take place, a significant amount of accreting material might leak through “between the field lines” and reach the neutron star surface ([@arons1980]). The observed emission in the faint state would then come from a large portion of the surface and so would not be pulsed. The surface temperature reached is expected to be lower than that of the hot spots; thus the spectrum is softer, which is consistent with the observation.
However, both sources are in the Galactic center region, so source confusion could be a serious problem. It is natural to question whether we actually detected the sources when the X–ray pulsation was not seen. We have carefully searched the catalogs for known X-ray sources within a $1^{\circ}$ radius circle around each source. None are found around GX 1+4. Moreover, the iron line at $\sim$6.4 keV is particularly prominent in GX 1+4 (see Fig. 3). Its presence and distinct shape with respect to the continuum greatly boosted our confidence in the detection of the object in the faint state. GRO J1744-28 was still fairly bright ($\sim$50 mCrab) even in the faint state. Eighteen known X–ray sources appeared within the search circle, six of which can be brighter than 30 mCrab, but only GS 1741.2-2859 and A 1742-289 are potentially close enough to the PCA pointing direction ($< 30'$) and bright enough to contribute significantly to the observed counts. However, both are X–ray transients, and are currently off, according to the nearly continuous monitoring with the All-Sky Monitor (ASM) aboard [*RXTE*]{}. An extensive search for potential new sources in the region was also made with the ASM, but yielded no detections. The ASM long-term light curve indicates that GRO J1744-28 was on throughout the faint state. In fact, type II X–ray bursts were detected again by the PCA during the later part of this period.
We seem to have selected two X–ray pulsars with their magnetic properties at opposite extremes. Both experience a period when the mass accretion from the companion star is hindered by the centrifugal barrier. The inferred magnetic field strength is in line with our previous knowledge about both sources. These results are important in constraining the details of the theoretical models of the X–ray emission mechanisms and thus will help us complete the picture of these sources, especially GRO J1744-28, an unexpected bursting X–ray pulsar. It should be noted that the derived dipole field strength for GRO J1744-28 is consistent with the reported upper limits ([@finger1996]; [@daumerie1996]; [@bildsten1997]) but is larger than the value derived from the observed 40 Hz QPO ([@sturner1996]), based on the assumption that the “beat frequency model” of Alpar & Shaham (1985) applies. This model is, however, inconsistent with the observed dependence of the QPO properties on the X-ray brightness of the source ([@zhang1996]).
I would like to thank D. Chakrabarty, A. M. Levine, S. N. Zhang, and J. Li for useful discussions. I have made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work is supported in part by NASA Contracts NAS5-30612.
Alpar, M. A., & Shaham, J. 1985, Nature, 316, 239
Arons, J. 1993, , 408, 160
Arons, J., & Lea, S. M. 1980, , 372, 565
Basko, M. M., & Sunyaev, R. A. 1976, , 20, 537
Bildsten, L., & Brown, E. F. 1997, , 477, 897
Chakrabarty, D., Finger, M. H., & Prince, T. A. 1996, 6478
Chakrabarty, D., & Roche, P. 1997, , submitted
Chakrabarty, D., Bildsten, L., Grunsfeld, J. M., Koh, D. T., Nelson, R. W., Prince, T. A., & Vaughan, B. A. 1997, , 481, in press
Cui, W., & Chakrabarty, D. 1996, 6478
Daumerie, P., Kalogera, V., Lamb, F. K., & Psaltis, D. 1996, Nature, 382, 141
Fermi, E. 1949, Phys. Rev., 75, 1169
Finger, M. H., Koh, D. T., Nelson, R. W., Prince, T. A., Vaughan, B. A., & Wilson, R. B. 1996, Nature, 381, 291
Ghosh, P., & Lamb, F. K. 1979, , 234, 296
Giles, A. B., et al. 1996, , 469, L25
Grove, J. E., et al. 1995, , 447, L113
Illarionov, A. F., & Sunyaev, R. A. 1975, , 39, 185
Lamb, F. K., Pethick, C. J., & Pines, D. 1973, , 184, 271
Lewin, W. H. G., Rutledge, R. E., Kommers, J. M., van Paradijs, J., & Kouveliotou, C. 1996, , 462, L39
Ostriker, E. C., & Shu, F. H. 1995, , 477, 813
Pringle, R. E., & Rees, M. J. 1972, , 21, 1
Sturner, S. J., & Dermer, C. D. 1996, , 465, L31
Wang, Y.-M. 1996, , 465, L111
White, N. E., Swank, J. H., & Holt, S. S. 1983, , 270, 711
Wilson, R. B., & Chakrabarty, D. 1997, 6536
Zhang, W., Morgan, E., Swank, J., Jahoda, K., Jernigan, G., & Klein, R. 1996, , 469, L29
---
abstract: 'We compute the C\*-algebra generated by a group of composition operators acting on certain reproducing kernel Hilbert spaces over the disk, where the symbols belong to a non-elementary Fuchsian group. We show that such a C\*-algebra contains the compact operators, and its quotient is isomorphic to the crossed product C\*-algebra determined by the action of the group on the boundary circle. In addition we show that the C\*-algebras obtained from composition operators acting on a natural family of Hilbert spaces are in fact isomorphic, and also determine the same *Ext*-class, which can be related to known extensions of the crossed product.'
address: |
Department of Mathematics\
University of Florida\
Gainesville, Florida 32603
author:
- 'Michael T. Jury'
bibliography:
- 'compext.bib'
date: 26 Sep 2005
title: '$C^*$-algebras generated by groups of composition operators'
---
[^1]
[^2]
Introduction
============
The purpose of this paper is to begin a line of investigation suggested by recent work of Moorhouse et al. [@moorhouse-thesis; @jm-tk-preprint; @jm-tk-bm-preprint]: to describe, in as much detail as possible, the $C^*$-algebra generated by a set of composition operators acting on a Hilbert function space. In this paper we consider a class of examples which, while likely the simplest cases from the point of view of composition operators, nonetheless produces $C^*$-algebras which are of great interest both intrinsically and for applications. In fact, the $C^*$-algebras we obtain are objects of current interest among operator algebraists, with applications in the study of hyperbolic dynamics [@emerson-thesis], noncommutative geometry [@MR88k:58149], and even number theory [@MR1931172].
Let $f\in H^2(\mathbb{D})$. For an analytic function $\gamma:\mathbb{D}\to\mathbb{D}$, the *composition operator with symbol $\gamma$* is the linear operator defined by $$(C_\gamma f) (z)=f(\gamma(z))$$ In this paper, we will be concerned with the $C^*$-algebra $${\mathcal{C}_\Gamma}=C^*(\{C_\gamma : \gamma\in\Gamma\})$$ where $\Gamma$ is a discrete group of (analytic) automorphisms of $\mathbb{D}$ (i.e. a *Fuchsian group*). For reasons to be described shortly, we will further restrict ourselves to *non-elementary* Fuchsian groups (i.e. groups $\Gamma$ for which the $\Gamma$-orbit of $0$ in $\mathbb{D}$ accumulates at at least three points of the unit circle $\mathbb{T}$.) Our main theorem shows that ${\mathcal{C}_\Gamma}$ contains the compact operators, and computes the quotient ${\mathcal{C}_\Gamma}/\mathcal{K}$:
Let $\Gamma$ be a non-elementary Fuchsian group, and let ${\mathcal{C}_\Gamma}$ denote the $C^*$-algebra generated by the set of composition operators on $H^2$ with symbols in $\Gamma$. Then there is an exact sequence $$\begin{CD}
0 @>>> \mathcal{K} @>\iota>> {\mathcal{C}_\Gamma}@>\pi >> C(\mathbb{T})\times\Gamma @>>> 0
\end{CD}$$
Here $C(\mathbb{T})\times\Gamma$ is the crossed product $C^*$-algebra obtained from the action $\alpha$ of $\Gamma$ on $C(\mathbb{T})$ given by $$\alpha_\gamma(f)(z)=f(\gamma^{-1}(z))$$ (Since the action of $\Gamma$ on $\partial\mathbb{D}$ is amenable, the full and reduced crossed products coincide; we will discuss this further shortly.) We will recall the relevant definitions and facts we require in the next section.
There is a similar result for the $C^*$-algebras $${\mathcal{C}_\Gamma}^n =C^*\left(\{C_\gamma \in \mathcal{B}(A^2_n)|\gamma\in\Gamma\}\right),$$ acting on the family of reproducing kernel Hilbert spaces $A^2_n$ (defined below), namely there is an extension $$\begin{CD}
0 @>>> \mathcal{K} @>>> {\mathcal{C}_\Gamma}^n @>>> C(\mathbb{T})\times\Gamma @>>> 0
\end{CD}$$ and we will show that each of these extensions represents the same element of the $Ext$ group $Ext(C(\partial\mathbb{D})\times\Gamma,\mathcal{K})$, and we will also prove that ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ are isomorphic as $C^*$-algebras. In fact we obtain a stronger isomorphism result, namely that *any* unital $C^*$-extension of $C(\partial\mathbb{D})\times\Gamma$ which defines the same $Ext$-class as ${\mathcal{C}_\Gamma}$ is isomorphic to ${\mathcal{C}_\Gamma}$:
Let $x\in \rm{Ext}(C(\partial\mathbb{D})\times\Gamma)$ denote the class of the extension $$\nonumber\begin{CD}
0 @>>> \mathcal{K} @>>> {\mathcal{C}_\Gamma}@>>> C(\partial\mathbb{D})\times\Gamma @>>> 0.
\end{CD}$$ If $e\in \rm{Ext}(C(\partial\mathbb{D})\times\Gamma)$ is a unital extension represented by $$\nonumber\begin{CD}
0 @>>> \mathcal{K} @>>> E @>>> C(\partial\mathbb{D})\times\Gamma @>>> 0
\end{CD}$$ and $e=x$, then $E\cong {\mathcal{C}_\Gamma}$ as $C^*$-algebras.
Finally, we will compare the extension determined by ${\mathcal{C}_\Gamma}$ to two other recent constructions of extensions of $C(\partial\mathbb{D})\times\Gamma$. We show that the $Ext$-class of ${\mathcal{C}_\Gamma}$ coincides with the class of the $\Gamma$-equivariant Toeplitz extension of $C(\partial\mathbb{D})$ constructed by J. Lott [@lott-preprint], and differs from the extension of crossed products by cocompact groups constructed by H. Emerson [@emerson-thesis]. We also show that this extension in fact gives rise to a $\Gamma$-equivariant $KK_1$-cycle for $C(\partial\mathbb{D})$ which also accords with the construction in [@lott-preprint].
Preliminaries
=============
We will consider C\*-algebras generated by composition operators which act on a family of reproducing kernel Hilbert spaces on the unit disk. Specifically we will consider the spaces of analytic functions $A^2_n$, where $A^2_n$ is the space with reproducing kernel $$k_n(z,w)=(1-z\overline{w})^{-n}$$ When $n=1$ this space is the Hardy space $H^2$, and its norm is given by $$\|f\|^2 =\lim_{r\to 1}\frac{1}{2\pi}\int_0^{2\pi} |f(re^{i\theta})|^2\, d\theta$$ For $n\geq 2$, the norm on $A^2_n$ is equivalent to $$\|f\|^2=\frac{1}{\pi}\int_\mathbb{D}|f(z)|^2 (1-|z|^2)^{n-2}\, dA(z)$$ though the norms are equal only for $n=2$ (the Bergman space).
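For later use we record that expanding the kernel, $k_n(z,w)=\sum_{k\geq0}\binom{n+k-1}{k}z^k\overline{w}^k$, gives $\|z^k\|^2_{A^2_n}=\binom{n+k-1}{k}^{-1}$; consequently multiplication by $z$ acts on the normalized monomials as a weighted shift with weights $\sqrt{(k+1)/(n+k)}$, which are at most $1$ and tend to $1$. In particular this operator is contractive and essentially normal, facts used below.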
An analytic function $\gamma:\mathbb{D}\to\mathbb{D}$ defines a *composition operator* $C_\gamma$ on $A^2_n$ by $$(C_\gamma f)(z)=f(\gamma(z))$$ In this paper, we will only consider cases where $\gamma$ is a Möbius transformation; in these cases $C_\gamma$ is easily seen to be bounded, by changing variables in the integrals defining the norms. An elementary calculation shows that if $\gamma :\mathbb{D}\to \mathbb{D}$ is analytic, then $$C_\gamma^* k_w(z) = k_{\gamma(w)}(z)$$ where $k$ is any of the reproducing kernels $k_n$.
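For completeness, this is the usual reproducing-kernel computation: for every $f$ in the space, $$\langle f, C_\gamma^* k_w\rangle = \langle C_\gamma f, k_w\rangle = f(\gamma(w)) = \langle f, k_{\gamma(w)}\rangle,$$ so $C_\gamma^* k_w = k_{\gamma(w)}$.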
We recall here the definitions of the full and reduced crossed product $C^*$-algebras; we refer to [@MR81e:46037] for details. Let a group $\Gamma$ act by homeomorphisms on a compact Hausdorff space $X$. This induces an action of $\Gamma$ on the commutative $C^*$-algebra $C(X)$ via $$(\gamma\cdot f)(x)=f(\gamma^{-1}\cdot x)$$ The *algebraic crossed product* $C(X)\times_{alg} \Gamma$ consists of formal finite sums $f=\sum_{\gamma\in\Gamma}f_\gamma [\gamma]$, where $f_\gamma\in C(\mathbb{T})$ and the $[\gamma]$ are formal symbols. Multiplication is defined in $C(X)\times_{alg}
\Gamma$ by $$\left(\sum_{\gamma\in\Gamma}f_\gamma [\gamma] \right)
\left(\sum_{\gamma^\prime\in\Gamma}f^\prime_{\gamma^\prime} [\gamma^\prime] \right) =
\sum_{\delta\in\Gamma}\sum_{\gamma\gamma^\prime=\delta} f_\gamma
(\gamma\cdot f^\prime_{\gamma^\prime})[\delta]$$ For $f=\sum_\gamma f_\gamma [\gamma]$, define $f^*\in
C(X)\times_{alg} \Gamma$ by $$f^* = \sum_{\gamma\in\Gamma} (\gamma \cdot \overline{f_{\gamma^{-1}}})
[\gamma]$$ With this multiplication and involution, $C(X)\times_{alg} \Gamma$ becomes a $*$-algebra, and we may construct a $C^*$-algebra by closing the algebraic crossed product with respect to a $C^*$-norm.
To obtain a $C^*$-norm, one constructs $*$-representations of $C(X)\times_{alg}\Gamma$ on Hilbert space. To do this, we first fix a faithful representation $\pi$ of $C(X)$ on a Hilbert space $\mathcal{H}$. We then construct a representation $\sigma$ of the algebraic crossed product on $\mathcal{H}\otimes\ell^2(\Gamma)=\ell^2(\Gamma, \mathcal{H})$ as follows: define a representation $\tilde{\pi}$ of $C(X)$ by its action on vectors $\xi\in\ell^2(\Gamma, \mathcal{H})$ $$(\tilde{\pi}(f))(\xi)(\gamma)=\pi(f\circ\gamma)\xi(\gamma)$$ Represent $\Gamma$ on $\ell^2(\Gamma,\mathcal{H})$ by left translation: $$(U(\gamma))(\xi)(\eta)=\xi(\gamma^{-1}\eta)$$ The representation $\sigma$ is then given by $$\sigma\left(\sum f_\gamma [\gamma]\right)=\sum \tilde{\pi}(f_\gamma) U(\gamma)$$ The closure of $C(X)\times_{alg}\Gamma$ with respect to the norm induced by this representation is called the *reduced crossed product* of $\Gamma$ and $C(X)$, and is denoted $C(X)\times_r\Gamma$. The *full crossed product*, denoted $C(X)\times \Gamma$, is obtained by taking the closure of the algebraic crossed product with respect to the maximal $C^*$-norm $$\|f\|=\sup_\pi \|\pi(f)\|$$ where the supremum is taken over *all* $*$-representations $\pi$ of $C(X)\times_{alg}\Gamma$ on Hilbert space. When $\Gamma$ is discrete, $C(X)\times \Gamma$ contains a canonical subalgebra isomorphic to $C(X)$, and there is a natural surjective $*$-homomorphism $\rho:C(X)\times\Gamma \to C(X)\times_r \Gamma$.
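A short computation shows how $U$ implements the action of $\Gamma$ on $\tilde{\pi}$: for $\xi\in\ell^2(\Gamma,\mathcal{H})$ and $\eta\in\Gamma$, $$\bigl(U(\gamma)\tilde{\pi}(f)U(\gamma)^*\xi\bigr)(\eta)=\pi\bigl(f\circ(\gamma^{-1}\eta)\bigr)\xi(\eta)=\bigl(\tilde{\pi}(\gamma\cdot f)\xi\bigr)(\eta),$$ so that $U(\gamma)\tilde{\pi}(f)U(\gamma)^*=\tilde{\pi}(\gamma\cdot f)$ for all $f\in C(X)$ and $\gamma\in\Gamma$.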
The full crossed product is important because of its universality with respect to covariant representations. A *covariant representation* of the pair $(\Gamma, X)$ consists of a faithful representation $\pi$ of $C(X)$ on Hilbert space, together with a unitary representation $u$ of $\Gamma$ on the same space satisfying the covariance condition $$u(\gamma)\pi(f)u(\gamma)^* = \pi(f\circ \gamma^{-1})$$ for all $f\in C(X)$ and all $\gamma\in\Gamma$. If $\mathcal{A}$ denotes the C\*-algebra generated by the images of $\pi$ and $u$, then there is a surjective \*-homomorphism from $C(X)\times\Gamma$ to $\mathcal{A}$; equivalently, any C\*-algebra generated by a covariant representation is isomorphic to a quotient of the full crossed product.
We now collect properties of group actions on topological spaces which we will require in what follows. Let a group $\Gamma$ act by homeomorphisms on a locally compact Hausdorff space $X$. The action of $\Gamma$ on $X$ is called *minimal* if the set $\{\gamma\cdot x |
\gamma\in \Gamma\}$ is dense in $X$ for each $x\in X$, and called *topologically free* if, for each $\gamma\in \Gamma$, the set of points fixed by $\gamma$ has empty interior.
Suppose now $\Gamma$ is discrete and $X$ is compact. Let $\text{Prob}(\Gamma)$ denote the set of finitely supported probability measures on $\Gamma$. We say $\Gamma$ *acts amenably on* $X$ if there exists a sequence of weak-\* continuous maps $b_x^n:X\to
\text{Prob}(\Gamma)$ such that for every $\gamma\in \Gamma$, $$\lim_{n\to \infty} \sup_{x\in X} \|\gamma\cdot b_x^n - b_{\gamma\cdot x}^n\|_1
=0$$ where $\Gamma$ acts on the functions $b_x^n$ via $(\gamma\cdot
b_x^n)(z)=b_x^n(\gamma^{-1}\cdot z)$, and $\|\cdot \|_1$ denotes the $l^1$-norm on $\Gamma$.
\[T:as\][@MR94m:46101] Let $\Gamma$ be a discrete group acting on a compact Hausdorff space $X$, and suppose that the action is topologically free. If $\mathcal{J}$ is an ideal in $C(X)\times \Gamma$ such that $C(X)\cap \mathcal{J}=0$, then $\mathcal{J}\subseteq \mathcal{J}_\lambda$, where $\mathcal{J}_\lambda$ is the kernel of the projection of the full crossed product onto the reduced crossed product.
\[T:amenability\][@MR2000g:19004; @MR86j:22014] If $\Gamma$ acts amenably on $X$, then the full and reduced crossed product $C^*$-algebras coincide, and this crossed product is nuclear.
In this paper we are concerned with the case where $X=\mathbb{T}$ and the group is a non-elementary Fuchsian group $\Gamma$. The action of $\Gamma$ on $\mathbb{T}$ is always amenable, so by Theorem \[T:amenability\] the full and reduced crossed products coincide and $C(\partial\mathbb{D})\times\Gamma$ is nuclear.
We also require some of the basic terminology concerning Fuchsian groups. Fix a base point $z_0\in\mathbb{D}$. The *limit set* of $\Gamma$ is the set of accumulation points of the orbit $\{\gamma(z_0) : \gamma\in\Gamma\}$ on the boundary circle; it is closed (and does not depend on the choice of $z_0$). The limit set can be one of three types: it is either finite (in which case it consists of at most two points), a totally disconnected perfect set (hence uncountable), or all of the circle. In the latter case we say $\Gamma$ is of the *first kind*, and of the *second kind* otherwise. If the limit set is finite, $\Gamma$ is called *elementary*, and *non-elementary* otherwise.
Though we do not require it in this paper, it is worth recording that if $\Gamma$ is of the first kind, then $C(\partial\mathbb{D})\times\Gamma$ is simple, nuclear, purely infinite, and belongs to the bootstrap category $\mathcal{N}$ (and is clearly separable and unital). Hence, it satisfies the hypotheses of the Kirchberg-Phillips classification theorem, and is classified up to isomorphism by its (unital) $K$-theory [@MR98a:46085; @MR2000k:46093].
Finally, we recall briefly the basic facts about extensions of C\*-algebras and the *Ext* functor. An exact sequence of C\*-algebras $$\begin{CD}
0 @>>> B @>>> E @>>> A @>>> 0
\end{CD}$$ is called an *extension* of $A$ by $B$. (In this paper we consider only extensions for which $B=\mathcal{K}$, the C\*-algebra of compact operators on a separable Hilbert space.) Associated to an extension of $A$ by $\mathcal{K}$ is a $*$-homomorphism $\tau$ from $A$ to the Calkin algebra $\mathcal{Q}(\mathcal{H})=\mathcal{B}(\mathcal{H})/\mathcal{K}(\mathcal{H})$; this $\tau$ is called the *Busby map* associated to the extension. Conversely, given a map $\tau:A\to \mathcal{Q}(\mathcal{H})$ there is a unique extension having Busby map $\tau$; we will thus speak of an extension and its Busby map interchangeably. Two extensions of $A$ by $\mathcal{K}$ with Busby maps $\tau_1:A\to \mathcal{Q}(\mathcal{H}_1)$ and $\tau_2:A\to \mathcal{Q}(\mathcal{H}_2)$ are *strongly unitarily equivalent* if there is a unitary $u:\mathcal{H}_1\to \mathcal{H}_2$ such that $$\label{E:ue}
\pi(u)\tau_1(a)\pi(u)^*=\tau_2(a)$$ for all $a\in A$ (here $\pi$ denotes the quotient map from $\mathcal{B}(\mathcal{H}_1)$ to $\mathcal{Q}(\mathcal{H}_1)$). We say $\tau_1$ and $\tau_2$ are *unitarily equivalent*, written $\tau_1\sim_u \tau_2$, if (\[E:ue\]) holds with $u$ replaced by some $v$ such that $\pi(v)$ is unitary. An extension $\tau$ is *trivial* if it lifts to a $*$-homomorphism, that is there exists a $*$-homomorphism $\rho:A\to \mathcal{B}(\mathcal{H})$ such that $\tau(a)=\pi(\rho(a))$ for all $a\in A$. Two extensions $\tau_1, \tau_2$ are *stably equivalent* if there exist trivial extensions $\sigma_1, \sigma_2$ such that $\tau_1\oplus\sigma_1$ and $\tau_2\oplus \sigma_2$ are strongly unitarily equivalent; stable equivalence is an equivalence relation. If $A$ is separable and nuclear (which will always be the case in this paper) then the stable equivalence classes of extensions of $A$ by $\mathcal{K}$ form an abelian group (where addition is given by direct sum of Busby maps) called $Ext(A,\mathcal{K})$, abbreviated $Ext(A)$. Finally, each element of $Ext(A)$ determines an *index homomorphism* $\partial:K_1(A)\to \mathbb{Z}$ obtained by lifting a unitary in $M_n(A)$ representing the $K_1$ class to a Fredholm operator in $M_n(E)$ and taking its Fredholm index.
The Extensions ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$
==================================================================
The Hardy space
---------------
This section is devoted to the proof of our main theorem, in the case of the Hardy space:
\[T:main\] Let $\Gamma$ be a non-elementary Fuchsian group, and let ${\mathcal{C}_\Gamma}$ denote the $C^*$-algebra generated by the set of composition operators on $H^2$ with symbols in $\Gamma$. Then there is an exact sequence $$\begin{CD}
0 @>>> \mathcal{K} @>\iota>> {\mathcal{C}_\Gamma}@>\pi >> C(\mathbb{T})\times \Gamma @>>> 0
\end{CD}$$
While the proof in the general case of $A^2_n$ follows similar lines, we prove the $H^2$ case first since it is technically simpler and illustrates the main ideas. The proof splits into three parts: first, we prove that ${\mathcal{C}_\Gamma}$ contains the unilateral shift $S$ and hence the compacts (Proposition \[P:shift\]). We then prove that the quotient ${\mathcal{C}_\Gamma}/\mathcal{K}$ is generated by a covariant representation of the topological dynamical system $(C(\mathbb{T}), \Gamma)$. Finally we prove that the $C^*$-algebra generated by this representation is all of the crossed product $C(\mathbb{T})\times \Gamma$.
We first require two computational lemmas.
\[L:factor\] Let $\gamma$ be an automorphism of $\mathbb{D}$ with $a=\gamma^{-1}(0)$ and let $$f(z)=\frac{1-\overline{a}z}{(1-|a|^2)^{1/2}}$$ Then $$C_\gamma C_\gamma^* = M_f M_f^*$$ where $M_f$ denotes the operator of multiplication by $f$.
[**Proof. **]{}It suffices to show $$\label{E:bilin}
\langle C_\gamma C_\gamma^* k_w, k_z\rangle = \langle M_f M_f^* k_w,
k_z\rangle$$ for all $z,w$ in $\mathbb{D}$. Since $C_\gamma^*k_\lambda =k_{\gamma(\lambda)}$ and $M_f^* k_\lambda = \overline{f(\lambda)}k_\lambda$, (\[E:bilin\]) reduces to the well-known identity $$\label{E:phi_ident}
\frac{1}{1-\gamma(z)\overline{\gamma(w)}}=\frac{(1-\overline{a}z)(1-a\overline{w})}{(1-|a|^2)(1-z\overline{w})}.$$
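For the reader’s convenience, here is a direct verification of (\[E:phi\_ident\]): any automorphism with $\gamma(a)=0$ has the form $\gamma(z)=e^{i\theta}\frac{z-a}{1-\overline{a}z}$, and the unimodular factor cancels in $\gamma(z)\overline{\gamma(w)}$, so $$1-\gamma(z)\overline{\gamma(w)} = \frac{(1-\overline{a}z)(1-a\overline{w})-(z-a)(\overline{w}-\overline{a})}{(1-\overline{a}z)(1-a\overline{w})} = \frac{(1-|a|^2)(1-z\overline{w})}{(1-\overline{a}z)(1-a\overline{w})},$$ and taking reciprocals gives (\[E:phi\_ident\]).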
\[L:commute\] Let $\gamma(z)$ be an automorphism of $\mathbb{D}$ and let $S=M_z$. Then $$M_{\gamma} C_\gamma = C_\gamma S$$
[**Proof. **]{}For all $g\in H^2$, $$\begin{aligned}
\nonumber (C_\gamma S)(g)(z) &= \gamma(z)g(\gamma(z)) \\
\nonumber &= (M_\gamma C_\gamma)(g)(z)\end{aligned}$$
\[P:shift\] Let $\Gamma$ be a non-elementary Fuchsian group. Then ${\mathcal{C}_\Gamma}$ contains the unilateral shift $S$.
[**Proof. **]{}Let $\Lambda$ denote the limit set of $\Gamma$. Since $\Gamma$ is non-elementary, there exist three distinct points $\lambda_1$, $\lambda_2$, $\lambda_3\in\Lambda$. For each $\lambda_i$ there exists a sequence $\gamma_{n,i}$ such that $\gamma_{n,i}(0)\to \lambda_i$. Let $a_i^n
=\gamma_{n,i} (0)$. By Lemma \[L:factor\], $$(1-|a_i^n|^2)C_{\gamma^{-1}_{n,i}}C_{\gamma^{-1}_{n,i}}^* = (1-\overline{a_i^n} S)(1-\overline{a_i^n} S)^*$$ As $n\to \infty$, $a_i^n \to \lambda_i$ and the right-hand side converges to $$1-\overline{\lambda_i}S-\lambda_i S^* +SS^*$$ in ${\mathcal{C}_\Gamma}$. Taking differences of these operators for different values of $i$, we see that $\Re [\mu_1 S],\ \Re [\mu_2
S]\in {\mathcal{C}_\Gamma}$, with $\mu_1 =\overline{\lambda_1 -\lambda_2}$ and $\mu_2=\overline{\lambda_2 -\lambda_3}$. Note that since $\lambda_1$, $\lambda_2$, $\lambda_3$ are distinct points on the circle, the complex numbers $\mu_1$ and $\mu_2$ are linearly independent over $\mathbb{R}$ when identified with vectors in $\mathbb{R}^2$. We now show there exist scalars $a_1,\ a_2$ such that $$a_1 \Re [\mu_1 S] + a_2 \Re [\mu_2 S] =S$$ which proves the lemma. Such scalars must solve the linear system $$\begin{pmatrix} \mu_1 && \mu_2 \\
\overline{\mu_1} && \overline{\mu_2} \end{pmatrix}
\begin{pmatrix} a_1 \\
a_2 \end{pmatrix} = \begin{pmatrix} 1 \\
0
\end{pmatrix}$$ Writing $\mu_1=\alpha_1 +i\beta_1$, $\mu_2=\alpha_2+i\beta_2$, a short calculation shows that $$\det \begin{pmatrix} \mu_1 && \mu_2 \\
\overline{\mu_1} && \overline{\mu_2} \end{pmatrix}
=-2i \det \begin{pmatrix} \alpha_1 && \alpha_2 \\
\beta_1 && \beta_2 \end{pmatrix}$$ The latter determinant is nonzero since $\mu_1$, $\mu_2$ are linearly independent over $\mathbb{R}$, and the system is solvable. $\square$
The above argument does not depend on the discreteness of $\Gamma$; indeed it is a refinement of an argument due to J. Moorhouse [@moorhouse-thesis] that the $C^*$-algebra generated by *all* Möbius transformations contains $S$. The argument is valid for any group which has an orbit with three accumulation points on $\mathbb{T}$; e.g. the conclusion holds for any dense subgroup of $PSU(1,1)$.
**Proof of Theorem \[T:main\]** For an automorphism $\gamma$ of $\mathbb{D}$ set $$U_\gamma =(C_\gamma C_\gamma^*)^{-1/2} C_\gamma,$$ the unitary appearing in the polar decomposition of $C_\gamma$. By Lemma \[L:factor\], $C_\gamma C_\gamma^*=T_f T_f^*$, so we may write $$U_\gamma = T_{|f|^{-1}} C_\gamma +K$$ for some compact $K$. Now by Lemma \[L:commute\], if $p$ is any analytic polynomial, $$\begin{aligned}
\nonumber U_\gamma T_p &=T_{|f|^{-1}} C_\gamma T_p +K^\prime\\
\nonumber &= T_{|f|^{-1}} T_{p\circ \gamma} C_\gamma +K^\prime\\
\nonumber &= T_{p \circ \gamma}T_{|f|^{-1}} C_\gamma +K^{\prime\prime} \\
\nonumber &= T_{p\circ \gamma}U_\gamma +K^{\prime\prime} \end{aligned}$$ where we have used the fact that $T_{|f|^{-1}}$ and $T_p$ commute modulo $\mathcal{K}$. Taking adjoints and sums shows that $$\label{E:polycovar}
U_\gamma T_q U_\gamma^* =T_{q\circ \gamma} +K$$ for any trigonometric polynomial $q$. We next show that, for Möbius transformations $\gamma, \eta$, $$U_\gamma U_\eta = U_{\eta\circ\gamma} +K$$ for some compact $K.$ To see this, write $$\gamma(z)=\frac{az+b}{cz+d}, \qquad \eta(z)=\frac{ez+f}{gz+h}$$ Then $$U_\gamma = T_{|cz+d|^{-1}} C_\gamma +K_1, \qquad U_\eta = T_{|gz+h|^{-1}}
C_\eta +K_2.$$ Note that $$U_{\eta\circ\gamma} = T_{|g(az+b)+h(cz+d)|^{-1}}C_{\eta\circ\gamma} +K_3.$$ Then $$\begin{aligned}
\nonumber U_\gamma U_\eta &= T_{|cz+d|^{-1}} C_\gamma T_{|gz+h|^{-1}} C_\eta +K\\
\nonumber &= T_{|cz+d|^{-1}} T_{|g\gamma (z)+h|^{-1}} C_\gamma C_\eta
+K^\prime \\
\nonumber &=T_{|g(az+b)+h(cz+d)|^{-1}}C_{\eta\circ\gamma}+K^\prime \\
\nonumber &= U_{\eta\circ\gamma} +K^{\prime\prime}\end{aligned}$$
Observe now that ${\mathcal{C}_\Gamma}$ is equal to the C\*-algebra generated by $S$ and the unitaries $U_\gamma$. We have already shown that ${\mathcal{C}_\Gamma}$ contains $S$ and each $U_\gamma$; for the converse we recall that $C_\gamma=(C_\gamma C_\gamma^*)^{1/2}U_\gamma$, and since $C_\gamma C_\gamma^*$ lies in $C^*(S)$ by Lemma \[L:factor\], we see that $C_\gamma$ lies in the C\*-algebra generated by $S$ and $U_\gamma$. Letting $\pi$ denote the quotient map $\pi:{\mathcal{C}_\Gamma}\to{\mathcal{C}_\Gamma}/\mathcal{K}$, it follows that ${\mathcal{C}_\Gamma}/\mathcal{K}$ is generated as a $C^*$-algebra by a copy of $C(\mathbb{T})$ and the unitaries $\pi(U_\gamma)$, and the map $\gamma\to\pi(U_{\gamma^{-1}})$ defines a unitary representation of $\Gamma$. Let $\alpha:\Gamma\to\text{Aut}(C(\mathbb{T}))$ be given by $$\alpha_\gamma (f)(z) = f(\gamma^{-1}(z))$$ Then by (\[E:polycovar\]), $$\label{E:cov}
\pi(U_{\gamma^{-1}})\pi(T_f)\pi(U_{\gamma^{-1}}^*)=\pi(T_{f\circ\gamma^{-1}})=\pi(T_{\alpha_\gamma(f)})$$ for all trigonometric polynomials $f$, and hence for all $f\in
C(\mathbb{T})$ by continuity. Thus, ${\mathcal{C}_\Gamma}/\mathcal{K}$ is generated by $C(\mathbb{T})$ and a unitary representation of $\Gamma$ satisfying the relation (\[E:cov\]). Therefore there is a surjective $*$-homomorphism $\rho:C(\mathbb{T})\times\Gamma \to {\mathcal{C}_\Gamma}/\mathcal{K}$ satisfying $$\rho(f)=\pi(T_f), \qquad \rho(u_\gamma)=\pi(U_{\gamma^{-1}})$$ for all $f\in C(\mathbb{T})$ and all $\gamma\in\Gamma$. Let $\mathcal{J}=\text{ker}\, \rho$. The theorem will be proved once we show $\mathcal{J}=0$. To do this, we use Theorems \[T:as\] and \[T:amenability\]. Indeed, it suffices to show that $C(\mathbb{T})\cap\mathcal{J}=0$: then $\mathcal{J}\subset \mathcal{J}_\lambda$ by Theorem \[T:as\], and $\mathcal{J}_\lambda =0$ by Theorem \[T:amenability\].
To see that $C(\mathbb{T})\cap\mathcal{J}=0$, choose $f\in C(\mathbb{T})\cap \mathcal{J}$. Then $\pi(T_f)=\rho(f)=0$, which means that $T_f$ is compact. But then $f=0$, since nonzero Toeplitz operators are non-compact.
$\square$
Other Hilbert spaces
--------------------
In this section we prove the analogue of Theorem \[T:main\] for composition operators acting on the reproducing kernel Hilbert spaces with kernel given by $$k(z,w)=\frac{1}{{{(1-z\overline{w})}^n}}$$ for integers $n\geq2$.
We let $A^2_n$ denote the Hilbert function space on $\mathbb{D}$ with kernel $k(z,w)={(1-z\overline{w})}^{-n}$. For $n\geq2$, this space consists of those analytic functions in $\mathbb{D}$ for which $$\int_\mathbb{D} |f(w)|^2 (1-|w|^2)^{n-2}dA(w)$$ is finite, and the square root of this quantity is a norm on $A^2_n$ equivalent to the norm determined by the reproducing kernel $k$ (they are equal for $n=2$). We fix $n\geq2$ for the remainder of this section, and let $T$ denote the operator of multiplication by $z$ on $A^2_n$. The operator $T$ is a contractive weighted shift, and essentially normal. Moreover, if $f$ is any bounded analytic function on $\mathbb{D}$, then multiplication by $f$ is bounded on $A^2_n$ and is denoted $M_f$. If $\gamma$ is a Möbius transformation, then $C_\gamma$ is bounded on $A^2_n$. We let ${\mathcal{C}_\Gamma}^n$ denote the C\*-algebra generated by the operators $\{C_\gamma :\gamma\in\Gamma\}$ acting on $A^2_n$. In this section we prove:
\[T:main\_gen\] The C\*-algebra ${\mathcal{C}_\Gamma}^n$ contains $C^*(T)$, and in particular the compact operators $\mathcal{K}(A^2_n)$. The map defined by $\pi(C_\gamma)=|\gamma^\prime|^{n/2} u_\gamma$ extends to a $*$-homomorphism from ${\mathcal{C}_\Gamma}^n$ onto $C(\partial\mathbb{D})\times\Gamma$, and there is an exact sequence $$0\to \mathcal{K}(A^2_n)\to {\mathcal{C}_\Gamma}^n\to C(\partial\mathbb{D})\times\Gamma \to 0$$
We collect the following computations together in a single Lemma, the analogue for $A^2_n$ of the lemmas at the beginning of the previous section.
\[L:cgn\_calc\]For $\gamma\in\Gamma$, with $a=\gamma^{-1}(0)$ and $n$ fixed, let $$f(z)=\frac{{(1-\overline{a}z)}^n}{{(1-{|a|}^2)}^{n/2}}$$ Then $C_\gamma C_\gamma^* =M_fM_f^*$, and $M_\gamma C_\gamma = C_\gamma T$.
[**Proof. **]{}As in the Hardy space, for the first equation it suffices to prove that $$\label{E:bilin_general}
\langle C_\gamma C_\gamma^* k_w, k_z\rangle = \langle M_f M_f^* k_w,
k_z\rangle$$ for all $z,w$ in $\mathbb{D}$. This is the same as the identity $$\label{E:phi_ident_general}
{\left( \frac{1}{1-\gamma(z)\overline{\gamma(w)}}\right)}^n=\frac{{(1-\overline{a}z)}^n{(1-a\overline{w})}^n}{{(1-|a|^2)}^n{(1-z\overline{w})}^n}.$$ which is just (\[E:phi\_ident\]) raised to the power $n$. As before, the second equation follows immediately from the definitions. $\square$
The proof of Theorem \[T:main\_gen\] follows the same lines as that for the Hardy space, except that it requires more work to show that ${\mathcal{C}_\Gamma}^n$ contains $T$. We first prove that ${\mathcal{C}_\Gamma}^n$ is irreducible.
\[P:reduce\] The $C^*$-algebra ${\mathcal{C}_\Gamma}^n$ is irreducible.
[**Proof. **]{}We claim that the operator $T^n$ lies in ${\mathcal{C}_\Gamma}^n$; we first show how irreducibility follows from this and then prove the claim.
By [@MR2001h:47044], the only reducing subspaces of $T^n$ are direct sums of subspaces of the form $$X_k=\overline{\text{span}}\{z^{k+mn}|m=0,1,2,\dots\}$$ for $0\leq k\leq n-1$. Thus, since $T^n\in {\mathcal{C}_\Gamma}^n$, these are the only possible reducing subspaces for ${\mathcal{C}_\Gamma}^n$. Suppose, then, that $X$ is a nontrivial direct sum of distinct subspaces $X_k$ (i.e. $X\neq
A^2_n$). Observe that, for each $k$, either $X$ contains $X_k$ as a summand, or $X$ is orthogonal to $X_k$. In the latter case (which we assume holds for some $k$), the $k^{th}$ Taylor coefficient of every function in $X$ vanishes.
It is now easy to see that $X$ cannot be reducing (or even invariant) for ${\mathcal{C}_\Gamma}^n$. Indeed, if $X$ does not contain $X_0$ as a summand, then every function $f\in X$ vanishes at the origin, but if $\gamma\in \Gamma$ does not fix the origin then $C_\gamma f\notin X$. On the other hand, if $X$ contains the scalars, then consider the operator $F(\lambda)=(1-\overline{\lambda}T)^n(1-\lambda T^*)^n$, for $\lambda\in\Lambda$. Since $T^*$ annihilates the scalars, we have $F(\lambda)1=(1-\overline{\lambda}z)^n=p(z)$. But since $|\lambda|=1$, the $k^{th}$ Taylor coefficient of $p$ does not vanish, so $p\notin X$. But since $F(\lambda)$ belongs to ${\mathcal{C}_\Gamma}^n$, it follows that $X$ is not invariant for ${\mathcal{C}_\Gamma}^n$ and hence ${\mathcal{C}_\Gamma}^n$ is irreducible.
To prove that $T^n\in {\mathcal{C}_\Gamma}^n$, we argue along the lines of Proposition \[P:shift\], though the situation is somewhat more complicated. Using the same reproducing kernel argument as in that Proposition, we see that the operators $$\begin{aligned}
F(\lambda)&=(1-\overline{\lambda}T)^n(1-\lambda T^*)^n \\
&= \sum_{j=0}^n\sum_{k=0}^n (-1)^{(j+k)} \binom{n}{j}\binom{n}{k}\overline{\lambda}^k \lambda^j T^k T^{*j} \\
&=\sum_{d=0}^n \binom{n}{d}^2 T^d T^{*d} +2\Re \sum_{m=1}^n (-1)^m\overline{\lambda}^m \sum_{m\leq k\leq n}\binom{n}{k}\binom{n}{k-m}T^k T^{*{k-m}} \\
&= \sum_{d=0}^n \binom{n}{d}^2 T^d T^{*d} + \sum_{m=1}^n \overline{\lambda}^m E_m + \lambda^m E_m^*\end{aligned}$$ lie in ${\mathcal{C}_\Gamma}^n$ for all $\lambda$ in the limit set $\Lambda$ of $\Gamma$. Here we have adopted the notation $$E_m = (-1)^m \sum_{m\leq k\leq n} \binom{n}{k}\binom{n}{k-m}T^k T^{* k-m}$$ Note in particular that $E_n=(-1)^n T^n$. Forming differences as $\lambda$ ranges over $\Lambda$ shows that ${\mathcal{C}_\Gamma}^n$ contains all operators of the form $$G(\lambda,\mu)=F(\lambda)-F(\mu) = \sum_{m=1}^n (\overline{\lambda}^m-\overline{\mu}^m)E_m +(\lambda^m-\mu^m)E_m^*$$ for all $\lambda, \mu \in \Lambda$. We wish to obtain $T^n$ as a linear combination of the $G(\lambda,\mu)$; since $E_n$ is a scalar multiple of $T^n$ it suffices to show that there exist $2n$ pairs $(\lambda_j, \mu_j)\in \Lambda\times\Lambda$ and $2n$ scalars $\alpha_j$ such that $$E_n = \sum_{j=1}^{2n} \alpha_j G(\lambda_j, \mu_j)$$ Let $L$ be the $2n\times 2n$ matrix whose $j^{th}$ column is $$\begin{pmatrix} \overline{\lambda_j}-\overline{\mu_j} \\
\lambda_j-\mu_j \\
\overline{\lambda_j}^2-\overline{\mu_j}^2 \\
\lambda_j^2-\mu_j^2 \\
\vdots \\
\overline{\lambda_j}^n-\overline{\mu_j}^n \\
\lambda_j^n-\mu_j^n \\
\end{pmatrix}$$ We must therefore solve the linear system $$L \begin{pmatrix} \alpha_1 \\
\vdots \\
\alpha_{2n-1}\\
\alpha_{2n}\\
\end{pmatrix}
= \begin{pmatrix} 0 \\
\vdots \\
1 \\
0 \end{pmatrix}$$ for which it suffices to show that the matrix $L$ is nonsingular for some choice of the $\lambda_j$ and $\mu_j$ in $\Lambda$. To do this, we fix $2n+1$ distinct points $z_0, z_1,\dots
z_{2n}$ in $\Lambda$ and set $\lambda_j=z_0$ for all $j$ and $\mu_j=z_j$ for $j=1,\dots 2n$. The matrix $L$ then becomes the matrix whose $j^{th}$ column is $$\begin{pmatrix} \overline{z_0}-\overline{z_j} \\
z_0-z_j \\
\overline{z_0}^2-\overline{z_j}^2 \\
z_0^2-{z_j}^2 \\
\vdots \\
\overline{z_0}^n-\overline{z_j}^n \\
z_0^n-{z_j}^n \\
\end{pmatrix}$$ To prove that $L$ is nonsingular, we prove that its rows are linearly independent. To this end, let $c_j$ and $d_j$, $j=1,\dots n$ be scalars such that for each $k=1, \dots 2n$, $$\label{E:roots}
\sum_{j=1}^n c_j (z_0^j-z_k^j)+\sum_{j=1}^n d_j (\overline{z_0}^j-\overline{z_k}^j) = 0$$ To see that all of the $c_j$ and $d_j$ must be $0$, consider the harmonic polynomial $$P(z,\overline{z})=\sum_{j=1}^n c_j (z_0^j-z^j)+\sum_{j=1}^n d_j (\overline{z_0}^j-\overline{z}^j)$$ By (\[E:roots\]), $P$ has $2n+1$ distinct zeroes on the unit circle, namely the points $z_0, \dots z_{2n}$. But this means that the rational function $$Q(z)=\sum_{j=1}^n c_j (z_0^j-z^j)+\sum_{j=1}^n d_j (\overline{z_0}^j-\frac{1}{z^j})$$ also has $2n+1$ zeroes on the circle. But since the degree of $Q$ is at most $2n$, it follows that $Q$ must be the zero polynomial, and hence all the $c_j$ and $d_j$ are $0$.
\[P:bergshift\] ${\mathcal{C}_\Gamma}^n$ contains $T$.
[**Proof. **]{} We have established that for each $\lambda$ in the limit set, the operators $$(1-\overline{\lambda}T)^n(1-\overline{\lambda}T)^{*n}$$ lie in ${\mathcal{C}_\Gamma}^n$. Now, since $T$ is essentially normal, the difference $$(1-\overline{\lambda}T)^n(1-\overline{\lambda}T)^{*n} -
[(1-\overline{\lambda}T)(1-\overline{\lambda}T)^*]^n$$ is compact. Given two positive operators which differ by a compact, their (unique) positive $n^{th}$ roots also differ by a compact. The positive $n^{th}$ root of the left-hand term above is thus equal to $$(1-\overline{\lambda}T)(1-\overline{\lambda}T)^*+K$$ for some compact operator $K$ (depending on $\lambda$), and lies in ${\mathcal{C}_\Gamma}^n$. Forming linear combinations as in Proposition \[P:shift\], we conclude that $T+K$ lies in ${\mathcal{C}_\Gamma}^n$ for some compact $K$. Now, as ${\mathcal{C}_\Gamma}^n$ is irreducible and contains a nonzero compact operator (namely, the self-commutator of $T+K$), it contains all the compacts, and hence $T$. $\square$
We now prove the analogue of Theorem \[T:main\] for ${\mathcal{C}_\Gamma}^n$:
\[T:berg\_main\] Let $\Gamma$ be a non-elementary Fuchsian group, and let ${\mathcal{C}_\Gamma}^n$ denote the $C^*$-algebra generated by the set of composition operators on $A^2_n$ with symbols in $\Gamma$. Then there is an exact sequence $$\begin{CD}
0 @>>> \mathcal{K} @>\iota>> {\mathcal{C}_\Gamma}^n @>\pi >> C(\mathbb{T})\times \Gamma @>>> 0
\end{CD}$$
[**Proof. **]{}The proof is similar to that of Theorem \[T:main\]; we define the unitary operators $U_\gamma$ in the same way (using the polar decomposition of $C_\gamma$), and check that the map that sends $\gamma$ to $U_{\gamma^{-1}}$ is a unitary representation of $\Gamma$ on $A^2_n$, modulo $\mathcal{K}$. The computation to check the covariance condition is essentially the same. As ${\mathcal{C}_\Gamma}^n$ is generated by $T$ and the unitaries $U_\gamma$, the quotient ${\mathcal{C}_\Gamma}^n/\mathcal{K}$ is generated by a copy of $C(\partial\mathbb{D})$ (since $\partial\mathbb{D}$ is the essential spectrum of $T$) and a representation of $\Gamma$ which satisfy the covariance condition; the rest of the proof proceeds exactly as for $H^2$, up until the final step. At the final step in that proof, we have a Toeplitz operator $T_f$ which is compact; this implies that $f=0$ in the $H^2$ case but not for the spaces $A^2_n$. However the symbol of a compact Toeplitz operator on $A^2_n$ must vanish at the boundary of $\mathbb{D}$, so we may still conclude that $f=0$ on $\partial\mathbb{D}$ and the proof is complete.$\square$
$K$-Homology of $C(\partial\mathbb{D})\times\Gamma$
===================================================
$Ext$ classes of ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$
--------------------------------------------------------------------
In this section we prove that the extensions of $C(\partial\mathbb{D})\times
\Gamma$ determined by ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ determine the same element of the group $\text{Ext}(C(\partial\mathbb{D})\times\Gamma)$, and we prove that ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ are isomorphic as $C^*$-algebras. (In general neither of these statements implies the other.) We borrow several ideas from \[Rørdam\] concerning the classification of extensions by the associated six-term exact sequence in $K$-theory. The reader is referred to [@MR2001g:46001] for an introduction to $K$-theory and [@MR88g:46082 Chapter 15] for the basic definitions and theorems concerning extensions of $C^*$-algebras and the $Ext$ group.
\[T:extclasses\] The extensions of $C(\partial\mathbb{D})\times\Gamma$ determined by ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ define the same element of the group $\text{Ext}(C(\partial\mathbb{D})\times\Gamma)$.
We will show that the Busby maps of the extensions determined by ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ are strongly unitarily equivalent. To do this we introduce an integral operator $V\in\mathcal{B}(A^2_n,A^2_{n-1})$ and prove the following lemma, which describes the properties of $V$ needed in the proof of the theorem.
\[L:v\_operator\]Consider the integral operator defined on analytic polynomials $f(w)$ by $$(Vf)(z)=\int_{\mathbb{D}}\frac{f(w)}{(1-\overline{w}z)}(1-|w|^2)^{-1/2}\, dA(w)$$ where $dA$ is normalized Lebesgue area measure on $\mathbb{D}$. Then:
1. For each $n\geq0$, $V$ extends to a bounded diagonal operator from $A^2_{n+1}$ to $A^2_n$, and there exists a compact operator $K_n$ such that $V+K_n$ is unitary.
2. If $T_{n+1}$ and $T_n$ denote multiplication by $z$ on $A^2_{n+1}$ and $A^2_n$ respectively, then $VT_{n+1}-T_nV$ is compact.
3. If $g$ is continuous on $\overline{\mathbb{D}}$ and analytic in $\mathbb{D}$, then the integral operator $$(V_{\overline{g}}f)(z)=\int_\mathbb{D}\frac{f(w)\overline{g(w)}}{1-\overline{w}z}(1-|w|^2)^{-1/2}dA(w)$$ is bounded from $A^2_{n+1}$ to $A^2_n$, and is a compact perturbation of $VM_g^*$.
[**Proof. **]{}If $f(w)=w^k$, then the integrand is absolutely convergent for each $z\in\mathbb{D}$, and we calculate $$\begin{aligned}
(Vf)(z) &=\int_\mathbb{D}\frac{w^k}{1-\overline{w}z}(1-|w|^2)^{-1/2}dA(w) \\
&= \sum_{j=0}^\infty\int_\mathbb{D}w^k\overline{w}^j z^j (1-|w|^2)^{-1/2}dA(w)\\
&= \left(\int_\mathbb{D}|w|^{2k} (1-|w|^2)^{-1/2}dA(w)\right) z^k \\
&= \alpha_k z^k \end{aligned}$$ It is known that the sequence $(\alpha_k)$ defined by the above integral is asymptotically $(k+1)^{-1/2}$. Each space $A^2_n$ has an orthonormal basis of the form $\beta_k z^k$, where the sequence $\beta_k$ is asymptotically $(k+1)^{(n-1)/2}$. It follows that $V$ intertwines orthonormal bases for $A^2_{n+1}$ and $A^2_n$ modulo a compact diagonal operator, which establishes the first statement. Moreover, the second statement also follows, by observing that the operators $T_{n+1}$ and $T_n$ are weighted shifts with weight sequences asymptotic to $1$.
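(With the normalization $dA(\mathbb{D})=1$ this is a Beta integral: passing to polar coordinates and substituting $u=r^2$ gives $$\alpha_k=\int_\mathbb{D}|w|^{2k}(1-|w|^2)^{-1/2}\,dA(w)=\int_0^1 u^k(1-u)^{-1/2}\,du=B(k+1,\tfrac{1}{2})=\frac{\sqrt{\pi}\,\Gamma(k+1)}{\Gamma(k+\tfrac{3}{2})}\sim\sqrt{\pi}\,(k+1)^{-1/2},$$ so $\alpha_k$ is indeed comparable to $(k+1)^{-1/2}$.)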
To prove the last statement, first suppose $g(w)=w$, and let $\beta_k z^k$ be an orthonormal basis for $A^2_{n+1}$. Then a direct computation shows that $V_{\overline{g}}$ is a weighted backward shift with weight sequence $\alpha_{j+1} \beta_j /\beta_{j+1}$, $j=0, 1, \dots$. Thus with respect to this basis, $V^*V_{\overline{z}}$ is a weighted shift with weight sequence $\alpha_{j+1}\beta_j /\alpha_j\beta_{j+1}$. Since $\lim_{j \to\infty}\alpha_{j+1}/\alpha_j=1$, it follows that $V^*V_{\overline{z}}$ is a compact perturbation of $M_z^*$. By linearity, the lemma holds for polynomial $g$. Finally, if $p_n$ is a sequence of polynomials such that $M_{p_n}$ converges to $M_g$ in the operator norm (which is equal to the supremum norm of the symbol), then $M_{p_n}$ converges to $M_g$ in the essential norm, and $p_n$ converges to $g$ uniformly, so that $V_{\overline{p_n}}\to V_{\overline{g}}$ in norm and the result follows.
**Proof of Theorem \[T:extclasses\]**
We will prove that the Busby maps of the extensions determined by ${\mathcal{C}_\Gamma}$ and ${\mathcal{C}_\Gamma}^n$ are strongly unitarily equivalent. It suffices to prove, for each fixed $n$, the equivalence between ${\mathcal{C}_\Gamma}^n$ and ${\mathcal{C}_\Gamma}^{n+1}$. We first establish some notation: for a function $g$ in the disk algebra, we let $M_g$ denote the multiplication operator with symbol $g$ acting on $A^2_{n+1}$, and $\widetilde{M}_g$ the corresponding operator acting on $A^2_n$. Similarly, for the composition operators $C_\gamma$ and the unitaries $U_\gamma$ on $A^2_{n+1}$, a tilde denotes the corresponding operator on $A^2_n$. Now, since each of the algebras ${\mathcal{C}_\Gamma}^n, {\mathcal{C}_\Gamma}^{n+1}$ is generated by the operators $M_z$ (resp. $\widetilde{M}_z$) and the unitaries $U_\gamma$ (resp. $\widetilde{U}_\gamma$) on the respective Hilbert spaces, it suffices to produce a unitary $U$ (or indeed a compact perturbation of a unitary) from $A^2_{n+1}$ to $A^2_n$ such that the operators $UM_z-\widetilde{M}_zU$ and $UU_\gamma-\widetilde{U}_\gamma U$ are compact for all $\gamma\in\Gamma$. In fact we will prove that the operator $V$ of the previous lemma does the job.
To prove this, we will first calculate the operator $\widetilde{C}_\gamma VC_{\gamma^{-1}}$. Written as an integral operator, $$(\widetilde{C}_\gamma VC_{\gamma^{-1}}
f)(z)=\int_\mathbb{D}\frac{f(\gamma^{-1}(w))}{1-\overline{w}\gamma(z)}(1-|w|^2)^{-1/2}\,dA(w)$$ Applying the change of variables $w\to\gamma(w)$ to this integral, we obtain $$(\widetilde{C}_\gamma VC_{\gamma^{-1}}
f)(z)=\int_\mathbb{D}\frac{f(w)}{1-\overline{\gamma(w)}\gamma(z)}(1-|\gamma(w)|^2)^{-1/2}|\gamma^\prime(w)|^2\,dA(w)$$ A little algebra shows that $$1-|\gamma(w)|^2 = (1-|w|^2)|\gamma^\prime(w)|$$ Furthermore, multiplying and dividing the identity (\[E:phi\_ident\]) by $1-\overline{a}w$, we obtain $$\frac{1}{1-\overline{\gamma(w)}\gamma(z)}=\frac{1-\overline{a}z}{(1-\overline{a}w)(1-\overline{w}z)}|\gamma^\prime(w)|^{-1}$$ Thus the integral may be transformed into $$(\widetilde{C}_\gamma VC_{\gamma^{-1}} f)(z)=\int_\mathbb{D}\frac{f(w)}{1-\overline{w}z}\frac{1-\overline{a}z}{1-\overline{a}w}|\gamma^\prime(w)|^{1/2}(1-|w|^2)^{-1/2}\,dA(w)$$ Since $\gamma^\prime$ is analytic and non-vanishing in a neighborhood of the closed disk, there exists a function $\psi$ in the disk algebra such that $|\psi|^2=|\gamma^\prime|^{1/2}$ (choose $\psi$ to be a branch of $(\gamma^\prime)^{1/4}$). Thus we may rewrite this integral as $$\begin{aligned}
(\widetilde{C}_\gamma VC_{\gamma^{-1}} f)(z) &=\int_\mathbb{D}\frac{f(w)}{1-\overline{w}z}\frac{1-\overline{a}z}{1-\overline{a}w}\psi(w)\overline{\psi(w)}(1-|w|^2)^{-1/2}\,dA(w)\\
&= M_{1-\overline{a}z}V_{\overline{\psi}}M_{(1-\overline{a}z)^{-1}\psi(z)}\end{aligned}$$ Now, applying the lemma to $V_{\overline{\psi}}$ and using the fact that $V$ intertwines the multiplier algebras of $A^2_{n+1}$ and $A^2_n$ modulo the compacts, we have $$\begin{aligned}
\widetilde{C}_\gamma VC_{\gamma^{-1}} &= M_{1-\overline{a}z}V_{\overline{\psi}}M_{(1-\overline{a}z)^{-1}\psi(z)}\\
&= M_{1-\overline{a}z} VM_\psi^*M_\psi M_{(1-\overline{a}z)^{-1}} +K\\
& = & VM_\psi^*M_\psi +K\end{aligned}$$ modulo the compacts. Multiplying on the right by $C_{\gamma}$ and on the left by $V^*$ we get $$\label{E:VCV}
V^*\widetilde{C}_\gamma V =M_\psi^* M_\psi C_\gamma +K$$ Since $\psi^2(z)=(1-|a|^2)^{1/2}(1-\overline{a}z)^{-1}$, the calculations in the proof of \[T:main\] show that $$(C_\gamma C_\gamma^*)^{-1/2}=M_{\psi^{n+1}}M_{\psi^{n+1}}^* \quad\text{and}\quad (\widetilde{C}_\gamma \widetilde{C}_\gamma^*)^{-1/2}=\widetilde{M}_{\psi^{n}}\widetilde{M}_{\psi^{n}}^*$$ Using the fact that $V$ intertwines the C\*-algebras generated by $M_z$ and $\widetilde{M}_z$ modulo $\mathcal{K}$, and that these algebras are commutative modulo $\mathcal{K}$, we conclude after multiplying both sides of (\[E:VCV\]) on the left by $M_{\psi^n}M_{\psi^n}^*$ that $$V^*\widetilde{U}_\gamma V = V^*\widetilde{M}_{\psi^{n}}\widetilde{M}_{\psi^{n}}^* \widetilde{C}_\gamma V+K=M_{\psi^{n+1}}M_{\psi^{n+1}}^* C_\gamma +K =U_\gamma +K$$ which completes the proof.
$C^*$-algebras isomorphic to ${\mathcal{C}_\Gamma}$
---------------------------------------------------
In this section we prove that all unital extensions of $C(\partial\mathbb{D})\times\Gamma$ which determine the same *Ext* class as $\mathcal{C}_\Gamma$ have isomorphic pull-backs, i.e. the middle term in the exact sequence is isomorphic to $\mathcal{C}_\Gamma$. To prove this, we apply Voiculescu’s theorem to obtain a stable isomorphism and use an analysis of the $K_0$-classes of finite projections in ${\mathcal{C}_\Gamma}$ to obtain a unital isomorphism.
We recall that if $A$ is a unital, nuclear C\*-algebra, an extension of $A$ by $\mathcal{K}$ is called *unital* if the Busby map $\tau:A\to \mathcal{Q}(\mathcal{H})$ is unital. A trivial extension is called *strongly unital* if $\tau$ lifts to a unital homomorphism $\rho:A\to \mathcal{B}(\mathcal{H})$. (Not all unital trivial extensions are strongly unital.) We begin with the following lemma, which is a special case of [@MR99b:46108 Proposition 5.1].
Let $A$ be a unital $C^*$-algebra, and let $\tau_0=\pi\circ \alpha$, $\tau_1:A\to \mathcal{Q}(\mathcal{H})$ be unital extensions, with $\tau_0$ trivial. Then there is an isometry $v\in \mathcal{B}(\mathcal{H})$ such that $\alpha(1_A)=vv^*$. Set $\alpha^\prime(a)=v^*\alpha(a)v$ and set $\tau^\prime_0 = \pi\circ \alpha^\prime$, so that $\tau_0^\prime$ is a strongly unital trivial extension. Let $$\nonumber\begin{CD}
e:\quad 0 @>>> \mathcal{K} @>>> E @>\psi >> A @>>> 0
\end{CD}$$ $$\nonumber\begin{CD}
e^\prime:\quad 0 @>>> \mathcal{K} @>>> E^\prime @>\psi^\prime >> A @>>> 0
\end{CD}$$ be the extensions with Busby maps $\tau_1 \oplus \tau_0$, respectively, $\tau_1 \oplus \tau_0^\prime$. Let $s_1, s_2$ be isometries in $\mathcal{B}(\mathcal{H})$ with $s_1s_1^* + s_2s_2^* =1$ and set $w=s_1s_1^* +s_2 vs_2^*$, $p=ww^*$. Then $\beta(b)=w^*bw$ and $\eta(x)=w^*xw$ define an isomorphism $$\nonumber\begin{CD}
0 @>>> p\mathcal{K} p @>>> pEp @>\psi >> A @>>> 0 \\
& & @V\beta VV @VV\eta V @| \\
0 @>>> \mathcal{K} @>>> E^\prime @>>\psi^\prime > A @>>> 0
\end{CD}$$
[**Proof. **]{}[@MR99b:46108]
\[T:classify\] Let $x\in \rm{Ext}(C(\partial\mathbb{D})\times\Gamma)$ denote the class of the extension $$\nonumber\begin{CD}
0 @>>> \mathcal{K} @>>> {\mathcal{C}_\Gamma}@>>> C(\partial\mathbb{D})\times\Gamma @>>> 0.
\end{CD}$$ If $e\in \rm{Ext}(C(\partial\mathbb{D})\times\Gamma)$ is a unital extension represented by $$\nonumber\begin{CD}
0 @>>> \mathcal{K} @>>> E @>>> C(\partial\mathbb{D})\times\Gamma @>>> 0
\end{CD}$$ and $e=x$, then $E\cong {\mathcal{C}_\Gamma}$ as $C^*$-algebras.
[**Proof. **]{}We first prove that the index homomorphism $\partial$ from $K_1(C(\partial\mathbb{D})\times\Gamma)$ to $\mathbb{Z}$ determined by the extension $x$ is surjective (and hence so is the homomorphism determined by $e$). First, consider the function $f(z)=z$; this function is a unitary in $C(\partial\mathbb{D})\times\Gamma$ lying in the canonical subalgebra isomorphic to $C(\partial\mathbb{D})$. By construction, this unitary lifts to the unilateral shift $S$ on $H^2$, which is a Fredholm isometry of index $-1$. Thus $\partial[z]_1 =-1$, and since $-1$ generates $\mathbb{Z}$, $\partial$ is surjective.
Since $\partial$ is surjective, the six-term exact sequence in $K$-theory associated to $E$ $$\begin{CD}
&0 @>>> &K_1(E) @>>> &K_1(C(\partial\mathbb{D})\times\Gamma) \\
&@AAA & & & & @VV\partial V \\
&K_0(C(\partial\mathbb{D})\times\Gamma) @<<< &K_0(E) @<<< &\mathbb{Z}
\end{CD}$$ splits into the two sequences $$\begin{CD}
0 @>>> K_1(E) @>>> K_1(C(\partial\mathbb{D})\times\Gamma) @>\partial >> \mathbb{Z} @>>> 0
\end{CD}$$ $$\begin{CD}
0 @>>> K_0(E) @>>> K_0(C(\partial\mathbb{D})\times\Gamma) @>>> 0
\end{CD}$$ Thus $K_0(E)\cong K_0(C(\partial\mathbb{D})\times\Gamma)$, and as the isomorphism is induced by the quotient map which annihilates the compacts, it follows that $[p]_0=0$ for any finite projection $p\in E$.
Now let $\tau_1$ and $\tau_2$ denote the Busby maps associated to ${\mathcal{C}_\Gamma}$ and $E$ respectively. Since ${\mathcal{C}_\Gamma}$ and $E$ determine the same element in $\text{Ext}(C(\partial\mathbb{D})\times\Gamma)$, there exist unital trivial extensions $\sigma_1$ and $\sigma_2$ such that $\tau_1 \oplus \sigma_1
\sim_u \tau_2 \oplus \sigma_2$. Let $E_j$ denote the extension with Busby map $\tau_j \oplus \sigma_j$; we have $E_1\cong E_2$. Using $\sigma_1$ and $\sigma_2$, we may choose isometries $v_1$ and $v_2$ in $\mathcal{B}(\mathcal{H})$ as in the lemma to obtain strongly unital extensions $\sigma_1^\prime$ and $\sigma_2^\prime$. Let $E_j^\prime$ be the extension of $C(\partial\mathbb{D})\times\Gamma$ with Busby map $\tau_j \oplus
\sigma_j^\prime$, $j=1,2$. By Voiculescu’s theorem, $\tau_j \sim_u
\tau_j \oplus \sigma_j^\prime$, and it follows (since unitarily equivalent Busby maps determine isomorphic extensions) that $E_1^\prime \cong {\mathcal{C}_\Gamma}$ and $E_2^\prime \cong E$. Applying the lemma to $E_1$ and $E_2$, we obtain projections $p_j\in E_j$ such that $p_j
E_j p_j \cong E_j^\prime$. We claim that $[p_j]_0 = [1_{E_j}]_0$; from this the theorem follows, since we then have $p_j E_j p_j \cong
E_j$.
To prove the claim, observe that for the isometries $v_j$, the projections $1-v_jv_j^*$ are finite, and hence so are the projections $1-p_j$: $$\begin{aligned}
\nonumber
1-p = 1-ww^* &= 1-[s_1s_1^* + s_2vv^*s_2^*] \\
\nonumber &= s_2s_2^* - s_2vv^*s_2^* \\
\nonumber &= s_2[1-vv^*]s_2^* \end{aligned}$$ which is finite. Thus $1-p_j$ lies in $E_j$ (since $E_j$ contains the compacts) and $p_j\in E_j$ since $E_j$ is unital. Finally, $[1-p_j]_0
=0$, and $[p_j]_0 =[1_{E_j}]_0$. $\square$
Emerson’s construction
----------------------
When $\Gamma$ is cocompact, there is another extension of $C(\partial\mathbb{D})\times\Gamma$ which was constructed by Emerson [@emerson-thesis], motivated by work of Kaminker and Putnam [@MR98f:46056]; we will show that the $Ext$-class of ${\mathcal{C}_\Gamma}$ differs from the $Ext$-class of this extension.
The extension is constructed as follows: since $\Gamma$ is cocompact, we may identify $\mathbb{T}$ with the Gromov boundary $\partial\Gamma$. Let $f$ be a continuous function on $\partial\Gamma$; extend it arbitrarily to $\Gamma$ by the Tietze extension theorem and denote the extended function by $\tilde{f}$. Let $e_x, x\in\Gamma$ be the standard orthonormal basis for $\ell^2(\Gamma)$, and let $u_\gamma$, $\gamma\in\Gamma$, denote the unitary operator of left translation by $\gamma$ on $\ell^2(\Gamma)$. Define a map $\tau:C(\partial\mathbb{D})\times\Gamma\to \mathcal{Q}(\ell^2(\Gamma))$ by $$\tau(f)e_x = \tilde{f}(x)e_x$$ and $$\tau(\gamma)e_x =u_\gamma e_x = e_{\gamma x}.$$ It can be shown that these expressions are well defined modulo the compact operators and determine a $*$-homomorphism from $C(\partial\mathbb{D})\times\Gamma$ to the Calkin algebra of $\ell^2(\Gamma)$, i.e. an extension of $C(\partial\mathbb{D})\times\Gamma$ by the compacts. Let $\pi$ denote the quotient map $\pi:\mathcal{B}(\ell^2(\Gamma))\to \mathcal{Q}(\ell^2(\Gamma))$, and consider the pull-back $C^*$-algebra $E$: $$\begin{CD}
E @>\tilde{\tau}>> C(\partial\mathbb{D})\times\Gamma \\
@VVV @VV\tau V \\
\mathcal{B}(\ell^2(\Gamma)) @>>\pi> \mathcal{Q}(\ell^2(\Gamma))
\end{CD}$$
We will show this extension is distinct from ${\mathcal{C}_\Gamma}$ by showing that it induces a different homomorphism from $K_1(C(\partial\mathbb{D})\times\Gamma)$ to $\mathbb{Z}$. Indeed, consider the class $[z]_1$ of the unitary $f(z)=z$ in $K_1(C(\partial\mathbb{D})\times\Gamma)$; the extension belonging to ${\mathcal{C}_\Gamma}$ sends this class to $-1$. On the other hand, for the extension $\tau$ we claim the function $z$ lifts to a unitary in $E$; it follows that $\partial ([z]_1) =0$ in this case. To verify the claim, note that $z$ lifts to a diagonal operator on $\ell^2(\Gamma)$, and if $(x_n)$ is a subsequence in $\Gamma$ tending to the boundary point $\lambda\in\mathbb{T}$, then $\tilde{f}(x_n)\to f(\lambda)=\lambda$. We may thus choose all the $\tilde{f}(x)$ nonzero, and by dividing $\tilde{f}(x)$ by its modulus, we may assume they are all unimodular. Thus $z$ lifts to the unitary $diag(\tilde{f}(x))\oplus z$ in $E$.
Lott’s construction
-------------------
In this section we relate the extension $\tau$ given by $\mathcal{C}_\Gamma$ to an extension recently constructed by J. Lott [@lott-preprint]. In particular it will follow that (up to tensoring with $\mathbb{Q}$) $\tau$ lies in the range of the Baum-Douglas-Taylor boundary map $$\partial: K^0 (C(\overline{\mathbb{D}})\times\Gamma, C(\partial\mathbb{D})\times\Gamma )\to K^1(C(\partial\mathbb{D})\times\Gamma)$$ We first describe a construction of the extension $\sigma_+$ of [@lott-preprint]. Let $\mathcal{D}$ denote the Hilbert space of analytic functions on $\mathbb{D}$ with finite Dirichlet integral $$D(f)=\int_\mathbb{D}|f^\prime(z)|^2dA(z)$$ equipped with the norm $$\|f\|^2=|f(0)|^2 + \int_\mathbb{D}|f^\prime(z)|^2dA(z)=|f(0)|^2+D(f)$$ If $f$ is represented in $\mathbb{D}$ by the Taylor series $\sum_{n=0}^\infty a_n z^n$, then this norm is given by $$\|f\|^2 = |a_0|^2 + \sum_{n=1}^\infty n |a_n|^2$$ The operator of multiplication by $z$ on $\mathcal{D}$, denoted $M_z$, is a weighted shift with weight sequence asymptotic to $1$, and hence is unitarily equivalent to a compact perturbation of the unilateral shift on $H^2$. It follows that there is a \*-homomorphism $\rho:C(\partial\mathbb{D})\to \mathcal{Q}(\mathcal{D})$ with $\rho(z)=\pi(M_z)$. Now, by changing variables one checks that if $\gamma$ is a Möbius transformation, $D(f\circ\gamma)=D(f)$. Let $\mathcal{D}_0$ denote the subspace of $\mathcal{D}$ consisting of those functions which vanish at the origin. It then follows from the definition of the norm in $\mathcal{D}$ that the operators $$u_\gamma(f)(z)=f(\gamma(z))-f(\gamma(0))$$ are unitary on $\mathcal{D}_0$, and form a unitary representation of $\Gamma$. We extend this representation to all of $\mathcal{D}$ by letting $\Gamma$ act trivially on the scalars. Moreover, it is simple to verify (by noting that $u_\gamma$ is a compact perturbation of the composition operator $C_\gamma$) that for all $\gamma\in\Gamma$, $$u_\gamma M_z u_\gamma ^* = M_{\gamma(z)}$$ modulo compact operators. Arguing as in the proof of Theorem \[T:main\], we conclude that the pair $(\rho(f), \pi(u_\gamma))$ determines an injective \*-homomorphism from $C(\partial\mathbb{D})\times\Gamma$ to $\mathcal{Q}(\mathcal{D})$, which is unitarily equivalent to the Busby map $\sigma_+$ of [@lott-preprint].
We may now state the main theorem of this subsection:
\[T:lott\] The Busby maps $\tau$ and $\sigma_+$ are unitarily equivalent.
[**Proof. **]{}We first show that the Busby map $\tau: \mathcal{C}_\Gamma\to \mathcal{Q}(H^2)$ lifts to a completely positive map $\eta:C(\partial\mathbb{D})\times\Gamma \to\mathcal{B}(H^2)$. Define a unitary representation of $\Gamma$ on $L^2(\partial\mathbb{D})$ by $$U(\gamma^{-1})=M_{|\gamma^\prime|^{1/2}}C_\gamma$$ Together with the usual representation of $C(\partial\mathbb{D})$ as multiplication operators on $L^2$, we obtain a covariant representation of $(\Gamma, \partial\mathbb{D})$ which in turn determines a representation $\rho:C(\partial\mathbb{D})\times\Gamma
\to \mathcal{B}(L^2)$. Letting $P$ denote the Riesz projection $P:L^2\to H^2$, we next claim that the commutator $[\rho(a),P]$ is compact for all $a\in
C(\partial\mathbb{D})\times\Gamma$, and so the pair $(\rho, P)$ is an abstract Toeplitz extension of $C(\partial\mathbb{D})\times\Gamma$ by $\mathcal{K}$. To see this, it suffices to prove the compactness of the commutators $$[\pi(f), P] \quad \text{and} \quad [U(\gamma),P].$$ Now, it is well known that $[M_f, P]$ is compact, and as $U(\gamma)$ has the form $M_g C_{\gamma^{-1}}$ it suffices to check that $[C_\gamma,
P]$ is compact. It is easily checked that this latter commutator is rank one. Indeed, the range of $P$ is invariant for $C_\gamma$, so $[C_\gamma, P]= PC_\gamma P^\bot$. If we expand $h\in
L^2(\partial\mathbb{D})$ in a Fourier series $$h \sim \sum_{n\in\mathbb{Z}} \hat{h}(n)e^{in\theta}$$ then a short calculation shows that $$PC_\gamma P^\bot h \sim (\sum_{n<0} \hat{h}(n)\overline{\gamma(0)}^{|n|})
\cdot 1$$ so $PC_\gamma P^\bot$ is rank one.
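For the reader’s convenience we spell out the short calculation (our reconstruction of the step): for $n<0$ one has $e^{in\theta}=\bar z^{\,|n|}$ on the circle, so $C_\gamma e^{in\theta}=\overline{\gamma(z)}^{\,|n|}$ there, because $|\gamma|=1$ on $\partial\mathbb{D}$. Since $\gamma^{|n|}$ belongs to $H^2$, its complex conjugate has only nonpositive Fourier frequencies, and the Riesz projection retains only the constant term $\overline{\gamma(0)}^{\,|n|}$. Summing over the negative frequencies of $h$ yields the displayed formula, whose range is spanned by the constant function $1$.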
Thus, we have established that the completely positive map $\eta:C(\partial\mathbb{D})\times\Gamma \to\mathcal{B}(H^2)$ given by $$\eta(a)=P\rho(a)P$$ is a homomorphism modulo compacts, and the calculations in the proof of Theorem \[T:main\] show that the map $$\tau(a)=\pi(P\rho(a)P)$$ coincides with the Busby map associated to ${\mathcal{C}_\Gamma}$. Thus, $\eta$ is a completely positive lifting of $\tau$ as claimed.
With the lifting $\eta$ in hand, to prove the unitary equivalence of $\tau$ and $\sigma_+$ it will therefore suffice to exhibit an operator $V:H^2\to \mathcal{D}$ such that $V$ is a compact perturbation of a unitary, and such that $$\pi(V\eta(a)V^*)=\sigma_+(a)$$ for all $a\in C(\partial\mathbb{D})\times\Gamma$. Since the crossed product is generated by the function $f(z)=z$ and the formal symbols $[\gamma]$, it suffices to establish the above equality on these generators. In fact, we may use the operator $V$ of Lemma \[L:v\_operator\]. Since the map $z^n \to n^{-1/2}z^n$ is essentially unitary from $H^2$ to $\mathcal{D}$, the proofs of statements (1) and (2) of Lemma \[L:v\_operator\] are still valid. The conclusion of statement (3) holds provided the hypothesis on $g$ is strengthened, by requiring that $g$ be analytic in a neighborhood of $\overline{\mathbb{D}}$. Moreover, the arguments of Theorem \[T:extclasses\] still apply, since the proof applies statement (3) of Lemma \[L:v\_operator\] only to the function $\psi$, which is indeed analytic across the boundary of $\mathbb{D}$. Thus, the arguments in the proof of Theorem \[T:extclasses\] prove that $V$ intertwines (modulo compacts) multiplication by $z$ on $H^2$ and $\mathcal{D}$, and also intertwines (modulo compacts) $C_\gamma$ acting on $\mathcal{D}$ with $U_\gamma$ on $H^2$. Since $u_\gamma$ is a compact perturbation of $C_\gamma$ on $\mathcal{D}$, it follows that $V$ intertwines $U_\gamma$ and $u_\gamma$.
Now, the Busby map $\sigma_+$ takes the function $z$ to the image of $M_z$ in the Calkin algebra $\mathcal{Q}(\mathcal{H})$. Since $\eta(z)=M_z\in\mathcal{B}(H^2)$, the intertwining property of $V$ (modulo compacts) may be written as $$\pi(V\eta(z)V^*)=\sigma_+(z)$$ Similarly, since $\eta$ applied to the formal symbol $[\gamma]$ (viewed as a generator of $C(\partial\mathbb{D})\times\Gamma$) is $U_\gamma$, the intertwining property for $V$ with respect to $U_\gamma$ and $u_\gamma$ reads $$\pi(V\eta([\gamma])V^*)=\sigma_+([\gamma])$$ Thus the equivalence of $\tau$ and $\sigma_+$ on generators is established, which proves the theorem. $\square$
We conclude by observing that the covariant representation on $L^2$ described in the proof of the previous theorem gives rise to an equivariant $KK_1$-cycle for $C(\partial\mathbb{D})$. Indeed, such a cycle consists of a triple $(U, \pi, F)$ where $(U, \pi)$ is a covariant representation on a Hilbert space $\mathcal{H}$ and $F$ is a bounded operator on $\mathcal{H}$ such that the operators $$F^2-I,\ F-F^*,\ [U(\gamma),F], \text{ and } [\pi(f),F]$$ are compact for all $\gamma\in\Gamma$ and $f\in C(\partial\mathbb{D})$. The computations in the previous proof show that the triple $(U, \pi, 2P-I)$ satisfies all of these conditions, and essentially the same unitary equivalence argument shows that this cycle represents (up to a scalar multiple) the class of [@lott-preprint Section 9.1].
[^1]: \*Research partially supported by NSF VIGRE grant DMS-9983601
[^2]: 2000 [*Mathematics Subject Classification.*]{} 20H10, 46L55, 46L80, 47B33, 47L80
INTRODUCTION
============
The physics of compact objects is entering a particularly exciting phase, as new instruments can now yield unprecedented observations. For example, there is evidence that the Rossi X-ray Timing Explorer has identified the innermost stable circular orbit around an accreting neutron star [@zsss98]. Also, the new generation of gravitational wave detectors under construction, including LIGO, VIRGO, GEO and TAMA, promise to detect, for the first time, gravitational radiation directly (see, e.g., [@t95]).
In order to learn from these observations (and, in the case of the gravitational wave detectors, to dramatically increase the likelihood of detection), one has to predict the observed signal from theoretical modeling. The most promising candidates for detection by the gravitational wave laser interferometers are the coalescences of black hole and neutron star binaries. Simulating such mergers requires self-consistent, numerical solutions to Einstein’s field equations in 3 spatial dimensions, which is extremely challenging. While several groups, including two “Grand Challenge Alliances” [@gc], have launched efforts to simulate the coalescence of compact objects (see also [@on97; @wmm96]), the problem is far from being solved.
Before Einstein’s field equations can be solved numerically, they have to be cast into a suitable initial value form. Most commonly, this is done via the standard 3+1 decomposition of Arnowitt, Deser and Misner (ADM, [@adm62]). In this formulation, the gravitational fields are described in terms of spatial quantities (the spatial metric and the extrinsic curvature), which satisfy some initial constraints and can then be integrated forward in time. The resulting “$\dot g - \dot
K$” equations are straightforward, but do not satisfy any known hyperbolicity condition, which, as it has been argued, may cause stability problems in numerical implementations. Therefore, several alternative, hyperbolic formulations of Einstein’s equations have been proposed [@fr94; @bmss95; @aacy96; @pe96; @f96; @acy98]. Most of these formulations, however, also have disadvantages. Several of them introduce a large number of new, first order variables, which take up large amounts of memory in numerical applications and require many additional equations. Some of these formulations require taking derivatives of the original equations, which may introduce further inaccuracies, in particular if matter sources are present. It has been widely debated if such hyperbolic formulations have computational advantages [@texas95]; their performance has yet to be compared directly with that of the original ADM equations. Accordingly, it is not yet clear if or how much the numerical behavior of the ADM equations suffers from their non-hyperbolicity.
In this paper, we demonstrate by means of a numerical experiment and a direct comparison that the standard implementation of the ADM system of equations, consisting of evolution equations for the bare metric and extrinsic curvature variables, is more susceptible to numerical instabilities than a modified form of the equations based on a conformal decomposition as suggested by Shibata and Nakamura [@sn95]. We will refer to the standard, “$\dot g - \dot
K$” form of the equations as “System I” (see Section \[sys1\] below). We follow Shibata and Nakamura and modify these original ADM equations by factoring out a conformal factor and introducing a spatial field of connection functions (“System II”, see Section \[sys2\] below). The conformal decomposition separates “radiative” variables from “nonradiative” ones in the spirit of the “York-Lichnerowicz” split [@l44; @y71]. With the help of the connection functions, the Ricci tensor becomes an elliptic operator acting on the components of the conformal metric. The evolution equations can therefore be reduced to a set of wave equations for the conformal metric components, which are coupled to the evolution equations for the connection functions. These wave equations reflect the hyperbolic nature of general relativity, and can also be implemented numerically in a straight-forward and stable manner.
We evolve low amplitude gravitational waves in pure vacuum spacetimes, and directly compare Systems I and II for both geodesic slicing and harmonic slicing. We find that System II is not only more appealing mathematically, but performs far better numerically than System I. In particular, we can evolve low amplitude waves in a stable fashion for hundreds of light travel timescales with System II, while the evolution crashes at an early time in System I, independent of gauge choice. We present these results in part to alert developers of 3+1 general relativity codes, many of whom currently employ System I, that a better set of equations may exist for numerical implementation.
The paper is organized as follows. In Section \[sec2\], we present the basic equations of both Systems I and II. We briefly discuss our numerical implementation in Section \[sec3\], and present numerical results in Section \[sec4\]. In Section \[sec5\], we summarize and discuss some of the implications of our findings.
BASIC EQUATIONS {#sec2}
===============
System I {#sys1}
--------
We write the metric in the form $$ds^2 = - \alpha^2 dt^2 + \gamma_{ij} (dx^i + \beta^i dt)(dx^j + \beta^j dt),$$ where $\alpha$ is the lapse function, $\beta^i$ is the shift vector, and $\gamma_{ij}$ is the spatial metric. Throughout this paper, Latin indices are spatial indices and run from 1 to 3, whereas Greek indices are spacetime indices and run from 0 to 3. The extrinsic curvature $K_{ij}$ can be defined by the equation $$\label{gdot1}
\frac{d}{dt} \gamma_{ij} = - 2 \alpha K_{ij},$$ where $$\frac{d}{dt} = \frac{\partial}{\partial t} - {\cal L}_{\beta}$$ and where ${\cal L}_{\beta}$ denotes the Lie derivative with respect to $\beta^i$.
The Einstein equations can then be split into the Hamiltonian constraint $$\label{ham1}
R - K_{ij}K^{ij} + K^2 = 2 \rho,$$ the momentum constraint $$\label{mom1}
D_j K^{j}_{~i} - D_i K = S_i,$$ and the evolution equation for the extrinsic curvature $$\label{Kdot1}
\frac{d}{dt} K_{ij} = - D_i D_j \alpha + \alpha ( R_{ij}
- 2 K_{il} K^l_{~j} + K K_{ij} - M_{ij} )$$ Here $D_i$ is the covariant derivative associated with $\gamma_{ij}$, $R_{ij}$ is the three-dimensional Ricci tensor $$\begin{aligned}
\label{ricci}
R_{ij} & = & \frac{1}{2} \gamma^{kl}
\Big( \gamma_{kj,il} + \gamma_{il,kj}
- \gamma_{kl,ij} - \gamma_{ij,kl} \Big) \\[1mm]
& & + \gamma^{kl} \Big( \Gamma^m_{il} \Gamma_{mkj}
- \Gamma^m_{ij} \Gamma_{mkl} \Big), \nonumber\end{aligned}$$ and $R$ is its trace $R = \gamma^{ij} R_{ij}$. We have also introduced the matter sources $\rho$, $S_i$ and $S_{ij}$, which are projections of the stress-energy tensor with respect to the unit normal vector $n_{\alpha}$, $$\begin{aligned}
\rho & = & n_{\alpha} n_{\beta} T^{\alpha \beta}, \nonumber \\[1mm]
S_i & = & - \gamma_{i\alpha} n_{\beta} T^{\alpha \beta}, \\[1mm]
S_{ij} & = & \gamma_{i \alpha} \gamma_{j \beta} T^{\alpha \beta}, \nonumber\end{aligned}$$ and have abbreviated $$M_{ij} \equiv S_{ij} + \frac{1}{2} \gamma_{ij}(\rho - S),$$ where $S$ is the trace of $S_{ij}$, $S = \gamma^{ij} S_{ij}$.
The evolution equations (\[gdot1\]) and (\[Kdot1\]) together with the constraint equations (\[ham1\]) and (\[mom1\]) are equivalent to the Einstein equations, and are commonly referred to as the ADM form of the gravitational field equations [@adm62; @footnote1]. We will call these equations System I. This system is widely used in numerical relativity calculations (e.g. [@aetal98; @cetal98]), even though its mathematical structure is not simple to characterize and may not be ideal for computation. In particular, the Ricci tensor (\[ricci\]) is not an elliptic operator: while the last one of the four terms involving second derivatives, $\gamma^{kl}\gamma_{ij,kl}$, is an elliptic operator acting on the components of the metric, the elliptic nature of the whole operator is spoiled by the other three terms involving second derivatives. Accordingly, the system as a whole does not satisfy any known hyperbolicity condition (see also the discussion in [@f96]). Therefore, to establish existence and uniqueness of solutions to Einstein’s equations, most mathematical analyses rely either on particular coordinate choices or on different formulations.
System II {#sys2}
---------
Instead of evolving the metric $\gamma_{ij}$ and the extrinsic curvature $K_{ij}$, we can evolve a conformal factor and the trace of the extrinsic curvature separately (“York-Lichnerowicz split” [@l44; @y71]). Such a split is very appealing from both a theoretical and computational point of view, and has been widely applied in numerical axisymmetric (2+1) calculations (see, e.g., [@e84]). More recently, Shibata and Nakamura [@sn95] applied a similar technique in a three-dimensional (3+1) calculation. Adopting their notation, we write the conformal metric as $$\tilde \gamma_{ij} = e^{- 4 \phi} \gamma_{ij}$$ and choose $$e^{4 \phi} = \gamma^{1/3} \equiv \det(\gamma_{ij})^{1/3},$$ so that the determinant of $\tilde \gamma_{ij}$ is unity. We also write the trace-free part of the extrinsic curvature $K_{ij}$ as $$A_{ij} = K_{ij} - \frac{1}{3} \gamma_{ij} K,$$ where $K = \gamma^{ij} K_{ij}$. It turns out to be convenient to introduce $$\tilde A_{ij} = e^{- 4 \phi} A_{ij}.$$ We will raise and lower indices of $\tilde A_{ij}$ with the conformal metric $\tilde \gamma_{ij}$, so that $\tilde A^{ij} = e^{4 \phi} A^{ij}$ (see [@sn95]).
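As a concrete illustration of this decomposition, the conformal variables can be computed pointwise from $\gamma_{ij}$ and $K_{ij}$ as in the following sketch (ours, not code from the original work), which stores the metric and extrinsic curvature at one grid point as $3\times 3$ arrays.

```python
# Minimal sketch of the conformal split at one grid point (assumptions ours).
import numpy as np

def conformal_split(gam, K):
    """gam, K: 3x3 arrays holding gamma_{ij} and K_{ij} at a point."""
    det = np.linalg.det(gam)
    phi = np.log(det) / 12.0                  # e^{4 phi} = det(gamma)^{1/3}
    gam_inv = np.linalg.inv(gam)
    trK = np.einsum('ij,ij->', gam_inv, K)    # K = gamma^{ij} K_{ij}
    A = K - gam * trK / 3.0                   # trace-free part A_{ij}
    gam_tilde = np.exp(-4.0 * phi) * gam      # unit-determinant conformal metric
    A_tilde = np.exp(-4.0 * phi) * A          # tilde A_{ij}
    return phi, trK, gam_tilde, A_tilde
```

The algebraic conditions $\det\tilde \gamma_{ij}=1$ and $\tilde \gamma^{ij}\tilde A_{ij}=0$ hold by construction and can later be monitored as a numerical check.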
Taking the trace of the evolution equations (\[gdot1\]) and (\[Kdot1\]) with respect to the physical metric $\gamma_{ij}$, we find [@footnote2] $$\label{phidot2}
\frac{d}{dt} \phi = - \frac{1}{6} \alpha K$$ and $$\label{Kdot2}
\frac{d}{dt} K = - \gamma^{ij} D_j D_i \alpha +
\alpha(\tilde A_{ij} \tilde A^{ij}
+ \frac{1}{3} K^2) + \frac{1}{2} \alpha (\rho + S),$$ where we have used the Hamiltonian constraint (\[ham1\]) to eliminate the Ricci scalar from the last equation. The tracefree parts of the two evolution equations yield $$\label{gdot2}
\frac{d}{dt} \tilde \gamma_{ij} =
- 2 \alpha \tilde A_{ij}.$$ and $$\begin{aligned}
\label{Adot2}
\frac{d}{dt} \tilde A_{ij} & = & e^{- 4 \phi} \left(
- ( D_i D_j \alpha )^{TF} +
\alpha ( R_{ij}^{TF} - S_{ij}^{TF} ) \right)
\nonumber \\[1mm]
& & + \alpha (K \tilde A_{ij} - 2 \tilde A_{il} \tilde A^l_{~j}).\end{aligned}$$ In the last equation, the superscript $TF$ denotes the trace-free part of a tensor, e.g. $R_{ij}^{TF} = R_{ij} - \gamma_{ij} R/3$. Note that the trace $R$ could again be eliminated with the Hamiltonian constraint (\[ham1\]). Note also that $\tilde \gamma_{ij}$ and $\tilde A_{ij}$ are tensor densities of weight $-2/3$, so that their Lie derivative is, for example, $${\cal L}_{\beta} \tilde A_{ij} = \beta^k \partial_k \tilde A_{ij}
+ \tilde A_{ik} \partial_j \beta^k
+ \tilde A_{kj} \partial_i \beta^k
- \frac{2}{3} \tilde A_{ij} \partial_k \beta^k.$$
The Ricci tensor $R_{ij}$ in (\[Adot2\]) can be written as the sum $$R_{ij} = \tilde R_{ij} + R_{ij}^{\phi}.$$ Here $R_{ij}^{\phi}$ is $$\begin{aligned}
R^{\phi}_{ij} & = & - 2 \tilde D_i \tilde D_j \phi -
2 \tilde \gamma_{ij} \tilde D^l \tilde D_l \phi \nonumber \\[1mm]
& & + 4 (\tilde D_i \phi)(\tilde D_j \phi)
- 4 \tilde \gamma_{ij} (\tilde D^l \phi) (\tilde D_l \phi),\end{aligned}$$ where $\tilde D_i$ is the derivative operator associated with $\tilde
\gamma_{ij}$, and $\tilde D^i = \tilde \gamma^{ij} \tilde D_j$.
The “tilde” Ricci tensor $\tilde R_{ij}$ is the Ricci tensor associated with $\tilde \gamma_{ij}$, and could be computed by inserting $\tilde \gamma_{ij}$ into equation (\[ricci\]). However, we can bring the Ricci tensor into a manifestly elliptic form by introducing the “conformal connection functions” $$\label{cgsf}
\tilde \Gamma^i \equiv \tilde \gamma^{jk} \tilde \Gamma^{i}_{jk}
= - \tilde \gamma^{ij}_{~~,j},$$ where the $\tilde \Gamma^{i}_{jk}$ are the connection coefficients associated with $\tilde \gamma_{ij}$, and where the last equality holds because $\tilde \gamma = 1$. In terms of these, the Ricci tensor can be written [@footnote3] $$\begin{aligned}
\tilde R_{ij} & = & - \frac{1}{2} \tilde \gamma^{lm}
\tilde \gamma_{ij,lm}
+ \tilde \gamma_{k(i} \partial_{j)} \tilde \Gamma^k
+ \tilde \Gamma^k \tilde \Gamma_{(ij)k} + \nonumber \\[1mm]
& & \tilde \gamma^{lm} \left( 2 \tilde \Gamma^k_{l(i}
\tilde \Gamma_{j)km} + \tilde \Gamma^k_{im} \tilde \Gamma_{klj}
\right).\end{aligned}$$ The principal part of this operator, $\tilde \gamma^{lm} \tilde
\gamma_{ij,lm}$, is that of a Laplace operator acting on the components of the metric $\tilde \gamma_{ij}$. It is obviously elliptic and diagonally dominant (as long as the metric is diagonally dominant). All the other second derivatives of the metric appearing in (\[ricci\]) have been absorbed in the derivatives of the connection functions. At least in appropriately chosen coordinate systems (for example $\beta^i = 0$), equations (\[gdot2\]) and (\[Adot2\]) therefore reduce to a coupled set of nonlinear, inhomogeneous wave equations for the conformal metric $\tilde
\gamma_{ij}$, in which the gauge terms $K$ and $\tilde \Gamma^i$, the conformal factor $\exp(\phi)$, and the matter terms $M_{ij}$ appear as sources. Wave equations not only reflect the hyperbolic nature of general relativity, but can also be implemented numerically in a straight-forward and stable manner. The same method has often been used to reduce the four-dimensional Ricci tensor $R_{\alpha\beta}$ [@c62] and to bring Einstein’s equations into a symmetric hyperbolic form [@fm72].
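For definiteness, equation (\[cgsf\]) can be evaluated on a grid by simple centred differences; the following sketch (ours, with a periodic grid assumed purely for brevity) acts on the inverse conformal metric stored as an array of shape $(3,3,N_x,N_y,N_z)$.

```python
# Sketch: tilde Gamma^i = - partial_j (tilde gamma^{ij}) by centred differences.
# Assumes a uniform grid of spacing dx; np.roll makes the stencil periodic, so
# in practice the boundary points would be overwritten by one-sided differences
# or by the boundary condition.
import numpy as np

def conformal_connections(gam_inv_tilde, dx):
    Gamma = np.zeros((3,) + gam_inv_tilde.shape[2:])
    for i in range(3):
        for j in range(3):
            Gamma[i] -= (np.roll(gam_inv_tilde[i, j], -1, axis=j)
                         - np.roll(gam_inv_tilde[i, j], 1, axis=j)) / (2.0 * dx)
    return Gamma
```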
Note that the connection functions $\tilde \Gamma^i$ are pure gauge quantities in the sense that they could be chosen, for example, to vanish by a suitable choice of spatial coordinates (“conformal three-harmonic coordinates”, compare [@sy78]). The $\tilde
\Gamma^i$ would then play the role of “conformal gauge source functions” (compare [@c62; @fm72]). Here, however, we impose the gauge by choosing the shift $\beta^i$, and evolve the $\tilde
\Gamma^i$ with equation (\[Gammadot2\]) below. Similarly, $K$ is a pure gauge variable (and could be chosen to vanish by imposing maximal time slicing).
An evolution equation for the $\tilde \Gamma^i$ can be derived by permuting a time derivative with the space derivative in (\[cgsf\]) $$\begin{aligned}
\frac{\partial}{\partial t} \tilde \Gamma^i
& = & - \frac{\partial}{\partial x^j} \Big( 2 \alpha \tilde A^{ij}
\nonumber \\[1mm]
& & - 2 \tilde \gamma^{m(j} \beta^{i)}_{~,m}
+ \frac{2}{3} \tilde \gamma^{ij} \beta^l_{~,l}
+ \beta^l \tilde \gamma^{ij}_{~~,l} \Big).\end{aligned}$$ It turns out to be essential for the numerical stability of the system to eliminate the divergence of $\tilde A^{ij}$ with the help of the momentum constraint (\[mom1\]), which yields $$\begin{aligned}
\label{Gammadot2}
\frac{\partial}{\partial t} \tilde \Gamma^i
& = & - 2 \tilde A^{ij} \alpha_{,j} + 2 \alpha \Big(
\tilde \Gamma^i_{jk} \tilde A^{kj} - \nonumber \\[1mm]
& & \frac{2}{3} \tilde \gamma^{ij} K_{,j}
- \tilde \gamma^{ij} S_j + 6 \tilde A^{ij} \phi_{,j} \Big) +
\nonumber \\[1mm]
& & \frac{\partial}{\partial x^j} \Big(
\beta^l \tilde \gamma^{ij}_{~~,l}
- 2 \tilde \gamma^{m(j} \beta^{i)}_{~,m}
+ \frac{2}{3} \tilde \gamma^{ij} \beta^l_{~,l} \Big).\end{aligned}$$
We now consider $\phi$, $K$, $\tilde \gamma_{ij}$, $\tilde A_{ij}$ and $\tilde \Gamma^i$ as fundamental variables. These can be evolved with the evolution equations (\[phidot2\]), (\[Kdot2\]), (\[gdot2\]), (\[Adot2\]), and (\[Gammadot2\]), which we call System II. Note that obviously not all these variables are independent. In particular, the determinant of $\tilde \gamma_{ij}$ has to be unity, and the trace of $\tilde A_{ij}$ has to vanish. These conditions can either be used to reduce the number of evolved quantities, or, alternatively, all quantities can be evolved and the conditions can be used as a numerical check (which is what we do in our implementation).
NUMERICAL IMPLEMENTATION {#sec3}
========================
In order to compare the properties of Systems I and II, we implemented them numerically in an identical environment. We integrate the evolution equations with a two-level, iterative Crank-Nicholson method. The iteration is truncated after a certain accuracy has been achieved. However, we iterate at least twice, so that the scheme is second order accurate.
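Schematically, the update of the whole state vector $u$ of evolved fields proceeds as in the sketch below (our illustration; `rhs` stands for the discretized right-hand sides of the evolution equations and is not a routine from the actual code).

```python
# Sketch of a two-level iterative Crank-Nicholson step (assumptions ours).
import numpy as np

def icn_step(u, rhs, dt, tol=1e-8, max_iter=10):
    u_guess = u + dt * rhs(u)                            # forward Euler predictor
    for it in range(max_iter):
        u_next = u + 0.5 * dt * (rhs(u) + rhs(u_guess))  # Crank-Nicholson average
        # iterate at least twice so that the scheme remains second-order accurate
        if it >= 1 and np.max(np.abs(u_next - u_guess)) < tol:
            return u_next
        u_guess = u_next
    return u_guess
```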
The gridpoints on the outer boundaries are updated with a Sommerfeld condition. We assume that, on the outer boundaries, the fundamental variables behave like outgoing, radial waves $$Q(t,r) = \frac{G(\alpha t - e^{2 \phi} r)}{r}.$$ Here $Q$ is any of the fundamental variables (except for the diagonal components of $\tilde \gamma_{ij}$, for which the radiative part is $Q
= \tilde \gamma_{ii} - 1$), and $G$ can be found by following the characteristic back to the previous timestep and interpolating the corresponding variable to that point (see also [@sn95]). We found that a linear interpolation is adequate for our purposes.
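In one dimension the update amounts to the following sketch (ours); for brevity it takes a unit coordinate wave speed, i.e. it sets $\alpha=1$ and $e^{2\phi}=1$, whereas the actual condition uses the local values of the lapse and conformal factor.

```python
# Sketch: outgoing-wave (Sommerfeld) update of a boundary point, assuming
# Q(t, r) = G(t - r)/r, so that r*Q is constant along outgoing characteristics.
import numpy as np

def sommerfeld_update(Q_old, r, dt, i_bdry):
    """Q_old: field at the previous time level on the (increasing) radial grid r."""
    r_b = r[i_bdry]
    r_src = r_b - dt                         # foot of the outgoing characteristic
    rQ = np.interp(r_src, r, r * Q_old)      # linear interpolation of r*Q
    return rQ / r_b
```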
We impose octant symmetry in order to minimize the number of gridpoints, and impose corresponding symmetry boundary conditions on the symmetry planes. Unless noted otherwise, the calculations presented in this paper were performed on grids of $(32)^3$ gridpoints, and used a Courant factor of $1/4$. The code has been implemented in a parallel environment on SGI Power ChallengeArray and SGI CRAY Origin2000 computer systems at NCSA using DAGH [@dagh] software for parallel processing.
RESULTS {#sec4}
=======
Initial Data
------------
For initial data, we choose a linearized wave solution (which is then evolved with the full nonlinear systems I and II). Following Teukolsky [@t82], we construct a time-symmetric, even-parity $L=2$, $M = 0$ solution. The coefficients $A, B$ and $C$ (see equation (6) in [@t82]) are derived from a function $$F(t,r) = {\cal A}\, (t \pm r) \, \exp(- \lambda (t \pm r)^2).$$ Unless noted otherwise, we present results for an amplitude ${\cal A} = 10^{-3}$ and a wavelength $\lambda = 1$. The outer boundary conditions are imposed at $x, y, z = 4$.
We evolve these initial data for zero shift $$\beta^i = 0,$$ and compare the performance of Systems I and II for both geodesic and harmonic slicing.
Geodesic Slicing
----------------
In geodesic slicing, the lapse is unity $$\alpha = 1.$$ Since the acceleration of normal observers satisfies $a_a = D_a \ln \alpha = 0$, these observers follow geodesics. The energy content of even a small, linear wave packet will therefore focus these observers, and even after the wave has dispersed, the observers will continue to coast towards each other. Since $\beta^i = 0$, normal observers are identical to coordinate observers, hence geodesic slicing will ultimately lead to the formation of a coordinate singularity even for arbitrarily small waves.
The timescale for the formation of this singularity can be estimated from equation (\[Kdot2\]) with $\alpha = 1$ and $\beta^i = 0$. The $\tilde A_{ij}$, which can be associated with the gravitational waves, will cause $K$ to increase to some finite value, say $K_0$ at time $t_0$, even if $K$ was zero initially. After roughly a light crossing time, the waves will have dispersed, and the further evolution of $K$ is described by $\partial_t K \sim K^2/3$, or $$\label{K_approx}
K \sim \frac{3 K_0}{3 - K_0(t - t_0)}$$ (see [@sn95]). Obviously, the coordinate singularity forms at $t \sim 3/K_0 + t_0$ as a result of the nonlinear evolution.
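For completeness, the quoted solution follows from separating variables in the truncated equation (an elementary step we fill in here): $$\frac{dK}{K^2} = \frac{dt}{3}
\quad\Longrightarrow\quad
\frac{1}{K_0} - \frac{1}{K} = \frac{t-t_0}{3}
\quad\Longrightarrow\quad
K = \frac{3 K_0}{3 - K_0 (t - t_0)}.$$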
We can now evolve the wave initial data with Systems I and II and compare how well they reproduce the formation of the coordinate singularity.
In Figure 1, we show $K$ at the origin ($x = y = z = 0$) as a function of time both for System I (dashed line) and System II (solid line). We also plot the approximate analytic solution (\[K\_approx\]) as a dotted line, which we have matched to the System I solution with values $K_0 = 0.00518$ and $t_0 = 10$. For these values, equation (\[K\_approx\]) predicts that the coordinate singularity appears at $t \sim 590$. In the insert, we show a blow-up of System II for early times. It can be seen very clearly how the initial wave content lets $K$ grow from zero to the “seed” value $K_0$. Once the waves have dispersed, System II approximately follows the solution (\[K\_approx\]) up to fairly late times. System I, on the other hand, crashes long before the coordinate singularity appears.
In Figure 2, we compare the extrinsic curvature component $K_{zz}$ evaluated at the origin. The noise around $t \sim 8$, which is present in the evolutions of both systems, is caused by reflections of the initial wave off the outer boundaries. It is obvious from these plots that System II evolves the equations stably to a fairly late time, at which the integration eventually becomes inaccurate as the coordinate singularity approaches. We stopped this calculation when the iterative Crank-Nicholson scheme no longer converged after a certain maximum number of iterations. It is also obvious that System I performs extremely poorly, and crashes at a very early time, well before the coordinate singularity.
It is important to realize that the poor performance of System I is [*not*]{} an artifact of our numerical implementation. For example, the ADM code currently being used by the Black Hole Grand Challenge Alliance is based on the equations of System I, and also crashes after a very similar time [@r98] (see also [@aetal98], where a run with a much smaller initial amplitude nevertheless crashes earlier than our System II). This shows that the code’s crashing is intrinsic to the equations and slicing, and not to our numerical implementation.
Harmonic Slicing
----------------
Since geodesic slicing is known to develop coordinate singularities for generic, nontrivial initial data, it is obviously not a very good slicing condition. We therefore also compare the two Systems using harmonic slicing. In harmonic slicing, the coordinate time $t$ is a harmonic function of the coordinates $\nabla^{\alpha} \nabla_{\alpha} t = 0$, which is equivalent to the condition $$\Gamma^0 \equiv g^{\alpha\beta} \Gamma^0_{\alpha\beta} = 0$$ (where the $\Gamma^{\alpha}_{\beta\gamma}$ are the connection coefficients associated with the four-dimensional metric $g_{\alpha\beta}$). For $\beta^i = 0$, the above condition reduces to $$\partial_t \alpha = - \alpha^2 K.$$ Inserting (\[phidot2\]), this can be written as $$\partial_t (\alpha e^{-6 \phi}) = 0
\mbox{~~~or~~~}
\alpha = C(x^i) e^{6 \phi},$$ where $C(x^i)$ is a constant of integration, which depends on the spatial coordinates only. In practice, we choose $C(x^i) = 1$.
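The intermediate step, which we spell out for clarity, uses (\[phidot2\]) with $\beta^i = 0$: $$\partial_t \left( \alpha e^{-6\phi} \right)
= e^{-6\phi} \left( \partial_t \alpha - 6 \alpha \, \partial_t \phi \right)
= e^{-6\phi} \left( - \alpha^2 K + \alpha^2 K \right) = 0.$$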
In Figure 3, we show results for the same initial data as in the last section. Obviously, both Systems do much better for this slicing condition. System I crashes much later than in geodesic slicing (after about 40 light crossing times, as opposed to about 10 for geodesic slicing), but it still crashes. System II, on the other hand, did not crash even after more than 100 light crossing times. We never encountered a growing instability that caused the code to crash.
SUMMARY AND CONCLUSION {#sec5}
======================
We numerically implement two different formulations of Einstein’s field equations and compare their performance for the evolution of linear wave initial data. System I is the standard set of ADM equations for the evolution of $\gamma_{ij}$ and $K_{ij}$. In System II, we conformally decompose the equations and introduce connection functions. The conformal decomposition naturally splits “radiative” variables from “nonradiative” ones, and the connection functions are used to bring the Ricci tensor into an elliptic form. These changes are appealing mathematically, but also have a striking numerical consequence: System II performs far better than System I.
It is interesting to note that most earlier axisymmetric codes (e.g. [@e84]) also relied on a decomposition similar to that of System II. Much care was taken to identify radiative variables and to integrate those variables as opposed to the raw metric components. It is surprising that this experience was abandoned in the development of most 3+1 codes, which integrate equations equivalent to System I. These codes have been partly successful [@cetal98], but obvious problems remain, as for example the inability to integrate low amplitude waves for arbitrarily long times. While efforts have been undertaken to stabilize such codes with the help of appropriate outer boundary conditions [@aetal98; @rar98], our findings point to the equations themselves as the fundamental cause of the problem, and not to the outer boundaries. Obviously, boundary conditions as employed in the perturbative approach in [@aetal98; @rar98] or in the characteristic approach in [@bglswi96] are still needed for accuracy – but our results clearly suggest that they are not needed for stability [@footnote4].
Some of the recently proposed hyperbolic systems are very appealing in that they bring the equations in a first order, symmetric hyperbolic form, and that all characteristics are physical (i.e., are either at rest with respect to normal observers or travel with the speed of light) [@aacy96; @acy98]. These properties may be very advantageous for numerical implementations, in particular at the boundaries (both outer boundaries and, in the case of black hole evolutions, inner “apparent horizon” boundaries). Some of these systems have also been implemented numerically, and show stability properties very similar to our System II [@cs98]. Our System II, on the other hand, uses fewer variables than most of the hyperbolic formulations, and does not take derivatives of the equations, which may be advantageous especially when matter sources are present. This suggests that a system similar to System II may be a good choice for evolving interior solutions and matter sources, while one may want to match to one of the hyperbolic formulations for a better treatment of the boundaries.
The mathematical structure of System II is more appealing than that of System I, and these improvements are reflected in the numerical behavior. We therefore conclude that the mathematical structure has a very deep impact on the numerical behavior, and that the ability to finite difference the standard “$\dot g - \dot K$” ADM equations may not be sufficient to warrant a stable evolution.
It is a pleasure to thank A. M. Abrahams, L. Rezzolla, J. W. York and M. Shibata for many very useful conversations. We would also like to thank H. Friedrich for very valuable comments, and S. A. Hughes for a careful checking of our code. Calculations were performed on SGI CRAY Origin2000 computer systems at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign. This work was supported by NSF Grant AST 96-18524 and NASA Grant NAG 5-3420 at Illinois, and the NSF Binary Black Hole Grand Challenge Grant Nos. NSF PHYS 93-18152, NSF PHY 93-10083 and ASC 93-18152 (ARPA supplemented).
W. Zhang, Z. P. Smale, T. E. Stohmayer and J. H. Swank, Astrophys. J. [**500**]{}, L171 (1998).
K. Thorne, in [*Proceedings of the Seventeenth Texas Symposium on Relativistic Astrophysics and Cosmology*]{}, edited by H. Böhringer, G. E. Morfill and J. E. Trümper (Annals of the New York Academy of Sciences, Vol. 759, New York, 1995).
Information about the Binary Black Hole Grand Challenge can be found at [www.npac.syr.edu/projects/bh/]{}, and about the Binary Neutron Star Grand Challenge at [wugrav.wustl.edu/Relativ/nsgc.html]{}
K. Oohara and T. Nakamura, in [*Relativistic Gravitation and Gravitational Radiation*]{}, edited by J.-A. Marck and J.-P. Lasota (Cambridge University Press, Cambridge, 1997).
J. R. Wilson and G. J. Mathews, Phys. Rev. Lett. [**75**]{}, 4161 (1995); J. R. Wilson, G. J. Mathews and P. Marronetti, Phys. Rev. D [**54**]{}, 1317 (1996).
R. Arnowitt, S. Deser and C. W. Misner, in [*Gravitation: An Introduction to Current Research*]{}, edited by L. Witten (Wiley, New York, 1962).
S. Frittelli and O. Reula, Commun. Math. Phys. [**166**]{}, 221 (1994).
C. Bona, J. Massó, E. Seidel and J. Stela, Phys. Rev. Lett. [**75**]{}, 600 (1995).
A. Abrahams, A. Anderson, Y. Choquet-Bruhat and J. W. York, Jr., Phys. Rev. Lett. [**75**]{}, 3377 (1996).
M. H. P. M. van Putten and D. M. Eardley, Phys. Rev. D [**53**]{}, 3056 (1996).
H. Friedrich, Class. Quantum Gravit. [**13**]{}, 1451 (1996).
A. Anderson, Y. Choquet-Bruhat and J. W. York, Jr., to appear in Topol. Methods in Nonlinear Analysis (also gr-qc/9710041).
For example at the Third Texas Workshop on 3-dimensional Numerical Relativity of the Binary Black Hole Grand Challenge, held at Austin, Texas, 1995 (Proceedings can be obtained from Richard Matzner, Center for Relativity, University of Texas at Austin, Texas).
M. Shibata and T. Nakamura, Phys. Rev. D [**52**]{}, 5428 (1995).
A. Lichnerowicz, J. Math. Pure Appl. [**23**]{}, 37 (1944).
J. W. York, Jr., Phys. Rev. Lett. [**26**]{}, 1656 (1971).
Note, however, that Arnowitt, Deser and Misner [@adm62] wrote these equations in terms of the conjugate momenta $\pi_{ij}$ instead of the extrinsic curvature $K_{ij}$.
A. M. Abrahams [*et. al.*]{} (The Binary Black Hole Grand Challenge Alliance) Phys. Rev. Lett. [**80**]{}, 1812 (1998);
G. B. Cook [*et. al.*]{} (The Binary Black Hole Grand Challenge Alliance) Phys. Rev. Lett. [**80**]{}, 2512 (1998).
For example: J. M. Bardeen and T. Piran, Phys. Rep. [**96**]{}, 205 (1983); C. R. Evans, PhD thesis, University of Texas at Austin (1984); A. M. Abrahams and C. R. Evans, Phys. Rev. D [**37**]{}, 318 (1988); A. M. Abrahams, G. B. Cook, S. L. Shapiro and S. A. Teukolsky, Phys. Rev. D [**49**]{}, 5153 (1994).
Note also that ${\cal L}_{\beta} \phi = \beta^i \partial_i \phi +
\partial_i \beta^i / 6$.
Shibata and Nakamura [@sn95] use a similar auxiliary variable $F_i = \tilde \gamma_{ij,j}$ to eliminate some second derivatives from the Ricci tensor.
T. De Donder, [*La gravifique einsteinienne*]{} (Gauthier-Villars, Paris, 1921); C. Lanczos, Phys. Z. [**23**]{}, 537 (1922); Y. Choquet-Bruhat, in [*Gravitation: An Introduction to Current Research*]{}, edited by L. Witten (Wiley, New York, 1962).
A. Fischer and J. Marsden, Comm. Math. Phys., [**28**]{}, 1 (1972).
L. Smarr and J. W. York, Jr., Phys. Rev. D [**17**]{}, 1945 (1978).
M. Parashar and J. C. Brown, in [*Proceedings of the International Conference for High Performance Computing*]{}, eds. S. Sahni, V. K. Prasanna and V. P. Bhatkar (Tata McGraw-Hill, New York, 1995), also [www.caip.rutgers.edu/$\sim$parashar/DAGH/]{}
S. A. Teukolsky, Phys. Rev. D [**26**]{}, 745 (1982).
L. Rezzolla, talk presented at the Binary Black Hole Grand Challenge Alliance’s meeting at the University of Pittsburgh, April 1998.
M. E. Rupright, A. M. Abrahams and L. Rezzolla, Phys. Rev. D [**58**]{}, 044005 (1998); L. Rezzolla, A. M. Abrahams, R. A. Matzner, M. Rupright and S. L. Shapiro, 1998, submitted.
N. Bishop, R. Gomez, L. Lehner, B. Szilagyi, J. Winicour and R. Isaacson, Phys. Rev. Lett. [**76**]{}, 4303 (1996).
Alternatively, the outer boundary conditions can be completely removed by a conformal rescaling; see, for example, P. Hübner, Phys. Rev. D [**53**]{}, 701 (1996) and P. Hübner, 1998, submitted (also gr-qc/9804065).
G. B. Cook, M. S. Scheel, private communication.
---
abstract: 'The first terms of the general solution for an asymptotically flat stationary axisymmetric vacuum spacetime endowed with an equatorial symmetry plane are calculated from the corresponding Ernst potential up to seventh order in the radial pseudospherical coordinate. The metric is used to determine the influence of high order multipoles in the perihelion precession of an equatorial orbit and in the node line precession of a non-equatorial orbit with respect to a geodesic circle. Both results are written in terms of invariant quantities such as the Geroch-Hansen multipoles and the energy and angular momentum of the orbit.'
author:
- |
L. Fernández-Jambrina\
Departamento de Enseñanzas Básicas de la Ingeniería Naval\
E.T.S.I. Navales\
Arco de la Victoria s/n\
E-28040-Madrid, Spain\
and\
C. Hoenselaers\
Department of Mathematical Sciences\
Loughborough University of Technology\
Loughborough LE11 3TU, United Kingdom
title: High Order Relativistic Corrections To Keplerian Motion
---
PACS numbers: 04.25 Nx, 95.10.Ce
Introduction
============
The construction of new exact solutions of the Einstein vacuum field equations describing an asymptotically flat stationary axially symmetric spacetime has expanded in the last decade owing to several methods for generating solutions from a given one, such as the Bäcklund transformations, the inverse scattering method and the HKX transformations (cf. for instance [@Dietz] and references quoted therein). Nevertheless, we are still far from being able to prescribe the physical behaviour of an exact solution at will, and in order to obtain results which could be tested experimentally we are led to use approximate expressions for the metrics with the desired physical properties.
In this paper we study the influence of the first Geroch-Hansen multipole moments [@Geroch],[@Hansen] of order higher than three on some astrophysical situations with stationary and axial symmetry. As is well known, these moments can be calculated from the coefficients of the power expansion of the Ernst potential on the symmetry axis [@Fodor], the relation between the two families of constants being linear up to the third moment, that is, the octupole. The expressions for the multipole moments of order higher than three become considerably more complicated as the order increases, and there is not even a closed formula for calculating all of them. The purpose of this paper is therefore to show to what extent these nonlinearities affect the motion of test particles tracing their orbits around a non-spherical, in principle rotating, compact mass distribution. Of course, these terms are irrelevant for our solar system calculations, but they are meaningful for highly relativistic astrophysical objects, such as pulsars, as stated in [@Schafer].
With this aim in mind we calculate in section \[metric\] the first seven terms of a power expansion of the Ernst potential with arbitrary values for the multipole moments and construct the corresponding approximate metric. This result is used in section \[precession\] to obtain an expression for the perihelion precession of an equatorial trajectory and in section \[node\] to produce the corrections to the Newtonian precession of the nodes of a slightly non-equatorial orbit with reference to a close neighbouring geodesic circle. There is some previous work on this subject in [@quadr] and [@Quevedo], but these references deal only with terms up to the quadrupole moment. The results will be discussed in section \[discussion\].
Calculation of the metric\[metric\]
===================================
The metric of a stationary axially symmetric vacuum spacetime can be written in a canonical form in terms of the Weyl coordinates,
$$ds^2=-f(dt-Ad\phi)^2+\frac{1}{f}\{e^{2\gamma}(d\rho^2+dz^2)+\rho^2\,d\phi^2\},$$
where $t$ and $\phi$ are the coordinates associated with the commuting Killing vectors $\partial_t$ and $\partial_\phi$, and the functions $f$, $A$ and $\gamma$ depend only on the coordinates $\rho$ and $z$.
The whole set of Einstein equations can be shown [@Ernst] to be equivalent to the following system of partial differential equations,
$$\varepsilon=f+i\chi$$
$$\varepsilon_{\rho\rho}+\frac{1}{\rho}\varepsilon_\rho+\varepsilon_{zz}=
\frac{2}{\varepsilon+\bar\varepsilon}({\varepsilon_\rho}^2+{\varepsilon_z}^2)
\label{ernst}$$
$$A_\rho=\frac{4\,\rho}{(\varepsilon+\bar\varepsilon)^2}\,\chi_z\label{a1}$$
$$A_z=-\frac{4\,\rho}{(\varepsilon+\bar\varepsilon)^2}\,\chi_\rho\label{a2}$$
$$\gamma_\rho=\frac{\rho}{(\varepsilon+\bar\varepsilon)^2}\,(\varepsilon_\rho\bar
\varepsilon_\rho-\varepsilon_z\bar\varepsilon_z)\label{g1}$$
$$\gamma_z=\frac{\rho}{(\varepsilon+\bar\varepsilon)^2}\,(\varepsilon_\rho\bar
\varepsilon_z+\varepsilon_z\bar\varepsilon_\rho).\label{g2}$$
It can be shown that the integrability of the last four equations is guaranteed if equation (\[ernst\]), the Ernst equation, is satisfied. Therefore, in order to obtain a solution of the Einstein equations with two Killing vectors, it suffices to solve the Ernst equation and then calculate the metric functions by quadratures.
It is also usual to write the Ernst equation in terms of another potential, $\xi$, related to the previous one by the following relation,
$$\xi=\frac{1-\varepsilon}{1+\varepsilon},$$
which satisfies another partial differential equation,
$$(\xi\bar\xi-1)\,(\xi_{\rho\rho}+\frac{1}{\rho}\xi_\rho+\xi_{zz})=
2\,\bar\xi\,({\xi_\rho}^2+{\xi_z}^2), \label{ernst2}$$
which is also satisfied by $\xi^{-1}$. This latter form is the one introduced originally in [@Ernst] and allows a simple integration of the Kerr metric.
This form is particularly useful for calculating the multipole moments and it will be the one we shall employ.
In order to calculate the approximate solution, we shall write the Ernst potential $\xi$ as a function of two coordinates $r$ and $\theta$ related to the Weyl ones by the following transformation,
$$\rho=r\,\sin\theta\hspace{2cm} z=r\,\cos\theta.$$
We can implement the requirement of asymptotic flatness by writing $\xi$ as a formal inverse power expansion in the pseudospherical radial coordinate $r$,
$$\begin{aligned}
\xi=\sum_{n=1}^{\infty}\xi_n r^{-n}, \end{aligned}$$
where the functions $\xi_n$ depend only on the coordinate $\theta$.
Since we are interested in having a solution which is symmetric with respect to the equatorial plane $\theta=\pi/2$, we shall require that the functions $\xi_n$ of odd order be real whereas those of even order will be taken to be imaginary.
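To illustrate how the expansion is generated (our sketch, not the actual algebra used in the paper), note that at lowest order in $1/r$ the Ernst equation (\[ernst2\]) reduces to the flat axisymmetric Laplace equation acting on $\xi_1(\theta)/r$: the left-hand side contributes $-\Delta(\xi_1/r)$ at order $r^{-3}$, while the right-hand side is of order $r^{-5}$. The only solution regular on the axis is a constant, identified with $m_0$, in agreement with the coefficient $f_1=-2m_0$ listed below. The following symbolic check uses sympy:

```python
# Sketch: leading order of the expansion. Substituting xi = xi1(theta)/r into
# the flat axisymmetric Laplacian (the principal part of the Ernst equation
# at order 1/r^3) leaves a purely angular ODE for xi1.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
xi1 = sp.Function('xi1')
xi = xi1(th) / r

lap = (sp.diff(xi, r, 2) + 2/r*sp.diff(xi, r)
       + sp.diff(xi, th, 2)/r**2
       + sp.cos(th)/(sp.sin(th)*r**2)*sp.diff(xi, th))

print(sp.simplify(r**3 * lap))
# -> xi1'' + cot(theta)*xi1'; the solution regular on the axis is xi1 = const,
#    i.e. xi ~ m_0/r at spatial infinity.
```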
With this information at hand we can now proceed to calculate the metric functions. The function $f$ is just the real part of $\varepsilon$ and from our knowledge of $\xi$ we can calculate the first eight terms of its expansion in the radial coordinate,
$$\begin{aligned}
f=1+\sum_{n=1}^{\infty}f_n r^{-n} \end{aligned}$$
$$\begin{aligned}
f_1=-2\,{ m_0} \end{aligned}$$
$$\begin{aligned}
f_2=2\, { m_0}^{2} \end{aligned}$$
$$\begin{aligned}
f_3={ m_2}-\frac {4\,{ m_0}^{3}}{3}
-3\,m_2\,\cos^2\theta\end{aligned}$$
$$\begin{aligned}
{ f_4}=-2\,{ m_0}\,{ m_2}+{\frac {2\,{ m_0}^{4}}{3}}+
\left (6\,{ m_0}\,{ m_2}+2\,{ m_1}^{2}\right
)\cos^2\theta \end{aligned}$$
$$\begin{aligned}
{ f_5}&=&-{\frac {3\,{ m_4}}{4}}+2\, { m_0}^{2}{ m_2}+{\frac {8\,{ m_0}\,{
m_1}^{2}}{35}}-{\frac {4\,{
m_0}^{5}} {15}}+\nonumber\\ &+& \left ({\frac {15\,{ m_4}}{2}}-6\,{ m_0}^{2}{
m_2}-{\frac {44\,{ m_0}\,{ m_1}^{2}}{7}}\right )\cos^2\theta-{\frac {35\, {
m_4}\,\cos^4\theta}{4}}
\end{aligned}$$
$$\begin{aligned}
{ f_6}&=& {\frac {3\,{ m_0}\,{
m_4}}{2}}+{\frac
{{ m_2}^{2 }}{2}}-{\frac {4\,{ m_2}\,{ m_0}^{3}}{3}}-{\frac {16\,{ m_0}^{2}{ m_1}^{2}}{35}}
+{\frac {4\,{ m_0}^{6}}{45}}+ \nonumber\\&+&\left (-15\,{ m_0}\,{ m_4}-6\,{
m_1}\,{ m_3}-3\,{ m_2}^{2}+4\,{ m_2}\,{
m_0}^{3}+{ \frac {356\,{ m_0}^{2}{ m_1}^{2}}{35}}\right
)\cos^2 \theta+\nonumber\\&+&\left ({\frac {35\,{ m_0}\,{
m_4 }}{2}}+10\,{ m_1}\,{ m_3}+{\frac {9\,{ m_2}^{2}}{2}}\right )\cos^4\theta\end{aligned}$$
$$\begin{aligned}
{ f_7}&=&{\frac{5\,{ m_6}}{8}}-{\frac {3\,{ m_0} ^{2}{ m_4}}{2}}-{\frac {4\,{
m_0}\,{ m_1}\,{ m_3}}{11}}-{ m_0}\,{ m_2}^{2}-{\frac {16\,{ m_1 }^{2}\,{
m_2}}{231}}+\nonumber\\
&+&{\frac {2\,{ m_0}^{4}{ m_2}}{3}}+{\frac {32\,{ m_0}^{3}{ m_1}^{2}}{63}}
-{\frac {8\,{ m_0}^{7}}{315}}
+\nonumber\\&+&
\left (-{\frac { 105\,{ m_6}}{8}}+15\,{ m_0 }^{2}{ m_4}+{\frac {216\,{ m_0}\,{ m_1}\,{
m_3}}{11}}+6\,{ m_0}\,{
m_2}^{2}+\right.\nonumber\\ &+&\left.{\frac {38\,{ m_1}^{2}\,{ m_2}}{11}}
-2\,{ m_0 }^{4}{
m_2} -
{\frac {1208\,{ m_0}^{3}{ m_1}^{2}}{105}}\right)\cos^2\theta+\nonumber\\&+&
\left ({\frac {315\,{ m_6}}{8}}-{\frac {35\,{ m_0}^{2}{
m_4}}{2}}-{\frac
{340\,{ m_0}\,{ m_1}\,{ m_3}}{11}}-{\frac {114\,{
m_1}^{2}\,{ m_2}}{11}}-\right.\nonumber\\ &-&\left.9\,{ m_0}\,{ m_2}^{2}\right ) \cos^4\theta-{\frac {231\,{
m_6}}{8}}\,\cos^6\theta,\end{aligned}$$
where the constants $m_n$ which arise from the integration of the Ernst equation are real if $n$ is even and otherwise imaginary.
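A quick consistency check of these coefficients (an added illustration, not part of the original derivation) is to set every constant except $m_0$ to zero: the listed $f_n$ should then reduce to the $1/r$ coefficients of $f=e^{-2m_0/r}$, i.e. the Weyl monopole (Curzon-Chazy) solution. The following sympy sketch confirms this order by order.

```python
# Added consistency check: with m_1 = m_2 = ... = 0 the coefficients f_1,...,f_7 listed
# above should match the 1/r expansion of f = exp(-2 m_0/r) (Weyl monopole / Curzon-Chazy).
import sympy as sp

r, m0 = sp.symbols('r m_0', positive=True)

f_listed = [-2*m0, 2*m0**2, -sp.Rational(4, 3)*m0**3, sp.Rational(2, 3)*m0**4,
            -sp.Rational(4, 15)*m0**5, sp.Rational(4, 45)*m0**6, -sp.Rational(8, 315)*m0**7]

expansion = sp.series(sp.exp(-2*m0/r), r, sp.oo, 8).removeO()
print([sp.simplify(c - expansion.coeff(r, -n)) for n, c in enumerate(f_listed, start=1)])
# expected output: [0, 0, 0, 0, 0, 0, 0]
```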
The first six terms of the metric function $A$ can be obtained by direct integration of the equations (\[a1\]) and (\[a2\]). In spite of the factor $i$ before the expression for $A$, this function is obviously real since the constants $m_n$ are imaginary for odd $n$,
$$\begin{aligned}
A=-i\sin^2\theta\sum_{n=1}^{\infty}A_nr^{-n} \end{aligned}$$
$$A_1=-{2\,{m_1}}$$
$$A_2= -{2\,{ m_0}\,{ m_1}}$$
$$A_3= { m_3}-{\frac {8\,{
m_0}^{2}{m_1}}{5}} -5\,{ m_3}\,\cos^2\theta$$
$$\begin{aligned}
A_4&=&\frac{3\,m_0\,m_3}{2}+\frac{m_1\,m_2}{2}-\frac{16\,{m_0}^3\,m_1}{15}+
\nonumber\\&+& \left(-\frac{15\,m_0\,m_3}{2}+\frac{3\,m_1\,m_2}{2}\right)
\cos^2\theta\end{aligned}$$
$$\begin{aligned}
A_5&=&-\frac {3\,
m_5}{4}+
\frac {4\, {m_0}^{2}\,m_3}{3}+ \frac {8\, m_0\, m_1\, m_2}{7}
-\frac {64\, {m_0}^4\, m_1}{ 105}+ \nonumber\\ &+&
\left(\frac{21\,m_5}{2}-\frac {20\, {m_0}^2 m_3}{3}\right)\cos^2\theta-
\frac {63\, m_5}{4}\,\cos^4\theta\end{aligned}$$
$$\begin{aligned}
A_6&=&-{ \frac
{5\,{ m_0}\,{ m_5}}{4}}-{\frac {{ m_1}\, { m_4}}{4}}
-{\frac {{ m_2}\, { m_3}}{2}} +{\frac {8\,{ m_0}^{3}\,{ m_3}}{9 }}+\nonumber\\&+&
{\frac {48\,{ m_0}^{2}\,{m_1}{ m_2}}{35}} +{\frac {8\,{ m_0}{ m_1}^{3}}{105}}-{\frac {32\,{
m_0}^{5}\,{ m_1}}{105}}+
\left(\frac {35\, m_0\, m_5\,}{2
}- \frac {5\,m_1\,m_4}{2}+\right.\nonumber\\ &+&\left.m_2\,m_3-\frac {40
\, {m_0}^{3}\, m_3}{9}-\frac {8\, {m_0}^{2}\, m_1\, m_2 }{ 5}+\frac
{16\,m_0{m_1}^3}{21}\right)\cos^2\theta +\nonumber\\ &+&\left(-\frac {105\, m_0\, m_5\,}{4}+\frac {35 \, m_1\, m_4\,}{4}-\frac {5\,m_2
\,m_3}{2}\right)\cos^4\theta.\end{aligned}$$
The only function which remains to be calculated is $\gamma$ and can be obtained as a quadrature from equations (\[g1\]) and (\[g2\]),
$$\begin{aligned}
\gamma=\sum_{n=1}^{\infty}\gamma_{n}r^{-2n} \end{aligned}$$
$$\begin{aligned}
\gamma_1=-\frac{{m_0}^2\,\sin^2\theta}{2} \end{aligned}$$
$$\begin{aligned}
\gamma_2&=&{\frac {3\,{ m_0}\,{ m_2} }{4}}-{\frac {{ m_1}^{2}}{4}}
+\left(-{\frac {9\,{ m_0 }\,{ m_2}}{2}+ {\frac {5\,{
m_1}^{2}}{2}}}\right)\cos^2\theta +\nonumber\\&+&\left({\frac { 15\,{ m_0}\,{ m_2}}{4}}-{\frac
{9\,{ m_1}^{2}}{4}}\right)\cos^4\theta\end{aligned}$$
$$\begin{aligned}
\gamma_3&=&
-{ \frac {5\,{ m_0}\,{ m_4}}{8}}+{\frac {{ m_1}\,{ m_3}}{2}}
-{\frac{3\,{ m_2}^{2}}{8}}+{\frac
{2\,{ m_0}^{2} { m_1}^{2}} {35}}
+ \nonumber\\ &+&\left({ \frac {75\,{ m_0}\,{
m_4}\,}{8}}-{\frac {21\,{ m_1}\,{ m_3}}{2}}+{\frac {4 5\,{
m_2}^{ 2}}{8}}-{\frac {2\,{ m_0}^{2}{
m_1}^{2}}{35}}\right)\cos^2\theta
+ \nonumber\\ &+&
\left(-{\frac
{175\,{ m_0}\,{ m_4}\, }{8}}+{\frac {55\,{ m_1}\,{
m_3}}{2}}-{\frac {117\,{ m_2}^{2}}{8}} \right)\cos^4\theta
+ \nonumber\\ &+&
\left({\frac
{105\,{ m_0}\,{ m_4}}{8}}-{ \frac {35\,{ m_1}\,{
m_3}}{2}}+{\frac {7 5\,{ m_2}^{ 2}}{8}}\right)\cos^6\theta\end{aligned}$$
$$\begin{aligned}
\gamma_4&=&{\frac {35\,{ m_0}\,{
m_6}}{64}}-{\frac {15\,{ m_1}\,{ m_5}}{32}}+{\frac {45\,{ m_2}\,{ m_4}}{64}}-
\nonumber\\ &-&
{\frac {9\,{
m_3}^{2}}{32}}-{\frac {14\,{
m_0}^{2 }{ m_1}\,{ m_3}}{165}}-{\frac {2\,{ m_0}\,{ m_1}^{2}{ m_2}}{33}}+{\frac {16\,{ m_0}^{4}{ m_1}^{2}}{1575}
}+ \nonumber \\&+&
\left (-{\frac {245\,{m_0}\,{ m_6}}{16}}+{\frac
{135\,{ m_1}\,{m_5}}{8}}-{\frac {315\,{
m_2}\,{ m_4}}{16}}+{\frac {81\,{ m_3}^{2}}{ 8}}+
\right.\nonumber\\&+&\left. {\frac
{28\,{ m_0}^{2}{m_1}\,{m_3}}{55}}-{\frac {4\,{ m_0}\,{ m_1}^{2}{
m_2}}{231}}+{\frac{16\,{ m_0}^{4}{ m_1}^{2}}{1575}}\right )\cos^2\theta+\nonumber\\&+&
\left(\frac {2205\,{ m_0}\,{ m_6}}{32}-{\frac {1365\,{ m_1}\,{m_5}}{16}}+{\frac
{3075\,{ m_2}\,{ m_4}}{32}} -\right.\nonumber\\&-& \left.
{\frac {795\,{ m_3}^{2}}{16}}-{\frac {14\,{ m_0}^{2}{ m_1}\,{
m_3}}{33}}+ {\frac {6\,{ m_0}\,{ m_1}^{2}{
m_2}}{77}}\right ) \cos^4\theta+\nonumber\\&+&\left(
-{\frac {1617\,{
m_0}\,{m_6}}{16}}+{\frac{1071\,{ m_1}\,{
m_5}}{8}}-{\frac {2415\,{ m_2}\,{ m_4}}{16}}+{\frac{625\,{m_3}^{2}}{
8}}\right )\cos^6\theta+\nonumber\\&+&\left (
{\frac {3003\,{ m_0}\, { m_6}}{64}}-
{\frac {2079\,{ m_1} \,{ m_5}}{32}}+{\frac {4725\,{ m_2}\,{
m_4}}{64}}-{\frac {1225\,{
m_3}^{2}}{32}}\right)\cos^8\theta.\end{aligned}$$
Perihelion precession of a closed orbit\[precession\]
=====================================================
It is well known from Bertrand’s theorem (cf. for instance [@Goldstein]) that stable bounded orbits of particles moving under the influence of a central force which is neither Newtonian nor harmonic are not closed. Therefore whenever the source of the gravitational field is not exactly monopolar, the bounded trajectories on the equatorial plane will no longer be the elliptic orbits described by Kepler’s first law but will take the form of a precessing ellipse if the deviation from spherical symmetry is small.
The situation becomes a bit more complicated in general relativity. Although it has been proven [@Perlick] that there are just two asymptotically flat static spherically symmetric spacetimes in which the stable orbits are closed, they are rather different from the classical physical situations. Therefore, when relativistic effects are taken into account, not even the motion around a spherical distribution of mass is closed. This effect has been tested in our solar system and it amounts to a slow precession of the perihelion of the orbit of Mercury. Of course other multipole moments of the mass distribution will also contribute to this effect and, in principle, these moments could be calculated by measuring the precession of a certain number of test particles orbiting at conveniently different distances from the gravitational source.
If the test particles are small enough for the tidal forces to be unimportant within the characteristic length of the particle, we can regard them as point particles. If the effects due to their intrinsic angular momentum can be taken as negligible it can be assumed that they trace out timelike geodesics in the spacetime surrounding the gravitational source. Hence, in order to study the influence of the far field multipole moments of the gravitational field, we shall have to solve the geodesic equations for the previously calculated metric. We shall restrict ourselves to the equatorial plane $\theta=\pi/2$.
Since the timelike and azimuthal coordinates are ignorable, we have two first integrals for the motion corresponding to the conserved quantities $E$ and $l$, respectively the total energy per unit of mass and the projection of the angular momentum on the $z$ axis, also per unit of mass, of the test particle. In terms of its 4-velocity $u=(\dot t,\dot r,
\dot\theta,\dot\phi)$ these quantities have the following form,
$$E=-\partial_t\cdot u=f\,(\dot t-A\dot\phi)$$
$$l=\partial_\phi\cdot u=f\,A\,(\dot
t-A\dot\phi)+\frac{1}{f}\,r^2\,\dot\phi,$$
where the overhead dot stands for the derivative with respect to proper time.
Therefore the equations for $t$ and $\phi$ can be written as follows,
$$\dot\phi=f\,\frac{l-E\,A}{r^2} \label{phi}$$
$$\dot t=\frac{E}{f}+f\,A\,\frac{l-E\,A}{r^2}. \label{t}$$
Another integral arises from the fact that the trajectory is timelike and therefore $u\cdot u=-1$. For the geodesics under consideration this means,
$$-1=-f(\dot t-A\dot\phi)^2+\frac{1}{f}(e^{2\gamma}\,\dot
r^2+r^2\,\dot\phi^2).\label{timelike}$$
From the previous three equations $\dot r$ can be obtained as a function of the non-ignorable coordinates and the conserved quantities. However, since we are interested in the shape of the orbit rather than in its time evolution, we divide (\[timelike\]) by $\dot\phi$ to get the derivative of the radial coordinate with respect to the azimuthal angle,
$${r_\phi}^2=e^{-2\gamma}\left\{\frac{r^4\,(E^2-f)}{f^2\,(l-E\,A)^2}-r^2\right\}.$$
It will be useful to write this equation in terms of another function $u=1/r$ as it is done in classical mechanics for solving the motion under central forces,
$${u_\phi}^2=e^{-2\gamma}\left\{\frac{E^2-f}{f^2\,(l-E\,A)^2}-u^2\right\}=F(u)=
\sum_{n=0}^{6}c_n\,u^n+O(u^7). \label{binet1}$$
This equation can be turned into a quasilinear one by taking a derivative with respect to $\phi$ and cancelling the $u_\phi$ factors, since for the analysis of perihelion precession circular orbits are of no interest,
$$u_{\phi\phi}=\frac{1}{2}\,F'(u).\label{binet2}$$
In order to solve these equations perturbatively we need to expand them in powers of a small parameter. A good candidate is the inverse of the angular momentum per unit of mass, $l$, since according to Kepler’s 1-2-3 law, which is assumed to be a good approximation at a great distance from the source, it behaves as $l\sim\sqrt{m_0\,r}$, where $m_0$ is the mass of the source. It can be combined with the mass of the source $m_0$ to yield an acceptable dimensionless small parameter for analysing the far gravitational field. Hence we shall use $\epsilon=m_0/l$ and expand $u$ in the following way,
$$u=\epsilon^2\,\sum_{n=0}^{11}u_n\,\epsilon^n+O(\epsilon^{14}).$$
The reason for starting the expansion at this order is that the expression for the Kepler ellipse, which is expected to be the first term, is second order in $\epsilon$.
The energy per unit of mass of the particle is also to be expanded in $\epsilon$. Therefore we write,
$$E=1+\epsilon^2\,\sum_{n=0}^{n=11}
E_n\,\epsilon^n+O(\epsilon^{14}).$$
In order to avoid the appearance of secular terms we use a coordinate $\psi$ related to $\phi$ by,
$$\psi=\omega\phi\hspace{1.5cm}\omega=\sqrt{1+\sum\omega_i\epsilon^i}.$$
The coefficients $c_n$ are all of the order $\epsilon^2$ except $c_2$ which is clearly of zeroth order in $\epsilon$ and therefore equation (\[binet1\]) takes the form of a hierarchy of forced harmonic oscillators which can be solved iteratively up to the order of accuracy provided by our knowledge of the metric,
$$u_{n\,\psi\psi}+u_{n}=f_n(\psi).$$
The first terms of the expansion of the solution to the equations (\[binet1\]) and (\[binet2\]) are,
$$u_0=\frac{1}{m_0}(1+\sqrt{1+2\,E_0}\cos\psi)\label{Kepler0}$$
$$u_1=0$$
$$u_2=\frac{6+4\,E_0}{m_0}$$
$$u_3=(8+4\,E_0)\,\frac{i\,m_1}{m_0^3}.$$
Of course the term of lowest order is the Kepler ellipse if $E_0$ is negative.
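The mechanics of this iteration can be illustrated on the textbook monopole case, $u_{\phi\phi}+u=m_0/l^2+3\,m_0\,u^2$ (the Schwarzschild equation written in the usual Schwarzschild radial coordinate), which has the same structure as the hierarchy above. The following sympy sketch is an added illustration with hypothetical variable names, not the paper's actual computation: one Poincaré–Lindstedt step cancels the resonant $\cos\psi$ forcing at order $\epsilon^2$ and fixes the frequency correction.

```python
# Added sketch of one Poincare-Lindstedt step for the monopole case.  Write
# u = (m_0/l^2) v, psi = omega*phi, omega^2 = 1 + w2*eps^2 + ..., v = v0 + eps^2 v2 + ...,
# with eps = m_0/l.  The order-eps^2 oscillator for v2 is forced by 3 v0^2 - w2 v0'';
# its resonant cos(psi) part must vanish.
import sympy as sp

psi, e, w2 = sp.symbols('psi e w_2', real=True)

v0 = 1 + e*sp.cos(psi)                               # Kepler ellipse at lowest order
forcing = sp.expand(3*v0**2 - w2*sp.diff(v0, psi, 2))
secular = forcing.coeff(sp.cos(psi))                 # coefficient of the resonant term
print(sp.solve(sp.Eq(secular, 0), w2))               # -> [-6], i.e. omega^2 = 1 - 6 eps^2
# Hence 1/omega - 1 ~ 3 eps^2 and Delta(phi) = 2*pi*(1/omega - 1) ~ 6*pi*eps^2,
# which is the leading monopole term of Delta_1 given below.
```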
From the information we have about the metric we can calculate $\omega$ up to the eleventh power of $\epsilon$. These terms are just what we need to calculate the expression for the perihelion precession, so we shall focus on them. The expressions for the terms $u_{n}$ are not needed and therefore we shall not include them here.
Instead of writing the results as a function of the integration constants $m_i$, it will be more useful to write them in terms of the Geroch-Hansen multipole moments, $P_i$, the physical interpretation of which is more appealing. Bear in mind that the odd multipole moments are imaginary and have to be multiplied by $-i$ to obtain the usual real expressions $J_n$. To calculate these moments we shall make use of the procedure described in [@Fodor].
If we have an expansion of the Ernst potential $\xi$ on the symmetry axis in terms of the Weyl coordinate $z$, viz.
$$\xi(\rho=0)=\sum_{n=0}^\infty C_n\,z^{-(n+1)}$$
then the multipole moments can be calculated as follows,
$$P_n=C_n\hspace{2cm}n\leq 3$$
$$P_4=C_4+\frac{1}{7}\,\bar C_0\,(C_1^2-C_2\,C_0)$$
$$P_5=C_5+\frac{1}{3}\,\bar
C_0\,(C_2\,C_1-C_3\,C_0)+\frac{1}{21}\bar C_1\,(C_1^2-C_2\, C_0).$$
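The recipe above is straightforward to transcribe into computer algebra. The sympy sketch below is an added illustration, not code from the paper; the Kerr axis data $C_n=m(\mathrm{i}a)^n$ used as a test case is the standard example (in one common convention), for which the correction terms in $P_4$ and $P_5$ vanish and the moments coincide with the $C_n$.

```python
# Added sketch of the Fodor-Hoenselaers-Perjes formulas quoted above (illustration only).
import sympy as sp

def multipole_moments(C):
    """Return [P_0, ..., P_5] from the axis coefficients [C_0, ..., C_5]."""
    C0, C1, C2, C3, C4, C5 = C
    P4 = C4 + sp.Rational(1, 7)*sp.conjugate(C0)*(C1**2 - C2*C0)
    P5 = (C5 + sp.Rational(1, 3)*sp.conjugate(C0)*(C2*C1 - C3*C0)
             + sp.Rational(1, 21)*sp.conjugate(C1)*(C1**2 - C2*C0))
    return [C0, C1, C2, C3, P4, P5]

# Kerr axis data: xi(rho=0) = m/(z - i a), i.e. C_n = m (i a)^n.
m, a = sp.symbols('m a', positive=True)
C_kerr = [m*(sp.I*a)**n for n in range(6)]
print([sp.simplify(P - C) for P, C in zip(multipole_moments(C_kerr), C_kerr)])   # -> zeros
```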
As a function of these multipole moments the first coefficients in the expansion of the energy per unit of mass, $E$, read,
$$\begin{aligned}
E_1&=&0 \end{aligned}$$
$$\begin{aligned}
E_2&=&-6-10\,E_0-{\frac {E_0^{2}}{2}} \end{aligned}$$
$$\begin{aligned}
E_3&=&-\left({8}+{12\,E_{0}}\right)\frac{i\,P_{1}}{{ P_0}^{2}} \end{aligned}$$
$$\begin{aligned}
E_4&=&-\frac {47}{4}-20\,E_0 -13\,E_0^{2}+
\frac {E_0^{3}}{2} +(2+3\,E_0)\frac { P_2}{ {P_0}^{3}}\end{aligned}$$
$$\begin{aligned}
E_5&=&-\left(56+ 104\,E_0+56\,E_0^{2}
\right)\frac{i\,P_{1}}{{ P_0}^{2}},\end{aligned}$$
where $E_0$, the Keplerian energy, is a free parameter which has $-1/2$ as a lower bound, corresponding to a circular orbit.
The frequency $\omega$ is different from one and therefore the orbit is not closed. Between two consecutive perihelion approaches the test particle traces an angle $2\,\pi/\omega$. Hence the perihelion has shifted by an angle $\Delta\phi$, given up to the eleventh power of $\epsilon$ by the following expression,
$$\begin{aligned}
{\Delta\phi}&=&2\,\pi\,(\omega^{-1}-1)=\pi\left\{
\Delta_{0}+\Delta_{1}+\Delta_{2}+\Delta_{4}+
\Delta_{8}+\right.
\nonumber\\&+&\left.\Delta_{16}+\Delta_{32}+\Delta_{2\times 4}+\Delta_{2\times 8}+\Delta_{2\times 16}+
\Delta_{4\times 8} \right\}, \end{aligned}$$
where the shift has been split into different terms according to their origin: The first one, $\Delta_{0}$, comprises the Newtonian contribution to the precession, that is, the terms which remain after taking the classical limit $c\rightarrow\infty$. Of course only the gravitational moments are present since rotation has no influence whatsoever in Newtonian dynamics. Since the speed of light, $c$, and the gravitational coupling constant, $G$, have been taken to be one, the terms look rather alike in magnitude. However, if the respective factors are written (a factor $G/c$ for each $\epsilon$, a factor $G/c^2$ for each $P_{2n}$, a factor $G/c^3$ for each $P_{2n+1}$, a factor $c^{-1}$ for each $l$ and a factor $G^{-2}$ for each $E_0$), the actual magnitude of every term is recovered. For instance, the first Newtonian term has a factor $G^2$ and the second a $G^4$. For oblate gravitational sources the quadrupole moment, $P_2$ is negative, whence it contributes to a positive shift of the perihelion in the first order. In the next order the shift contribution of the quadrupole is, however, always positive. (N.b. $E_0$, though negative for bounded orbits, has a lower limit about $-1/2$ which does not allow it to overcome the energy-independent term.) On the other hand the sedecimpole $P_4$ term bears the opposite sign to the one of the quadrupole,
$$\begin{aligned}
\Delta_{0}=-{\frac {3\,\,{ P_2}}{{
P_0}^{3}}}\,\epsilon^4+\left\{\left(\frac {105}{8}+\frac {45\,{ E_0}}{4}\right)
\frac{{ P_4}}{{ P_0}^{5}}+\left(\frac
{105}{8}+ \frac{15\,{ E_0}}{4}\right)\frac{{ P_2}^{2}}{{
P_0}^{6}}\right\}\epsilon^8.\end{aligned}$$
The second term, $\Delta_{1}$, comprises the contribution to the perihelion precession due to a spherically symmetric mass distribution, i.e. the Schwarzschild effect. It can be calculated exactly in terms of elliptic functions and is of the order of $G^2/c^2$. The contribution of every order is always positive,
$$\begin{aligned}
\Delta_{1}&=&6\,\,\epsilon^{2}+\left ({\frac {
105\,}{2}}+15\,\,E_0\right )\epsilon^{4}+\left ({ \frac {975\,}{2}}+165\,\,E_0\right
)\epsilon^{6}+\nonumber\\&+&\left ({\frac {159105\, }{32}}+{\frac {16725\,\,E_0}{8}}+{\frac
{705\,\,E_0^{2}}{8}}\right )\epsilon^{8}+\nonumber\\&+&\left
({\frac {1701507\,}{32}}+ {\frac
{216375\,\,E_0}{8}}+{\frac {20115\,\,E_0^{2}}{8}}\right
)\epsilon^{10}.\label{permon} \end{aligned}$$
In the term $\Delta_{2}$ we have included the influence of a dipole of rotation on the perihelion shift. It is of the order of $G^2/c^2$. Since its lower terms in $\epsilon$ are odd, it is sensitive to whether the probe rotates in the same direction as the source does or not. It is positive if the angular momentum of the source and the orbital angular momentum of the probe are antiparallel and negative otherwise. This has a qualitative explanation in the fact that the perihelion shift due to a mass monopole decreases with the angular momentum of the test particle; if the source happens to be rotating and its angular momentum is $J$, then $l$ is replaced in the first order by $l+2\,J/r$ as it can be seen in equation (\[binet1\]) after substitution of $P_1$ by $i\,J$, whence the ‘effective’ $l$ increases if both momenta are parallel and it would be expected that the perihelion advance diminishes. In contrast, the quadratic terms in $P_1$ are independent of the direction of rotation and are always positive whence they induce a perihelion advance. The cubic terms in $P_1$ behave as the linear ones,
$$\begin{aligned}
\Delta_{2}&=&{\frac {8\,i\,{ P_1} \epsilon^{3}}{{
P_0}^{2}}}+
\left (168+48\,E_0\right )\frac{i\,P_{1}}{{ P_0}^{2}}\epsilon^{5}-
\left (120+24\,E_0\right )\frac{{ P_1}^{2}}{{ P_0}^{4}}\epsilon^{6}+
\nonumber\\&+&
\left (2562+ 1020\,E_0+36\,E_0^{2}\right
)\frac{i\,P_{1}}{{ P_0}^{2}}\epsilon^{7}
- \nonumber\\ &-&\left (\frac {65607\,}{16}
+\frac {44607\,E_0}{28
}+\frac {195\,E_0^{2}}{4}\right
)\frac{{ P_1}^{2}}{{ P_0}^{4}}\epsilon^{8}+\nonumber\\ &+&
\left \{\left(36046+17640\,E_0+1356\,E_0^{2}-16\,E_0^{3}\right)
\frac {i\, P_1}{{ P_0}^{2}}-\left(2048+672\,E_0\right)
\frac {i\,{ P_1}^ {3}}{{ P_0}^{6}}
\right
\}\epsilon^{9}-\nonumber\\&-&
\left (\frac {10256685}{112}+\frac {1320387\,E_0
}{28}+\frac {118305\,E_0^{2}}{28}\right )\frac{{ P_1}^{2}}{{ P_0}^{4}}\epsilon^{10}+\nonumber\\&+&
\left \{\left( \frac {3927489}{8}+\frac
{569361\,E_0}{2}+\frac {70659
\,E_0^{2}}{2}+174\,E_{0}^3+15\,E_0^{4}
\right)\frac{i\,{ P_1}}{{P_0}^{2}}-\right.
\nonumber\\&-& \left.\left(\frac {2735961}{28}+\frac {341339\,E_0}
{7}+\frac {27429\,
E_0^{2}}{7}\right )\frac{i\,{P_{1}}^3}{{ P_0}^{6}}\right\}\epsilon^{11}. \end{aligned}$$
Under the name $\Delta_{4}$ the relativistic terms depending only on the quadrupole moment, $P_2$, and the mass are comprised. The first correction has a factor $G^4/c^2$ in front of it. As it was to be expected, it does not depend on the direction of rotation and, as its Newtonian counterpart, it is positive for oblate gravitational sources. The quadratic terms as a whole are always positive,
$$\begin{aligned}
\Delta_{4}&=&-\left (90
+42\,E_0\right )\frac{{ P_2}}{{P_0}^{3}}\epsilon^{6}-\left
({\frac
{25383\,}{16}}+{\frac {28305\,\,E_0}{ 28}}+
{\frac {375\,E_0^{2}}{4}} \right )\frac{{ P_2}}{{ P_0}^{3}}\epsilon^{8}+
\nonumber\\&+&\left\{-\left ({\frac
{ 2686203}{112}}+{\frac {503379\,E_0}{28
}}+{\frac {80187\,E_0^{2}}{28}}\right)\frac{{ P_2}}{{
P_0}^{3}}+\right.\nonumber\\ &+&\left.\left({\frac
{12471}{16}}+519\,E_0
+{\frac {165\,E_ 0^{2}}{4}}\right
)\frac{{ P_2}^{2}}{{ P_0}^{6}}\right\}\epsilon^{10} \end{aligned}$$
The symbol $\Delta_{8}$ stands for the corrections to the perihelion shift due to a rotation octupole moment $P_3$. The first correction is of the order of $G^4/c^2$. They bear the same relation to the dipole terms as the quadrupole to the monopole terms: The overall sign changes,
$$\begin{aligned}
\Delta_{8}&=& -\left\{ \left ({30}+{24\,E_0}\right )\epsilon^{7}
+\left
({801}+{825\,E_0}+{138\,E_0^{2}
}\right )\epsilon^{9}+\nonumber\right.\\ &+&\left. \left ({\frac
{28671}{2}}+{16475\,E_0}+{4278\,E_0^{2}}+{102\,E_0^{3}
}\right )\epsilon^{11}
\right\} \frac{i\,P_{3}}{{P_{0}}^4}\end{aligned}$$
The influence of the sedecimpole gravitational moment, $P_4$, is included in $\Delta_{16}$ and does not show up until the tenth order of the small parameter. In non-geometrized units it is proportional to $G^6/c^2$, $$\begin{aligned}
\Delta_{16}=\left ({\frac {7425}{16}}+{
570\,E_0}+{\frac {495\,E_0^{2}}{4}}\right )\frac{{ P_4}}{{
P_0}^{5}}\epsilon^{10}.\end{aligned}$$
The last multipole moment to be considered is the rotational trigintaduopole moment, $P_5$, and it is comprised in $\Delta_{32}$. It is of the eleventh order in $\epsilon$, $$\begin{aligned}
\Delta_{32}= \left ({\frac {945}{8}}+{
210\,E_0}+{\frac {135\, E_0^{2}}{2}}\right )\frac{i{ P_5}}{{
P_0}^{6}}\epsilon^{11}.\end{aligned}$$
Now we review the couplings among the different multipole moments other than mass. Up to the order considered there is no coupling between the gravitational moments higher than the mass (except for self-couplings), but there are rotation-rotation couplings and gravitation-rotation couplings. The first one to appear is the dipole-quadrupole coupling, $\Delta_{2\times 4}$. It has a factor of $G^4/c^2$ in the lowest order. If both angular momenta are antiparallel and the source is oblate ($P_2<0$), then the contribution of the bilinear terms is positive. The quadratic terms in the quadrupole $P_2$ are again positive if $J$ and $l$ are antiparallel. Finally the quadratic terms in the dipole are positive if the body is oblate,
$$\begin{aligned}
\Delta_{2\times 4}&=& -\left (90+24\,E_0\right
)\frac{\,i{ P_1}\,{ P_2}}{{ P_0}^{5}}\epsilon^{7}-
\left (3939+2127\,E_0+126\,E_0^{2}\right
)\frac{\,i{ P_1}\,{ P_2}}{{ P_0}^{5}}\epsilon^{9}+
\nonumber\\&+&
\left (2280+900\,E_0 \right)\frac{{ P_1}^{2}{ P_2}}{{ P_0}^{7}}\epsilon^{10}
+
\left\{\left ({\frac {1419}{2}}+303\,E_{0}\right)\frac{i\,{ P_1}{ P_2 }^{2}}
{{ P_0}^{8}}-
\right.\nonumber\\&-&\left.
\left({\frac
{2825301 }{28}}+{\frac {507992\,E_0}{7
}}+{\frac {75765\,E_0^{2}} {7}}+90\,E_0^{3}\right )
\frac {i\,{ P_1} { P_2}}{{
P_0}^{5}}\right\}\epsilon^{11}.\end{aligned}$$
The only rotation-rotation coupling up to this order is between the dipole and octupole moments. It comes under the name of $\Delta_{2\times 8}$ and it is at least of the order $G^6/c^4$. Since it is linear on both rotational moments this term is not sensitive to the direction of rotation. It is of the tenth order in $\epsilon$,
$$\begin{aligned}
\Delta_{2\times 8}=\left ({1068} +
{972\,\,E_0}+{120\,E_0^{2}}\right )
\frac{{ P_1}\,{ P_3}}{{P_0}^{6}}\epsilon^{10}.\end{aligned}$$
The higher rotation-gravitation couplings involve the rotational dipole and the gravitational sedecimpole, $\Delta_{2\times 16}$, and the rotational octupole and the gravitational quadrupole, $\Delta_{4\times 8}$. Both appear first in the eleventh order of perturbation and are at least of the order of $\epsilon^{11}$. The term $\Delta_{2\times 16}$ is positive if $P_4$ is positive and the angular momenta are antiparallel,
$$\begin{aligned}
\Delta_{2\times 16}= \left ({\frac {4005}{8}}+{ {495\,E_0}}+{\frac {
135\,E_0^{2}}{2\,}}\right )\frac{i{ P_1}\,{ P_4}}{{ P_0 }^{7}}\epsilon^{11}
\end{aligned}$$
$$\begin{aligned}
\Delta_{4\times 8}= \left ({\frac {1383}{4
}}+{348\,E_0}+{
45\,E_0 ^{2}}\right )
\frac{i{ P_2}\,{ P_3}}{{ P_0}^{7}}\epsilon^{11}.
\end{aligned}$$
It would be of great interest to know the range of applicability of this perturbative expansion. In an appendix at the end of this paper a simpler case is studied: It is shown there for which values of the parameters the expansions are acceptable for the Schwarzschild metric.
Precession of the line of nodes of a nonequatorial orbit\[node\]
================================================================
Let us consider now a bounded orbit slightly departing from the equatorial plane. If the mass distribution of the source were spherically symmetric, the equatorial plane would not be at all privileged and the orbit would always intersect it at the same nodes. However this is no longer the situation when the source is not exactly spherical, since then the nodes precess due to the perturbation generated by higher order multipoles. In classical nonrelativistic mechanics the first contribution to the precession of the nodes arises from the quadrupole moment, as is shown in an appendix at the end of this paper, whereas in general relativity the rotational dipole moment is the first to contribute.
In the approximation used in this section we consider a geodesic on the equatorial plane and calculate the evolution of small deviations from it. Since the geodesic equation reads,
$$\ddot x^\mu+\Gamma^\mu_{\rho\sigma}\,\dot x^\rho\,\dot
x^\sigma=0,$$
it is straightforward that nearby geodesics which deviate from the original geodesic by a vector $\delta^\mu$ fulfill,
$$\ddot\delta^\mu+2\,\Gamma^\mu_{\rho\sigma}\,\dot\delta^\rho\,\dot
x^\sigma+\Gamma^\mu_{\rho\sigma,\nu}\,\dot x^\rho\,\dot
x^\sigma\,\delta^\nu=0.$$
Since we are interested in small deviations from the equatorial plane, we shall focus our attention on the $\theta$ coordinate. Taking into account that the reference geodesic lies on the symmetry plane $\theta=\pi/2$ and that the first derivatives of the metric with respect to $\theta$ vanish on it, the geodesic deviation equation for $\delta^\theta$ reduces to,
$$\ddot\delta^\theta-\frac{1}{2}\,g^{\theta\theta}\,g_{\rho\sigma,\theta\theta}
\,\dot x^\rho\,\dot x^\sigma\,\delta^\theta=0.$$
Instead of considering an arbitrary bounded reference geodesic on the equatorial plane, we shall restrict ourselves to geodesic circles. Of course many interesting features will be lost, but we deem that this paper would become even longer if we took them into account.
In order to compute the evolution of the nodes with respect to the azimuthal angle on the geodesic, the previous equation needs to be divided by $\dot\phi^2$, which is constant on the circle. From now on we shall write $\delta$ instead of $\delta^\theta$ to avoid cumbersome notations,
$$\delta_{\phi\phi}+\Omega^2\,\delta=0\hspace{1.5cm}\Omega^2=-\frac{1}{2\,
\dot\phi^2 }\, g^{\theta\theta}\,g_{\rho\sigma,\theta\theta}\,\dot
x^\rho\,\dot x^\sigma,$$
where every function is to be calculated on $\theta=\pi/2$ and $r=R$, the radius of the geodesic circle.
The previous expression states that the nodes of nearby geodesics are separated by regular intervals of the coordinate $\phi$. If $\Omega$ is different from one, these nodes will travel around the geodesic circle instead of remaining at constant values of the azimuthal angle. Since it would be of interest to write down the result in a coordinate independent expression, it will be required to make use of the geodesic equations to remove the dependence on the radius of the circle, $R$, and also to cast the energy, $E$, as a function of the angular momentum of the circular orbit, $l$, so that the final result be a function of the multipole moments and $l$ only.
As it is well known, the timelike geodesic equations can be obtained from the Lagrange equations applied to a functional $L=g_{\mu\nu}\,\dot
x^\mu\,\dot x^\nu$ and the constraint $L=-1$. Two of these equations, namely (\[phi\]) and (\[t\]), have already been used to remove the dependence on the derivatives of the ignorable coordinates. The equation corresponding to variations of $\theta$ is automatically satisfied. We are left just with two equations, corresponding to the constraint and the variations of $r$, viz.
$$1=\frac{E^2}{f}-f\,\frac{(l-E\,A)^2}{r^2}\label{tim}$$
$$\frac{E^2}{f^2}\,\partial_rf-2\,E\,f\frac{l-E\,A}{R^2}\partial_rA-f^2
\frac{(l-E\,A)^2}{R^4}\partial_r\left(\frac{r^2}{f}\right)=0,$$
where we have used $\dot r=0$ and $\theta=\pi/2$.
From the last equation we get $E/l$ as a function of $R$,
$$\frac{E}{l}=\frac{-b\pm\sqrt{b^2-4\,a\,c}}{2\,a},$$
$$\begin{aligned}
a&=&A^2\,f^2\,R^{-4}\,\partial_r(r^2\,f^{-1})-f^{-2}\,\partial_rf-2\,A\,f\,
R^{-2}\partial_rA=\nonumber\\&=&-2\,P_0\,R^{-2}+O(R^{-3}) \end{aligned}$$
$$\begin{aligned}
b&=&2\,R^{-2}\,f\partial_rA-2\,A\,f^2\,R^{-4}\partial_r(r^2\,f^{-1})=\nonumber\\
&=&-12\,i\,P_1\, R^{-4}+O(R^{-5}) \end{aligned}$$
$$\begin{aligned}
c=f^2\,R^{-4}\partial_r(r^2\,f^{-1})=2\,R^{-3}+O(R^{-4}).\end{aligned}$$
Since the energy must be positive, the solution with the minus (plus) sign in front of the square root corresponds to a positive (negative) $l$. On the other hand equation (\[tim\]) furnishes $l$ in terms of $E/l$, which obviously has the same coefficients that were obtained for $E$ in the previous section after substituting $E_0=-1/2$.
The frequency of the precession of the nodes can be written now independently of the choice of coordinates. As was to be expected, the first term is equal to one, corresponding to the frequency of the nodes when the mass distribution of the gravitational source is perfectly spherical.
The contributions to the precession of the line of nodes of a timelike geodesic which is slightly tilted with respect to a geodesic equatorial circle have been classified in the same way as it was done for the perihelion shift: A classical term plus the relativistic terms, divided according to their multipole content. In order to write down the expressions in non-geometrized units, factors including $G$ and $c$ have to be included as it was done in the previous section. The description which was made there of the necessary factors for each term in $\Delta\phi$ for the perihelion shift also applies. The information that we have of the metric allows us to calculate the coefficients up to $l^{-13}$.
There is, of course, no contribution from a pure mass monopole. Were the frequency equal to one, then the nodes would remain at constant $\phi$. Therefore, $\Delta\phi=2\,\pi\,(\Omega^{-1}-1)$ describes the angle through which the line of nodes has precessed in one revolution of the reference circle,
$$\begin{aligned}
\Delta\phi&=&\pi\,\left\{\Delta_{0}
+\Delta_{2}+\Delta_{4}
+\Delta_{8}+\Delta_{16}+\Delta_{32}+\Delta_{2\times 4} +\right.
\nonumber\\&+&\left.\Delta_{2\times 8}+\Delta_{2\times 16}+\Delta_{4\times 8} \right\}\end{aligned}$$
For oblate objects ($P_2<0$), the influence of the mass quadrupole amounts to a delay in the precession of the line of nodes with respect to the $\phi$ coordinate on the circle of reference. Therefore the line of nodes does not precess in the same direction as the perihelion. This fact should not be confused with the precession of the angular momentum vector in time, which is of course positive. The contribution from the sedecimpole term is negative for positive $P_4$ and the one from the sexagintaquattuorpole is positive for positive $P_6$. There is also a classical coupling between $P_4$ and $P_2$. The nonlinear terms in the quadrupole moment bear a positive sign,
$$\begin{aligned}
\Delta_{0}&=&\frac{3\,{ P_0}\,{ P_2}}{l^{4}}+\left
(-{\frac {15\,{ P_0}^{3}{
P_4}}{2}}+{\frac {9\,{ P_0}^{2}{ P_2}^{2}}{4}}\right )l^{-8}+\nonumber \\&+&
\left ({\frac {105\,{ P_0}^{5}{ P_6}}{8}}+{\frac {45\,{ P_0}^{4}{ P_2 }\,{
P_4}}{8}}+
{\frac {81\,{ P_0}^{3}{ P_2}^{3}}{8}}\right )l^{-12}.\end{aligned}$$
Most of the dipole-dependent terms are sensitive to the direction of motion of the test particle relative to the rotation of the source. If the angular momenta are parallel ($l$ has the same sign as $J=-i\,P_1$), then the linear and cubic terms in $P_1$ induce an advance of the line of nodes. On the other hand the quadratic and quartic terms in the dipole moment are not sensitive to that relative sign and their influence always amounts to a delay,
$$\begin{aligned}
\Delta_{2}&=&-\frac{4\, i\,{ P_0}{
P_1}}{l^{3}}-\frac{18\,i\,{ P_0}^{3}{ P_1} }{l^{5}}+\frac{18\,{ P_0}^{2}{
P_1}^{2}}{l^{6}}-\frac {243\,i\,{ P_0}^{5}{ P_1}}{2\,l^7}+\frac
{4131\,{ P_0}^{4}{ P_1}^{2}}{14\,l^8}+\nonumber\\&+&\left (196\,i{ P_0}^{3} { P_1}^{3}-{
\frac {3861\,i{ P_0}^{7}{ P_1}}{4}}\right)l^{-9} +\frac
{27294 \,{ P_0}^{6}{ P_1}^{2}}{7}\,l^{-10}+ \nonumber\\&+&\left ({\frac { 37983\,i{
P_0}^{5}{ P_1}^{3}}{7}}-{\frac { 268515\,i{ P_0}^{9}{ P_1}}{32}} \right
)l^{-11}+\nonumber\\&+&\left (-2565\,{ P_0}^{4}{ P_1}^{4}+{\frac
{1060043\,{ P_0}^{8}{ P_1}^{2}}{22}}\right )l^{-12} +\nonumber\\&+& \left
({\frac {734200\,i{ P_0}^{7}{ P_1}^{3}}{7}}-{\frac { 4944807\,i{ P_0}^{11}{
P_1}}{64}} \right )l^{-13}.\end{aligned}$$
The relativistic contribution of the linear terms in the quadrupole moment $P_2$ is again negative for oblate sources. The quadratic terms furnish a negative contribution to the precession shift regardless of whether the source is oblate or prolate, whereas the classical quadratic correction is positive, as it was shown previously,
$$\begin{aligned}
\Delta_{4}&=&\frac{24\,{ P_0}^{3}{ P_2}}{l^{6}}+\frac
{2799\,{ P_0}^{5}{ P_2}}{14\,l^{8}}+\left (-24\,{ P_0}^{4}{
P_2}^{2}+{\frac {12396\,{ P_0}^{7}{
P_2}}{7}}\right)l^{-10}+\nonumber\\&+& \left (-{\frac
{327223\,{P_0}^{6}{P_2}^{2}}{308}}+ {\frac
{361993\,{ P_0}^{9}{ P_2}}{22}}\right )l^{-12}.\end{aligned}$$
As it happened with the perihelion precession, the rotational octupole term has the opposite sign to the one of the linear dipole term and of course it is dependent on the direction in which the probe orbits,
$$\begin{aligned}
\Delta_{8}&=&\frac{12\,i{ P_0}^{3}{ P_3}}{
l^{7}}+\frac{156\,i{ P_0}^{5}{ P_3}}{l^{9}}+{\frac {3387\,i{ P_0}^{7}{
P_3}}{2 \,l^{11}}}+\frac{17707\,i{ P_0}^{9} { P_3}}{l^{13}}.\end{aligned}$$
The relativistic sedecimpole term bears the same sign as its classical counterpart, although it is much smaller in magnitude. There is no relativistic term in $P_6$, just the classical term which has already been described,
$$\begin{aligned}
\Delta_{16}=-\frac{120\,{ P_0}^{5}{ P_4}}{l^{10}}-{\frac
{2905\,{ P_0}^{7}{ P_4}}{2\,l^{12}}},\end{aligned}$$
and again we have another permutation of sign; the term in $P_{5}$ bears the same sign as the linear dipole term,
$$\begin{aligned}
\Delta_{32}= -{\frac {45\,i{ P_0}^{5}{ P_5}}{2\,l^{11}}}
-{\frac {1905\,i{ P_0}^{7} { P_5}}{4\,l^{13}}}.\end{aligned}$$
Now we discuss the coupling terms between multipole moments other than the mass. The bilinear coupling between the dipole and the quadrupole moment is positive for oblate gravitational sources which rotate in the same direction as the test particle. The quadratic term in $P_1$ is negative for oblate objects no matter in which direction they rotate. The quadratic term in $P_2$ is positive when the angular momenta $J$ and $l$ are parallel,
$$\begin{aligned}
\Delta_{2\times 4}&=&\frac{18\,i{ P_0}^{2}{ P_1}\,{ P_2}}{l^{7}}
+\frac{441\,i{ P_0}^{4} { P_1}\,{ P_2}}{l^{9}}-\frac{273\,{ P_0}^{3}{
P_1}^{2}{ P_2}}{l^{10}}+\nonumber\\&+& \left (-{\frac { 81\,i{ P_0}^{3}{
P_1}\,{ P_2}^{2}}{2}}+{ \frac {198369\,i{ P_0}^{6}{ P_1}\,{ P_2}}{28}
}\right )l^{-11}- {\frac {3007003\,{ P_0}^{5 }{ P_1}^{2}{
P_2}}{308\,l^{12}}}+ \nonumber\\&+&\left (-{\frac {10899i
{ P_0}^{5}{ P_1}{ P_2}^{2}}{4}}-4482i{ P_0}^{4 }{ P_1}^{3}{ P_2}+
{\frac {5509583i{ P_0}^{8}{ P_1}{
P_2}}{56}} \right
)l^{-13}.\end{aligned}$$
The rotational bilinear coupling between the dipole and the octupole moment is independent of the direction of rotation and it is positive when $J=-i\,P_1$ and $J_3=-i\,P_3$ have the same sign. There is however a higher coupling which is quadratic in the dipole moment and contributes in the same way as $\Delta_{8}$ does,
$$\begin{aligned}
\Delta_{2\times 8}=-\frac{186\,{ P_0}^{4}{ P_1}\,{
P_3}}{l^{10}}-{\frac {100201\,{ P_0}^{6}{ P_1}\,{
P_3}}{22\,l^{12}}}-\frac{3168\,i{ P_0}^{5} { P_1}^{2}{ P_3}}{l^{13}}.\end{aligned}$$
The last terms to be considered up to this order are the couplings between $P_1$ and $P_4$ and between $P_2$ and $P_3$, which are both sensitive to the relative directions of rotation of the source and the probe particle,
$$\begin{aligned}
\Delta_{2\times 16}&=&- {\frac {255\,i{ P_0}^{4}{ P_1}\,{
P_4}}{2\,l^{11}}}- {\frac { 14193\,i{ P_0}^{6}{ P_1}\,{ P_4}}{4\,l^{13}}}\end{aligned}$$
$$\begin{aligned}
\Delta_{4\times 8}&=&- \frac{21\,i{ P_0}^{4}{ P_2}\,{
P_3}}{l^{11}}- {\frac {1971 \,i{ P_0}^{6}{ P_2}\,{ P_3}}{2\,l^{13}}}.\end{aligned}$$
Conclusions\[discussion\]
=========================
In this paper we have displayed the approximate general asymptotically flat stationary axisymmetric metric for the vacuum spacetime surrounding a compact source possessing a symmetry plane orthogonal to the symmetry axis. The calculations have been carried out for the Ernst potential up to the term in $r^{-7}$ in the pseudospherical radial coordinate. This has been useful to calculate relativistic corrections to the classical orbits around a compact mass distribution. In particular the perihelion precession on the symmetry plane and the precession of the nodes of a slightly tilted circular orbit have been calculated.
Concerning the perihelion shift, it has been shown that the contributions of each gravitational and rotational multipole moment follow a curious pattern of alternation of signs: The term in the mass monopole is always positive, whereas the linear quadrupole term is negative for positive $P_2$ (the quadratic term in $P_2$ is always positive) and the sedecimpole term is again positive for positive $P_4$. The sexagintaquattuorpole $P_6$ is outside the limits of our perturbative expansion. If we think for instance of an oblate source which is very close to a classical axisymmetric homogeneous ellipsoid, then we would also have alternating signs in the gravitational multipole expansion ($P_{2\,n}=3\,P_0\,(c^2-a^2)^n/ [(2\,n+1)(2\,n+3)])$, $c$ and $a$ being the lengths of the ellipsoid’s semiaxis parallel and orthogonal to the symmetry axis, respectively) and every term would have a positive contribution. The linear and cubic rotational dipole term is always positive for counterrotating configurations of the source and the probe (the quadratic contribution is always positive regardless of the relative rotation) whereas the octupole term is positive if $J_3=-i\,P_3$ and $l$ bear the same sign. The trigintaduopole term has the same sign as the dipole term. It should be pointed out that the energy dependent terms are not strong enough to affect these signs. The coupling between different multipoles preserves the sign of the product of the corresponding linear terms, that is, if there is a $P_n-P_m$ coupling, then its sign is obtained from the product of the $-1$ factors multiplying the corresponding linear terms in $P_n$ and $P_m$. No coupling between mass multipole moments of order higher than the monopole appears to this order except for self-couplings. The sign independent terms, such as the mass monopole term and the quadratic dipole and quadratic quadrupole terms, always give rise to a perihelion advance.
The influence of the different multipole moments on the precession of the line of nodes of a particle departing by a small amount from the equatorial plane is somewhat different from the behaviour which has been shown for the perihelion precession: The alternating pattern of the signs of the linear terms in the multipole moments other than the mass is preserved both for gravitational and rotational moments independently up to the considered perturbation order: If the $P_{2n}$ were all positive, then the contribution of each $P_{2n}$ would be opposite to the one of $P_{2n+2}$ and a similar reasoning is valid for the rotational terms. However the relativistic coupling terms bear the opposite sign to the one which would be expected from the product of the $-1$ factors before the corresponding linear terms: For instance, if the $P_1$ and $P_2$ linear terms are positive then the corresponding bilinear coupling is negative. On the other hand, this rule is not valid for the Newtonian self-couplings. Another difference arises from the fact that the gravitational sexagintaduopole moment $P_6$ does influence the precession of the line of nodes in this order of perturbation, whereas it does not affect the perihelion precession. Also, there were no couplings between the gravitational moments (self-couplings and mass-couplings excluded) in the perihelion precession, but there is one (the classical $P_2-P_4$-coupling) in the node precession. It is curious that, while the Newtonian sign-independent terms are always positive, the relativistic ones (the quadratic dipole and the quadratic quadrupole self-couplings) are negative.
Of course most of these corrections are meaningless for astronomical purposes in our solar system, but they are very likely to be relevant for highly relativistic astrophysical objects, such as pulsars and black holes, where other post-Newtonian effects [@Schafer] have been shown to be present.
Schwarzschild spacetime
======================
In this appendix we shall study the range of applicability of the perturbation expansion for bounded orbits in Schwarzschild’s spacetime. For this metric the geodesic equation (\[binet1\]) can be solved exactly in terms of elliptic functions. Instead of writing it as a differential equation for $u$ as a function of $\phi$, we write it as a differential equation for $\phi$ as a function of $u$. In this section $u$ is no longer the inverse of the pseudospherical radius but of the usual Boyer-Lindquist radius,
$$\phi_u=\left(2\,P_0\,u^3-u^2+\frac{2\,P_0}{l^2}\,u+\frac{E^2-1}{l^2}\right)^
{-1/2}=g(u)^{-1/2}.\label{binetsch}$$
We are considering that $E<1$ and therefore $g(u)$ has at least one zero. For bounded motion we need three zeros so that the orbit ranges between the apsidal points. If we call these zeros $a\geq b\geq c\geq 0$, then equation (\[binetsch\]) can be integrated [@tablas] in terms of the elliptic integral of the first kind, $F(\gamma,q)$, in the region $b\geq u>c$,
$$(\phi-\phi_0)\,\sqrt{\frac{P_0\,(a-c)}{2}}=F(\gamma,q)=\int_0^\gamma
d\alpha\, (1-q^2\,\sin^2\alpha)^{-1/2}$$
$$\gamma=\arcsin\sqrt{\frac{u-c}{b-c}}\hspace{1.5cm}q=
\sqrt{\frac{b-c}{a-c}}.$$
An expression for $u$ in terms of $\phi$ is easily obtained taking into account that the elliptic sine is the sine of $F(\gamma,q)$,
$$u=c+(b-c)\textrm{sn}^2\left\{\sqrt{\frac{P_0\,(a-c)}{2}}
\,(\phi-\phi_0)\right\}.$$
Since the real period of the elliptic sine is $4\,K(q)=4\,F(\pi/2,q)$, the function $u$ above is $2\,K(q)$-periodic in its argument. Therefore the exact perihelion precession of the orbit of a test particle around a spherical non-rotating compact object will be,
$$\Delta\phi=\frac{2\,\sqrt{2}\,K(q)}{\sqrt{P_0\,(a-c)}}-2\,\pi,$$
the perturbation expansion of which in $\epsilon$ coincides with the one of $\Delta_{1}$ in equation (\[permon\]), as it was to be expected,
$$K(q)=\frac{\pi}{2}\left\{1+\sum_{n=1}^\infty\left(\frac{(2\,n-1)!!}{2^n\,n!}
\right)^2\,q^{2\,n}\right\}.$$
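As a numerical illustration (added here, not part of the paper; the parameter values are arbitrary and scipy is used for $K$), the exact elliptic-integral formula can be compared with the truncated series $\pi\,\Delta_{1}$ of equation (\[permon\]), with the energy built from the monopole specialisation of the coefficients $E_2$ and $E_4$ given in section \[precession\].

```python
# Added numerical sketch: exact Schwarzschild perihelion shift vs. the series pi*Delta_1.
import numpy as np
from scipy.special import ellipk               # complete elliptic integral K, argument q^2

P0, E0, eps = 1.0, -0.3, 0.02                  # source mass, Keplerian energy, eps = P0/l
l = P0/eps

# E = 1 + eps^2 (E0 + E2 eps^2 + E4 eps^4 + ...), monopole case (all P_n = 0 for n >= 1).
E2 = -6 - 10*E0 - E0**2/2
E4 = -47/4 - 20*E0 - 13*E0**2 + E0**3/2
E = 1 + eps**2*(E0 + E2*eps**2 + E4*eps**4)

# Roots a >= b >= c of g(u) = 2 P0 u^3 - u^2 + 2 P0 u/l^2 + (E^2-1)/l^2.
a_, b_, c_ = np.sort(np.roots([2*P0, -1.0, 2*P0/l**2, (E**2 - 1)/l**2]).real)[::-1]
exact = 2*np.sqrt(2)*ellipk((b_ - c_)/(a_ - c_))/np.sqrt(P0*(a_ - c_)) - 2*np.pi

series = np.pi*(6*eps**2 + (105/2 + 15*E0)*eps**4 + (975/2 + 165*E0)*eps**6
                + (159105/32 + 16725/8*E0 + 705/8*E0**2)*eps**8
                + (1701507/32 + 216375/8*E0 + 20115/8*E0**2)*eps**10)
print(exact, series)        # the two numbers should agree to many significant digits
```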
The limits of the range of applicability of the previous expansion are $q=1$ ($a=b$) and $q=0$ ($b=c$). For both values, there are two zeros of $g(u)$ which coalesce and the bounded motion is no longer stable. Therefore we are led to study the range of parameters for which $g(u)$ has double roots. This happens when $u$ takes either of the values $u_\pm$ which are solutions of $g'(u)=0$,
$$u_\pm=\frac{1\pm\sqrt{1-12\epsilon^2}}{6\,P_0},$$
and therefore the allowed region is the one enclosed between the curves $g(u_\pm)=0$ in the $E^2-\epsilon^2$ parameter plane. This yields critical values for the energy per unit of mass, $E_c^2=8/9$, and the perturbation parameter, $\epsilon_c^2=1/12$. The perturbative approach is no longer valid beyond this point in the parameter plane since there are no stable bounded orbits. It is remarkable that for $\epsilon^2>1/16$ there is not only a lower limit for the energy of the bounded orbit but also an upper limit.
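These critical values are easily confirmed with a couple of lines of computer algebra (an added check, not in the original):

```python
# Added check of the marginal-stability values quoted above.
import sympy as sp

u, P0, l, E2 = sp.symbols('u P_0 l E2', positive=True)     # E2 stands for E^2

g = 2*P0*u**3 - u**2 + 2*P0*u/l**2 + (E2 - 1)/l**2
print(sp.solve(sp.diff(g, u), u))                  # the two roots u_+- given above

# The degenerate case u_+ = u_- occurs at eps^2 = (P0/l)^2 = 1/12; there g(u)=0 fixes E^2.
print(sp.solve(g.subs({u: 1/(6*P0), l: P0*sp.sqrt(12)}), E2))    # -> [8/9]
```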
Classical precession of the line of nodes
=========================================
In this appendix we shall briefly derive the classical expression for the precession of the line of nodes.
The Lagrangian for the motion of a particle in a gravitational field is,
$$L=\frac{1}{2}\,\dot r^2+\frac{1}{2}\,r^2\dot
\theta^2+\frac{1}{2}\, r^2\,\sin^2\theta\,\dot\phi^2-V(r,\theta)$$
$$V(r,\theta)=-\sum_{n=0}^\infty
\frac{P_n\,p_n(\cos\theta)}{r^{n+1}}.$$
The equations of motion and conserved quantities which can be obtained from this Lagrangian are:
$$E=\frac{1}{2}\,\dot r^2+\frac{1}{2}\,r^2\dot
\theta^2+\frac{1}{2}\, r^2\,\sin^2\theta\,\dot\phi^2+V(r,\theta)$$
$$l=r^2\,\sin^2\theta\dot\phi$$
$$\ddot
r=r\,\dot\theta^2+r\,\sin^2\theta\,\dot\phi^2-\partial_rV(r,\theta)$$
$$r^2\,\ddot\theta+2\,r\,\dot
r\,\dot\theta=r^2\,\sin\theta\,\cos\theta\,\dot\phi^2-\partial_\theta
V(r,\theta)\label{theta}.$$
Truncating the Legendre expansion at the sexagintaquattuorpole multipole moment, $P_6$, we get the following expressions for the energy per unit of mass, $E$, and the radius, $R$, of a circular orbit on the plane $\theta=\pi/2$ in the far field region,
$$E=-\frac{P_0^2}{2\,l^2}+\frac{P_0^3\,P_2}{2\,l^6}-\frac{3\,P_0^5\,P_4+9\,
P_0^4\,P_2^2}{8\,l^{10}}+O(l^{-14})$$
$$\begin{aligned}
R^{-1}=\frac{P_0}{l^2}-\frac{3\,P_0^2\,P_2}{2\,l^6}+
\frac{15\,P_0^{4}\,P_4+36\,P_0^3\,P_2^2}{8\,l^{10}}+O(l^{-14}).
\label{radius} \end{aligned}$$
In order to obtain the oscillations about the equatorial plane of a slightly tilted bounded trajectory with respect to the circular orbit, we introduce a small variation in the equation (\[theta\]). The result will be divided by $\dot\phi^2$ to yield the evolution of $\delta\theta$ as a function of $\phi$,
$$(\delta\theta)_{\phi\phi}=-\Omega^2\,\delta\theta
\hspace{1.5cm}\Omega=\sqrt{1+\frac{R^2}{l^2}\,V_{\theta\theta}(R,\pi/2)}.$$
From this expression we get the frequency of the oscillations, $\Omega$, which can be written in terms of $l$ and the multipole moments by inserting equation (\[radius\]) into it,
$$\begin{aligned}
\Delta\phi&=&2\,\pi\,\left(\frac{1}{\Omega}-1\right)=\frac{3\,\pi\,P_0\,P_2}
{l^4}+\frac{\pi\,(9\,P_0^2\,P_2^2-30\,P_0^3\,P_4)}{4\,l^8}+\nonumber\\&+&\frac{\pi\,
(105\,P_0^5\,P_6+45\,P_0^4\,P_2\,P_4+81\,P_0^3\,P_2^3)}{8\,l^{12}}+O(l^{-16}),\end{aligned}$$
which obviously coincides with the term $\Delta_{0}$ which was calculated using the full relativistic theory in section \[node\].
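The leading term of this expression can also be recovered in a few lines of computer algebra (an added sketch, not part of the paper): only $P_0$ and $P_2$ are kept in the potential and the circular radius is taken at leading order, $R=l^2/P_0$, which suffices for the $l^{-4}$ term.

```python
# Added sketch: leading classical node precession from the monopole + quadrupole potential.
import sympy as sp

r, theta, l, P0, P2 = sp.symbols('r theta l P_0 P_2', positive=True)  # P_2 < 0 for oblate
                                                                      # bodies; the symbolic
                                                                      # result is unaffected
V = -P0/r - P2*sp.legendre(2, sp.cos(theta))/r**3
Vtt = sp.diff(V, theta, 2).subs(theta, sp.pi/2)       # V_{theta theta} on the equator

R = l**2/P0                                           # circular radius, leading order
Omega = sp.sqrt(1 + R**2/l**2*Vtt.subs(r, R))
dphi = sp.series(2*sp.pi*(1/Omega - 1), l, sp.oo, 6).removeO()
print(sp.simplify(dphi))                              # -> 3*pi*P_0*P_2/l**4
```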
The present work has been supported by Dirección General de Enseñanza Superior Project PB98-0772. L.F.J. wishes to thank F.J. Chinea, L.M. González-Romero, F. Navarro-Lérida and M.J. Pareja for valuable discussions. L.F.J. wishes to thank the Department of Mathematical Sciences of the Loughborough University of Technology for their hospitality.
[99]{} (eds.: C. Hoenselaers and W. Dietz, Springer Verlag), Berlin-New York (1984)
R. Geroch, [*J. Math. Phys.*]{} [**11**]{} 2580 (1970)
R. O. Hansen, [*J. Math. Phys.*]{} [**15**]{} 46 (1974)
G. Fodor, C. Hoenselaers, Z. Perjés, [*J. Math. Phys.*]{} [**30**]{} 2252 (1989)
T. Damour, G. Schäfer, [*Nuovo Cimento*]{} [**B 101**]{} 127 (1988)
C. Hoenselaers, [*Prog. Theor. Phys.*]{} [**56**]{} 324 (1977)
H. Quevedo, [*Ph. D. Thesis*]{}, Universität zu Köln (1987)
F. J. Ernst, [*Phys. Rev.*]{} [**167**]{} 1175 (1968)
H. Goldstein, [*Classical Mechanics*]{}, Addison-Wesley (1980)
V. Perlick, [*Class. Quantum Grav.*]{} [**9**]{} 1009 (1992)
I. S. Gradshteyn, I. M. Ryzhik, [*Table of Integrals, Series and Products*]{}, Academic Press, New York (1965)
---
abstract: 'The Landau-Selberg-Delange method provides an asymptotic formula for the partial sums of a multiplicative function whose average value on primes is a fixed complex number $v$. The shape of this asymptotic implies that $f$ can get very small on average only if $v=0,-1,-2,\dots$. Moreover, if $v<0$, then the Dirichlet series associated to $f$ must have a zero of multiplicity $-v$ at $s=1$. In this paper, we prove a converse result that shows that if $f$ is a multiplicative function that is bounded by a suitable divisor function, and $f$ has very small partial sums, then there must be finitely many real numbers $\gamma_1$, $\dots$, $\gamma_m$ such that $f(p)\approx -p^{i\gamma_1}-\cdots-p^{i\gamma_m}$ on average. The numbers $\gamma_j$ correspond to ordinates of zeroes of the Dirichlet series associated to $f$, counted with multiplicity. This generalizes a result of the first author, who handled the case when $|f|\le 1$ in previous work.'
address:
- |
Département de mathématiques et de statistique\
Université de Montréal\
CP 6128 succ. Centre-Ville\
Montréal, QC H3C 3J7\
Canada
- |
Department of mathematics\
Stanford University\
450 Serra Mall, Building 380\
Stanford, CA 94305-2125\
USA
author:
- Dimitris Koukoulopoulos
- 'K. Soundararajan'
title: The structure of multiplicative functions with small partial sums
---
Introduction
============
Throughout this paper $f$ will denote a multiplicative function and $$L(s,f) =\sum_{n=1}^{\infty} \frac{f(n)}{n^s}$$ will be its associated Dirichlet series, which is assumed to converge absolutely in Re$(s)>1$. We then have $$-\frac{L^{\prime}}{L} (s,f) =\sum_{n=1}^{\infty} \frac{\Lambda_f(n)}{n^s},$$ for certain coefficients $\Lambda_f(n)$ that are zero unless $n$ is a prime power.
Let $D$ denote a fixed positive integer. We shall restrict attention to the class of multiplicative functions $f$ such that [ $$\label{Lambda_f}\begin{split}
|\Lambda_f (n) |\le D\cdot \Lambda(n)
\end{split}$$ ]{} for all $n$. This is a rich class of functions that includes most of the important multiplicative functions that arise in number theory. For example, the Möbius function, the Liouville function, divisor functions, and coefficients of automorphic forms (or if one prefers an axiomatic approach, $L$-functions in the Selberg class) satisfying a Ramanujan bound are all covered by this framework.
When $f(p)\approx v$ in an appropriately strong form, Selberg [@selberg] built on ideas of Landau [@landau; @landau2] to prove that [ $$\label{Selberg}\begin{split}
\sum_{n\le x} f(n) = \frac{c(f,v)}{\Gamma(v)} x(\log x)^{v-1} + O_f(x(\log x)^{v-2}) ,
\end{split}$$ ]{} where $c(f,v)$ is a non-zero constant given in terms of an Euler product. Delange [@delange] strengthened this theorem to a full asymptotic expansion: $$\sum_{n\le x} f(n) = x(\log x)^{v-1} \sum_{j=0}^{J-1} \frac{c_j(f,v)}{\Gamma(v-j)(\log x)^j}
+ O(x(\log x)^{v-1-J})$$ for any $J\in{\mathbb{N}}$, where $c_0(f,v)=c(f,v)$. In particular, if $v\in\{0,-1,-2,\dots\}$, then the partial sums of $f$ satisfy the bound [ $$\label{f-small}\begin{split}
\sum_{n\le x} f(n) \ll \frac{x}{(\log x)^A} \qquad(x\ge 2)
\end{split}$$ ]{} for any $A>0$.
This paper is concerned with the converse problem: assuming that holds for some $A>D+1$, what can be deduced about $f(p)$? If we already knew that $f(p)\approx v$ on average, then relation would imply that $|v|\le D$. Comparing with , we conclude that $v\in\{0,-1,-2,\dots,-D\}$. The goal of this paper is to prove such a converse result to the Landau-Selberg-Delange theorem without assuming prior knowledge of the average behavior of $f(p)$.
This problem was studied in the case $D=1$ by the first author [@dk-gafa]. If holds for some $A>2$, then by partial summation one can see that $L(s,f)$ converges (conditionally) on the line Re$(s)=1$. The work in [@dk-gafa] established that on the line Re$(s)=1$ the function $L(s,f)$ can have at most one simple zero. If $L(1+it, f) \neq 0$ for all $t$, then $$\lim_{x\to\infty} \frac{1}{\pi(x)} \sum_{p\le x} f(p)=0,$$ while if $L(1+i\gamma,f)=0$ for some (unique) $\gamma\in{\mathbb{R}}$ then $$\lim_{x\to\infty} \frac{1}{\pi(x)} \sum_{p\le x} (f(p) + p^{i\gamma}) =0 .$$ In this paper we establish a generalization of this result for larger values of $D$.
\[converse-thm\] Fix a natural number $D$ and a real number $A>D+2$. Let $f$ be a multiplicative function such that $|\Lambda_f|\le D\cdot\Lambda$, and such that $$\sum_{n\le x}f(n) \ll \frac{x}{(\log x)^A}$$ for all $x\ge2$. Then there is a unique multiset $\Gamma$ of at most $D$ real numbers such that $$\Big| \sum_{p\le x} \Big( f(p) + \sum_{\substack{ \gamma \in \Gamma \\ |\gamma| \le T}} p^{i\gamma}\Big) \log p \Big| \le C_1 \frac{x}{\sqrt{\log x}} + C_2 \frac{x}{\sqrt{T}}$$ for all $x,T\ge2$, where $C_1 = C_1(f,T)$ is a constant depending only on $f$ and $T$, and $C_2$ is an absolute constant. In particular, $$\lim_{x\to\infty} \frac{1}{x} \sum_{p\le x}\Big( f(p) + \sum_{\gamma \in \Gamma} p^{i\gamma}\Big) \log p = 0 .$$
The multiset $\Gamma$ consists of the ordinates of the zeroes of $L(s,f)$ on the line ${{\rm Re}}(s)=1$, repeated according to their multiplicity. Its rigorous construction is described in Proposition \[Prop1\]. The constant $C_1=C_1(f,T)$ in Theorem \[converse-thm\] can be calculated explicitly in terms of upper bounds for the Dirichlet series $L(s,f) \prod_{\gamma \in \Gamma, |\gamma|\le T} \zeta(s-i\gamma)$ and its derivatives, together with a lower bound for this quantity on the line segment $[1-iT, 1+iT]$.
Qualitatively Theorem \[converse-thm\] establishes the kind of converse theorem that we seek. There are two deficiencies in the theorem: first, the range $A > D+2$ falls short of the optimal result $A > D+1$ (which in the case $D=1$ was attained in [@dk-gafa], and which we can attain in a special case – see Section 5); and second, one would like an understanding of the uniformity with which the result holds. On the other hand, the proof that we present is very simple, and we postpone the considerably more involved arguments needed for more precise versions of the theorem to another occasion.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The first author was supported by the Natural Sciences and Engineering Research Council of Canada (Discovery Grants 435272-2013 and 2018-05699) and by the Fonds de Recherche du Québec – Nature et Technologies (Établissement de nouveaux chercheurs universitaires 2016-NC-189765; Projet de recherche en équipe 2019-PR-256442). The second author was supported in part by the National Science Foundation Grant and through a Simons Investigator Grant from the Simons Foundation. The paper was completed while the second author was a senior Fellow at the ETH Institute for Theoretical Studies, whom he thanks for their warm and generous hospitality.
The classes ${\mathcal{F}}(D)$ and ${\mathcal{F}}(D;A)$
=======================================================
We introduce the classes of multiplicative functions ${\mathcal F}(D)$ and ${\mathcal F}(D;A)$, and establish some preliminary results. Throughout $\tau_D$ will denote the $D$-th divisor function, which arises as the Dirichlet series coefficients of $\zeta(s)^D$.
\[def1\] Given a natural number $D$, we denote by ${\mathcal{F}}(D)$ the class of all multiplicative functions such that $|\Lambda_f|\le D\cdot\Lambda$.
\[lem:f-bound\] Let $f$ be an element of ${\mathcal F}(D)$. Then its inverse under Dirichlet convolution $g$ is also in ${\mathcal F}(D)$, and both $|f(n)|$ and $|g(n)|$ are bounded by $\tau_D(n)$ for all $n$.
Note that $$L(s,f) = \exp\bigg\{ \sum_{n=2}^\infty \frac{\Lambda_f(n)}{n^s\log n} \bigg\}
= \sum_{j=0}^\infty \frac{1}{j!} \bigg( \sum_{n=1}^\infty \frac{\Lambda_f(n)}{n^s\log n} \bigg)^j,$$ so that by comparing coefficients [ $$\label{f-Lambda_f}\begin{split}
f(n) = \sum_{j=1}^\infty \frac{1}{j!} \sum_{n_1\cdots n_j=n} \frac{\Lambda_f(n_1)\cdots \Lambda_f(n_j)}{\log n_1\cdots \log n_j} .
\end{split}$$ ]{} Thus, by the definition of ${\mathcal F}(D)$, $|f(n)|$ is bounded by the coefficients of $$\sum_{j=0}^{\infty} \frac 1{j!} \bigg( \sum_{n=1}^{\infty} \frac{D\Lambda(n)}{n^s \log n} \bigg)^{j} = \zeta(s)^D.$$ This shows that $|f(n)| \le \tau_D(n)$ for all $n$. Since the inverse $g$ may be defined by setting $\Lambda_g(n) = -\Lambda_f(n)$, it follows that $g$ is in ${\mathcal F}(D)$ and that $|g(n)| \le \tau_D(n)$ as well.
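This bound lends itself to a direct numerical test. The sketch below is an added illustration, not part of the paper; the particular $\Lambda_f$ (with arbitrary phases $e^{i\sqrt{p}}$) is just one admissible choice. It builds $f$ from the prescribed $\Lambda_f$ through the equivalent recursion $f(n)\log n=\sum_{d\mid n}\Lambda_f(d)\,f(n/d)$ and checks that $|f(n)|\le\tau_D(n)$ up to rounding.

```python
# Added numerical illustration of Lemma [lem:f-bound]; the phases below are arbitrary.
import cmath, math

D, N = 3, 2000

# Lambda_f supported on prime powers, with |Lambda_f(p^k)| = D*log(p) = D*Lambda(p^k).
Lam = [0j]*(N + 1)
for p in range(2, N + 1):
    if all(p % q for q in range(2, int(p**0.5) + 1)):           # p is prime
        pk = p
        while pk <= N:
            Lam[pk] = D*math.log(p)*cmath.exp(1j*math.sqrt(p))  # arbitrary phase
            pk *= p

# f(1) = 1 and f(n) log n = sum_{d | n, d > 1} Lambda_f(d) f(n/d).
f = [0j]*(N + 1)
f[1] = 1
for n in range(2, N + 1):
    f[n] = sum(Lam[d]*f[n//d] for d in range(2, n + 1) if n % d == 0)/math.log(n)

# tau_D as the D-fold Dirichlet convolution of the constant function 1.
tau = [0] + [1]*N
for _ in range(D - 1):
    new = [0]*(N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            new[m] += tau[m//d]
    tau = new

print(max(abs(f[n]) - tau[n] for n in range(1, N + 1)))   # <= 0 up to rounding error
```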
For later use, let us record that if $f\in {\mathcal F}(D)$, then for $\sigma >1$ we have [ $$\label{2.2}\begin{split} \log L(s, f) = \sum_{n=2}^{\infty} \frac{\Lambda_f(n)}{n^s \log n} = \sum_{p} \frac{\Lambda_f(p)}{p^s \log p} + O(1)
= \sum_{p} \frac{f(p)}{p^s}+ O(1).
\end{split}$$ ]{}
We now introduce the class ${\mathcal F}(D;A)$, which is the subclass of multiplicative functions in ${\mathcal F}(D)$ with small partial sums.
\[def2\] Given a natural number $D$, and positive real numbers $A$ and $K$, we denote by ${\mathcal F}(D;A,K)$ the class of functions $f\in {\mathcal{F}}(D)$ such that $$\Big| \sum_{n\le x} f(n) \Big| \le K \frac{x}{(\log x)^{A}}\qquad\text{for all}\quad x\ge 3.$$ The class ${\mathcal F}(D;A)$ consists of all functions lying in ${\mathcal F}(D;A,K)$ for some constant $K$.
The following proposition about the class ${\mathcal{F}}(D;A)$ is an important stepping stone in the proof of Theorem \[converse-thm\]. In particular, it gives a description of the multiset $\Gamma$ appearing in Theorem \[converse-thm\].
\[Prop1\]Suppose $f$ is in the class ${\mathcal F}(D;A)$ with $A >D+1$.
1. The series $L(s,f)$ and the series of derivatives $L^{(j)}(s,f)$ with $1\le j < A-1$ all converge uniformly in compact subsets of the region ${{\rm Re}}(s)\ge 1$.
2. For any real number $\gamma$, there exists an integer $j\in[0,D]$ with $L^{(j)}(1+i\gamma, f) \neq 0$. If $L(1+i\gamma,f)=0$, then $1+i\gamma$ is called a *zero* of $L(s,f)$ and the *multiplicity* of this zero is the smallest natural number $j$ with $L^{(j)}(1+i\gamma,f) \neq 0$.
3. Counted with multiplicity, $L(s,f)$ has at most $D$ zeros on the line ${{\rm Re}}(s) =1$.
4. Let $\Gamma$ denote the (possibly empty) multiset of ordinates $\gamma$ of zeros $1+i\gamma$ of $L(s,f)$, so that $\Gamma$ has cardinality at most $D$. Let ${\widetilde \Gamma}$ denote a (multi-)subset of $\Gamma$, and let $\widetilde{m}$ denote the largest multiplicity of an element in $\widetilde{\Gamma}$. The Dirichlet series $$L(s,f_{\widetilde \Gamma}) = L(s,f) \prod_{\gamma \in {\widetilde \Gamma}} \zeta(s-i\gamma )$$ and the series of derivatives $L^{(j)} (s,f_{\widetilde \Gamma})$ for $1 \le j < A-{\widetilde m}-1$ all converge uniformly in compact subsets of the region ${{\rm Re}}(s)\ge 1$.
We next establish the following lemma which contains part (a) of Proposition \[Prop1\] and more. The remaining parts of the proposition will be established in Section 4.
\[lem:L-der-bound\] Let $f \in {\mathcal F}(D;A,K)$ with $A >1$, and consider an integer $j\in[0,A-1)$. For any $M\ge N \ge 3$ and any $s\in{\mathbb{C}}$ with ${{\rm Re}}(s)\ge1$, we have $$\label{L-der-bound1}
\bigg| \sum_{N<n\le M} \frac{f(n)(\log n)^j}{n^s}\bigg| \ll_{A,D} \frac{K(1+|s|)}{(\log N)^{A-j-1}}.$$ In particular, the series $L^{(j)}(s,f)$ converges uniformly in compact subsets of the region ${{\rm Re}}(s)\ge 1$. Furthermore, it satisfies the pointwise bound $$\label{L-der-bound2}
|L^{(j)}(\sigma+ it, f)| \ll_{A,D} (K(1+|t|))^{\frac{D+j}{D+A-1}}$$ for $s=\sigma+it$ with $\sigma\ge1$ and $t\in{\mathbb{R}}$.
Since $|f|\le\tau_D$ by Lemma \[lem:f-bound\], all claims follow in the region ${{\rm Re}}(s)\ge2$ from the bound $\sum_{n>N}\tau_D(n)/n^2\ll_D(\log N)^{D-1}/N$.
Let us now assume we are in the region $1\le {{\rm Re}}(s)\le2$. Using partial summation, we have $$\begin{aligned}
\sum_{N<n\le M} f(n) \frac{(\log n)^j}{n^{\sigma+it}}
&= \Big(\sum_{N<n\le M} f(n)\Big) \frac{(\log M)^j}{M^{\sigma+it}} - \int_N^M \Big( \sum_{N<n\le y} f(n) \Big)
\bigg( \frac{(\log y)^j}{y^{\sigma+ it}} \bigg)^{\prime} {\mathrm{d}}y.
\label{eqn:f-der-ps}
\end{aligned}$$ We estimate both terms on the right hand side of using our assumption on the partial sums of $f$, thus obtaining that $$\begin{aligned}
\sum_{N<n\le M} f(n) \frac{(\log n)^j}{n^{\sigma+it}}
&\ll \frac{K}{M^{\sigma-1}(\log M)^{A-j}}+ \int_N^M \frac{Ky}{(\log y)^A}\cdot \frac{(\log y)^j (1+|t|)}{y^{\sigma +1}} {\mathrm{d}}y \label{PS}\\
&\ll \frac{K(1+|t|)}{(\log N)^{A-j-1}}. \nonumber
\end{aligned}$$ This establishes . In particular, $L^{(j)}(s,f)$ converges uniformly in compact subsets of the region $1\le{{\rm Re}}(s)\le2$ by Cauchy’s criterion.
To obtain , we let $M\to\infty$ in to find that $$\label{L-der-bound3}
L^{(j)}(s,f) = \sum_{n\le N} \frac{f(n)(-\log n)^j}{n^s}+ O_{A,D}\bigg(\frac{K(1+|t|)}{(\log N)^{A-j-1}}\bigg) .$$ Since $|f(n)|\le \tau_D(n)$ by Lemma \[lem:f-bound\], the first term on the right side of is bounded in size by $$(\log N)^j \sum_{n\le N} \frac{\tau_D(n)}{n} \ll (\log N)^{D+j}.$$ Choosing $N = \exp\big((K(1+|t|))^{\frac{1}{D+A-1}}\big)$ yields the desired bound.
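For clarity, we record the balancing computation behind this choice of $N$ (a routine verification): with $N = \exp\big((K(1+|t|))^{\frac{1}{D+A-1}}\big)$ one has $$(\log N)^{D+j} = (K(1+|t|))^{\frac{D+j}{D+A-1}}
\qquad\text{and}\qquad
\frac{K(1+|t|)}{(\log N)^{A-j-1}} = (K(1+|t|))^{1-\frac{A-j-1}{D+A-1}} = (K(1+|t|))^{\frac{D+j}{D+A-1}},$$ so the two contributions are of the same order, namely the bound asserted in the statement of the lemma.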
Three lemmas
============
Here we collect together three disparate lemmas that will be used in the future. All of these lemmas are of a standard nature, and proofs are provided for completeness. We begin with an asymptotic formula for partial sums of generalized divisor functions.
\[lem:DivisorSums\] Let ${\mathcal A} = \{ \alpha_1, \ldots, \alpha_m\}$ be a multiset, consisting of $k$ distinct elements, and arranged so that $\alpha_1,\dots,\alpha_k$ denote these $k$ distinct values. Suppose that these distinct values $\alpha_j$ appear in ${\mathcal A}$ with multiplicity $m_j$. Let $\tau_{\mathcal A}(n)$ denote the multiplicative function $$\tau_{\mathcal A}(n) = \sum_{d_1 \cdots d_m =n} d_1^{i\alpha_1} \cdots d_{m}^{i\alpha_m}.$$ Then for large $x$ we have $$\sum_{n\le x} \tau_{\mathcal A}(n) = \sum_{j=1}^{k} x^{1+i\alpha_j} P_{j,{\mathcal A}}(\log x) + O(x^{1-\delta}),$$ where $P_{j, {\mathcal{A}}}$ denotes a polynomial of degree $m_j-1$ with coefficients depending on ${\mathcal{A}}$, and $\delta=\delta({\mathcal{A}})$ is some positive real number.
Note that in the region Re$(s)> 1$ $$\sum_{n=1}^{\infty} \frac{\tau_{\mathcal A}(n)}{n^s} = \prod_{j=1}^{k} \zeta(s-i\alpha_j)^{m_j}.$$ Now the lemma follows by a standard application of Perron’s formula to write (with $c>1$) $$\sum_{n\le x} \tau_{\mathcal A}(n) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \prod_{j=1}^{k} \zeta(s-i\alpha_j)^{m_j} \frac{x^s}{s} {\mathrm{d}}s,$$ and then shifting contours to the left of the $1$-line and evaluating the residues of the poles of order $m_j$ at $1+i\alpha_j$.
The next lemma is a quantitative version of Dirichlet’s theorem on Diophantine approximation.
\[lemDirichlet\] Let $\lambda_1,\dots, \lambda_k$ be $k$ real numbers. For $j=1,\dots,k$, let $\delta_j \in [0,1/2)$ be given. Then for all $T>0$, the set of $t \in [-T,T]$ such that $\Vert \lambda_j t \Vert \le \delta_j$ for $j=1,\dots,k$ has measure at least $ T\delta_1 \cdots \delta_k$. Here $\Vert x \Vert$ denotes the distance of $x$ to the nearest integer.
Given $\delta \in [0,1/2)$, define the function $f_\delta$ by letting $f_\delta(t) =\max(0, 1- |t|/\delta)$ when $t \in (-1/2,1/2)$, and then extend $f_\delta$ to a 1-periodic function on ${\mathbb{R}}$. Note that $f_\delta(t)$ minorizes the indicator function of $t$ with $\Vert t \Vert \le \delta$. Moreover, the Fourier coefficients of $f_\delta$ are all non-negative and satisfy the bound $\widehat{f_{\delta}}(n) \ll 1/n^2$ for $n\in{\mathbb{Z}}\setminus\{0\}$.
Now the measure of the set in the lemma is $$\ge \int_{-T}^{T} \Big( 1- \frac{|t|}{T} \Big) \prod_{j=1}^{k} f_{\delta_j}(\lambda_j t) {\mathrm{d}}t = \sum_{n_1, \ldots, n_k} \prod_{j=1}^{k} \widehat{f_{\delta_j}}(n_j) \int_{-T}^{T}
\Big(1- \frac{|t|}{T}\Big) e((n_1\lambda_1+\ldots +n_k\lambda_k ) t) {\mathrm{d}}t.$$ Since the Fourier transform of the function $\max(0, 1-|t|/T)$ is always non-negative, we conclude that the above is $$\ge \prod_{j=1}^{k} \widehat{f_{\delta_j}}(0) \int_{-T}^{T} \Big(1-\frac{|t|}{T}\Big) {\mathrm{d}}t = T \delta_1 \cdots \delta_k,$$ thus completing our proof.
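As a purely illustrative aside, the following Python sketch estimates the measure appearing in Lemma \[lemDirichlet\] by random sampling and compares it with the guaranteed lower bound $T\delta_1\cdots\delta_k$; the values of the $\lambda_j$, $\delta_j$ and $T$ below are arbitrary sample choices.

```python
# Illustrative numerical check of the lower bound T*delta_1*...*delta_k
# for the measure of {t in [-T,T] : ||lambda_j t|| <= delta_j for all j}.
# The parameters below are arbitrary sample values.
import random

def dist_to_nearest_int(x):
    return abs(x - round(x))

def estimate_measure(lams, deltas, T, samples=200_000):
    hits = 0
    for _ in range(samples):
        t = random.uniform(-T, T)
        if all(dist_to_nearest_int(lam * t) <= d for lam, d in zip(lams, deltas)):
            hits += 1
    return 2 * T * hits / samples   # approximate measure of the good set in [-T, T]

if __name__ == "__main__":
    lams   = [2**0.5, 3**0.5, 5**0.5]
    deltas = [0.1, 0.15, 0.2]
    T = 500.0
    est = estimate_measure(lams, deltas, T)
    lower = T * deltas[0] * deltas[1] * deltas[2]
    print(f"estimated measure ~ {est:.2f},  guaranteed lower bound = {lower:.2f}")
```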
Our final lemma gives a variant of the Brun–Titchmarsh theorem for primes in short intervals. Define $\Lambda_j(n)$ by means of $$(-1)^{j} \frac{\zeta^{(j)}(s)}{\zeta(s)} = \sum_{n} \frac{\Lambda_j(n)}{n^s}.$$ Thus $\Lambda_0(n)=1$ if $n=1$ and $0$ for $n >1$, while $\Lambda_1(n) = \Lambda(n)$ is the usual von Mangoldt function. Using the identity $$\label{N3.1}
(-1)^{j+1} \frac{\zeta^{(j+1)}}{\zeta} = - \Big( (-1)^j \frac{\zeta^{(j)}}{\zeta}\Big)^{\prime} + \Big( -\frac{\zeta^{\prime}}{\zeta} \Big)
\Big( (-1)^j \frac{\zeta^{(j)}}{\zeta}\Big),$$ one can check easily that $\Lambda_j(n) \ge 0$ for all $j$ and $n$. In addition, $\Lambda_j(n)$ is supported on integers composed of at most $j$ distinct prime factors, and is bounded by $C_j (\log n)^j$ on such integers for a suitable constant $C_j$.
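As a purely illustrative aside, the following Python sketch evaluates $\Lambda_j(n)=\sum_{d\mid n}\mu(d)\log^j(n/d)$ (the identity used in the proof below) for small $n$, which makes it easy to spot-check the non-negativity and the support property just stated.

```python
# Illustrative sketch: Lambda_j(n) = sum_{d | n} mu(d) (log(n/d))^j, which can be
# used to spot-check that Lambda_j(n) >= 0 and that Lambda_j(n) = 0 whenever n has
# more than j distinct prime factors.
from math import log

def factorize(n):
    """Prime factorization of n >= 2 as a dict {p: exponent}, by trial division."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def divisors(n):
    ds = [1]
    for p, e in factorize(n).items():
        ds = [d * p**i for d in ds for i in range(e + 1)]
    return ds

def mobius(n):
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def Lambda(j, n):
    return sum(mobius(d) * log(n / d) ** j for d in divisors(n))

if __name__ == "__main__":
    for n in range(2, 16):
        omega = len(factorize(n))   # number of distinct prime factors of n
        print(n, omega, [round(Lambda(j, n), 6) for j in range(4)])
```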
\[lem:BT\] Fix ${\varepsilon}>0$ and $j\in{\mathbb{N}}$. Uniformly for $x\ge2$ and $x^{{\varepsilon}} < y \le x$, we have $$\sum_{x < n \le x+y} \Lambda_j(n) \ll_{j,{\varepsilon}} y (\log x)^{j-1}.$$
We argue by induction on $j$. The base case $j=1$ is a direct corollary of the classical Brun-Titchmarsh inequality (for example, see [@MV Theorem 3.9]). Now suppose that $j\ge 2$ and that the lemma holds for $\Lambda_1,\dots, \Lambda_{j-1}$.
The number of integers in $(x,x+y]$ all of whose prime factors are $\ge \sqrt{y}$ may be bounded by $\ll y/\log y \ll_{{\varepsilon}} y/\log x$ (see [@MV Theorem 3.3]). Therefore, with $P^{-}(n)$ denoting the least prime factor of the integer $n$, we have that $$\label{N3.2}
\sum_{\substack{ x< n\le x+y \\ P^{-}(n) >\sqrt{y}}} \Lambda_j(n)
\ll_j (\log x)^{j} \sum_{\substack{ x< n\le x+y \\ P^{-}(n) >\sqrt{y}}} 1
\ll_{j,{\varepsilon}} y (\log x)^{j-1}.$$ To establish the lemma, it remains to show that $$\label{N3.3}
\sum_{\substack{ x< n\le x+y \\ P^{-}(n) \le \sqrt{y}}} \Lambda_j(n) \ll_{j,{\varepsilon}} y (\log x)^{j-1}.$$
Let $p$ be a prime and suppose $n= p^a m$ with $a\ge 1$ and $p \nmid m$. Note that $$\begin{aligned}
\Lambda_j(n)
&= \sum_{d|n} \mu(d) \log^j(n/d)
= \sum_{d|m} \mu(d) \Big( \log^j(p^a m/d) - \log^j(p^{a-1}m/d)\Big)\\
&= \sum_{\ell=1}^{j} \binom{j}{\ell} \big( \log^\ell(p^a) - \log^\ell(p^{a-1}) \big)
\sum_{d|m} \mu(d) \log^{j-\ell}(m/d) \\
&=\sum_{\ell=1}^{j} \binom{j}{\ell} \big( \log^\ell(p^a) - \log^\ell(p^{a-1}) \big)
\Lambda_{j-\ell}(m). \end{aligned}$$ If $a=1$, then we deduce that $$\label{lambda-recurrence}
\Lambda_j(n) \ll_j (\log p) \big( \Lambda_0(m) + \Lambda_1(m) +\ldots +\Lambda_{j-1}(m)\big).$$ On the other hand, if $a>1$, then we use the bound $\Lambda_{j-\ell}(m)\ll_j (\log m)^{j-\ell}$ to conclude that $$\label{lambda-recurrence2}
\Lambda_j(n) \ll_j \log(p^a) (\log n)^{j-1}.$$
We now return to the task of estimating , using the above two estimates. Let $p$ denote the smallest prime factor of $n$, so that $p\le \sqrt{y}$. The terms with $p\Vert n$ contribute, using the induction hypothesis and , $$\begin{aligned}
\label{N3.4}
&\ll_j \sum_{p\le \sqrt{y}} (\log p) \sum_{x/p < m \le (x+y)/p} (\Lambda_0(m) +\Lambda_1(m) + \ldots +\Lambda_{j-1}(m))
\nonumber \\
&\ll_j \sum_{p\le \sqrt{y}} (\log p) \cdot \frac{y}{p} (\log x)^{j-2}
\ll y (\log x)^{j-1}. \end{aligned}$$ Lastly, using , we find that the terms with $p^2|n$ contribute $$\begin{aligned}
\label{N3.5}
&\ll (\log x)^{j-1} \mathop{\sum\sum}_{\substack{p \le \sqrt{y} ,\, a \ge 2 \\ p^a \le x+y} }
\log(p^a) \sum_{x/p^a \le m \le (x+y)/p^a} 1 \nonumber\\
& \ll
(\log x)^{j-1} \mathop{\sum\sum}_{\substack{p \le \sqrt{y} ,\, a \ge 2 \\ p^a \le x+y} }
\log(p^a) \Big( \frac{y}{p^a} + 1 \Big)\ll y(\log x)^{j-1}.\end{aligned}$$ Combining and yields , completing the proof of the lemma.
Proof of Proposition \[Prop1\]
==============================
Recall that part (a) of Proposition \[Prop1\] was already established in Lemma \[lem:L-der-bound\]. We now turn to the remaining three parts of the proposition, with the next lemma settling part (b).
\[lem:logL\] Let $f\in {\mathcal F}(D;A)$ with $A> D+1$. For any real number $\gamma$, there exists an integer $j\in[0,D]$ with $L^{(j)}(1+i\gamma, f)\neq 0$. The *multiplicity* of the zero of $L(s,f)$ at $s=1+i\gamma$ is defined as the smallest such $j$ with $L^{(j)}(1+i\gamma, f)\neq 0$. If $m$ is the multiplicity of $1+i\gamma$ (we allow the possibility that $m =0$, which occurs when $L(1+i\gamma, f)\neq 0$), then $$\Big| \sum_{p\le x} \frac{m+{{\rm Re}}(f(p)p^{-i\gamma})}{p}\Big| \le C$$ for some constant $C=C(f,\gamma)$.
As $\sigma \to 1^+$, Taylor’s theorem shows that $$L(\sigma+ i\gamma, f) = \sum_{j=0}^{D} \frac{(\sigma-1)^j}{j!} L^{(j)}(1+i\gamma, f) + o((\sigma -1)^D).$$ But since ${{\rm Re}}(f(p)p^{-i\gamma}) \ge -D$ for all $p$, relation implies that $$|L(\sigma+i\gamma,f)| \gg \exp\Big(\sum_{p}\frac{ {{\rm Re}}(f(p)p^{-i\gamma})}{p^{\sigma}}\Big) \gg (\sigma -1)^D.$$ Therefore $L^{(j)}(1+i\gamma, f) \neq 0$ for some $0\le j\le D$, and the notion of multiplicity is well defined.
If $m\le D$ denotes the multiplicity, then a new application of Taylor’s theorem gives $$L(\sigma +i\gamma, f) = \frac{(\sigma -1)^m}{m!} L^{(m)}(1+i\gamma, f) + o((\sigma -1)^m).$$ Writing $\sigma =1+1/\log x$ and taking logarithms, we find that $$\Big|\sum_{p \le x} \frac{m+{{\rm Re}}(f(p)p^{-i\gamma})}{p} \Big| =\Big| \log |L(1+1/\log x+i\gamma, f)| + m \log \log x + O(1) \Big|
\le C(f,\gamma),$$ as desired.
We now turn to the task of proving part (c) of Proposition \[Prop1\]. Suppose $1+i\gamma_1,\dots,1+i\gamma_k$ are distinct zeros of $L(s,f)$, and let $m_j$ denote the multiplicity of the zero $1+i\gamma_j$. We wish to show that $m _1+ \cdots + m_k \le D$, so that part (c) would follow. A key role will be played by the auxiliary function $$A_N(x) = \lim_{T\to \infty} \frac{1}{2T} \int_{-T}^{T} | 1 + e^{it\gamma_1} + \ldots +e^{i t \gamma_k}|^{2N} e^{it x} {\mathrm{d}}t,$$ where $N$ is an integer that will be chosen large enough. By expanding the $(2N)$-th power, it is easy to see that $A_N(x)$ is non-zero only for those real $x$ that may be written as $j_1 \gamma_1 + \cdots + j_k \gamma_k$ with $|j_1| + \cdots + |j_k| \le N$. Note that there may be linear relations among the $\gamma_j$, so that $A_N(x)$ could have a complicated structure. The following lemma summarizes the key properties of $A_N(x)$ for our purposes.
\[lem:A\_N(x)\] Let $N$ be a natural number.
1. For each $1/2\ge \delta>0$ and each $N$, we have $A_N(0) \gg_k \delta^k (k+1-\delta)^{2N}$.
2. Let ${\varepsilon}>0$ and $j\in\{1,\dots,k\}$. If $N$ is large enough in terms of ${\varepsilon}$ and $k$, then $A_N(\gamma_j) \ge (1-{\varepsilon}) A_N(0)$.
\(a) Put $\lambda_j = \gamma_j/(2\pi)$ and consider the set of $t\in [-T,T]$ such that $\Vert \lambda_j t \Vert \le \delta/(2\pi k)$ for $j=1,\dots,k$. By Lemma \[lemDirichlet\], this set has measure $\ge T (\delta/(2\pi k))^k$. Moreover, for such $t$ we have $|e^{i\gamma_j t} - 1 | =2|\sin(\pi\lambda_jt)|\le 2\pi\Vert\lambda_j t\Vert\le \delta/k$ for each $1\le j\le k$, so that $|1+e^{it\gamma_1} + \cdots +e^{it\gamma_k}| \ge k+1-\delta$. Part (a) of the lemma follows at once.
\(b) Let ${\mathcal{T}}$ denote the set of $t\in[-T,T]$ such that $\cos (t \gamma_j) \le 1-4\delta$. Then $|1+e^{it\gamma_j}|\le2\sqrt{1-2\delta}\le2-2\delta$, whence $|1+e^{it \gamma_1} + \cdots +e^{it\gamma_k}| \le k+1- 2\delta$. Therefore, in view of part (a), we have that $$\frac{1}{2T}\int_{{\mathcal{T}}}
|1+e^{it\gamma_1} + \cdots + e^{it\gamma_k}|^{2N} {\mathrm{d}}t
\le (k+1-2\delta)^{2N} \le \delta A_N(0),$$ provided that $N$ is large enough. Hence, if $T$ is sufficiently large, [$$\begin{aligned}
\frac{1}{2T}\int_{-T}^{T} \cos(t\gamma_j)
|1+e^{it\gamma_1} + \cdots + e^{it\gamma_k}|^{2N} {\mathrm{d}}t
& \ge
\frac{1-4\delta}{2T} \int_{[-T,T]\setminus {\mathcal{T}}} |1+e^{it\gamma_1} + \cdots + e^{it\gamma_k}|^{2N} {\mathrm{d}}t \\
&\qquad
- \frac{1}{2T}\int_{{\mathcal{T}}} |1+e^{it\gamma_1} + \cdots + e^{it\gamma_k}|^{2N} {\mathrm{d}}t \\
&\ge (1-4\delta)(1-\delta)A_N(0) - \delta A_N(0).
\end{aligned}$$ ]{} Taking $\delta$ suitably small in terms of ${\varepsilon}$ completes the proof of the lemma.
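As a purely illustrative aside, the combinatorial description of $A_N(x)$ given above can be checked directly on small examples: the following Python sketch enumerates the tuples $(j_1,\dots,j_{2N})$ with $0\le j_n\le k$ and counts those whose signed sum of the $\gamma$'s equals $x$; rational values of the $\gamma_j$ (arbitrary sample data) are used so that the equality test is exact.

```python
# Illustrative sketch: A_N(x) counted as the number of tuples (j_1,...,j_{2N}) in
# {0,...,k}^{2N} (with gamma_0 = 0) for which the sum of the first N gammas minus
# the sum of the last N gammas equals x.  Exact rational gammas are used so that
# equality can be tested reliably; the specific values below are sample data.
from fractions import Fraction
from itertools import product

def A_N(gammas, N, x):
    g = [Fraction(0)] + [Fraction(v) for v in gammas]   # gamma_0 = 0
    count = 0
    for tup in product(range(len(g)), repeat=2 * N):
        s = sum(g[j] for j in tup[:N]) - sum(g[j] for j in tup[N:])
        if s == x:
            count += 1
    return count

if __name__ == "__main__":
    gammas = [Fraction(1), Fraction(1, 3)]   # k = 2
    N = 2
    print("A_N(0)       =", A_N(gammas, N, Fraction(0)))
    print("A_N(gamma_1) =", A_N(gammas, N, gammas[0]))
    print("A_N(gamma_2) =", A_N(gammas, N, gammas[1]))
```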
Let $N$ be a large integer to be chosen later, and consider the behavior of $$\label{eqn:lambda-dfn}
\lambda_N(x):= \frac{1}{\log \log x} {{\rm Re}}\Big( \sum_{p\le x} \frac{f(p)}{p} |1+p^{i\gamma_1} + \cdots + p^{i\gamma_k}|^{2N}\Big),$$ as $x\to \infty$. Since $|f(p)| \le D$ always, on the one hand we have that $$\label{eqn:lambda-lb}
\lambda_N(x)
\ge \frac{ -D}{\log \log x} \sum_{p\le x} \frac{1}{p} |1+p^{i\gamma_1} + \cdots + p^{i\gamma_k}|^{2N}
= - D A_N(0) +o(1),$$ with the second relation following from the Prime Number Theorem.
On the other hand, we may expand $|1+p^{-i\gamma_1} + \cdots +p^{-i\gamma_k}|^{2N}$ to find that $$\lambda_N(x) = \frac{1}{\log \log x}\sum_{\substack{0\le j_1,\dots,j_{2N}\le k \\ \gamma=\sum_{n\le N}\gamma_{j_n}-\sum_{n>N}\gamma_{j_n}}}
\sum_{p\le x} \frac{{{\rm Re}}(f(p)p^{-i\gamma})}{p}$$ with the convention that $\gamma_0=0$. If now $\gamma=\gamma_\ell$ for some $\ell$, then the sum over $p$ equals $-m_\ell\log\log x+O(1)$. The number of choices of $j_1$, $\ldots$, $j_{2N}$ that lead to $\gamma= \gamma_\ell$ is exactly $A_N(\gamma_\ell)$. If $\gamma$ is not $\gamma_\ell$ for some $1\le \ell \le k$, then by Lemma \[lem:logL\] we see that the sum over $p$ is bounded above by a constant. Indeed if $\gamma$ is not an ordinate of a zero of $L(s,f)$ on the $1$–line, then the sum over $p$ is simply $O(1)$; [*a priori*]{}, there could be other zeros of $L(s,f)$ besides $1+i\gamma_1$, $\ldots$, $1 +i\gamma_k$ and $\gamma$ could be one of these zeros, but nevertheless the sum over $p$ is bounded above by $O(1)$. In conclusion, $$\lambda_N(x) \le - \sum_{\ell=1}^k m_\ell \sum_{\substack{0\le j_1,\dots,j_{2N}\le k \\ \sum_{n\le N}\gamma_{j_n}-\sum_{n>N}\gamma_{j_n} = \gamma_\ell}}1+o(1)
= - \sum_{\ell=1}^k m_\ell A_N(\gamma_\ell) +o(1).$$ Comparing the above inequality with , we infer that $$\label{eqn:multiplicities-ineq}
\sum_{\ell=1}^{k} m_\ell A_N(\gamma_\ell) \le D A_N(0).$$ To complete the proof, we apply Lemma \[lem:A\_N(x)\](b) with ${\varepsilon}=1/(m_1+\cdots+m_k+1)$ to find that the left hand side is $>A_N(0)(m_1+\cdots+m_k-1)$, as long as $N$ is large enough. Since $A_N(0)>0$, we conclude that $m_1+\cdots+m_k<D+1$, as desired.
It remains lastly to prove part (d) of Proposition \[Prop1\]. Suppose that the multiset $\widetilde \Gamma$ consists of $k$ distinct values, and has been arranged so that $\gamma_1,\dots, \gamma_k$ are these distinct values, and each such $\gamma_j$ occurs in $\widetilde \Gamma$ with multiplicity ${\widetilde}{m}_j$. As in Lemma \[lem:DivisorSums\], put $\tau_{\widetilde \Gamma}(n) = \sum_{d_1 \cdots d_m = n} d_1^{i\gamma_1} \cdots d_m^{i\gamma_m}$ and define $f_{\widetilde \Gamma}$ to be the Dirichlet convolution $f * \tau_{\widetilde \Gamma}$.
\[lem:f-tau\] With the above notations, we have $$\sum_{n\le x} f_{\widetilde \Gamma}(n) \ll C(f) \frac{x}{(\log x)^{A- \widetilde{m}}} + \frac{x (\log \log x)^{2D}}{(\log x)^A},$$ for some constant $C(f)$, and with ${\widetilde m}$ denoting the maximum of the multiplicities ${\widetilde}{m}_1,\dots,{\widetilde}{m}_k$.
As in the hyperbola method we may write, for some parameter $2\le z\le \sqrt{x}$ to be chosen shortly, $$\sum_{n\le x} f_{\widetilde \Gamma}(n) = \sum_{a\le x/z} f(a) \sum_{b\le x/a} \tau_{\widetilde \Gamma}(b) + \sum_{b\le z} \tau_{\widetilde \Gamma}(b) \sum_{x/z \le a \le x/b} f(a).$$ Using our hypothesis on the partial sums of $f$, and since $\sqrt{x} \le x/z \le x/b$, we see that the second term above is $$\label{eqn:f-tau-1}
\ll \sum_{b\le z} |\tau_{\widetilde \Gamma}(b)| \frac{x}{b (\log x)^A} \ll \frac{x (\log z)^D}{(\log x)^A},$$ since $|\tau_{\widetilde \Gamma}(b)|$ may be bounded by the $D$-th divisor function. On the other hand, using Lemma \[lem:DivisorSums\], the first term equals $$\label{eqn:f-tau-2}
\sum_{a\le x/z} f(a) \sum_{j=1}^{k} \frac{x^{1+i\gamma_j}}{a^{1+i\gamma_j}} P_{j,\widetilde\Gamma}(\log x/a) + O\Big( x^{1-\delta} \sum_{a\le x/z} \frac{|f(a)|}{a^{1-\delta}} \Big),$$ where $P_{j,\widetilde \Gamma}$ denotes a polynomial of degree $\widetilde{m}_j-1$ with coefficients depending on $\widetilde \Gamma$. Since $|f(a)|$ is bounded by the $D$-th divisor function, the error term in is easily bounded by $\ll x(\log x)^D/z^{\delta}$. Now consider the main term in . Applying (with $N=x/z$ and $M\to \infty$ there), for any $0 \le \ell \le \widetilde{m}_j-1$ we have $$\sum_{a \le x/z} \frac{f(a)}{a^{1+i\gamma_j}} (\log a)^\ell = (-1)^{\ell} L^{(\ell)}(1+i\gamma_j,f) + O_f\Big( \frac{1}{(\log x)^{A-\ell-1} }\Big) \ll_f \frac{1}{(\log x)^{A-\ell-1}},$$ since $L^{(\ell)}(1+i\gamma_j,f) =0$ for all $0\le \ell \le \widetilde{m}_j-1$. Therefore $$\sum_{a\le x/z} \frac{f(a)}{a^{1+i\gamma_j}} P_{j,\widetilde \Gamma}(\log x/a) \ll_f \frac{1}{(\log x)^{A-\widetilde{m}_j}},$$ and we conclude that the quantity in is $$\ll \frac{x}{(\log x)^{A-\widetilde{m}}} + \frac{x(\log x)^D}{z^{\delta}}.$$ Combine this with , and choose $z=\exp((\log \log x)^2)$ to obtain the lemma.
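For clarity, we note why this choice of $z$ is admissible: with $z=\exp((\log\log x)^2)$ we have $(\log z)^D=(\log\log x)^{2D}$, so that the first estimate above becomes the second term in the statement of the lemma, while $$\frac{x(\log x)^D}{z^{\delta}} = x(\log x)^D e^{-\delta(\log\log x)^2} = x (\log x)^{D-\delta\log\log x} \ll_C \frac{x}{(\log x)^{C}}$$ for any fixed $C>0$, which is negligible compared with both terms in the statement.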
Combining Lemma \[lem:f-tau\] with the argument of Lemma \[lem:L-der-bound\], part (d) of Proposition \[Prop1\] follows.
Proof of Theorem \[converse-thm\] in a special case
===================================================
In this section we establish Theorem \[converse-thm\] in the special case when $L(s,f)$ has a zero of multiplicity $D$, say at $1+i\gamma$. By Proposition \[Prop1\] there can be no other zeros of $L(s,f)$ on the $1$-line. In this special case, we can in fact prove a stronger result, obtaining non-trivial information in the optimal range $A>D+1$. In the next section, we shall consider (by a very different method) the remaining cases when the multiplicity of any zero is at most $D-1$.
Write $g(n) = f(n) n^{-i\gamma}$, and consider $G= \tau_D *g$. We begin by establishing some estimates for $\sum_{n\le x} G(n)$ and $\sum_{n\le x} |G(n)|/n$. Note that $G(n) = n^{-i\gamma} f_{\Gamma}(n)$ for the multiset $\Gamma$ composed of $D$ copies of $\gamma$. Hence, Lemma \[lem:f-tau\] and partial summation imply that [ $$\label{eq:g-bound1}\begin{split}
\sum_{n\le x} G(n) \ll_f \frac{x}{(\log x)^{A-D}}.
\end{split}$$ ]{}
By Lemma \[lem:logL\] we have $$\Big| \sum_{p\le x} \frac{{{\rm Re}}(G(p))}{p} \Big| = \Big| \sum_{p\le x} \frac{D+{{\rm Re}}(g(p))}{p} \Big| = \Big| \sum_{p\le x} \frac{D+ {{\rm Re}}(f(p)p^{-i\gamma})}{p} \Big|
\ll_f 1.$$ Since $|D+g(p)|^2 = D^2+ 2D{{\rm Re}}(g(p)) + |g(p)|^2 \le 2D (D+{{\rm Re}}(g(p)))$, an application of Cauchy-Schwarz gives $$\label{N5.2}
\sum_{p\le x}\frac{|D+g(p)|}{p} \ll_f \sqrt{\log\log x}.$$ It follows that $$\label{N5.3}
\sum_{n\le x}\frac{|G(n)|}{n} \ll \exp\Big( \sum_{p\le x} \frac{|G(p)|}{p} \Big) \ll \exp\big(O_f\big(\sqrt{\log\log x}\,\big)\big).$$
After these preliminaries, we may now begin the proof of Theorem \[converse-thm\] in this situation. We shall consider the function $G*\overline{G} = \tau_{2D}*g*\overline{g}$. Note that $\Lambda_{G*\overline{G}}(n)
= 2D \Lambda(n) + \Lambda_g(n) + \Lambda_{\overline{g}}(n)$ is always real and non-negative. Thus $G*\overline{G}$ is also a real and non-negative function, and we have $$2\sum_{p\le x} (D+ {{\rm Re}}(g(p))) = \sum_{p\le x} (G*{\overline G})(p) \le \sum_{n\le x} (G*\overline{G})(n).$$ We bound the right side above using the hyperbola method. Thus, using and , $$\begin{aligned}
\sum_{n\le x} (G*{\overline{G}})(n)
&= 2{{\rm Re}}\Big(\sum_{a\le \sqrt{x}} G(a) \sum_{b\le x/a} {\overline{G}}(b) \Big)- \Big|\sum_{a\le \sqrt{x}}G(a)\Big|^2 \\
&\ll_f \frac{x}{(\log x)^{A-D}} \sum_{a\le x} \frac{|G(a)|}{a} + \frac{x}{(\log x)^{2(A-D)}} \ll_{f,{\varepsilon}} \frac{x}{(\log x)^{A-D-{\varepsilon}}}\end{aligned}$$ for any fixed ${\varepsilon}>0$. Thus $$\sum_{p\le x} \big(D+ {{\rm Re}}(g(p)) \big)
\ll_{f,{\varepsilon}} \frac{x}{(\log x)^{A-D-{\varepsilon}}},$$ and using $|D+g(p)|^2 \le 2D (D+{{\rm Re}}(g(p)))$ and Cauchy-Schwarz we conclude that $$\label{N5.4}
\sum_{p\le x} |D+ f(p)p^{-i\gamma}| \log p = \sum_{p\le x} |f(p) + Dp^{i\gamma}| \log p
\ll_{f,{\varepsilon}} \frac{x}{(\log x)^{(A-1-D-{\varepsilon})/2}}.$$
Once the estimate has been established, it may be input into the above argument and the bound may be tidied up. Partial summation starting from leads to the bound $\sum_{p\le x} |D+g(p)|/p \ll_f 1$ in place of . In turn this replaces by the bound $\sum_{n\le x} |G(n)|/n \ll_f 1$. Using this in our hyperbola method argument produces now the cleaner bound $$\label{N5.5}
\sum_{p\le x} |D+ f(p)p^{-i\gamma}| \log p = \sum_{p\le x} |f(p) + Dp^{i\gamma}| \log p \ll_f \frac{x}{(\log x)^{(A-1-D)/2}}.$$
As mentioned earlier, the estimate gives non-trivial information in the optimal range $A>D+1$. If we suppose that $A> D+2$, then the right side of is $\ll_f x/\sqrt{\log x}$, and Theorem \[converse-thm\] follows in this special case if $|\gamma|\le T$. If $|\gamma| >T$ then note that $$\sum_{p\le x} p^{i\gamma} \log p = \frac{x^{1+i\gamma}}{1+i\gamma} + O_f\Big( \frac{x}{\log x}\Big) \ll \frac{x}{T} + C(f) \frac{x}{\log x},$$ so that the theorem holds as stated in this case also.
Proof of Theorem \[converse-thm\]: The general case
===================================================
In the previous section we established Theorem \[converse-thm\] in the special situation when $L(s,f)$ has a zero of multiplicity $D$ on the $1$-line. We now consider the more typical situation when all the zeros (if there are any) of $L(s,f)$ on the line Re$(s)=1$ have multiplicity $\le D-1$. The argument here is based on some ideas from [@GHS].
Throughout we put $c=1+1/\log x$, and $T_0 =\sqrt{T}$. Let ${\widetilde \Gamma}$ denote the multiset of zeros of $L(s,f)$ lying on the line segment $[1-iT, 1+iT]$, and let $f_{\widetilde \Gamma} = f * \tau_{\widetilde \Gamma}$ denote the multiplicative function defined for Lemma \[lem:f-tau\]. We start with a smoothed version of Perron’s formula: $$\begin{aligned}
\label{4.1}
\frac{1}{2\pi i} \int_{(c)} \bigg(-\frac{L^{\prime}}{L}\bigg)^{\prime}(s,f_{\widetilde \Gamma}) \frac{x^s}{s}
\bigg(\frac{e^{s/T_0}-1}{s/T_0}\bigg)^{10} {\mathrm{d}}s
&= \sum_{n\le x} \Lambda_{f_{\widetilde \Gamma}}(n) \log n
+ O\Big( \sum_{x < n < e^{10/T_0} x} \Lambda(n) \log n\Big) \nonumber\\
&= \sum_{p\le x} \Big(f(p) + \sum_{\substack{\gamma \in \Gamma \\ |\gamma| \le T}} p^{i\gamma}\Big) (\log p)^2
+ O\Big( \frac{x\log x}{T_0}\Big). \end{aligned}$$ Our goal now is to bound the left hand side of , and to do this we split the integral into several ranges. There is a range of small values $|t| \le T$, and the range of larger values $|t| >T$, which we further subdivide into dyadic ranges $2^rT < |t| \le 2^{r+1} T$ with $r\ge 0$.
Small values of $|t|$
---------------------
We start with the range $|t|\le T$. Since $A>D+2$ and all zeroes of $L(s,f)$ are assumed to have multiplicity $\le D-1$, we have $A-{\widetilde m} -1 >2$. Therefore, by Proposition \[Prop1\](d) $L^{(j)}(s,f_{\widetilde \Gamma})$ exists for ${{\rm Re}}(s)\ge 1$ and $j\in\{0,1,2\}$, and is bounded above in magnitude on the segment $[1-iT,1+iT]$. Further, $|L(s,f_{\widetilde \Gamma})|$ is bounded away from zero on the compact set $[1-iT, 1+iT]$ since all the zeros of $L(s,f)$ in that region are accounted for in the multiset $\widetilde \Gamma$. Therefore there is some constant $C(f,T)$ such that for all $|t|\le T$ one has $$\bigg| \bigg(\frac{L^{\prime}}{L}\bigg)^{\prime}(c+it,f_{\widetilde \Gamma}) \bigg| \le C(f,T).$$ We deduce that $$\label{eqn:perron-bound1}
\bigg| \int_{\sigma=c,\, |t|\le T}\bigg(\frac{L^{\prime}}{L}\bigg)^{\prime}(s,f_{\widetilde \Gamma})
\cdot \frac{x^s}{s} \cdot
\bigg( \frac{e^{s/T_0}-1}{s/T_0}\bigg)^{10} {\mathrm{d}}t \bigg|
\le x C_1(f,T),$$ for a suitable constant $C_1(f,T)$.
Large values of $|t|$
---------------------
Now we turn to the larger values of $|t|$, namely when $2^r T < |t| \le 2^{r+1} T$ for some $r\ge 0$. Writing $$\bigg( \frac{L^{\prime}}{L}\bigg)^{\prime}(s,f_{\widetilde \Gamma})
= \bigg( \frac{L^{\prime}}{L} \bigg)^{\prime}(s,f)
+ \sum_{\gamma \in \widetilde\Gamma} \bigg( \frac{\zeta^{\prime}}{\zeta} \bigg)^{\prime}(s-i\gamma) ,$$ the desired integral splits naturally into two parts. Now for $|t| \ge T$ we have $$\bigg( \frac{\zeta^{\prime}}{\zeta} \bigg)'(c+it -i\gamma)
\ll \Big( \frac{1}{(\log x)^2} + |t-\gamma|^2 \Big)^{-1} + |t|^{{\varepsilon}},$$ so that $$\label{4.12}
\bigg| \int\limits_{\substack{\sigma=c \\ 2^{r}T \le |t| \le 2^{r+1}T}}
\sum_{\gamma\in \widetilde\Gamma}
\bigg( \frac{\zeta^{\prime}}{\zeta} \bigg)^{\prime}(s-i\gamma)
\cdot \frac{x^s}{s} \cdot
\bigg( \frac{e^{s/T_0}-1}{s/T_0} \bigg)^{10} {\mathrm{d}}s \bigg|
\ll \frac{x\log x}{2^r T_0}.$$
It remains now to estimate $$\label{4.13}
\bigg| \int\limits_{\substack{\sigma=c \\ 2^{r}T \le |t| \le 2^{r+1}T}}
\bigg(\frac{L^{\prime}}{L}\bigg)^{\prime}(s,f)
\cdot \frac{x^s}{s}
\cdot \bigg(\frac{e^{s/T_0}-1}{s/T_0}\bigg)^{10} {\mathrm{d}}s \bigg|.$$ To help estimate this quantity, we state the following lemma whose proof we postpone to the next section.
\[lem2.5\] Let $X \ge 2$ and $\sigma >1$ be real numbers. Let $f\in {\mathcal F}(D)$ and suppose $j\ge 1$ is a natural number. Put $G_j(s) = (-1)^j L^{(j)}(s,f)/L(s,f)$. Then $$\int_{-X}^X |G_j(\sigma +it)|^2 {\mathrm{d}}t \ll X (\log X)^{2j} + \Big( \frac{1}{\sigma -1} \Big)^{2j-1}.$$
Returning to , in the notation of Lemma \[lem2.5\], we have $$\bigg(\frac{L^{\prime}}{L}\bigg)^{\prime}(s,f)
= G_2(s) - G_1(s)^2.$$ Using this identity, the integral in splits into two parts, and using Lemma \[lem2.5\] we may bound the second integral (with $X=2^{r}T$) by $$\begin{aligned}
\label{4.2}
\frac{x}{X (X/T_0)^{10}} \int_{X< |t| \le 2X} |G_1(c+it)|^2 {\mathrm{d}}t
& \ll \frac{x}{X(X/T_0)^{10}} \Big( X (\log X)^2 + \log x \Big)
\ll \frac{x \log x}{2^r T_0}.
\end{aligned}$$
Finally, we must bound the integral arising from $G_2(s)$. To this end, we define $$\label{4.3}
I(j;X,\alpha) = \Big| \int\limits_{\substack{\sigma=c \\ X \le |t| \le 2X}}
G_j(s+\alpha) \frac{x^s}{s} \Big( \frac{ e^{s/T_0}-1}{s/T_0} \Big)^{10} {\mathrm{d}}s \Big|,$$ so that we require a bound for $I(2;2^rT,0)$. We shall bound $I(j;X,\alpha)$ in terms of $I(j+1;X, \alpha+\beta)$ for suitable $\beta>0$, and iterating this will eventually lead to a good bound for $I(2;X,0)$.
\[lem4.1\] Let $X\ge T$, and $\alpha\ge 0$ be real numbers. For $j\ge 2$ and all $k\ge 1$ we have $$I(j;X,\alpha) \ll_k \int_0^1 I(j+k;X,\alpha+\beta)\beta^{k-1} {\mathrm{d}}\beta + \frac{x}{(X/T_0)^2}
\Big(\frac{1}{\log x} +\alpha\Big)^{-(j-1)} .$$
Note that $$G_{j}(s)^{\prime} = - G_{j+1} (s) + G_1(s) G_j(s),$$ so that $$\begin{aligned}
G_j(s+\alpha) & = - \int_{0}^{1} G_{j}^{\prime}(s+\alpha+\beta) {\mathrm{d}}\beta + O(1) \\
&= \int_0^1 (G_{j+1}(s+\alpha+\beta) -G_1(s+\alpha+\beta) G_j(s+\alpha+\beta)) {\mathrm{d}}\beta + O(1).
\end{aligned}$$ Using this in the definition of $I$, we obtain that $$\begin{aligned}
I(j;X,\alpha) &\ll \int_{0}^{1} I(j+1;X,\alpha+\beta) {\mathrm{d}}\beta \\
&+ \frac{x}{X(X/T_0)^{10}} \Big( X + \int_0^1
\int_{X}^{2X} |G_1(c+\alpha+\beta+ it) G_j(c+\alpha+\beta+it)| {\mathrm{d}}t {\mathrm{d}}\beta\Big).
\end{aligned}$$
Using Cauchy–Schwarz and Lemma \[lem2.5\], the second term above is (since $j\ge 2$) $$\begin{aligned}
&\ll \frac{x}{X(X/T_0)^{10}} \Big( X + X (\log X)^{j+1} + \int_0^{1} \Big(\frac{1}{\log x} + \alpha +\beta\Big)^{-j} {\mathrm{d}}\beta \Big) \\
&\ll \frac{x}{(X/T_0)^2} \Big( \frac{1}{\log x} + \alpha\Big)^{-(j-1)}.
\end{aligned}$$ Thus we conclude that $$\label{4.4}
I(j;X,\alpha) \ll \int_{0}^{1} I(j+1;X,\alpha+\beta) {\mathrm{d}}\beta + \frac{x}{(X/T_0)^2} \Big( \frac{1}{\log x} + \alpha\Big)^{-(j-1)}.$$
This establishes the lemma in the case $k=1$, and the general case follows by iterating this argument $k-1$ times. In doing so, we make use of the following estimate: $$\label{eqn:int-bound}
\int_0^1 \frac{\beta^{m-1}}{(\alpha+\beta+1/\log x)^{j+m-1}}{\mathrm{d}}\beta \ll_{m,j} \frac{1}{(\alpha+1/\log x)^{j-1}}$$ for all $m=1,2,\dots$, all $j\ge 2$ and all $\alpha\in[0,1]$. This may be seen by dividing the range of integration into two parts, according to whether $\beta\le \alpha+1/\log x$ or $\beta>\alpha+1/\log x$.
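Explicitly, writing $c_0=\alpha+1/\log x$: on the range $\beta\le c_0$ the integrand is at most $c_0^{m-1}/c_0^{j+m-1}=c_0^{-j}$, so this part of the integral is at most $c_0^{1-j}$; on the range $\beta> c_0$ the integrand is at most $\beta^{m-1}/\beta^{j+m-1}=\beta^{-j}$, and $$\int_{c_0}^{1}\beta^{-j}\,{\mathrm{d}}\beta \le \frac{c_0^{1-j}}{j-1} \ll_j \frac{1}{(\alpha+1/\log x)^{j-1}},$$ since $j\ge 2$.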
We now return to the task of bounding $I(2;2^r T, 0)$. Applying Lemma \[lem4.1\], we see that for any $k\ge 1$ we have $$\label{4.5}
I(2;2^r T,0) \ll_k \int_0^1 I(2+k;2^rT, \beta) \beta^{k-1} {\mathrm{d}}\beta + \frac{x \log x}{2^r T_0}.$$ We choose $k$ to be the largest integer strictly smaller than $A-3$. Since $A>D+2$, we have $$D-1\le k<A-3.$$ Applying Lemma \[lem:L-der-bound\], we find that $L^{(2+k)}(c+\beta+it,f) \ll (1+|t|)$. Furthermore, since $f \in {\mathcal F}(D)$, we have $$\frac{1}{|L(c+\beta+it,f)|} \ll \prod_{p} \Big(1 -\frac{1}{p^{c+\beta}}\Big)^{-D} \ll \Big( \frac{1}{\log x} + \beta\Big)^{-D}.$$ Thus, with this choice of $k$, it follows that $$I(2+k;2^r T, \beta) \ll x \Big( \frac{1}{\log x} + \beta\Big)^{-D} \int_{2^r T}^{2^{r+1}T} \frac{1}{(|t|/T_0)^{10}} {\mathrm{d}}t
\ll \frac{x}{2^r T_0} \Big( \frac{1}{\log x} +\beta\Big)^{-D} .$$ Since $k\ge D-1$, we infer that $$\int_0^1 I(2+k;2^r T, \beta) \beta^{k-1} {\mathrm{d}}\beta \ll \frac{x\log x}{2^r T_0},$$ by a similar argument to the one leading to . In conclusion, $$I(2;2^{r} T, 0 ) \ll \frac{x\log x}{2^r T_0}.$$ Combining this with , and summing over all $r\ge 0$, we obtain $$\label{4.6}
\bigg| \int_{\sigma=c,\, |t|>T} \bigg(\frac{L^{\prime}}{L}\bigg)'(s,f) \cdot \frac{x^s}{s} \cdot \bigg( \frac{e^{s/T_0} -1}{s/T_0} \bigg)^{10}
{\mathrm{d}}s \bigg| \ll \frac{x\log x}{T_0}.$$
Combining with summed over all $r$, we conclude that $$\label{4.10}
\bigg| \int_{\sigma=c,\, |t|>T} \bigg(\frac{L^{\prime}}{L}\bigg)'(s,f_{\widetilde \Gamma}) \cdot \frac{x^s}{s} \cdot
\bigg( \frac{e^{s/T_0} -1}{s/T_0} \bigg)^{10}
{\mathrm{d}}s \bigg| \ll \frac{x\log x}{T_0}.$$
Completing the proof.
---------------------
Combining with and , it follows that $$\bigg|\sum_{p\le x} \Big( f(p) + \sum_{\substack{ \gamma \in \Gamma \\ |\gamma| \le T}} p^{i\gamma} \Big) (\log p)^2\bigg| \le xC_1(f,T) + O\Big( \frac{x\log x}{T_0} \Big).$$ Partial summation now finishes the proof of Theorem \[converse-thm\].
Proof of Lemma \[lem2.5\]
=========================
Write $g_j(n)$ for the Dirichlet series coefficients of $G_j(s)$. We claim that $|g_j(n)| \le D^j \Lambda_j(n)$ for all $n$. For $j=1$ this is just the definition of the class ${\mathcal F}(D)$. To see the claim in general, we use induction on $j$, noting that $$\label{2.5}
G_{j+1}(s) = - G_j'(s) + G_1(s) G_j(s),$$ and now comparing this with . Using this bound for $|g_j(n)|$, we have $$\bigg| \sum_{n\le X^2} \frac{g_j(n)}{n^{c+it}}\bigg|\le \sum_{n\le X^2} \frac{D^j\Lambda_j(n)}{n} \ll (\log X)^j,$$ so that $$\int_{-X}^{X} \bigg| \sum_{n\le X^2} \frac{g_j(n)}{n^{c+it}}\bigg|^2 {\mathrm{d}}t \ll X (\log X)^{2j}.$$
Next, putting $\Phi(x) = (\frac{\sin x}{x})^2$ so that the Fourier transform ${\widehat \Phi}(x)$ is supported on $[-1,1]$, we obtain $$\begin{aligned}
\int_{-X}^{X} \bigg| \sum_{n > X^2} \frac{g_j(n)}{n^{c+it}}\bigg|^2 {\mathrm{d}}t
&\ll \int_{-\infty}^{\infty} \Big| \sum_{n>X^2} \frac{g_j(n)}{n^{c+it}}\Big|^2 \Phi\Big( \frac{t}{X} \Big) {\mathrm{d}}t \\
&\ll \sum_{m,n>X^2} \frac{\Lambda_j(m)\Lambda_j(n)}{(mn)^c} X \widehat{\Phi}(X \log (m/n)). \end{aligned}$$ Since ${\widehat \Phi}$ is supported on $[-1,1]$, for a given $m >X^2$ the sum over $n$ is restricted to the range $|m-n| \ll m/X$, and so, using the Brun–Titchmarsh variant in Lemma \[lem:BT\], we deduce that the above is
\ll \Big( \frac{1}{c-1} \Big)^{2j-1}.$$ Lemma \[lem2.5\] follows upon combining these two estimates.
[99]{}
H. Delange, [*Sur des formules de Atle Selberg.*]{} Acta Arith. 19 (1971), 105–146.
A. Granville, A. Harper and K. Soundararajan, [*A new proof of Halász’s theorem, and its consequences.*]{} Compos. Math. 155 (2019), no. 1, 126–163.
D. Koukoulopoulos, [*On multiplicative functions which are small on average.*]{} Geom. Funct. Anal., 23 (2013), no. 5, 1569–1630.
E. Landau, [*Über die Einteilung der positiven ganzen Zahlen in vier Klassen nach der Mindestzahl der zu ihrer additiven Zusammensetzung erforderlichen Quadrate*]{}, Arch. Math. Phys. (3) 13, 305–312; Collected Works, Vol. 4 Essen:Thales Verlag, 1986, 59–66.
E. Landau, [*Lösung des Lehmer’schen Problems*]{} (German), Amer. J. Math. 31 (1909), no. 1, 86–102.
H. L. Montgomery and R. C. Vaughan, [*Multiplicative number theory. I. Classical theory.*]{} Cambridge Studies in Advanced Mathematics, 97. Cambridge University Press, Cambridge, 2007.
A. Selberg, [*Note on a paper by L. G. Sathe.*]{} J. Indian Math. Soc. (N.S.) 18, (1954). 83–87.
---
abstract: 'In a previous paper, the authors gave criteria for $A_{k+1}$-type singularities on wave fronts. Using them, we show in this paper that there is a duality between singular points and inflection points on wave fronts in the projective space. As an application, we show that the algebraic sum of $2$-inflection points (i.e. godron points) of an immersion $f$ of a compact orientable $2$-manifold $M^2$ into the real projective space is equal to the Euler number of $M_-$, where $M_-$ is the open subset of $M^2$ on which the Hessian of $f$ takes negative values. This is a generalization of Bleecker and Wilson’s formula [@BW] for immersed surfaces in the affine $3$-space.'
address:
- ' Department of Mathematics, Faculty of Education, Gifu University, Yanagido 1-1, Gifu 151-1193, Japan'
- ' Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan '
- 'Department of Mathematics, Tokyo Institute of Technology, O-okayama, Meguro, Tokyo 152-8551, Japan'
author:
- Kentaro Saji
- Masaaki Umehara
- Kotaro Yamada
date: 'January 15, 2010'
title: |
The duality between singular points and\
inflection points on wave fronts
---
Introduction
============
We denote by ${\boldsymbol{K}}$ the real number field ${\boldsymbol{R}}$ or the complex number field ${\boldsymbol{C}}$. Let $n$ and $m$ be positive integers. A map $F\colon{}{\boldsymbol{K}}^n\to {\boldsymbol{K}}^m$ is called [*${\boldsymbol{K}}$-differentiable*]{} if it is a $C^\infty$-map when ${\boldsymbol{K}}={\boldsymbol{R}}$, and is a holomorphic map when ${\boldsymbol{K}}={\boldsymbol{C}}$. Throughout this paper, we denote by $P(V)$ the ${\boldsymbol{K}}$-projective space associated with a vector space $V$ over ${\boldsymbol{K}}$ and let $\pi:V\to P(V)$ be the canonical projection.
Let $M^{n}$ and $N^{n+1}$ be ${\boldsymbol{K}}$-differentiable manifolds of dimension $n$ and of dimension $n+1$, respectively. The projectified ${\boldsymbol{K}}$-cotangent bundle $$P(T^*N^{n+1}):=\bigcup_{p\in N^{n+1}} P(T^*_pN^{n+1})$$ has a canonical ${\boldsymbol{K}}$-contact structure. A ${\boldsymbol{K}}$-differentiable map $f : M^n
\to N^{n+1}$ is called a [*frontal*]{} if $f$ lifts to a ${\boldsymbol{K}}$-isotropic map $L_f$, i.e., a ${\boldsymbol{K}}$-differentiable map $L_f : M^n \to P(T^*N^{n+1})$ such that the image $dL_f(TM^n)$ of the ${\boldsymbol{K}}$-tangent bundle $TM^n$ lies in the contact hyperplane field on $P(T^*N^{n+1})$. Moreover, $f$ is called a [*wave front*]{} or a [*front*]{} if it lifts to a ${\boldsymbol{K}}$-isotropic immersion $L_f$. (In this case, $L_f$ is called a [*Legendrian immersion*]{}.) Frontals (and therefore fronts) generalize immersions, as they allow for singular points. A frontal $f$ is said to be [*co-orientable*]{} if its ${\boldsymbol{K}}$-isotropic lift $L_f$ can lift up to a ${\boldsymbol{K}}$-differentiable map into the ${\boldsymbol{K}}$-cotangent bundle $T^*N^{n+1}$, otherwise it is said to be [*non-co-orientable*]{}. It should be remarked that, when $N^{n+1}$ is a Riemannian manifold, a front $f$ is co-orientable if and only if there is a globally defined unit normal vector field $\nu$ along $f$.
Now we set $N^{n+1}={\boldsymbol{K}}^{n+1}$. Suppose that a ${\boldsymbol{K}}$-differentiable map $F:M^{n}\to {\boldsymbol{K}}^{n+1}$ is a frontal. Then, for each $p\in M^{n}$, there exist a neighborhood $U$ of $p$ and a map $$\nu:U\longrightarrow ({\boldsymbol{K}}^{n+1})^*\setminus \{{\boldsymbol{0}}\}$$ into the dual vector space $({\boldsymbol{K}}^{n+1})^*$ of ${\boldsymbol{K}}^{n+1}$ such that the canonical pairing $\nu\cdot dF(v)$ vanishes for any $v\in TU$. We call $\nu$ a [*local normal map*]{} of the frontal $F$. We set ${\mathcal{G}}:=\pi\circ\nu$, which is called a (local) [*Gauss map*]{} of $F$. In this setting, $F$ is a front if and only if $$L:=(F,{\mathcal{G}}):U\longrightarrow {\boldsymbol{K}}^{n+1}\times P\bigl(({\boldsymbol{K}}^{n+1})^*
\bigr)$$ is an immersion. When $F$ itself is an immersion, it is, of course, a front. If this is the case, for a fixed local ${\boldsymbol{K}}$-differentiable coordinate system $(x^1,\dots,x^{n})$ on $U$, we set $$\label{eq:nu}
\nu_p:{\boldsymbol{K}}^{n+1}\ni v \longmapsto
\det(F_{x^1}(p),\dots,F_{x^{n}}(p),v)
\in {\boldsymbol{K}}\qquad (p\in U),$$ where $F_{x^j}:=\partial F/\partial x^j$ ($j=1,\dots,n$) and $\det$ is the determinant function on ${\boldsymbol{K}}^{n+1}$. Then we get a ${\boldsymbol{K}}$-differentiable map $
\nu:U\ni p \longmapsto \nu_p\in ({\boldsymbol{K}}^{n+1})^*,
$ which gives a local normal map of $F$.
Now, we return to the case that $F$ is a front. Then it is well-known that the local Gauss map $\mathcal G$ induces a global map $$\label{eq:G}
{\mathcal{G}}:M^{n}\longrightarrow P\bigl(({\boldsymbol{K}}^{n+1})^*\bigr)$$ which is called the [*affine Gauss map*]{} of $F$. (In fact, the Gauss map ${\mathcal{G}}$ depends only on the affine structure of ${\boldsymbol{K}}^{n+1}$.)
We set $$\label{eq:h}
h_{ij}:=\nu\cdot F_{x^ix^j}=-\nu_{x^i} \cdot F_{x^j}
\qquad (i,j=1,\dots,n),$$ where $\cdot$ is the canonical pairing between ${\boldsymbol{K}}^{n+1}$ and $({\boldsymbol{K}}^{n+1})^*$, and $$F_{x^ix^j}=\frac{\partial^2 F}{\partial x^i\partial x^j},
\quad
F_{x^j}=\frac{\partial F}{\partial x^j},\quad
\nu_{x^i}=\frac{\partial \nu}{\partial x^i}.$$ Then $$\label{eq:H}
H:=\sum_{i,j=1}^{n}h_{ij}dx^i\, dx^j
\qquad \left(
dx^i\, dx^j:=\frac12(dx^i\otimes dx^j+dx^j\otimes dx^i)
\right)$$ gives a ${\boldsymbol{K}}$-valued symmetric tensor on $U$, which is called the [*Hessian form*]{} of $F$ associated to $\nu$. Here, the ${\boldsymbol{K}}$-differentiable function $$\label{eq:hessian1}
h:=\det(h_{ij}):U\longrightarrow {\boldsymbol{K}}$$ is called the [*Hessian*]{} of $F$. A point $p\in M^{n}$ is called an [*inflection point*]{} of $F$ if it belongs to the zeros of $h$. An inflection point $p$ is called [*nondegenerate*]{} if the derivative $dh$ does not vanish at $p$. In this case, the set of inflection points $I(F)$ consists of an embedded ${\boldsymbol{K}}$-differentiable hypersurface of $U$ near $p$, and there exists a non-vanishing ${\boldsymbol{K}}$-differentiable vector field $\xi$ along $I(F)$ such that $H(\xi,v)=0$ for all $v\in TU$. Such a vector field $\xi$ is called an [*asymptotic vector field*]{} along $I(F)$, and $[\xi]=\pi(\xi)\in P({\boldsymbol{K}}^{n+1})$ is called the [*asymptotic direction*]{}. It can be easily checked that the definition of inflection points and the nondegeneracy of inflection points are independent of the choice of $\nu$ and of a local coordinate system.
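As a purely illustrative aside, the following SymPy sketch evaluates the formulas for $\nu$, $h_{ij}$ and $h$ above in the simplest case of a graph immersion $F(x,y)=(x,y,u(x,y))$ in ${\boldsymbol{R}}^3$; the sample choice of $u$ below is arbitrary, and for it the inflection points form the nondegenerate line $\{x=0\}$.

```python
# Illustrative SymPy sketch of the formulas for nu, h_ij and the Hessian h in the
# case of a graph immersion F(x, y) = (x, y, u(x, y)) in R^3.
# The particular u below is an arbitrary sample; any smooth u works.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**3 + y**2
F = sp.Matrix([x, y, u])
coords = [x, y]

def nu(v):
    """nu_p(v) = det(F_x, F_y, v), the local normal map applied to v."""
    return sp.Matrix.hstack(F.diff(x), F.diff(y), v).det()

# Hessian form h_ij = nu . F_{x^i x^j}; for a graph this reduces to u_{x^i x^j}.
H = sp.Matrix(2, 2, lambda i, j: nu(F.diff(coords[i]).diff(coords[j])))
H = H.applyfunc(sp.simplify)
h = sp.simplify(H.det())          # the Hessian of F

print("h_ij =", H)                # Matrix([[6*x, 0], [0, 2]]) for this u
print("h    =", h)                # 12*x: the inflection points form the line {x = 0}
print("dh   =", [sp.diff(h, t) for t in coords])   # (12, 0) != 0: nondegenerate
```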
In Section \[sec:prelim\], we shall define the terminology that
- a ${\boldsymbol{K}}$-differentiable vector field $\eta$ along a ${\boldsymbol{K}}$-differentiable hypersurface $S$ of $M^n$ is [*$k$-nondegenerate at*]{} $p\in S$, and
- [*$\eta$ meets $S$ at $p$ with multiplicity $k+1$.*]{}
Using this new terminology, $p(\in I(F))$ is called an [*$A_{k+1}$-inflection point*]{} or an [*inflection point of multiplicity $k+1$*]{}, if $\xi$ is $k$-nondegenerate at $p$ but does not meet $I(F)$ with multiplicity $k+1$. In Section \[sec:prelim\], we shall prove the following:
\[thm:A\] Let $F:M^{n}\to {\boldsymbol{K}}^{n+1}$ be an immersed ${\boldsymbol{K}}$-differentiable hypersurface. Then $p\in M^{n}$ is an $A_{k+1}$-inflection point $(k\leq n)$ if and only if the affine Gauss map $\mathcal G$ has an $A_k$-Morin singularity at $p$. $($See the appendix of [@SUY3] for the definition of $A_k$-Morin singularities.$)$
Though our definition of $A_k$-inflection points are given in terms of the Hessian, this assertion allows us to define $A_{k+1}$-inflection points by the singularities of their affine Gauss map, which might be more familiar to readers than our definition. However, the new notion “$k$-multiplicity” introduced in the present paper is very useful for recognizing the duality between singular points and inflection points. Moreover, as mentioned above, our definition of $A_k$-inflection points works even when $F$ is a front. We have the following dual assertion for the previous theorem:
\[prop:A\] Let $F:M^n\to {\boldsymbol{K}}^{n+1}$ be a front. Suppose that the affine Gauss map ${\mathcal{G}}:M^n\to P(({\boldsymbol{K}}^{n+1})^*)$ is a ${\boldsymbol{K}}$-immersion. Then $p\in M^{n}$ is an $A_{k+1}$-inflection point of ${\mathcal{G}}$ $(k\leq n)$ if and only if $F$ has an $A_k$-singularity at $p$. [(]{}See $(1.1)$ in [@SUY3] for the definition of $A_k$-singularities.[)]{}
In the case that ${\boldsymbol{K}}={\boldsymbol{R}}$, $n=3$ and $F$ is an immersion, an $A_{3}$-inflection point is known as a [*cusp of the Gauss map*]{} (cf. [@BGM]).
It can be easily seen that inflection points and the asymptotic directions are invariant under projective transformations. So we can define $A_{k+1}$-inflection points ($k\leq n$) of an immersion $f:M^{n}\to P({\boldsymbol{K}}^{n+2})$. For each $p\in M^{n}$, we take a local ${\boldsymbol{K}}$-differentiable coordinate system $(U;x^1,...,x^{n})(\subset M^n)$. Then there exists a ${\boldsymbol{K}}$-immersion $F:U\to {\boldsymbol{K}}^{n+2}$ such that $f=[F]$ is the projection of $F$. We set $$\label{eq:FG1}
G:U\ni p \longmapsto
F_{x^1}(p)\wedge
F_{x^2}(p)\wedge
\cdots \wedge F_{x^n}(p)
\wedge F(p)\in ({\boldsymbol{K}}^{n+2})^*.$$ Here, we identify $({\boldsymbol{K}}^{n+2})^*$ with $\bigwedge^{n+1}{\boldsymbol{K}}^{n+2}$ by $$\bigwedge\nolimits^{n+1} {\boldsymbol{K}}^{n+2}
\ni v_1\wedge \cdots \wedge v_{n+1}
\quad
\longleftrightarrow
\quad
\det(v_1, \dots, v_{n+1},*)\in ({\boldsymbol{K}}^{n+2})^*,$$ where $\det$ is the determinant function on ${\boldsymbol{K}}^{n+2}$. Then $G$ satisfies $$\label{eq:FG2}
G \cdot F=0,\qquad G\cdot dF=dG\cdot F=0,$$ where $\cdot$ is the canonical pairing between ${\boldsymbol{K}}^{n+2}$ and $({\boldsymbol{K}}^{n+2})^*$. Since, $g:=\pi\circ G$ does not depend on the choice of a local coordinate system, the projection of $G$ induces a globally defined $K$-differentiable map $$g=[G]:M^n\to P\bigl(({\boldsymbol{K}}^{n+2})^*\bigr),$$ which is called the [*dual*]{} front of $f$. We set $$h:=\det(h_{ij}):U\longrightarrow {\boldsymbol{K}}\qquad \bigl(h_{ij}:=G\cdot F_{x^ix^j}
=-G_{x^i}\cdot F_{x^j}\bigr),$$ which is called the [*Hessian*]{} of $F$. The inflection points of $f$ correspond to the zeros of $h$.
In Section \[sec:dual\], we prove the following
\[thm:B\] Let $f:M^{n}\to P({\boldsymbol{K}}^{n+2})$ be an immersed ${\boldsymbol{K}}$-differentiable hypersurface. Then $p\in M^{n}$ is an $A_k$-inflection point $(k\leq n)$ if and only if the dual front $g$ has an $A_k$-singularity at $p$.
Next, we consider the case of ${\boldsymbol{K}}={\boldsymbol{R}}$. In [@SUY1], we defined the [*tail part*]{} of a swallowtail. An $A_3$-inflection point $p$ of $f:M^{2}\to P({\boldsymbol{R}}^4)$ is called [*positive*]{} (resp. [*negative*]{}), if the Hessian takes negative (resp. positive) values on the tail part of the dual of $f$ at $p$. Let $p\in M^2$ be an $A_3$-inflection point. Then there exists a neighborhood $U$ such that $f(U)$ is contained in an affine space $A^3$ in $P({\boldsymbol{R}}^4)$. Then the affine Gauss map ${\mathcal{G}}:U\to P(A^3)$ has an elliptic cusp (resp. a hyperbolic cusp) if and only if it is positive (resp. negative) (see [@BGM p. 33]). In [@uribe], Uribe-Vargas introduced a projective invariant $\rho$ and studied the projective geometry of swallowtails. He proved that an $A_3$-inflection point is positive (resp. negative) if and only if $\rho>1$ (resp. $\rho<1$). The property that $h$ is negative is also independent of the choice of a local coordinate system. So we can define the set of negative points $$M_{-}:=\{p\in M^{2}\,;\, h(p)<0\}.$$ In Section \[sec:dual\], we shall prove the following assertion as an application.
\[thm:C\] Let $M^2$ be a compact orientable $C^\infty$-manifold without boundary, and $f:M^{2}\to P({\boldsymbol{R}}^4)$ an immersion. We denote by $i^+_2(f)$ [(]{}resp. $i^-_2(f)$[)]{} the number of positive $A_3$-inflection points [(]{}resp. negative $A_3$-inflection points[)]{} on $M^2$ [(]{}see Section \[sec:dual\] for the precise definition of $i^+_2(f)$ and $i^-_2(f)$[)]{}. Suppose that inflection points of $f$ consist only of $A_3$-inflection points. Then the following identity holds $$\label{eq:BW}
i_2^+(f)-i_2^-(f)=2\chi(M_-).$$
The above formula is a generalization of that of Bleecker and Wilson [@BW] when $f(M^2)$ is contained in an affine plane.
\[cor:1\] Under the assumption of Theorem \[thm:C\], the total number $i_2^+(f)+i_2^-(f)$ of $A_3$-inflection points is even.
In [@uribe], this corollary is proved by counting the parity of a loop consisting of flecnodal curves which bound two $A_3$-inflection points.
\[cor:2\] The same formula holds for an immersed surface in the unit $3$-sphere $S^3$ or in the hyperbolic $3$-space $H^3$.
Let $\pi:S^3\to P({\boldsymbol{R}}^4)$ be the canonical projection. If $f:M^2\to S^3$ is an immersion, we get the assertion applying Theorem \[thm:C\] to $\pi\circ f$. On the other hand, if $f$ is an immersion into $H^3$, we consider the canonical projective embedding $\iota:H^3\to S^3_+$ where $S^3_+$ is the open hemisphere of $S^3$. Then we get the assertion applying Theorem \[thm:C\] to $\pi\circ \iota\circ f$.
Finally, in Section \[sec:cusp\], we shall introduce a new invariant for $3/2$-cusps using the duality, which is a measure for acuteness using the classical cycloid.
This work is inspired by the result of Izumiya, Pei and Sano [@ips] that characterizes $A_2$ and $A_3$-singular points on surfaces in $H^3$ via the singularity of certain height functions, and the result on the duality between space-like surfaces in hyperbolic $3$-space (resp. in light-cone), and those in de Sitter space (resp. in light-cone) given by Izumiya [@iz]. The authors would like to thank Shyuichi Izumiya for his impressive informal talk at Karatsu, 2005.
Preliminaries and a proof of Theorem \[thm:A\] {#sec:prelim}
==============================================
In this section, we shall introduce a new notion “multiplicity” for a contact of a given vector field along an immersed hypersurface. Then our previous criterion for $A_k$-singularities (given in [@SUY3]) can be generalized to the criteria for $k$-multiple contactness of a given vector field (see Theorem \[thm:actual\]).
Let $M^n$ be a ${\boldsymbol{K}}$-differentiable manifold and $S(\subset M^n)$ an embedded ${\boldsymbol{K}}$-differentiable hypersurface in $M^n$. We fix $p\in S$ and take a ${\boldsymbol{K}}$-differentiable vector field $$\eta:S\supset V\ni q \longmapsto \eta_q\in T_qM^n$$ along $S$ defined on a neighborhood $V\subset S$ of $p$. Then we can construct a ${\boldsymbol{K}}$-differentiable vector field $\tilde \eta$ defined on a neighborhood $U\subset M^n$ of $p$ such that the restriction $\tilde \eta|_S$ coincides with $\eta$. Such an $\tilde \eta$ is called [*an extension of $\eta$*]{}. (The local existence of $\tilde \eta$ is mentioned in [@SUY3 Remark 2.2].)
\[def:admissible\] Let $p$ be an arbitrary point on $S$, and $U$ a neighborhood of $p$ in $M^n$. A ${\boldsymbol{K}}$-differentiable function ${\varphi}:U\to {\boldsymbol{K}}$ is called [*admissible*]{} near $p$ if it satisfies the following properties
1. $O:=U\cap S$ is the zero level set of ${\varphi}$, and
2. $d{\varphi}$ never vanishes on $O$.
One can easily find an admissible function near $p$. We set ${\varphi}':=d{\varphi}(\tilde\eta):U\to {\boldsymbol{K}}$ and define a subset $S_2(\subset O\subset S)$ by $$S_2:=\{q\in O \,;\,{\varphi}'(q)=0\}=\{q\in O\,;\, \eta_q\in T_qS\}.$$ If $p\in S_2$, then $\eta$ is said to [*meet $S$ with multiplicity $2$ at $p$*]{} or equivalently, $\eta$ is said to [*contact $S$ with multiplicity $2$ at $p$*]{}. Otherwise, $\eta$ is said to [*meet $S$ with multiplicity $1$ at $p$*]{}. Moreover, if $d{\varphi}'(T_pO)\ne \{{\boldsymbol{0}}\}$, $\eta$ is said to [*be $2$-nondegenerate at $p$*]{}. The $(k+1)$-st multiple contactness and $k$-nondegeneracy are defined inductively. In fact, if the $j$-th multiple contactness and the submanifolds $S_j$ have been already defined for $j=1,\dots,k$, we set $${\varphi}^{(k)}:=d{\varphi}^{(k-1)}(\tilde\eta):
U\longrightarrow {\boldsymbol{K}}\qquad ({\varphi}^{(1)}:={\varphi}')$$ and can define a subset of $S_{k}$ by $$S_{k+1}:=\{q\in S_k\,;\,{\varphi}^{(k)}(q)=0\}
=\{q\in S_k\,;\, \eta_q\in T_qS_k\}.$$ We say that $\eta$ [*meets $S$ with multiplicity $k+1$ at $p$*]{} if $\eta$ is $k$-nondegenerate at $p$ and $p\in S_{k+1}$. Moreover, if $d{\varphi}^{(k)}(T_pS_{k})\ne \{{\boldsymbol{0}}\}$, $\eta$ is called [*$(k+1)$-nondegenerate*]{} at $p$. If $\eta$ is $(k+1)$-nondegenerate at $p$, then $S_{k+1}$ is a hypersurface of $S_k$ near $p$.
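A minimal example may help to fix these notions: take $M^2={\boldsymbol{K}}^2$ with coordinates $(x,y)$, let $S=\{y=0\}$, and let $\eta_q=\partial_x+x\,\partial_y$ along $S$, extended to $\tilde\eta$ by the same formula on ${\boldsymbol{K}}^2$. Taking the admissible function ${\varphi}=y$, we get ${\varphi}'=d{\varphi}(\tilde\eta)=x$, so $S_2=\{(0,0)\}$. Since $d{\varphi}'(\partial_x)=1\ne0$ on $T_{(0,0)}S$, the vector field $\eta$ is $2$-nondegenerate at the origin, where it meets $S$ with multiplicity $2$; at every other point of $S$ it meets $S$ with multiplicity $1$, that is, transversally.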
\[rmk:add\] Here we did not define $1$-nondegeneracy of $\eta$. However, from now on, [*any ${\boldsymbol{K}}$-differentiable vector field $\eta$ of $M^n$ along $S$ is always $1$-nondegenerate by convention*]{}. In the previous paper [@SUY3], $1$-nondegeneracy (i.e. nondegeneracy) is defined not for a vector field along the singular set but for a given singular point. If a singular point $p\in U$ of a front $f:U\to {\boldsymbol{K}}^{n+1}$ is nondegenerate in the sense of [@SUY3], then the function $\lambda:U\to {\boldsymbol{K}}$ defined in [@SUY3 (2.1)] is an admissible function, and the null vector field $\eta$ along $S(f)$ is then determined. When $k\ge 2$, by definition, $k$-nondegeneracy of the singular point $p$ is equivalent to the $k$-nondegeneracy of the null vector field $\eta$ at $p$ (cf. [@SUY3]).
\[prop:indep\] The $k$-th multiple contactness and $k$-nondegeneracy are both independent of the choice of an extension $\tilde \eta$ of $\eta$ and also of the choice of admissible functions as in Definition \[def:admissible\].
We can take a local coordinate system $(U;x^1,\dots,x^n)$ of $M^n$ such that $x^n={\varphi}$. Write $$\tilde \eta:=\sum_{j=1}^n c^j \frac{\partial}{\partial x^j},$$ where $c^j$ $(j=1,\ldots,n)$ are ${\boldsymbol{K}}$-differentiable functions. Then we have that ${\varphi}'=\sum_{j=1}^n c^j {\varphi}_{x^j}=c^n$.
Let $\psi$ be another admissible function defined on $U$. Then $$\psi'=
\sum_{j=1}^n c^j \frac{\partial\psi}{\partial x^j}
=c^n \frac{\partial\psi}{\partial x^n}
={\varphi}'\frac{\partial\psi}{\partial x^n}.$$ Thus $\psi'$ is proportional to ${\varphi}'$. Then the assertion follows inductively.
Corollary 2.5 in [@SUY3] is now generalized into the following assertion:
\[thm:actual\] Let $\tilde \eta$ be an extension of the vector field $\eta$. Let us assume $1\leq k\leq n$. Then the vector field $\eta$ is $k$-nondegenerate at $p$, but $\eta$ does not meet $S$ with multiplicity $k+1$ at $p$ if and only if $${\varphi}(p)={\varphi}'(p)=\dots ={\varphi}^{(k-1)}(p)=0,\quad
{\varphi}^{(k)}(p)\ne 0$$ and the Jacobi matrix of ${\boldsymbol{K}}$-differentiable map $$\Lambda:=({\varphi},{\varphi}',\dots, {\varphi}^{(k-1)}):U\longrightarrow {\boldsymbol{K}}^k$$ is of rank $k$ at $p$, where ${\varphi}$ is an admissible ${\boldsymbol{K}}$-differentiable function and $${\varphi}^{(0)}:={\varphi},\quad
{\varphi}^{(1)}(={\varphi}'):=d{\varphi}(\tilde \eta), \quad\dots,\quad
{\varphi}^{(k)}:=d{\varphi}^{(k-1)}(\tilde \eta).$$
The proof of this theorem is completely parallel to that of Corollary 2.5 in [@SUY3].
To prove Theorem \[thm:A\] by applying Theorem \[thm:actual\], we shall review the criterion for $A_k$-singularities in [@SUY3]. Let $U^n$ be a domain in ${\boldsymbol{K}}^n$, and consider a map $\Phi:U^n\to {\boldsymbol{K}}^{m}$ where $m\ge n$. A point $p\in U^{n}$ is called a [*singular point*]{} if the rank of the differential map $d\Phi$ is less than $n$. Suppose that the singular set $S(\Phi)$ of $\Phi$ consists of a ${\boldsymbol{K}}$-differentiable hypersurface $U^n$. Then a vector field $\eta$ along $S$ is called a [*null vector field*]{} if $d\Phi(\eta)$ vanishes identically. In this paper, we consider the case $m=n$ or $m=n+1$. If $m=n$, we define a ${\boldsymbol{K}}$-differentiable function $\lambda:U^n\to {\boldsymbol{K}}$ by $$\label{eq:lambda-map}
\lambda:=\det(\Phi_{x^1},\dots,\Phi_{x^n}).$$ On the other hand, if $\Phi:U^n\to{\boldsymbol{K}}^{n+1}$ ($m=n+1$) and $\nu$ is a non-vanishing ${\boldsymbol{K}}$-normal vector field (for a definition, see [@SUY3 Section 1]) we set $$\label{eq:lambda-fr}
\lambda:=\det(\Phi_{x^1},\dots,\Phi_{x^n},\nu).$$ Then the singular set $S(\Phi)$ of the map $\Phi$ coincides with the zeros of $\lambda$. Recall that $p\in S(\Phi)$ is called [*nondegenerate*]{} if $d\lambda(p)\ne {\boldsymbol{0}}$ (see [@SUY3] and Remark \[rmk:add\]). In both of the two cases above, the function $\lambda$ is admissible near $p$ if $p$ is nondegenerate. When $S(\Phi)$ consists of nondegenerate singular points, it is a hypersurface and there exists a non-vanishing null vector field $\eta$ on $S(\Phi)$. Such a vector field $\eta$ is determined up to multiplication by non-vanishing ${\boldsymbol{K}}$-differentiable functions. The following assertion holds, as shown in [@SUY3].
\[eq:fact\_suy\] Suppose $m=n$ and $\Phi$ is a $C^\infty$-map [(]{}resp. $m=n+1$ and $\Phi$ is a front[)]{}. Then $\Phi$ has an $A_{k+1}$-Morin singularity [(]{}resp. $A_{k+1}$-singularity[)]{} at $p\in M^n$ if and only if $\eta$ is $k$-nondegenerate at $p$ but does not meet $S(\Phi)$ with multiplicity $k+1$ at $p$. [(]{}Here multiplicity $1$ means that $\eta$ meets $S(\Phi)$ at $p$ transversally, and $1$-nondegeneracy is an empty condition.[)]{}
As an application of the fact for $m=n$, we now give a proof of Theorem \[thm:A\]: Let $F:M^{n}\to {\boldsymbol{K}}^{n+1}$ be an immersed ${\boldsymbol{K}}$-differentiable hypersurface. Recall that a point $p\in M^{n}$ is called a [*nondegenerate inflection point*]{} if the derivative $dh$ of the local Hessian function $h$ (cf. ) with respect to $F$ does not vanish at $p$. Then the set $I(F)$ of inflection points consists of a hypersurface, called the [*inflectional hypersurface*]{}, and the function $h$ is an admissible function on a neighborhood of $p$ in $M^n$. A nondegenerate inflection point $p$ is called an [*$A_{k+1}$-inflection point*]{} of $F$ if the asymptotic vector field $\xi$ is $k$-nondegenerate at $p$ but does not meet $I(F)$ with multiplicity $k+1$ at $p$.
Let $\nu$ be a map given by , and ${\mathcal{G}}:M^n\to P(({\boldsymbol{K}}^{n+1})^*)$ the affine Gauss map induced from $\nu$ by . We set $$\mu:=\det(\nu_{x^1},\nu_{x^2},\dots,\nu_{x^n},\nu),$$ where $\det$ is the determinant function of $({\boldsymbol{K}}^{n+1})^*$ under the canonical identification $({\boldsymbol{K}}^{n+1})^*\cong {\boldsymbol{K}}^{n+1}$, and $(x^1,\dots,x^n)$ is a local coordinate system of $M^n$. Then the singular set $S({\mathcal{G}})$ of ${\mathcal{G}}$ is just the zeros of $\mu$. By Theorem \[thm:actual\] and Fact \[eq:fact\_suy\], our criteria for $A_{k+1}$-inflection points (resp. $A_{k+1}$-singular points) are completely determined by the pair $(\xi, I(F))$ (resp. the pair $(\eta,S({\mathcal G}))$). Hence it is sufficient to show the following three assertions (1)–(3).
1. \[ass:1\] $I(F)=S(\mathcal G)$,
2. \[ass:2\] For each $p\in I(F)$, $p$ is a nondegenerate inflection point of $F$ if and only if it is a nondegenerate singular point of ${\mathcal{G}}$.
3. \[ass:3\] The asymptotic direction of each nondegenerate inflection point $p$ of $F$ is equal to the null direction of $p$ as a singular point of ${\mathcal{G}}$.
Let $H=\sum_{i,j=1}^n h_{ij}dx^i\, dx^j$ be the Hessian form of $F$. Then we have that $$\label{eq:split}
{{\begin{pmatrix} h_{11}& \dots & h_{1n}& *\\
\vdots & \ddots & \vdots & \vdots \\
h_{n1}& \dots & h_{nn} & *\\
0 & \dots & 0 & \nu\cdot {{\vphantom{\nu}}^t{\nu}}
\end{pmatrix}}}
=
{{\begin{pmatrix} \nu_{x^1}\\ \vdots\\ \nu_{x^n}\\ \nu
\end{pmatrix}}}
(F_{x^1},\dots,F_{x^n},{{\vphantom{\nu}}^t{\nu}}),$$ where $\nu\cdot {{\vphantom{\nu}}^t{\nu}}=\sum_{j=1}^{n+1}(\nu^j)^2$ and $\nu=(\nu^1,\dots,\nu^{n+1})$ is regarded as a row vector. Here, we consider a vector in ${\boldsymbol{K}}^n$ (resp. in $({\boldsymbol{K}}^n)^*$) as a column vector (resp. a row vector), and ${{\vphantom{(\cdot)}}^t{(\cdot)}}$ denotes the transposition. We may assume that $\nu(p)\cdot{{\vphantom{\nu}}^t{\nu}}(p)\ne0$ by a suitable affine transformation of ${\boldsymbol{K}}^{n+1}$, even when ${\boldsymbol{K}}={\boldsymbol{C}}$. Since the matrix $(F_{x^1},\dots,F_{x^n},{{\vphantom{\nu}}^t{\nu}})$ is regular, \[ass:1\] and \[ass:2\] follow by taking the determinant of (\[eq:split\]). Also by (\[eq:split\]), $\sum_{i=1}^n a_ih_{ij}=0$ for all $j=1,\dots,n$ holds if and only if $\sum_{i=1}^n a_i\nu_{x^i}={\boldsymbol{0}}$, which proves \[ass:3\].
Similar to the proof of Theorem \[thm:A\], it is sufficient to show the following properties, by virtue of Theorem \[thm:actual\].
1. \[ass:1a\] $S(F)=I({\mathcal{G}})$, that is, the set of singular points of $F$ coincides with the set of inflection points of the affine Gauss map.
2. \[ass:2a\] For each $p\in I({{\mathcal{G}}})$, $p$ is a nondegenerate inflection point if and only if it is a nondegenerate singular point of $F$.
3. \[ass:3a\] The asymptotic direction of each nondegenerate inflection point coincides with the null direction of $p$ as a singular point of $F$.
Since ${\mathcal{G}}$ is an immersion, (\[eq:split\]) implies that $$\begin{aligned}
I({\mathcal{G}}) &= \{p\,;\,(F_{x^1},\dots,F_{x^n},{{\vphantom{\nu}}^t{\nu}})
\text{ are linearly dependent at $p$}\}\\
&= \{p\,;\, \lambda(p)=0\}
\qquad(\lambda:=\det(F_{x^1},\dots,F_{x^n},{{\vphantom{\nu}}^t{\nu}})).
\end{aligned}$$ Hence we have \[ass:1a\]. Moreover, $h=\det(h_{ij})=\delta\lambda$ holds, where $\delta$ is a function on $U$ which never vanishes on a neighborhood of $p$. Thus \[ass:2a\] holds. Finally, by (\[eq:split\]), $\sum_{j=1}^n b_j h_{ij}=0$ for $i=1,\dots,n$ if and only if $\sum_{j=1}^n b_j F_{x^j}={\boldsymbol{0}}$, which proves \[ass:3a\].
Let $\gamma(t):={{\vphantom{(x(t),y(t))}}^t{(x(t),y(t))}}$ be a ${\boldsymbol{K}}$-differentiable curve in ${\boldsymbol{K}}^2$. Then $\nu(t):=(-\dot y(t),\dot x(t))\in ({\boldsymbol{K}}^2)^*$ gives a normal vector, and $$h(t)=\nu(t)\cdot \ddot\gamma(t)=\det(\dot \gamma(t),
\ddot \gamma(t))$$ is the Hessian function. Thus $t=t_0$ is an $A_2$-inflection point if and only if $$\det(\dot \gamma(t_0),\ddot \gamma(t_0))=0,\qquad
\det(\dot \gamma(t_0),\dddot \gamma(t_0))\ne 0.$$ Considering ${\boldsymbol{K}}^2\subset P({\boldsymbol{K}}^3)$ as an affine subspace, this criterion is available for curves in $P({\boldsymbol{K}}^3)$. When ${\boldsymbol{K}}={\boldsymbol{C}}$, it is well-known that non-singular cubic curves in $P({\boldsymbol{C}}^3)$ have exactly nine inflection points which are all of $A_2$-type. One special singular cubic curve is $2y^2-3x^3=0$ in $P({\boldsymbol{C}}^3)$ with homogeneous coordinates $[x,y,z]$, which can be parameterized as $\gamma(t)=[\sqrt[3]{2}t^2,\sqrt{3}t^3,1]$. The image of the dual curve of $\gamma$ in $P({\boldsymbol{C}}^3)$ is the image of $\gamma$ itself, and $\gamma$ has an $A_2$-type singular point $[0,0,1]$ and an $A_2$-inflection point $[0,1,0]$.
These two points are interchanged by the duality. (The duality on fronts is explained in Section \[sec:dual\].)
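The inflection point can be checked in an affine chart around $[0,1,0]$; the following minimal symbolic sketch (not part of the original argument; the chart $y=1$ and the substitution $u=1/t$ are our own choices of coordinates) verifies the $A_2$-inflection criterion there.

```python
# Sketch: check the A_2-inflection criterion for the cubic 2y^2 = 3x^3 at [0,1,0].
# In the chart y = 1 the curve gamma(t) = [2^(1/3) t^2, 3^(1/2) t^3, 1] becomes
# (2^(1/3) u / sqrt(3), u^3 / sqrt(3)) with u = 1/t, and u = 0 is the point [0,1,0].
import sympy as sp

u = sp.symbols('u')
g = sp.Matrix([sp.cbrt(2)*u/sp.sqrt(3), u**3/sp.sqrt(3)])

d1, d2, d3 = (g.diff(u, k) for k in (1, 2, 3))
print(sp.Matrix.hstack(d1, d2).det().subs(u, 0))   # 0: the point is an inflection
print(sp.Matrix.hstack(d1, d3).det().subs(u, 0))   # 2*2**(1/3) != 0: it is of A_2-type
```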
Let $F:{\boldsymbol{K}}^3\to{\boldsymbol{K}}^4$ be a map defined by $$F(u,v,w)
=
\raisebox{0.3cm}{${}^t$}\!\!\!
\left(
w,u,v,
-u^2-\dfrac{3v^2}{2}+uw^2+vw^3
-\dfrac{w^4}{4}+\dfrac{w^5}{5}-\dfrac{w^6}{6}
\right)
\qquad (u,v,w\in {\boldsymbol{K}}).$$ If we define ${\mathcal{G}}:{\boldsymbol{K}}^3\to P({\boldsymbol{K}}^4)\cong P(({\boldsymbol{K}}^4)^{*})$ by $${\mathcal{G}}(u,v,w) =
[-2uw-3vw^2+w^3-w^4+w^5,2u-w^2,3v-w^3,1]$$ using the homogeneous coordinate system, ${\mathcal{G}}$ gives the affine Gauss map of $F$. Then the Hessian $h$ of $F$ is $$\det
{{\begin{pmatrix}
-2&0&2w\\
0&-3&3w^2\\
2w&3w^2&2u+6vw-3w^2+4w^3-5w^4
\end{pmatrix}}}
=6( 2u+6vw-w^2+4w^3-2w^4).$$ The asymptotic vector field is $\xi=(w,w^2,1)$. Hence we have $$\begin{aligned}
h&=6(2u+6vw-w^2+4w^3-2w^4),\\
h'&=12(3v+6w^2-w^3),\qquad
h''=144w,\qquad
h'''=144,
\end{aligned}$$ where $h'=dh(\xi)$, $h''=dh'(\xi)$ and $h'''=dh''(\xi)$. The Jacobi matrix of $(h,h',h'')$ at ${\boldsymbol{0}}$ is $${{\begin{pmatrix} 12 & * & * \\
0 & 36 & * \\
0 & 0 & 144
\end{pmatrix}}}.$$ This implies that $\xi$ is $3$-nondegenerate at ${\boldsymbol{0}}$ but does not meet $I(F)=h^{-1}(0)$ at ${\boldsymbol{0}}$ with multiplicity $4$, that is, $F$ has an $A_{4}$-inflection point at ${\boldsymbol{0}}$. On the other hand, ${\mathcal{G}}$ has the ${A}_3$-Morin singularity at ${\boldsymbol{0}}$. In fact, by the coordinate change $$U=2u-w^2,\qquad V=3v-w^3,\qquad W=w,$$ it follows that ${\mathcal{G}}$ is represented by a map germ $$(U,V,W)\longmapsto
-(UW+VW^2+W^4,U,V).$$ This coincides with the typical ${A}_3$-Morin singularity given in (A.3) in [@SUY3].
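The computations in this example are straightforward but slightly lengthy; the following symbolic sketch (our own check, not part of the argument) reproduces $h$, its derivatives along $\xi$, and the Jacobi matrix of $(h,h',h'')$ at the origin.

```python
# Sketch: verify the Hessian determinant h of F, its derivatives along the
# asymptotic vector field xi = (w, w^2, 1), and the Jacobi matrix of (h, h', h'') at 0.
import sympy as sp

u, v, w = sp.symbols('u v w')
f = -u**2 - sp.Rational(3, 2)*v**2 + u*w**2 + v*w**3 - w**4/4 + w**5/5 - w**6/6

h = sp.expand(sp.hessian(f, (u, v, w)).det())
print(sp.expand(h - 6*(2*u + 6*v*w - w**2 + 4*w**3 - 2*w**4)))   # 0, as in the text

xi = sp.Matrix([w, w**2, 1])
D = lambda phi: sp.expand((sp.Matrix([phi]).jacobian((u, v, w)) * xi)[0])
h1, h2, h3 = D(h), D(D(h)), D(D(D(h)))
print(sp.expand(h1 - 12*(3*v + 6*w**2 - w**3)), h2, h3)          # 0, 144*w, 144

J = sp.Matrix([h, h1, h2]).jacobian((u, v, w)).subs({u: 0, v: 0, w: 0})
print(J, J.rank())   # Matrix([[12, 0, 0], [0, 36, 0], [0, 0, 144]]), rank 3
```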
Duality on wave fronts {#sec:dual}
======================
Let $P({\boldsymbol{K}}^{n+2})$ be the $(n+1)$-projective space over ${\boldsymbol{K}}$. We denote by $[x]\in P({\boldsymbol{K}}^{n+2})$ the projection of a vector $x={{\vphantom{(}}^t{(}}x^0,\dots,x^{n+1})\in {\boldsymbol{K}}^{n+2}$. Consider a $(2n+3)$-submanifold of ${\boldsymbol{K}}^{n+2}\times ({\boldsymbol{K}}^{n+2})^*$ defined by $$\widetilde C:=
\{(x,y)\in {\boldsymbol{K}}^{n+2}\times ({\boldsymbol{K}}^{n+2})^*\,;\, x\cdot y=0\},$$ and also a $(2n+1)$-submanifold of $P({\boldsymbol{K}}^{n+2})\times P(({\boldsymbol{K}}^{n+2})^*)$ $$C:=\{([x],[y])\in P({\boldsymbol{K}}^{n+2})\times P(({\boldsymbol{K}}^{n+2})^*)
\,;\, x\cdot y=0\}.$$ As $C$ can be canonically identified with the projective tangent bundle $PTP({\boldsymbol{K}}^{n+2})$, it has a canonical contact structure: Let $\pi:\widetilde C\to C$ be the canonical projection, and define a $1$-form $$\omega:=\sum_{j=0}^{n+1} \left(x^jdy^j-y^j d{x^j}\right)$$ which is considered as a $1$-form on $\widetilde C$. The tangent vectors of the curves $t\mapsto (tx,y)$ and $t\mapsto (x,ty)$ at $(x,y)\in \widetilde C$ generate the kernel of $d\pi$. Since these two vectors also belong to the kernel of $\omega$ and $\dim(\ker\omega)=2n+2$, $$\Pi:=d\pi(\ker \omega)$$ is a $2n$-dimensional vector subspace of $T_{\pi(x,y)}C$. We shall see that $\Pi$ is the contact structure on $C$, which is defined to be [*the canonical contact structure*]{}: Let $U$ be an open subset of $C$ and $s:U\to {\boldsymbol{K}}^{n+2}
\times ({\boldsymbol{K}}^{n+2})^*$ a section of the fibration $\pi$. Since $d\pi\circ ds$ is the identity map, it can be easily checked that $\Pi$ is contained in the kernel of the $1$-form $s^*\omega$. Since $\Pi$ and the kernel of the $1$-form $s^*\omega$ are the same dimension, they coincide. Moreover, suppose that $p=\pi(x,y)\in C$ satisfies $x^i\ne 0$ and $y^j\ne 0$. We then consider a map of ${\boldsymbol{K}}^{n+1}\times ({\boldsymbol{K}}^{n+1})^*
\cong {\boldsymbol{K}}^{n+1}\times {\boldsymbol{K}}^{n+1}$ into ${\boldsymbol{K}}^{n+2}\times({\boldsymbol{K}}^{n+2})^*\cong {\boldsymbol{K}}^{n+2}\times {\boldsymbol{K}}^{n+2}$ defined by $$(a^0,\dots,a^{n},b^0,\dots,b^n)
\mapsto
(a^0,\dots,a^{i-1},1,a^{i+1},\dots,a^n,b^0,\dots,b^{j-1},1,b^{j+1},\dots,b^n),$$ and denote by $s_{i,j}$ the restriction of the map to the neighborhood of $p$ in $C$. Then one can easily check that $$s_{i,j}^*
\left[
\omega\wedge \left(\bigwedge\nolimits^{n}d\omega\right)
\right]$$ does not vanish at $p$. Thus $s_{i,j}^*\omega$ is a contact form, and the hyperplane field $\Pi$ defines a canonical contact structure on $C$. Moreover, the two projections from $C$ onto $P({\boldsymbol{K}}^{n+2})$ and onto $P(({\boldsymbol{K}}^{n+2})^*)$ are both Legendrian fibrations, namely we get a double Legendrian fibration. Let $f=[F]:M^{n}\to P({\boldsymbol{K}}^{n+2})$ be a front. Then there is a Legendrian immersion of the form $L=([F],[G])=\pi(F,G):M^{n}\to C$, and $g=[G]:M^{n}\to P(({\boldsymbol{K}}^{n+2})^*)$ is also a wave front; $f$ and $g$ can be regarded as mutually dual wave fronts as projections of $L$.
Since our contact structure on $C$ can be identified with the contact structure on the projective tangent bundle of $P({\boldsymbol{K}}^{n+2})$, we can apply the criterion of $A_k$-singularities as in Fact \[eq:fact\_suy\]. Thus a nondegenerate singular point $p$ is an $A_k$-singular point of $f$ if and only if the null vector field $\eta$ of $f$ (as a wave front) is $(k-1)$-nondegenerate at $p$, but does not meet the hypersurface $S(f)$ with multiplicity $k$ at $p$. As in the proof of Theorem \[thm:A\], we may assume that ${{\vphantom{F}}^t{F}}(p)\cdot F(p)\ne0$ and $G(p)\cdot {{\vphantom{G}}^t{G}}(p)\ne0$ simultaneously by a suitable affine transformation of ${\boldsymbol{K}}^{n+2}$, even when ${\boldsymbol{K}}={\boldsymbol{C}}$. Since $(F_{x^1},\dots,F_{x^n}, F, {{\vphantom{G}}^t{G}})$ is a regular $(n+2)\times (n+2)$-matrix if and only if $f=[F]$ is an immersion, the assertion immediately follows from the identity $$\label{eq:split2}
{{\begin{pmatrix}
h_{11}& \dots & h_{1n}& 0 & *\\
\vdots & \ddots & \vdots & \vdots & \vdots \\
h_{n1}& \dots & h_{nn} & 0 & *\\
0 & \dots & 0 & 0 & G\cdot {{\vphantom{G}}^t{G}} \\
* & \dots & * & {{\vphantom{F}}^t{F}}\cdot F & 0
\end{pmatrix}}}=
{{\begin{pmatrix}
G_{x^1}\\ \vdots\\ G_{x^n}\\ G \\ {{\vphantom{F}}^t{F}}
\end{pmatrix}}}
(F_{x^1},\dots,F_{x^n}, F, {{\vphantom{G}}^t{G}}).$$
Let $g:M^2\to P(({\boldsymbol{R}}^4)^*)$ be the dual of $f$. We fix $p\in M^2$ and take a simply connected and connected neighborhood $U$ of $p$.
Then there are lifts $\hat f,\hat g:U\to S^3$ into the unit sphere $S^3$ such that $$\hat f\cdot \hat g=0,\qquad
d\hat f(v)\cdot \hat g=d\hat g(v)\cdot \hat f=0 \quad (v\in TU),$$ where $\cdot$ is the canonical inner product on ${\boldsymbol{R}}^4\supset S^3$. Since $\hat f\cdot \hat f=1$, we have $$d\hat f(v)\cdot \hat f(p)
=0\qquad (v\in T_pM^2).$$ Thus $$d\hat f(T_pM^2)=\{\zeta \in {\boldsymbol{R}}^4\,;\,
\zeta\cdot \hat f(p)=\zeta \cdot \hat g(p)=0\},$$ which implies that $d\hat f(TM^2)$ is equal to the limiting tangent bundle of the front $g$. So we apply (2.5) in [@SUY2] to $g$: Since the singular set $S(g)$ of $g$ consists only of cuspidal edges and swallowtails, the Euler number of $S(g)$ vanishes. Then it holds that $$\chi(M_+)+\chi(M_-)=\chi(M^2)=\chi(M_+)-\chi(M_-)
+i_2^+(f)-i_2^-(f),$$ which proves the formula.
When $n=2$, the duality of two fronts in the unit $2$-sphere $S^2$ (as the double cover of $P({\boldsymbol{R}}^3)$) plays a crucial role in obtaining the classification theorem in [@MU] for flat fronts with embedded ends in ${\boldsymbol{R}}^3$. Also, a relationship between the number of inflection points and the number of double tangents on a certain class of simple closed regular curves in $P({\boldsymbol{R}}^3)$ is given in [@tu3]. (For the geometry and a duality of fronts in $S^2$, see [@A].) In [@porteous], Porteous investigated the duality between $A_k$-singular points and $A_k$-inflection points when $k=2,3$ on a surface in $S^3$.
Cuspidal curvature on $3/2$-cusps {#sec:cusp}
=================================
Relating to the duality between singular points and inflection points, we introduce a curvature on $3/2$-cusps of planar curves:
Suppose that $(M^2,g)$ is an oriented Riemannian manifold, $\gamma:I\to M^2$ is a front, $\nu(t)$ is a unit normal vector field, and $I$ an open interval. Then $t=t_0\in I$ is a $3/2$-cusp if and only if $\dot\gamma(t_0)={\boldsymbol{0}}$ and $\Omega(\ddot\gamma(t_0),\dddot\gamma(t_0))\ne 0$, where $\Omega$ is the unit $2$-form on $M^2$, that is, the Riemannian area element, and the dot means the covariant derivative. When $t=t_0$ is a $3/2$-cusp, $\dot\nu(t)$ does not vanish (if $M^2={\boldsymbol{R}}^2$, it follows from Proposition $\mathrm A'$). Then we take the (arclength) parameter $s$ near $\gamma(t_0)$ so that $|\nu'(s)|=\sqrt{g(\nu'(s),\nu'(s))}=1$ ($s\in I$), where $\nu'=d\nu/ds$. Now we define the [*cuspidal curvature*]{} $\mu$ by $$\mu:=
2\operatorname{sgn}(\rho)\left. \sqrt{\left |{ds}/{d\rho}\right |}
\right|_{s=s_0} \qquad (\rho=1/\kappa_g),$$ where we choose the unit normal $\nu(s)$ so that it is smooth around $s=s_0$ ($s_0=s(t_0)$). If $\mu>0$ (resp. $\mu<0$), the cusp is called [*positive*]{} (resp. [*negative*]{}). It is an interesting phenomenon that the left-turning cusps have negative cuspidal curvature, although the left-turning regular curves have positive geodesic curvature (see Figure \[cusps\]).
![a positive cusp and a negative cusp[]{data-label="cusps"}](cusp23.eps){width="4cm"}
$(\mu>0)\qquad \qquad (\mu<0)$
Then it holds that $$\label{eq:general3}
\mu= \left.\frac{\Omega(\ddot \gamma(t),\dddot \gamma(t))}
{|\ddot \gamma(t)|^{5/2}}\right|_{t=t_0}
=
\left. 2\frac{\Omega(\nu(t),\dot \nu(t))}{ \sqrt{|\Omega(\ddot\gamma(t),\nu(t))|}}
\right|_{t=t_0}.$$ We now examine the case that $(M^2,g)$ is the Euclidean plane ${\boldsymbol{R}}^2$, where $\Omega(v,w)$ ($v,w\in {\boldsymbol{R}}^2$) coincides with the determinant $\det(v,w)$ of the $2\times 2$-matrix $(v,w)$. A [*cycloid*]{} is a curve obtained from $c(t):=a(t-\sin t,1-\cos t)$ ($a>0$) by a rigid motion, and $a$ is called the [*radius*]{} of the cycloid. The cuspidal curvature of $c(t)$ at $t\in 2\pi \mathbf Z$ is equal to $-1/\sqrt{a}$. In [@ume], the second author proposed to take, as a curvature at a $3/2$-cusp, the inverse of the radius of the cycloid which gives the best approximation of the given $3/2$-cusp. As shown in the next proposition, $\mu^2$ has exactly this property:
Suppose that $\gamma(t)$ has a $3/2$-cusp at $t=t_0$. Then by a suitable choice of the parameter $t$, there exists a unique cycloid $c(t)$ such that $$\gamma(t)-c(t)=o((t-t_0)^3),$$ where $o((t-t_0)^3)$ denotes terms of order higher than $(t-t_0)^3$. Moreover, the square of the absolute value of the cuspidal curvature of $\gamma(t)$ at $t=t_0$ is equal to the inverse of the radius of the cycloid $c$.
Without loss of generality, we may set $t_0=0$ and $\gamma(0)={\boldsymbol{0}}$. Since $t=0$ is a singular point, there exist smooth functions $a(t)$ and $b(t)$ such that $\gamma(t)=t^2(a(t),b(t))$. Since $t=0$ is a $3/2$-cusp, $(a(0),b(0))\ne {\boldsymbol{0}}$. By a suitable rotation of $\gamma$, we may assume that $b(0)\ne 0$ and $a(0)=0$. Without loss of generality, we may assume that $b(0)>0$. By setting $s=t\sqrt{b(t)}$, $\gamma(s)=\gamma(t(s))$ has the expansion $$\gamma(s)=(\alpha s^3, s^2)+o(s^3) \qquad (\alpha\ne 0).$$ Since the cuspidal curvature changes sign by reflections on ${\boldsymbol{R}}^2$, it is sufficient to consider the case $\alpha>0$. Then, the cycloid $$c(t):=\frac{2}{9\alpha^2}(t-\sin t,1-\cos t)$$ is the desired one by setting $s=t/(3\alpha)$.
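Both the value $-1/\sqrt{a}$ for the cycloid and the third-order agreement claimed in the proof can be verified symbolically; the following minimal sketch (our own check, using the normal forms stated above) does so.

```python
# Sketch: (i) the cuspidal curvature of the cycloid c(t) = a(t - sin t, 1 - cos t)
# at t = 0, computed from det(c'', c''')/|c''|^(5/2), equals -1/sqrt(a);
# (ii) gamma(s) = (alpha*s^3, s^2) and the cycloid of radius 2/(9*alpha^2),
# reparameterized by t = 3*alpha*s, agree up to order s^3.
import sympy as sp

t, s, a, alpha = sp.symbols('t s a alpha', positive=True)

c = a*sp.Matrix([t - sp.sin(t), 1 - sp.cos(t)])
c2, c3 = c.diff(t, 2), c.diff(t, 3)
mu = sp.Matrix.hstack(c2, c3).det() / c2.norm()**sp.Rational(5, 2)
print(sp.simplify(mu.subs(t, 0)))            # -1/sqrt(a)

gamma = sp.Matrix([alpha*s**3, s**2])
cyc = (sp.Rational(2, 9)/alpha**2) * sp.Matrix([3*alpha*s - sp.sin(3*alpha*s),
                                                1 - sp.cos(3*alpha*s)])
diff = (gamma - cyc).applyfunc(lambda e: sp.series(e, s, 0, 4).removeO())
print(sp.simplify(diff))                     # zero vector, i.e. gamma - c = o(s^3)
```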
It is well-known that the cycloids are the solutions of the brachistochrone problem. We propose to call the number $1/|\mu|^2$ the [*cuspidal curvature radius*]{}; it corresponds to the radius of the best approximating cycloid $c$.
During the second author’s stay at Saitama University, Toshizumi Fukui pointed out the following: Let $\gamma(t)$ be a regular curve in ${\boldsymbol{R}}^2$ with non-vanishing curvature function $\kappa(t)$. Suppose that $t$ is the arclength parameter of $\gamma$. For each $t=t_0$, there exists a unique cycloid $c$ such that a point on $c$ gives the best approximation of $\gamma(t)$ at $t=t_0$ (namely $c$ approximates $\gamma$ up to the third jet at $t_0$). The angle $\theta(t_0)$ between the axis (i.e. the normal line of $c$ at the singular points) of the cycloid and the normal line of $\gamma$ at $t_0$ is given by $$\label{c:theta}
\sin \theta=\frac{\kappa^2}{\sqrt{\kappa^4+\dot\kappa^2}},$$ and the radius $a$ of the cycloid is given by $$\label{c:k}
a:=\frac{\sqrt{\kappa^4+\dot\kappa^2}}{|\kappa|^3}.$$ One can prove (\[c:theta\]) and (\[c:k\]) by straightforward calculations. The cuspidal curvature radius can be considered as the limit of the radius $a$ of this approximating cycloid.
[99]{} V. I. Arnold, [*The geometry of spherical curves and the algebra of quaternions*]{}, Uspekhi Mat. Nauk [**50**]{}(1995) 3-68, English transl. in Russian Math. Surveys [**50**]{} (1995).
T. Banchoff, T. Gaffney and C. McCrory, [Cusps of Gauss Mappings]{}, Pitman publ. (1982).
D. Bleecker and L. Wilson, [*Stability of Gauss maps*]{}, Illinois J. of Math. [**22**]{} (1978) 279–289.
S. Izumiya, D. Pei and T. Sano, [*Singularities of hyperbolic Gauss maps*]{}, Proc. London Math. Soc. [**86**]{} (2003), 485–512
S. Izumiya, [*Legendrian dualities and spacelike hypersurfaces in the lightcone*]{}, Moscow Math. J. [**9**]{} (2009), 325–357.
S. Murata and M. Umehara, [ *Flat surfaces with singularities in Euclidean $3$-space*]{}, J. of Differential Geometry, [**82**]{} (2009), 279–316.
I. R. Porteous, [ *Some remarks on duality in $S^3$*]{}, Geometry and topology of caustics, Banach Center Publ. [**50**]{}, Polish Acad. Sci., Warsaw, (2004), 217–226.
K. Saji, M. Umehara and K. Yamada, [*The geometry of fronts*]{}, Ann. of Math. [**169**]{} (2009), 491–529.
K. Saji, M. Umehara and K. Yamada, [*Behavior of corank one singular points on wave fronts*]{}, Kyushu J. of Mathematics [**62**]{} (2008), 259–280.
K. Saji, M. Umehara and K. Yamada, [*$A_k$-singularities of wave fronts*]{}, Math. Proc. Cambridge Philosophical Soc. [**146**]{} (2009), 731–746.
G. Thorbergsson and M. Umehara, [*Inflection points and double tangents on anti-convex curves in the real projective plane*]{}, Tôhoku Math. J. [**60**]{} (2008), 149–181.
M. Umehara, , (in Japanese), The world of Singularities (ed. H. Arai, T. Sunada and K. Ueno), Nippon-Hyoron-sha Co., Ltd. (2005), 50–64.
R. Uribe-Vargas, [*A projective invariant for swallowtails and godrons, and global theorems on the flecnodal curve*]{}, Moscow Math. J. [**6**]{} (2006), 731–768.
---
abstract: 'The interplay between two basic quantities – quantum communication and information – is investigated. Quantum communication is an important resource for quantum states shared by two parties and is directly related to entanglement. Recently, the amount of [*local information*]{} that can be drawn from a state has been shown to be closely related to the non-local properties of the state. Here we consider both formation and extraction processes, and analyze informational resources as a function of quantum communication. The resulting diagrams in [*information space*]{} allow us to observe phase-like transitions when correlations become classical.'
author:
- 'Jonathan Oppenheim$^{(1)(2)}$, Michał Horodecki$^{(2)}$, and Ryszard Horodecki$^{(2)}$'
title: 'Are there phase transitions in information space?'
---
Quantum communication (QC) – the sending of qubits between two parties – is a primitive concept in quantum information theory. Entanglement cannot be created without it, and conversely, entanglement between two parties can be used to communicate quantum information through teleportation [@teleportation]. The amount of quantum communication needed to prepare a state, and the amount of quantum communication that a state enables one to perform, are fundamental properties of states shared by two parties. These amounts are identical to the entanglement cost $\EF$ [@cost] and the entanglement of distillation $\ED$ [@BBPS1996], respectively. Perhaps surprisingly, these two quantities are different [@Vidal-cost2002]. There are even states which are “bound” in the sense that quantum communication is needed to create them, but nothing can be obtained from them [@bound]. Yet QC is a distinct notion from entanglement. For example, one may need to use a large amount of QC while creating some state of low $E_c$, in order to save some other resource. In the present paper we will consider such a situation. The second primitive resource of interest will be [*information*]{}, which quantifies how pure a state is. The motivation comes from (both classical and quantum) thermodynamics: it is known that bits that are in a pure state can be used to extract physical work from a single heat bath [@Szilard], and conversely work is required to reset mixed states to a pure form [@Landauer; @Bennett82].
For distant parties, in order to use information to perform such tasks, it must first be localized. In [@OHHH2001] we considered how much information (i.e. pure states) can be localized (or drawn) from a state shared between two parties. Thus far, the amount of information needed to prepare a state has not been considered, a possible exception being in [@Bennett-nlwe] where it was noted that there was a thermodynamical irreversibility between preparation and measurement for ensembles of certain pure product states. However, given the central role of quantum communication and information, it would be of considerable importance to understand the interplay between these two primitive resources. In this Letter, we attempt to lay the foundation for this study by examining how much information is needed to prepare a state and how much can be extracted from it as a function of quantum communication. For a given state, this produces a curve in [*information space*]{}. The shapes of the curve fall into a number of distinctive categories which classify the state and only a small number of parameters are needed to characterize them. The curves for pure states can be calculated exactly, and they are represented by a one parameter family of lines of constant slope. The diagrams exhibit features reminiscent of thermodynamics, and phase-like transitions (cf. [@dorit-phase]) are observed.
An important quantity that emerges in this study is the [*information surplus $\DF$*]{}. It quantifies the additional information that is needed to prepare a state when quantum communication resources are minimized. $\DF$ tells us how much information is dissipated during the formation of a state; it is therefore closely related to the irreversibility of state preparation, and hence to the difference between the entanglement of distillation and the entanglement cost. When it is zero, there is no irreversibility in entanglement manipulations. Examples of states with $\DF=0$ include pure states, and states with an optimal decomposition [@BBPS1996] which is locally orthogonal.
Consider two parties in distant labs, who can perform local unitary operations, dephasing[@dephasing], and classical communication. It turns out to be simpler to substitute measurements with dephasing operations, since we no longer need to keep track of the informational cost of resetting the measuring device. (This cost was noted by Landauer [@Landauer] and used by Bennett [@Bennett82] to exorcize Maxwell’s demon.) The classical communication channel can also be thought of as a dephasing channel. Finally, we allow Alice and Bob to add noise (states which are proportional to the identity matrix) since pure noise contains no information. Note that we are only interested in accounting for resources that are “used up” during the preparation procedure. For example, a pure state which is used and then reset to its original state at the end of the procedure does not cost anything. Consider now the information extraction process of [@OHHH2001]. If the two parties have access to a quantum channel, and share a state $\rab$, they can extract all the information from the state, $$I=n-S(\rab), \label{eq:info}$$ where $n$ is the number of qubits of the state, and $S(\rab)$ is its von Neumann entropy. Put another way, the state can be compressed, leaving $I$ pure qubits. However, if two parties do not have access to a quantum channel, and can only perform local operations and communicate classically (LOCC), then in general, they will be able to draw less local information from the state. In [@OHHH2001] we defined the notion of the [*deficit*]{} $\DD$ to quantify the information that can no longer be drawn under LOCC. For pure states, it was proven that $\DD$ was equal to the amount of distillable entanglement in the state.
Let us now turn to formation processes and define $\DFP$ as follows. Given an amount of quantum communication $Q$, the amount of information (pure states) needed to prepare the state $\rab$ under LOCC is given by $\IFP$. Clearly at least $\EF$ bits of quantum communication are necessary. In general, $\IFP$ will be greater than the information content $I$. The surplus is then $$\DFP=\IFP-I .$$
The two end points are of particular interest, i.e. $\DF\equiv\DF(\EF)$ where quantum communication is minimized, and $\DF(\ER)=0$ where we use the quantum channel enough times that $\IF(\ER)=I$. Clearly $\EF\leq \ER \leq \min\{S(\r_A),S(\r_B)\}$, where $\r_A$ is obtained by tracing out Bob’s system. This rough bound is obtained by noting that, at a minimum, Alice or Bob can prepare the entire system locally, and then send it to the other party through the quantum channel (after compressing it). We will obtain a tight bound later in this paper.
The general procedure for state preparation is that Alice uses a [*classical instruction set*]{} (ancilla in a classical state) with a probability distribution matching that of the decomposition which is optimal for a given $Q$. Since the instruction set contains classical information, it can be passed back and forth between Alice and Bob. Additionally they need $n$ pure standard states. The pure states are then correlated with the ancilla, and then sent. The ancilla need not be altered by this procedure, and can be reset and then reused, and so at worst we have $\IF\equiv \IF(\EF)\leq n$ and $\DF \leq S(\rab)$. We will shortly describe how one can do better by extracting information from correlations between the ancilla and the state.
The pairs $(Q, \IFP)$ form curves in [*information space*]{}. In Figure \[generic\]
we show a typical curve which we now explain. Since we will be comparing the formation curves to extraction curves, we will adopt the convention that $\IFP$ and $Q$ will be plotted on the negative axis since we are using up resources. It can be shown that $\IFP$ is concave, monotonic and continuous. To prove concavity, we take the limit of many copies of the state $\rab$. Then given any two protocols, we can always toss a coin weighted with probabilities $p$ and $1-p$ and perform one of the protocols with this probability. There will always be a protocol which is at least as good as this. Monotonicity is obvious (additional quantum communication can only help), and continuity follows from monotonicity, and the existence of the probabilistic protocol.
The probabilistic protocol can be drawn as a straight line between the points $(\ER,\IF(\ER))$ and $(\EF,\IF(\EF))$. There may however exist a protocol which has a lower $\IFP$ than this straight line, i.e. the curve $\IFP$ satisfies $$\IFP\leq I+\frac{\DF}{\EF-\ER}(Q-\ER) .$$
Let us now look at extraction processes. The idea is that we draw both local information (pure separable states), and distill singlets. The singlets allow one to perform teleportations, so that we are in fact, extracting the potential to use a quantum channel. We can also consider the case where we use the quantum channel to assist in the information extraction process. We can therefore write the extractable information $\IDP$ as a function of $Q$. When $Q$ is positive, we distill singlets at the same time as drawing information, and when $Q$ is negative, we are using the quantum channel $Q$ times to assist in the extraction (see also Figure \[generic\]).
There appear to be at least three special points on the curve. The first, is the point $\ID\equiv\ID(0)$. This was considered in [@OHHH2001] when we draw maximal local information without the aid of a quantum channel. Another special point is the usual entanglement distillation procedure $\IG=\ID(\ED)$. The quantity $\IG$ is the amount of local information extractable from the “garbage” left over from distillation. $\IG$ can be negative as information may need to be added to the system in order to distill all the available entanglement. Finally, $I=\ID(\ER)$ is the point where we use the quantum channel enough times that we can extract all the available information. This is the same number of times that the quantum channel is needed to prepare the state without any information surplus since both procedures are now reversible.
Just as with the formation curve, $\IDP$ is convex, continuous and monotonic. For $Q\geq 0$ there is an upper bound on the extraction curve due to the classical/quantum complementarity of [@compl]: $$I\geq \IDP+Q . \label{eq:compl}$$ It arises because one bit of local information can be drawn from each distilled singlet, destroying the entanglement. One might suppose that the complementarity relation (\[eq:compl\]) can be extended into the region $Q< 0$. Perhaps surprisingly, this is not the case, and we have found that a small amount of quantum communication can free up a large amount of information. In Figure \[various\]a we plot the region occupied by pure states.
For extraction processes, pure states saturate the bound of Eq. (\[eq:compl\]) [@compl]. For formation processes they are represented as points.
In general, if $\DF=0$ then $\EF=\ED$. Examples include mixtures of locally orthogonal pure states[@termo]. The converse is not true, at least for single copies, as there are separable states such as those of [@Bennett-nlwe] which have $\DF\neq 0$, and $\DD\neq 0$.
It therefore appears that $\DF$ is not a function of the entropy of the state, or of the entanglement, but rather shows how chaotic the quantum correlations are. It can also be thought of as the information that is dissipated during a process, while $\DD$ can be thought of as the [*bound information*]{} which cannot be extracted under LOCC. Figure \[various\]b-d shows the curves for some different types of states. It is interesting to what extent one can classify the different states just by examining the diagrams in information space.
The quantities we are discussing have (direct or metaphoric) connections with thermodynamics. Local information can be used to draw physical work, and quantum communication has been likened to quantum logical work [@termo]. One is therefore tempted to investigate whether there can be some effects similar to phase transitions. Indeed, we will demonstrate such an effect for a family of mixed states where the transition is of second order, in that the derivative of our curves will behave in a discontinuous way.
To this end we need to know more about $\ER$ and $I_f$. Consider the notion of [*-orthogonality*]{} (cf. [@termo]). One says that the $\varrho^i$ are -orthogonal if by LOCC Alice and Bob can transform $\sum_ip_i|i\>_{A'}\<i|\otimes\varrho^i_{AB}$ into $|0\>_{A'}\<0|\otimes \sum_ip_i\varrho^i_{AB}$ and vice versa; $|i\>_{A'}$ is the basis of Alice’s ancilla. In other words, Alice and Bob are able to correlate the states $\varrho_i$ with orthogonal states of a local ancilla as well as reset the correlations. Consider a state $\varrho_{AB}$ that can be -decomposed, i.e. it is a mixture of -orthogonal states, $\varrho=\sum_ip_i\varrho_i$. The decomposition suggests a scheme for reversible formation of $\varrho$. Alice prepares locally the state $\varrho_{A'AB}=\sum_ip_i|i\>_{A'}\<i|\otimes\varrho^i_{AB}$. This costs $n_{A'AB}-S(\varrho_{A'AB})$ bits of information. Conditioned on $i$, Alice compresses the halves of $\varrho_i$, and sends them to Bob via a quantum channel. This costs $\sum_ip_i S(\varrho^i_B)$ qubits of quantum communication. Then, since the $\varrho_i$ are -orthogonal, Alice and Bob can reset the ancilla, and return $n_{A'}$ bits. One then finds that, in this protocol, formation costs $n_{AB}-S(\varrho_{AB})$ bits; hence it is reversible. Consequently $\ER\leq \sum_ip_i S(\varrho^i_B)$, hence $$\ER(\varrho)\leq \min_X\inf\sum_ip_i S(\varrho_X^i) ,\qquad X=A,B, \label{eq:er-bound}$$ where the infimum runs over all -orthogonal decompositions of $\rab$.
We can also estimate $\IF$ by observing that the optimal decomposition for entanglement cost is compatible with -orthogonal decompositions, i.e. it is of the form $\{ p_i q_{ij}, \psi_{ij}\}$ where $\sum_j q_{ij} |\psi_{ij}\>\<\psi_{ij}|=\varrho_i$. Now, Alice prepares locally the state $\varrho_{A'A''AB}=\sum_{i,j}p_iq_{ij} |i\>_{A'}\<i|\otimes
|j\>_{A''}\<j| \otimes |\psi_{ij}\>_{AB}\<\psi_{ij}|$. Conditional on $ij$, Alice compresses the halves of $\psi_{ij}$’s and sends them to Bob. This costs on average $\EF$ qubits of communication. So far Alice borrowed $n_{A'A''AB}- S(\varrho_{A'A''AB})$ bits. Alice and Bob then reset and return ancilla $A'$ (this is possible due to -orthogonality of $\varrho_i$) and also return ancilla $A''$ without resetting. The amount of bits used is $n_{AB} - (S(\varrho_{AB})-
\sum_ip_iS(\varrho_i))$, giving $$\DF\leq \inf\sum_ip_iS(\varrho_i)\leq S(\varrho), \label{eq:Delta-bound}$$ where, again, the infimum runs over the same set of decompositions as in Eq. (\[eq:er-bound\]), providing a connection between $\DF$ and $\ER$. In the procedure above, collective operations were used only in the compression stage. In such a regime the above bounds are tight. There is a question whether, by some sophisticated collective scheme, one can do better. We conjecture that this is not the case, supported by the remarkable result of [@HaydenJW2002]. The authors show that an ensemble of nonorthogonal states cannot be compressed to less than $S(\varrho)$ qubits even at the expense of classical communication. In our case orthogonality is replaced by -orthogonality, and classical communication by resetting. We thus assume equality in Eqs. (\[eq:er-bound\]) and (\[eq:Delta-bound\]). Thus for a state that is not -decomposable (this holds for all two-qubit states that do not have a product eigenbasis) we have $\DF=S(\rab)$, $\ER=\min\{S(\varrho_A),S(\varrho_B)\}$.
Having fixed two extremal points of our curves, let us see if there is a protocol which is better than the probabilistic one (a straight line on the diagram). We need to find some intermediate protocol which is cheap in both resources. The protocol is suggested by the decomposition $\varrho=\sum_ip_i\varrho_i$ where the $\varrho_i$ are themselves mixtures of -orthogonal [*pure*]{} states. Thus Alice can share with Bob each $\varrho_i$ at a communication cost of $Q=\sum_ip_i \EF(\varrho_i)$. If the states $\varrho_i$ are not -orthogonal, Alice cannot reset the instruction set, so that the information cost is $I=n-\sum_ip_iS(\varrho_i)$. We will now show by example that this may be a very cheap scenario. Consider $$\varrho= p|\psi_+\>\<\psi_+|+(1-p)|\psi_-\>\<\psi_-|, \qquad p\leq \frac{1}{2}, \label{eq:bellmix}$$ with $\psi_\pm={1\over \sqrt2}(|00\>\pm|11\>)$. When $p\not=0$ we have $\ER=1$, $\IF=2$, $E_c=H({1\over 2} + \sqrt{p(1-p)})$ [@Vidal-cost2002] where $H(x)=-x\log x - (1-x) \log (1-x)$ is the binary entropy; thus our extreme points are $(1,2-H(p))$ and $(E_c, 2)$. For $p=0$ the state has $\DD=0$, hence the formation curve is just a point. We can however plot it as a line $I=1$ (increasing $Q$ will not change $I$). Now, we decompose the state as $\varrho=2p \varrho_s + (1-2p)|\psi_-\>\<\psi_-|$, where $\varrho_s$ is an equal mixture of -orthogonal states $|00\>$ and $|11\>$. The intermediate point is then $(1-2p,2-2p)$. A family of diagrams with changing parameter $p$ is plotted in Fig. (\[fig:phase\]). The derivative $\chi(Q)={\partial Q\over \partial I}$ has a singularity at $p=1/2$. Thus we have something analogous to a second-order phase transition. The quantity $\chi(Q)$ may be analogous to a quantity such as the magnetic susceptibility. The transition is between states having $\DD=0$ (classically correlated)[@OHHH2001] and states with $\DD\not=0$ which contain quantum correlations. It would be interesting to explore these transitions and diagrams further, and also the trade-off between information and quantum communication. To this end, the quantity $\DFP+Q-\EF$ appears to express this trade-off. Finally, we hope that the presented approach may clarify an intriguing notion in quantum information theory, known as the [*thermodynamics of entanglement*]{} [@popescu-rohrlich; @termo; @thermo-ent2002].
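For concreteness, the three points of the piecewise-linear formation protocol discussed above can be tabulated numerically; the short sketch below (our own illustration, using only the expressions for the end points and the intermediate point derived in the text) shows how the kink in the diagram sharpens as $p\to 1/2$.

```python
# Sketch: formation points (Q, I_f) for rho = p|psi+><psi+| + (1-p)|psi-><psi-|:
# (E_c, 2), (1-2p, 2-2p) and (1, 2-H(p)), and the slopes dI_f/dQ of the two
# linear segments joining them.
import numpy as np

def H(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def formation_points(p):
    E_c = H(0.5 + np.sqrt(p*(1 - p)))        # entanglement cost of the state
    return [(E_c, 2.0), (1 - 2*p, 2 - 2*p), (1.0, 2 - H(p))]

for p in (0.05, 0.25, 0.45, 0.49):
    pts = formation_points(p)
    slopes = [(pts[i+1][1] - pts[i][1]) / (pts[i+1][0] - pts[i][0]) for i in range(2)]
    print(f"p={p:.2f}  points={[(round(q, 3), round(i, 3)) for q, i in pts]}  "
          f"slopes={[round(m, 2) for m in slopes]}")
```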
[**Acknowledgments**]{}: This work was supported by EU grant EQUIP, Contract IST-1999-11053. J.O. acknowledges Lady Davis, and grant 129/00-1 of the ISF. M.H. and R.H. thank C. Tsallis for discussions.
[17]{}
, ****, ().
, , , ****, (), .
, ****, () .
, , , ****, (), .
, , , ****, (), .
, ****, ().
, ****, ().
, ****, ().
, , , , ****, (), .
, ****, () .
, .
.
, , , , , .
, , , ****, (), .
, , , .
, ****, (), .
, , , , .
---
abstract: 'Based on a geometrical argument introduced by Żukowski, a new multisetting Bell inequality is derived, for the scenario in which many parties make measurements on two-level systems. This generalizes and unifies some previous results. Moreover, a necessary and sufficient condition for the violation of this inequality is presented. It turns out that the class of non-separable states which do not admit local realistic description is extended when compared to the two-setting inequalities. However, supporting the conjecture of Peres, quantum states with positive partial transposes with respect to all subsystems do not violate the inequality. Additionally, we follow a general link between Bell inequalities and communication complexity problems, and present a quantum protocol linked with the inequality, which outperforms the best classical protocol.'
author:
- Koji Nagata
- 'Wies[ł]{}aw Laskowski'
- Tomasz Paterek
title: Bell inequality with an arbitrary number of settings and its applications
---
Introduction
============
Any theory based on classical concepts, such as locality and realism, predicts bounds on the correlations between measurement outcomes obtained at spatially separated locations [@BELL]. These bounds are known as Bell inequalities (see [@REVIEWS] for reviews). Remarkably, the correlations measured on certain quantum states violate Bell inequalities, implying incompatibility between the quantum and classical worldviews. Which are these non-classical states of quantum mechanics? Here, we present a tool which allows one to extend the class of non-classical states, and gives further evidence that there may exist many-particle entangled states whose correlations admit a local realistic description.
Beyond their fundamental role, with the emergence of quantum information [@NC], Bell inequalities have found practical applications. Quantum advantages of certain protocols, like quantum cryptography [@CRYPTOGRAPHY] or quantum communication complexity [@BRUKNER_PRL], are linked with Bell inequalities. Thus, new inequalities lead to new schemes. As an example, we present a communication complexity problem associated with the new multisetting inequality.
Specifically, based on a geometrical argument by Żukowski [@ZUKOWSKI_PLA], a Bell inequality for many observers, each choosing between an arbitrary number of dichotomic observables, is derived. Many previously known inequalities are special cases of this new inequality, e.g. the Clauser-Horne-Shimony-Holt inequality [@CHSH] or the tight two-setting inequalities [@MERMIN]. The new inequalities are maximally violated by the Greenberger-Horne-Zeilinger (GHZ) states [@GHZ]. Many other states violate them, including states which satisfy the two-setting inequalities [@ZBLW] and bound entangled states [@DUR]. This is shown using the necessary and sufficient condition for the violation of the inequalities. Finally, it is proven that the Bell operator has only two non-vanishing eigenvalues which correspond to the GHZ states, and thus has a very simple form. This form is utilized to show that quantum states with positive partial transposes [@PPT] with respect to all subsystems (in general a necessary but not sufficient condition for separability [@HORODECKI]) do not violate the new inequalities. This is further supporting evidence for a conjecture by Peres that positivity of partial transposes could lead us to the existence of a local realistic model [@PERES].
The paper is organized as follows. In section II we present the multisetting inequality. In section III the necessary and sufficient condition for a violation of the inequality is derived, and examples of non-classical states are given. Next, we support the conjecture by Peres in section IV, and follow the link with communication complexity problems in section V. Section VI summarizes this paper.
Multisetting Bell Inequalities
==============================
Consider $N$ separated parties making measurements on two-level systems. Each party can choose one of $M$ dichotomic observables, with values $\pm 1$. In this scenario the parties can measure $M^N$ correlations $E_{m_1...m_N}$, where the index $m_n=0,...,M-1$ denotes the setting of the $n$th observer. A general Bell expression, which involves these correlations with some coefficients $c_{m_1...m_N}$, can be written as: $$\sum_{m_1, ..., m_N =0}^{M-1} c_{m_1...m_N} E_{m_1...m_N} = \vec C \cdot \vec E.
\label{GENERAL_BELL}$$ In what follows we assume certain form of coefficients $c_{m_1...m_N}$, and compute local realistic bound as a maximum of a scalar product $|\vec C \cdot \vec E^{LR}|$. The components of vector $\vec E^{LR}$ have the usual form: $$E_{m_1...,m_N}^{LR} = \int d \lambda \rho(\lambda) I_{m_1}^1(\lambda)...I_{m_N}^N(\lambda),
\label{E_LR}$$ where $\lambda$ denotes a set of hidden variables, $\rho(\lambda)$ their distribution, and $I_{m_n}^{n}(\lambda) = \pm 1$ the predetermined result of $n$th observer under setting $m_n$. The quantum prediction for the Bell expression (\[GENERAL\_BELL\]) is given by a scalar product of $\vec C \cdot \vec E^{QM}$. The components of $\vec E^{QM}$, according to quantum theory, are given by: $$E_{m_1...m_N}^{QM} = {\rm Tr}\left( \rho \vec m_1 \cdot \vec \sigma^1 \otimes ... \otimes \vec m_N \cdot \vec \sigma^N \right),$$ where $\rho$ is a density operator (general quantum state), $\vec \sigma^n = (\sigma_x^n,\sigma_y^n,\sigma_z^n)$ is a vector of local Pauli operators for $n$th observer, and $\vec m_n$ denotes a normalized vector which parameterizes observable $m_n$ for the $n$th party.
Assume that local settings are parameterized by a single angle: $\phi_{m_n}^n$. In the quantum picture we restrict observable vectors $\vec m_n$ to lie in the equatorial plane: $$\vec m_n \cdot \vec \sigma^n = \cos\phi_{m_n}^n \sigma_x^n + \sin\phi_{m_n}^n \sigma_y^n.$$ Take the coefficients $c_{m_1...m_N}$ in a form: $$c_{m_1...m_N} = \cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N),$$ with the angles given by: $$\phi_{m_n}^n = \frac{\pi}{M} m_n + \frac{\pi}{2MN} \eta.
\label{ANGLES}$$ The number $\eta=1,2$ is fixed for a given experimental situation, i.e. $M$ and $N$, and equals: $$\eta = [M+1]_2 [N]_2 + 1,
\label{ETA}$$ where $[x]_2$ stands for $x$ modulo $2$. The local realistic bound is given by a maximal value of the scalar product $|\vec C \cdot \vec E^{LR}|$. The maximum is attained for deterministic local realistic models, as they correspond to the extremal points of a correlation polytope. Thus, the following inequality appears: $$\begin{aligned}
&&|\vec C \cdot \vec E^{LR}| \le \label{INEQ_DETER} \\
&&\max_{I^1_0, ..., I^N_{M-1} = \pm 1} \left\{ \sum_{m_1,...,m_N=0}^{M-1} \! \! \! \! \! \!
\cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)
I_{m_1}^1...I_{m_N}^N \right\},
\nonumber\end{aligned}$$ where we have shortened the notation $I_{m_n}^n \equiv I_{m_n}^n(\lambda)$. Since $\cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N) = {\rm Re} \left( \prod_{n=1}^N \exp{(i \phi_{m_n}^n)} \right)$ and the predetermined results, $I_{m_n}^n = \pm 1$, are real, the right-hand side of this inequality can be written as: $$\sum_{m_1,...,m_N=0}^{M-1}
{\rm Re} \left( \prod_{n=1}^N \exp{(i \phi_{m_n}^n)} I_{m_n}^n \right).$$ Moreover, since inequality (\[INEQ\_DETER\]) involves the sum of all possible products of local results respectively multiplied by the cosines of all possible sums of local angles, the right-hand side can be further reduced to involve the product of sums: $${\rm Re} \left( \prod_{n=1}^N \sum_{m_n=0}^{M-1}\exp{(i \phi_{m_n}^n)} I_{m_n}^n \right).$$ Inserting the angles (\[ANGLES\]) into this expression results in: $${\rm Re} \left( \exp{(i \frac{\pi}{2M} \eta)}
\prod_{n=1}^N \sum_{m_n=0}^{M-1}\exp{(i \frac{\pi}{M}m_n)} I_{m_n}^n \right),
\label{RE_ANGLES}$$ where the factor $\exp{(i \frac{\pi}{2M} \eta)}$ comes from the term $\frac{\pi}{2MN} \eta$ in (\[ANGLES\]), which is the same for all parties.
One can decompose a complex number given by the sum in (\[RE\_ANGLES\]) into its modulus $R_n$, and phase $\Phi_n$: $$\sum_{m_n=0}^{M-1}\exp{(i \frac{\pi}{M}m_n)} I_{m_n}^n = R_n e^{i\Phi_n}.
\label{VECTOR}$$ We maximize the length of this vector on the complex plane. The length of the sum of any two complex numbers $|z_1 + z_2|^2$ is given by the law of cosines as $|z_1|^2+|z_2|^2 + 2 |z_1| |z_2| \cos \varphi$, where $\varphi$ is the angle between the corresponding vectors. To maximize the length of the sum one should choose the summands as close as possible to each other. Since in our case all vectors being summed are rotated by multiples of $\frac{\pi}{M}$ from each other, the simplest optimal choice is to put all $I_{m_n}^n = 1$. In this case one has: $$R_n^{\max} = \left| \sum_{m_n=0}^{M-1}\exp{(i \frac{\pi}{M}m_n)} \right|
= \left| \frac{2}{1-\exp{(i\frac{\pi}{M})}} \right|,$$ where the last equality follows from the finite sum of numbers in the geometric progression (any term in the sum is given by the preceding term multiplied by $e^{i
\pi/M}$). The denominator inside the modulus can be transformed to $\exp{(i\frac{\pi}{2M})} \left[ \exp{(-i\frac{\pi}{2M})} - \exp{(i\frac{\pi}{2M})} \right]$, which reduces to $- 2i \exp{(i\frac{\pi}{2M})} \sin\left(\frac{\pi}{2M}\right)$. Finally, the maximal length reads: $$R_n^{\max} = \frac{1}{\sin\left(\frac{\pi}{2M}\right)},$$ where the modulus is no longer needed since the argument of sine is small. Moreover, since the local results for each party can be chosen independently, the maximal length $R_n^{\max}$ does not depend on particular $n$, i.e. $R_n^{\max} = R^{\max}$.
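The value of $R^{\max}$ is easy to confirm numerically; a two-line sketch (our own check) of the identity $\left|\sum_{m=0}^{M-1}e^{i\pi m/M}\right| = 1/\sin(\pi/2M)$ reads:

```python
# Sketch: numerical check of |sum_{m=0}^{M-1} exp(i pi m / M)| = 1 / sin(pi / 2M).
import numpy as np

for M in range(2, 9):
    R = abs(sum(np.exp(1j*np.pi*m/M) for m in range(M)))
    print(M, round(R, 10), round(1/np.sin(np.pi/(2*M)), 10))
```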
Since $R^{\max}$ is a positive real number its $N$th power can be put to multiply the real part in (\[RE\_ANGLES\]), and one finds $|\vec C \cdot \vec E^{LR}|$ to be bounded by: $$|\vec C \cdot \vec E^{LR}| \le
\left[ \sin \left(\frac{\pi}{2M} \right) \right]^{-N} \! \! \! \!
\cos \left( \frac{\pi}{2M} \eta + \Phi_1 + ... + \Phi_N \right),$$ where the cosine comes from the phases of the sums in (\[RE\_ANGLES\]). These phases can be found from the definition (\[VECTOR\]). As only vectors rotated by a multiple of $\frac{\pi}{M}$ are summed (or subtracted) in (\[VECTOR\]), each phase $\Phi_n$ can acquire only a restricted set of values. Namely: $$\Phi_n =
\Bigg\{
\begin{array}{lc}
\frac{\pi}{2M} + \frac{\pi}{M}k & \textrm{ for } M \textrm{ even}, \\
& \\
\frac{\pi}{M}k & \textrm{ for } M \textrm{ odd},
\end{array}$$ with $k=0,...,2 M-1$, i.e. for $M$ even, $\Phi_n$ is an odd multiple of $\frac{\pi}{2 M}$; and for $M$ odd, $\Phi_n$ is an even multiple of $\frac{\pi}{2 M}$. Thus, the sum $\Phi_1 + ... + \Phi_N$ is an even multiple of $\frac{\pi}{2 M}$, except for $M$ even and $N$ odd. Keeping in mind the definition of $\eta$, given in (\[ETA\]), one finds the argument of $\cos \left( \frac{\pi}{2M}\eta + \Phi_1 + ... + \Phi_N \right)$ is always odd multiple of $\frac{\pi}{2 M}$, which implies the maximum value of the cosine is equal to $\cos \left(\frac{\pi}{2M} \right)$. Finally, the multisetting Bell inequality reads: $$|\vec C \cdot \vec E^{LR}| \le \left[ \sin \left(\frac{\pi}{2M}\right) \right]^{-N} \cos \left(\frac{\pi}{2M} \right).
\label{INEQUALITY}$$ This inequality, when reduced to two parties choosing between two settings each, recovers the famous Clauser-Horne-Shimony-Holt inequality [@CHSH]. For a higher number of parties, still choosing between two observables, it reduces to the tight two-setting inequalities [@MERMIN]. When $N$ observers choose between three observables the inequalities of Żukowski and Kaszlikowski are obtained [@ZUK_KASZ], and for a continuous range of settings ($M \to \infty$) it recovers the inequality of Żukowski [@ZUKOWSKI_PLA].
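For small $N$ and $M$ the bound can also be confirmed by a direct enumeration of all deterministic local realistic strategies. The following minimal sketch (our own numerical check, using the angles of Eq. (\[ANGLES\])) maximizes the Bell expression over all assignments $I_{m_n}^n=\pm1$ and compares the result with the right-hand side of (\[INEQUALITY\]).

```python
# Sketch: brute-force maximum of the Bell expression over deterministic
# local strategies, compared with sin(pi/2M)^(-N) * cos(pi/2M).
import itertools
import numpy as np

def lr_maximum(N, M):
    eta = ((M + 1) % 2) * (N % 2) + 1
    phi = np.pi/M * np.arange(M) + np.pi/(2*M*N) * eta    # same local angles for all parties
    best = 0.0
    for strategies in itertools.product(itertools.product((-1, 1), repeat=M), repeat=N):
        value = 0.0
        for settings in itertools.product(range(M), repeat=N):
            c = np.cos(sum(phi[m] for m in settings))
            value += c * np.prod([strategies[n][settings[n]] for n in range(N)])
        best = max(best, abs(value))
    return best

for N, M in [(2, 2), (2, 3), (3, 2)]:
    bound = np.sin(np.pi/(2*M))**(-N) * np.cos(np.pi/(2*M))
    print(N, M, round(lr_maximum(N, M), 6), round(bound, 6))
```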
Quantum Violations
==================
In this section we present a Bell operator associated with the inequality (\[INEQUALITY\]). Next, it is used to derive the necessary and sufficient condition for the violation of the inequality. Using this condition we recover already known results and present some new ones.
The form of the coefficients $c_{m_1...m_N} = \cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$ we have chosen is exactly the same as the quantum correlation function $E_{m_1...m_N}^{GHZ} = \cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$ for the Greenberger-Horne-Zeilinger state: $$|\psi^+ \rangle = \frac{1}{\sqrt{2}}
\Big[ |0\rangle_1 ... |0\rangle_N + |1\rangle_1 ... |1\rangle_N \Big],$$ where the vectors $|0 \rangle_n$ and $|1 \rangle_n$ are the eigenstates of local $\sigma_z^n$ operator of the $n$th party. For this state the two vectors $\vec C$ and $\vec E^{GHZ}$ are equal (thus parallel), which means that the state $|\psi^+ \rangle$ maximally violates inequality (\[INEQUALITY\]). The value of the left hand side of (\[INEQUALITY\]) is given by the scalar product of $\vec E^{GHZ}$ with itself: $$\vec E^{GHZ} \cdot \vec E^{GHZ}
= \sum_{m_1,...,m_N=0}^{M-1} \cos^2(\phi_{m_1}^1 + ... + \phi_{m_N}^N).
\label{COS_SQUARE}$$ Using the trigonometric identity $\cos^2 \alpha = \frac{1}{2}(1+\cos2\alpha)$ one can rewrite this expression into the form: $$\vec E^{GHZ} \cdot \vec E^{GHZ}
= \frac{1}{2}M^N + \frac{1}{2}\sum_{m_1,...,m_N=0}^{M-1}
\! \! \! \! \! \! \! \! \cos[2(\phi_{m_1}^1 + ... + \phi_{m_N}^N)].$$ As before, the second term can be written as a real part of a complex number. Putting the values of angles (\[ANGLES\]) one arrives at: $$\frac{1}{2} {\rm Re}
\left( \exp{(i \frac{\pi}{M} \eta)} \prod_{n=1}^N \sum_{m_n=0}^{M-1}\exp{(i \frac{2\pi}{M}m_n)}\right).
\label{COMPLEX_ROOTS}$$ Note that $e^{i\frac{2\pi}{M}}$ is a primitive complex $M$th root of unity. Since all complex roots of unity sum up to zero the above expression vanishes, and a maximal quantum value of the left hand side of (\[INEQUALITY\]) equals: $$\vec E^{GHZ} \cdot \vec E^{GHZ} = \frac{1}{2}M^N.$$ If instead of $| \psi^+ \rangle$ one chooses the state $| \psi^- \rangle = \frac{1}{\sqrt{2}}
[ |0\rangle_1 ... |0\rangle_N - |1\rangle_1 ... |1\rangle_N ]$, for which the correlation function is given by $E_{m_1...m_N}^{GHZ-} = - \cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$, one arrives at a minimal value of the Bell expression, equal to $-\frac{1}{2}M^N$, as the vectors $\vec C$ and $\vec E^{GHZ-}$ are exactly opposite. Since we take a modulus in the Bell expression, both states lead to the same violation.
The Bell operator associated with the Bell expression (\[INEQUALITY\]) is defined as: $$\mathcal{B'} \equiv \! \! \! \! \! \! \!
\sum_{m_1...m_N=0}^{M-1} \! \! \! \! \! \! c_{m_1...m_N}
\vec m_1 \cdot \vec \sigma^1 \otimes ... \otimes \vec m_N \cdot \vec \sigma^N.
\label{B}$$ Its average in the quantum state $\rho$ is equal to the quantum prediction of the Bell expression, for this state. We shall prove that it has only two eigenvalues $\pm \frac{1}{2}M^N$, and thus is of the simple form: $$\mathcal{B} \equiv
\mathcal{B}(N,M) = \frac{1}{2}M^N \left[ |\psi^+ \rangle \langle \psi^+ | -
|\psi^- \rangle \langle \psi^- |\right].
\label{BELL_OPERATOR}$$
Both operators $\mathcal{B}$ and $\mathcal{B'}$ are defined in the Hilbert-Schmidt space with the trace scalar product. To prove their equivalence one should check if the conditions: $${\rm Tr}(\mathcal{B'} \mathcal{B}) =
{\rm Tr}(\mathcal{B} \mathcal{B}) =
{\rm Tr}(\mathcal{B'} \mathcal{B'}),
\label{TRACES}$$ are satisfied. Geometrically speaking, these conditions mean that the “length” and “direction” of the operators are the same.
The trace ${\rm Tr}(\mathcal{B'} \mathcal{B})$ involves the traces ${\rm Tr}\left(|\psi^{\pm} \rangle \langle \psi^{\pm} | \vec m_1 \cdot \vec \sigma^1 \otimes ... \otimes \vec m_N \cdot \vec \sigma^N \right)$, which are the quantum correlation functions (averages of the product of local observables) for the GHZ states, and thus are given by $\pm \cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$. Their difference doubles the cosine, which is then multiplied by the same cosine coming from the coefficients $c_{m_1...m_N}$. Thus the main trace takes the form: $${\rm Tr}(\mathcal{B} \mathcal{B'}) =
M^N \! \! \! \! \! \! \sum_{m_1...m_N=0}^{M-1} \! \! \! \! \!
\cos^2(\phi_{m_1}^1 + ... + \phi_{m_N}^N)
= \frac{1}{2} M^{2N},$$ where the last equality sign follows from the considerations below Eq. (\[COS\_SQUARE\]).
The middle trace of (\[TRACES\]) is given by ${\rm Tr}(\mathcal{B} \mathcal{B}) = \frac{1}{2} M^{2N}$, which directly follows from the orthonormality of the states $| \psi^{\pm} \rangle$.
The last trace of (\[TRACES\]) is more involved. Inserting decomposition (\[B\]) into ${\rm Tr}(\mathcal{B'} \mathcal{B'})$ gives: $$\begin{aligned}
&& \sum_{\substack{
m_1...m_N, \\
m_1'...m_N'=0}}^{M-1}
\! \! \! \! \!
\cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N) \cos(\phi_{m_1'}^1 + ... + \phi_{m_N'}^N) \\
&& \times {\rm Tr}[(\vec m_1 \cdot \vec \sigma^1)(\vec m_1' \cdot \vec \sigma^1) ]...
{\rm Tr}[(\vec m_N \cdot \vec \sigma^N)(\vec m_N' \cdot \vec \sigma^N) ]\end{aligned}$$ The local traces are given by: $${\rm Tr}[(\vec m_n \cdot \vec \sigma^n)(\vec m_n' \cdot \vec \sigma^n) ]
= 2 \vec m_n \cdot \vec m_n' = 2 \cos(\phi_{m_n}^n - \phi_{m_n'}^n).$$ Thus, the factor $2^N$ appears in front of the sums. We write all the cosines (of sums and differences) in terms of individual angles, insert these decompositions into ${\rm Tr}(\mathcal{B'} \mathcal{B'})$, and perform all the multiplications. Note that whenever the final product term involves at least one expression like $\cos\phi_{m_n}^n \sin \phi_{m_n}^n = \frac{1}{2}
\sin(2 \phi_{m_n}^n)$ (or for the primed angles) its contribution to the trace vanish after the summations \[for the reasons discussed in Eq. (\[COMPLEX\_ROOTS\])\]. Moreover, in the decomposition of $\cos(\phi_{m_n}^n - \phi_{m_n'}^n) = \cos\phi_{m_n}^n \cos\phi_{m_n'}^n +
\sin\phi_{m_n}^n \sin\phi_{m_n'}^n$ only the products of the same trigonometric functions appear. In order to contribute to the trace they must be multiplied by again the same functions. Since the decompositions of cosines of sums only differ in angles (primed or unprimed) and not in the individual trigonometric functions, the only contributing terms come from the product of exactly the same individual trigonometric functions in the decomposition of $\cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$ and $\cos(\phi_{m_1'}^1 + ... + \phi_{m_N'}^N)$. There are $2^{N-1}$ such products, as many as the number of terms in the decomposition of $\cos(\phi_{m_1}^1 + ... + \phi_{m_N}^N)$. Each product involves $2N$ squared individual trigonometric functions. Each of these functions can be written in terms of cosines of a double angle, e.g. $\sin^2 \phi_{m_n}^n =
\frac{1}{2}(1-\cos(2\phi_{m_n}^n))$, and the last cosine does not contribute to the sum \[again due to (\[COMPLEX\_ROOTS\])\]. Finally the trace reads: $${\rm Tr}(\mathcal{B'} \mathcal{B'}) = 2^N
\! \! \! \! \!
\sum_{ \substack{
m_1...m_N, \\ m_1'...m_N'=0}}^{M-1}
\! \! \! \! \!
2^{N-1} \frac{1}{2^{2N}} = \frac{1}{2} M^{2N}.$$ Thus, equations (\[TRACES\]) are all satisfied, i.e. both operators $\mathcal{B}$ and $\mathcal{B'}$ are equal. Only the states which have contributions in the subspace spanned by $| \psi^{\pm} \rangle$ can violate the inequality (\[INEQUALITY\]).
*Necessary and sufficient condition for the violation of the inequality*. The expected quantum value of the Bell expression, using the Bell operator, reads: $${\rm Tr}(\mathcal{B}(N,M)\rho) =
\frac{M^N}{2} \left[ {\rm Tr}(|\psi^+ \rangle \langle \psi^+ |\rho) -
{\rm Tr}(|\psi^- \rangle \langle \psi^- | \rho)\right].
\label{AVERAGED_BELL_OP}$$ The violation condition is obtained after maximization, for a given state, over the position of the $xy$ plane, in which the observables lie.
An arbitrary state (density operator) of $N$ qubits can be decomposed using local Pauli operators as: $$\rho = \frac{1}{2^N} \sum_{\mu_1...\mu_N=0}^{3} T_{\mu_1...\mu_N}
\sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N},$$ where the set of averages $T_{\mu_1...\mu_N} = {\rm Tr}[\rho(\sigma_{\mu_1} \otimes ... \otimes \sigma_{\mu_N})]$ forms the so-called correlation tensor. The correlation tensors of the projectors $|\psi^\pm \rangle \langle \psi^\pm |$ are denoted by $T_{\nu_1...\nu_N}^{\pm}$. Using the linearity of the trace operation and the fact that the trace of the tensor product is given by the product of local traces, one can write ${\rm Tr}(|\psi^\pm \rangle \langle \psi^\pm | \rho)$ in terms of correlation tensors: $$\frac{1}{2^{2N}} \! \! \! \!
\sum_{\substack{
\mu_1...\mu_N,\\
\nu_1...\nu_N=0}}^3
\! \! \! \!
T_{\nu_1...\nu_N}^{\pm} T_{\mu_1...\mu_N}
{\rm Tr}(\sigma_{\mu_1} \sigma_{\nu_1})...{\rm Tr}(\sigma_{\mu_N} \sigma_{\nu_N}). \nonumber$$ Since each of the $N$ local traces ${\rm Tr}(\sigma_{\mu_n} \sigma_{\nu_n}) = 2 \delta_{\mu_n \nu_n}$, the global trace is given by: $${\rm Tr}(|\psi^\pm \rangle \langle \psi^\pm | \rho)
= \frac{1}{2^{N}} \sum_{\mu_1...\mu_N=0}^3
T_{\mu_1...\mu_N}^{\pm} T_{\mu_1...\mu_N}.
\label{TRACING}$$ The nonvanishing correlation tensor components of the GHZ states $| \psi^{\pm} \rangle$ are the same in the $z$ plane: $T_{z...z}^{\pm} = 1$ for $N$ even; and are exactly opposite in the $xy$ plane: $T_{i_1...i_N}^{+} = - T_{i_1...i_N}^{-} = (-1)^\xi$ with $2\xi$ indices equal to $y$ and all remaining equal to $x$. Inserting the traces (\[TRACING\]) into the averaged Bell operator (\[AVERAGED\_BELL\_OP\]) one finds that the components in the $z$ plane cancel out, and components in the $xy$ plane double themselves. Finally, the necessary and sufficient condition to satisfy the inequality is given by: $$\left( \frac{M}{2}\right)^N \max \sum_{i_1...i_N \in I_\xi} (-1)^{\xi} T_{i_1...i_N} \le
B_{LR}(N,M),
\label{NS}$$ where the maximization is performed over the choice of local coordinate systems, $I_\xi$ includes all sets of indices $i_1...i_N$ with 2$\xi$ indices equal to $y$ and the rest equal to $x$, and $$B_{LR}(N,M) = \left[ \sin \left(\frac{\pi}{2M}\right) \right]^{-N} \cos \left(\frac{\pi}{2M} \right)$$ denotes the local realistic bound.
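As a quick consistency check of this bound, one may note that for $N = M = 2$ it reduces to $B_{LR}(2,2) = \cos(\pi/4)/\sin^2(\pi/4) = \sqrt{2}$, while the maximal quantum value of the Bell expression, $M^N/2 = 2$, then returns the familiar violation factor $2/\sqrt{2} = \sqrt{2}$ of the two-party, two-setting case.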
We now present examples of states, which violate the new inequality. As a measure of violation, $V(N,M)$, we take the average (quantum) value of the Bell operator in a given state, divided by the local realistic bound: $$V(N,M) = \frac{\langle \mathcal{B}(N,M) \rangle_{\rho}}{B_{LR}(N,M)}.$$
*GHZ state*. First, let us simply consider $| \psi^{\pm} \rangle$. For the case of two settings per side one recovers previously known results [@MERMIN; @WW_BI; @ZB]: $$V(N,2) = 2^{(N-1)/2}.$$ For three settings per side the result of Żukowski and Kaszlikowski is obtained [@ZUK_KASZ]: $$V(N,3) = \frac{1}{\sqrt{3}}\left(\frac{3}{2} \right)^{N}.$$ For the continuous range of settings one recovers [@ZUKOWSKI_PLA]: $$V(N,\infty) = \frac{1}{2}\left(\frac{\pi}{2} \right)^{N}.$$ In the intermediate (previously unexplored) regime one has: $$V(N,M) = \frac{1}{2 \cos \left(\frac{\pi}{2M} \right)}\left(M \sin \left(\frac{\pi}{2M} \right) \right)^{N}.$$ For a fixed number of parties $N > 3$ the violation increases with the number of local settings. Surprisingly, for the cases of $N=2$ and $N=3$ the inequality implies that the violation decreases when the number of local settings grows. This behaviour is shown in Fig. \[M\_PLOT\].
![Violation factor as a function of number of local settings, $M$, for the $N$-qubit GHZ state.[]{data-label="M_PLOT"}](geom){width="50.00000%"}
The violation of local realism always grows with increasing number of parties.
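As an illustration of these limiting cases, a minimal numerical sketch of the violation factor might look as follows; the function name and the sample values of $N$ and $M$ below are arbitrary choices made only for this example.

```python
import numpy as np

def violation_factor_ghz(N, M):
    """Violation factor V(N, M) for the N-qubit GHZ state with M settings per side."""
    x = np.pi / (2 * M)
    return (M * np.sin(x)) ** N / (2 * np.cos(x))

# Limiting cases quoted above
assert np.isclose(violation_factor_ghz(3, 2), 2 ** ((3 - 1) / 2))         # M = 2
assert np.isclose(violation_factor_ghz(4, 3), (3 / 2) ** 4 / np.sqrt(3))  # M = 3

# Violation versus M: decreasing for N = 2, 3 but increasing for N > 3
for N in (2, 3, 5):
    print(N, [round(violation_factor_ghz(N, M), 4) for M in (2, 3, 4, 5, 50)])
```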
*Generalized GHZ state.* Consider the GHZ state with free real coefficients: $$| \psi \rangle =
\cos\alpha | 0 \rangle_1 ... | 0 \rangle_N + \sin\alpha | 1 \rangle_1 ... | 1 \rangle_N.$$ Its correlation tensor in the $xy$ plane has the following nonvanishing components: $T_{x...x} = \sin2\alpha$, and the components with 2$\xi$ indices equal to $y$ and the rest equal to $x$ take the value of $(-1)^{\xi} \sin2\alpha$ (there are $2^{N-1}-1$ such components). Thus, all $2^{N-1}$ terms contribute to the violation condition (\[NS\]). The violation factor is equal to $V(N,M)=\frac{M^N}{2 B_{LR}(N,M)} \sin2\alpha$. For $N>3$ and $M>2$ the violation is larger than that of the standard two-setting inequalities [@ZB]. Moreover, some of the states $| \psi \rangle$, for small $\alpha$ and odd $N$, do not violate *any* two-settings correlation function Bell inequality [@ZBLW], yet they do violate the multisetting inequality.
*Bound entangled state.* Interestingly, the inequality can reveal non-classical correlations of a bound entangled state introduced by Dür [@DUR]: $$\rho_N = \frac{1}{N+1} \left( |\phi \rangle \langle \phi| + \frac{1}{2} \sum_{k=1}^N (P_k + \tilde P_k) \right),$$ with $|\phi \rangle = \frac{1}{\sqrt{2}} \left[ |0 \rangle_1 ... | 0 \rangle_N + e^{i \alpha_N} |1 \rangle_1 ... | 1\rangle_N \right]$ ($\alpha_N$ is an arbitrary phase), and $P_k$ being a projector on the state $|0 \rangle_1 ... |1 \rangle_k ... |0 \rangle_N$ with “1” on the $k$th position ($\tilde P_k$ is obtained from $P_k$ after replacing “0” by “1” and vice versa). As originally shown in [@DUR] this state violates Mermin-Klyshko inequalities for $N \ge 8$. The new inequality predicts the violation factor of: $$V(N,M) = \frac{1}{N+1} \frac{M^N ~\cos \alpha_N}{2B_{LR}(N,M)},
\label{viol-bes}$$ which comes from the contribution of the GHZ-like state $| \phi \rangle$ to the bound entangled state. One can follow Ref. [@KASZLI] and change the Bell-operator (\[BELL\_OPERATOR\]) such that the state $|\phi \rangle$ becomes its eigenstate. The new operator, $\tilde{\mathcal{B}} (N,M)$, is obtained after applying local unitary transformations $U = |0\rangle \langle 0| + e^{i\alpha_N/N} |1\rangle \langle 1|$ to the operator (\[BELL\_OPERATOR\]), i.e. $\tilde{\mathcal{B}} (N,M) = U^{\otimes N} \mathcal{B} U^{\dagger \otimes N}$. The violation factor of the new inequality is higher than (\[viol-bes\]), and equal to: $$\tilde{V}(N,M) = \frac{1}{N+1} \frac{M^N}{2B_{LR}(N,M)}.
\label{TILDE_VIS}$$ If one sets $M=3$ it appears that the number of parties sufficient to see the violation (\[TILDE\_VIS\]) reduces to $N \ge 7$ [@KASZLI]. On the other hand, the result of [@ADITI] shows that the infinite range of settings further reduces the number of parties to $N \ge 6$. Using the new inequality, $M=5$ settings per side already suffice to violate local realism with $N \ge 6$ parties.
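These thresholds in the number of parties can be illustrated with a minimal numerical sketch of the factor (\[TILDE\_VIS\]); the function names and the sample values of $M$ below are arbitrary choices made only for this example.

```python
import numpy as np

def violation_bound_entangled(N, M):
    """Violation factor of Eq. (TILDE_VIS) for the N-qubit bound entangled state."""
    x = np.pi / (2 * M)
    return (M * np.sin(x)) ** N / (2 * (N + 1) * np.cos(x))

def minimal_parties(M, n_max=20):
    """Smallest N for which the bound entangled state violates the inequality."""
    return next(N for N in range(2, n_max) if violation_bound_entangled(N, M) > 1)

print(minimal_parties(3))     # 7 parties for three settings per side
print(minimal_parties(5))     # 6 parties for five settings per side
print(minimal_parties(1000))  # 6 parties, approaching the continuous-settings limit
```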
Positive Partial Transpose
==========================
In this section it is shown that all the states with positive partial transpose with respect to all subsystems satisfy the multisetting inequality (\[INEQUALITY\]). This result further supports the conjecture by Peres, that all such states can admit local realistic description [@PERES]. First we briefly review partial transpositions, next present an inequality that all such states must satisfy, and finally compare it with the Bell inequality (\[INEQUALITY\]).
The partial transpose of an operator on a Hilbert space ${H}_1\otimes{H}_2$ is defined by: $$\begin{aligned}
\left(\sum_l A^1_l\otimes A^2_l\right)^{T_1}=\sum_l{A_l^1}^{T} \otimes A_l^2,\end{aligned}$$ where the superscript $T$ denotes transposition in the given basis. The positivity of partial transpose is found to be a necessary condition for separability [@PPT; @HORODECKI]. The operator obtained by the partial transpose of any separable state is positive (PPT - positive partial transpose). In the bipartite case of two qubits or qubit-qutrit system, the PPT criterion is also sufficient for separability.
In the multipartite case the situation is more complicated, as one can have many different partitions into sets of particles; for example, the four-particle system $1234$ can be split into $12-34$ or $1-2-34$. Suppose one splits the $N$ particles into $p$ groups; take as an example the split into three groups $1-2-34$. The state is called $p$-PPT if *all possible* partial transposes with respect to such groups are positive. Fortunately, positivity of the partial transpose with respect to a certain set of subsystems is the same as positivity with respect to all remaining subsystems. In the example one should check the positivity of the operators obtained after transposition of subsystem $1$, next $2$, and finally $34$.
All the $p$-PPT states were recently shown to satisfy the following inequalities [@NAGATA]: $${\rm Tr}\left[ \left( |\psi^{\pm} \rangle \langle \psi^{\pm}|
- (1-2^{2-p}) |\psi^{\mp} \rangle \langle \psi^{\mp}| \right) \rho \right] \le 2^{1-p},$$ i.e. if $|\psi^+ \rangle$ appears in the first term within the trace, $|\psi^- \rangle$ appears in the second term, and vice versa. Omitting the positive factor $2^{2-p} {\rm Tr}\left( |\psi^{\mp} \rangle \langle \psi^{\mp}| \rho \right)$ one arrives at the Bell operator form: $$\Big| {\rm Tr}\left[ \left( |\psi^+ \rangle \langle \psi^+|
- |\psi^- \rangle \langle \psi^-| \right) \rho \right] \Big| \le 2^{1-p}.
\label{PPT}$$ Following the conjecture by Peres let us put $p=N$. We shall show that if inequality (\[PPT\]) is satisfied, with $p=N$, then also the Bell inequality (\[INEQUALITY\]) is not violated. Using the form of the Bell operator (\[BELL\_OPERATOR\]) the upper bound of the Bell inequality, for $N$-PPT states, is found to read: $$|{\rm Tr}(\mathcal{B}(N,M) \rho_{N-{\rm PPT}})| \le \left( \frac{M}{2} \right)^N,
\label{NPPT_BOUND}$$ and it can never reach the local realistic bound $B_{LR}(N,M)$. This is shown using the violation factor: $$V_{N-{\rm PPT}}(N,M) = \left(\frac{M}{2} \right)^N \frac{[\sin \left(\frac{\pi}{2M} \right)]^N}{\cos \left(\frac{\pi}{2M} \right)}.$$ Since $\sin \left(\frac{\pi}{2M} \right) \le \frac{\pi}{2M}$ and $\cos \left(\frac{\pi}{2M}\right) \ge \frac{1}{\sqrt{2}}$, where $M=2$ is the minimal number of settings for which a Bell inequality makes sense, the violation factor is bounded by $V_{N-{\rm PPT}}(N,M) \le \sqrt{2} (\pi/4)^N$. The simplest system on which one can perform partial transposes consists of $N=2$ particles, thus $V_{N-{\rm PPT}}(N,M) \le \sqrt{2} (\pi/4)^2 \simeq 0.87$. None of the $N$-PPT states violates the Bell inequality (\[INEQUALITY\]). It is worth mentioning that for the $M=2$ settings case the violation $V_{N-{\rm PPT}}(N,2) = 2^{(1-N)/2}$ confirms the results of Werner and Wolf [@WW_BI], who gave the conjecture of Peres a sharp mathematical form [@WW_SHARP].
Communication Complexity
========================
Bell inequalities describe a performance of quantum communication complexity protocols [@BRUKNER_PRL]. In this section we follow this general link and present communication complexity problems associated with the inequality (\[INEQUALITY\]). It is proven that the quantum protocol outperforms the best classical protocol for arbitrary number of parties and observables.
In the communication complexity problems (CCP) one studies the information exchange between participants *locally* performing computations, in order to accomplish a *globally* defined task [@YAO]. Let us focus on a variant of a CCP, in which each of the $N$ separated partners receives arguments, $y_n=\pm 1$ and $x_n = 0,...,M-1$, of some globally defined function, $\mathcal{F} \equiv \mathcal{F}(y_1,x_1,...,y_N,x_N)$. The $y_n$ inputs are assumed to be randomly distributed, and the $x_n$ inputs can in general be distributed according to a weight $\mathcal{W}(x_1,...,x_N)$. The goal is to maximize the probability that Alice arrives at the correct value of the function, under the restriction that $N-1$ bits of overall communication are allowed. Before the participants receive their inputs they are allowed to do anything from which they can derive benefit. In particular, they can share some correlated strings of numbers in the classical scenario or entangled states in the quantum case.
*The problem.* Following [@BRUKNER_PRL] one chooses for a task-function: $$\begin{aligned}
\mathcal{F} &=& y_1...y_N {\rm Sign}[\cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N)] = \pm 1,
\label{FF}\end{aligned}$$ with the angles defined by (\[ANGLES\]). By the definition of the angles the cosine can never be zero, so the problem is well-defined for all $N$ and $M$. Additionally, the $x_n$ inputs are distributed with the weight: $$\mathcal{W}(x_1,...,x_N) = (1/\mathcal{N}) |\cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N)|,
\label{WW}$$ where the normalization factor is given by $\mathcal{N} = \sum_{x_1...x_N=0}^{M-1}|\cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N)|$. After the communication takes place, if Alice misses some of the random variables $y_n$, her “answer” can only be random. Thus, in an optimal protocol each party must communicate one bit. There are only two communication structures which lead to a non-random answer: (i) a star – each party transmits one bit directly to Alice, and (ii) a chain – a sequence of peer-to-peer exchanges with Alice at the end. The task is to maximize the probability of a correct answer $\mathcal{A} \equiv \mathcal{A}(y_1,x_1,...,y_N,x_N)$. Since both $\mathcal{A}$ and $\mathcal{F}$ are dichotomic variables this amounts to maximizing: $$P_{{\rm correct}}
= \frac{1}{2^N}
\sum_{{\bf y},{\bf x}} \mathcal{W}(x_1,...,x_N)
P_{{\bf y},{\bf x}}(\mathcal{A} \mathcal{F} = 1),$$ where $\frac{1}{2^N}$ describes (random) distribution of $y_n$’s, and $P_{{\bf y},{\bf x}}(\mathcal{A} \mathcal{F} = 1)$ is a probability that $\mathcal{A} = \mathcal{F}$ for given inputs ${\bf y} \equiv (y_1,...,y_N)$ and ${\bf x} \equiv (x_1,...,x_N)$. It is useful to express the last probability in terms of an average value of a product $\langle \mathcal{A} \mathcal{F} \rangle_{{\bf y},{\bf x}}$, i.e. $P_{{\bf y},{\bf x}}(\mathcal{A} \mathcal{F} = 1) = \frac{1}{2}[ 1 + \langle \mathcal{A} \mathcal{F} \rangle_{{\bf y},{\bf x}}]$. Since $\mathcal{F}$ is independent of $\mathcal{A}$, and for given inputs it is constant, one has $P_{{\bf y},{\bf x}}(\mathcal{A} \mathcal{F} = 1) = \frac{1}{2}[ 1 + \mathcal{F} \langle \mathcal{A} \rangle_{{\bf y},{\bf x}}]$. Finally the probability of correct answer reads $P_{{\rm correct}} = \frac{1}{2}[1 + (\mathcal{F},\mathcal{A})]$, and it is in one-to-one correspondence with a “weighted” scalar product (average success): $$(\mathcal{F},\mathcal{A})
= \frac{1}{2^N}\sum_{{\bf y},{\bf x}}
\mathcal{W}(x_1,...,x_N) \mathcal{F} \langle \mathcal{A} \rangle_{{\bf y},{\bf x}}.$$ Using the definitions (\[WW\]) for $\mathcal{W}$ and (\[FF\]) for $\mathcal{F}$ one gets: $$(\mathcal{F},\mathcal{A})
= \frac{1}{2^N} \frac{1}{\mathcal{N}} \sum_{{\bf y},{\bf x}}
y_1...y_N \cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N) \langle \mathcal{A} \rangle_{{\bf y},{\bf x}},
\label{SUCCESS}$$ with angles given by (\[ANGLES\]). We focus our attention on maximization of this quantity.
*Classical scenario.* In the *best* classical protocol each party locally computes a bit function $e_n = y_n f(x_n,\lambda)$, with $f(x_n,\lambda) = \pm 1$, where $\lambda$ denotes some previously shared classical resources. Next, the bit is sent to Alice, who puts as an answer the product $\mathcal{A}_c = y_1 f(x_1,\lambda) e_2...e_N = y_1 ... y_N f(x_1,\lambda)...f(x_N,\lambda)$. The same answer can be reached in the chain strategy: the $n$th party simply sends $e_n = y_n f(x_n,\lambda) e_{n-1}$. For the given inputs the procedure is always the same, i.e. $\langle \mathcal{A}_c \rangle_{{\bf y},{\bf x}} = \mathcal{A}_c$. To prove the optimality of this protocol, one follows the proof of Ref. [@EXP_CCP], with the only difference that $x_n$ is an $M$-valued variable now. This, however, does not invalidate any of the steps of [@EXP_CCP], and we will not repeat that proof.
Inserting the product form of $\mathcal{A}_c$ into the average success (\[SUCCESS\]), using the fact that $y_n^2=1$, and summing over all $y_n$’s one obtains: $$(\mathcal{F},\mathcal{A}_c) =
\frac{1}{\mathcal{N}} \sum_{x_1...x_N=0}^{M-1} \! \! \! \! \!
\cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N)
f(x_1,\lambda)...f(x_N,\lambda),$$ which has the same structure as local realistic expression (\[INEQ\_DETER\]). Thus, the highest classically achievable average success is given by a local realistic bound: $\max (\mathcal{F},\mathcal{A}) = (1/\mathcal{N}) B_{LR}(N,M)$.
*Quantum scenario.* In the quantum case participants share a $N$-party entangled state $\rho$. After receiving inputs each party measures $x_n$th observable on the state, where the observables are enumerated as in the Bell inequality (\[INEQUALITY\]). This results in a measurement outcome, $f_n$. Each party sends $e_n = y_n f_n$ to Alice, who then puts as an answer a product $\mathcal{A}_{q} = y_1...y_N f_1 ... f_N$. For the given inputs the average answer reads $\langle \mathcal{A}_{q} \rangle_{{\bf y},{\bf x}} = y_1...y_N \langle f_1 ... f_N \rangle
= y_1...y_N E_{x_1...x_N}^{\rho}$, and the maximal average success is given by a quantum bound of: $$(\mathcal{F},\mathcal{A}_q) =
\frac{1}{\mathcal{N}} \sum_{x_1...x_N=0}^{M-1} \! \! \! \! \!
\cos(\phi_{x_1}^1 + ... + \phi_{x_N}^N)
E_{x_1...x_N}^{\rho}.$$ The average advantage of quantum versus classical protocol can be quantified by a factor $(\mathcal{F},\mathcal{A}_q)/(\mathcal{F},\mathcal{A}_c)$ which is equal to a violation factor, $V(N,M)$, introduced before. Thus, all the states which violate the Bell inequality (including bound entangled state) are a useful resource for the communication complexity task. Optimally one should use the GHZ states $| \psi^{\pm} \rangle$, as they maximally violate the inequality.
Alternatively, one can compare the probabilities of success, $P_{\textrm{correct}}$, in the quantum and classical cases. Clearly, the quantum protocol outperforms the classical ones for every $N$ and every $M$. As an example, in Table \[TABLE\_ADV\] we gather the ratios between quantum and classical success probabilities for small numbers of participants.
$N \backslash M$ 2 3 4 5 $\infty$
------------------ -------- -------- -------- -------- ----------
2 1.1381 1.1196 1.1009 1.1002 1.0909
3 1.3333 1.2919 1.2815 1.2773 1.2709
4 1.3657 1.4395 1.4038 1.4258 1.4192
5 1.6000 1.5582 1.5467 1.5418 1.5336
: The ratio between the probabilities of success in the quantum and classical cases, $P_{\textrm{correct}}^{QM} / P_{\textrm{correct}}^{cl}$, for the communication complexity problem with $N$ observers and $M$ settings. The quantum protocol uses the GHZ state.[]{data-label="TABLE_ADV"}
One can ask about a CCP with no random inputs $y_n$. Since the numbers $x_n$ already represent $\lg M$ bits of information, and only one bit can be communicated, this looks like a plausible candidate for a quantum advantage. However, in such a case a classical answer cannot be put as a product of outcomes of local computations (compare [@EXP_CCP]), and thus there is no Bell inequality which would describe the best classical protocol. Since classical performance of *all* CCPs which can lead to quantum advantage is given by some Bell inequality [@BRUKNER_PRL], the task without $y_n$’s cannot lead to quantum advantage.
Summary
=======
We presented a multisetting Bell inequality, which unifies and generalizes many previous results. Examples of quantum states which violate the inequality were given. It was also proven that all the states with positive partial transposes with respect to all subsystems cannot violate the inequality. Finally, the states which violate it were shown to reduce the communication complexity of the computation of a certain globally defined function. The Bell inequality presented is, to date, the only inequality which incorporates an arbitrary number of settings for an arbitrary number of observers making measurements on two-level systems.
We thank M. [Ż]{}ukowski for valuable discussions. W.L. and T.P. are supported by Foundation for Polish Science and MNiI Grant no. 1 P03B 049 27. The work is part of the VI-th EU Framework programme QAP (Qubit Applications) Contract No. 015848.
[9]{}
J. S. Bell, Physics [**1**]{}, 195 (1964).
J. F. Clauser and A. Shimony, Rep. Prog. Phys. [**41**]{}, 1881 (1978); D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Am. J. Phys. [**58**]{}, 1131 (1990); T. Paterek, W. Laskowski, and M. Żukowski, Mod. Phys. Lett. A [**21**]{}, 111 (2006).
M. A. Nielsen and I. L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, England, 2000).
A. K. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991); V. Scarani and N. Gisin, Phys. Rev. Lett. [**87**]{}, 117901 (2001); A. Acin, N. Gisin, and V. Scarani, Quant. Inf. Comp. [**3**]{}, 563 (2003).
Č. Brukner, M. Żukowski, J.-W. Pan, and A. Zeilinger, Phys. Rev. Lett. [**92**]{}, 127901 (2004).
M. Żukowski, Phys. Lett. A [**177**]{}, 290 (1993).
J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969).
N. D. Mermin, Phys. Rev. Lett. [**65**]{}, 1838 (1990); M. Ardehali, Phys. Rev. A [**46**]{}, 5375 (1992); A. V. Belinskii and D. N. Klyshko, Phys. Usp. [**36**]{}, 653 (1993).
D. M. Greenberger, M. A. Horne, and A. Zeilinger, in [*Bell’s Theorem, Quantum Theory and Conceptions of the Universe*]{}, edited by M. Kafatos (Kluwer Academic, Dordrecht, The Netherlands, 1989).
M. Żukowski, [Č]{}. Brukner, W. Laskowski, and M. Wie[ś]{}niak, Phys. Rev. Lett. [**88**]{}, 210402 (2002).
W. Dür, Phys. Rev. Lett. [**87**]{}, 230402 (2001).
A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996); M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996).
M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. [**80**]{}, 5239 (1998).
A. Peres, Found. Phys, [**29**]{}, 589 (1999).
M. Żukowski and D. Kaszlikowski, Phys. Rev. A [**56**]{}, R1682 (1997).
R. F. Werner and M. M. Wolf, Phys. Rev. A [**64**]{}, 032112 (2001).
M. Żukowski and Č. Brukner, Phys. Rev. Lett. [**88**]{}, 210401 (2002).
D. Kaszlikowski, L. C. Kwek, J. Chen, and C. H. Oh, Phys. Rev. A [**66**]{}, 052309 (2002).
A. Sen (De), U. Sen, and M. Żukowski, Phys. Rev. A [**66**]{}, 062318 (2002).
K. Nagata, Phys. Rev. A [**66**]{}, 064101 (2002).
R. F. Werner and M. M. Wolf, Phys. Rev. A [**61**]{}, 062102 (2000).
A. C.-C. Yao, in [*Proceedings of the 11th Annual ACM Symposium on Theory of Computing*]{} (ACM Press, New York, 1979).
P. Trojek, C. Schmid, M. Bourennane, [Č]{}. Brukner, M. Żukowski, and H. Weinfurter, Phys. Rev. A [**72**]{}, 050305(R) (2005).
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'We report the identification, in an Extreme Ultraviolet Explorer (EUVE) spectrum, of a hot white dwarf companion to the 3rd magnitude late-B star $\theta$ Hya (HR3665, HD79469). This is the second B star$+$white dwarf binary to be conclusively identified; Vennes, Berghöfer and Christian (1997), and Burleigh and Barstow (1998) had previously reported the spectroscopic discovery of a hot white dwarf companion to the B5V star y Pup (HR2875). Since these two degenerate stars must have evolved from main sequence progenitors more massive than their B star companions, they can be used to place observational lower limits on the maximum mass for white dwarf progenitors, and to investigate the upper end of the initial-final mass relation. Assuming a pure hydrogen composition, we constrain the temperature of the white dwarf companion to $\theta$ Hya to lie between 25,000K and 31,000K. We also predict that a third bright B star, 16 Dra (B9.5V), might also be hiding an unresolved hot white dwarf companion.'
author:
- 'M. R. Burleigh'
- 'M. A. Barstow'
date: 'Received ; accepted '
title: '$\theta$ Hya: Spectroscopic identification of a second B star$+$white dwarf binary'
---
Introduction
============
Prior to the extreme ultraviolet (EUV) surveys of the ROSAT Wide Field Camera (WFC, Pye et al. 1995) and NASA’s Extreme Ultraviolet Explorer (EUVE, Bowyer et al. 1996), only a handful of binary systems consisting of a normal star (spectral type K5 or earlier) plus a degenerate white dwarf had been identified. Some of these systems, like the prototype Sirius (A1V$+$DA), are relatively nearby and wide enough that the white dwarf can be readily resolved from its bright companion. Most of these types of binary, however, are all but unidentifiable optically since the normal stellar companion completely swamps the flux coming from the white dwarf. The detection by ROSAT and EUVE of EUV radiation with the spectral signature of a hot white dwarf originating from apparently normal, bright main sequence stars, therefore, gave a clue to the existence of a previously unidentified population of Sirius-type binaries, and around 20 new systems have now been identified (e.g. Barstow et al. 1994, Burleigh, Barstow and Fleming 1997, Burleigh 1998 and Vennes, Christian and Thorstensen 1998, hereafter VCT98). In each case, far-ultraviolet spectra taken with the International Ultraviolet Explorer (IUE) were used to confirm the identifications. This technique proved excellent for finding systems where the normal star is of spectral type $\sim$A5 or later, since the hot white dwarf is actually the brighter component in this wavelength regime ($\sim$1200$-$$\sim$2000[Å]{}). Unfortunately, even at far-UV wavelengths, stars of spectral types early-A, B and O will completely dominate any emission from smaller, fainter, unresolved companions, rendering them invisible to IUE.
$\theta$ Hya (HR3665, HD79469) and y Pup (HR2875) are two bright B stars unexpectedly detected in the ROSAT and EUVE surveys. Their soft X-ray and EUV colours are similar to known hot white dwarfs, so it was suspected that, like several other bright normal stars in the EUV catalogues, they were hiding hot white dwarf companions. However, for these two systems it was, of course, not possible to use IUE or HST to make a positive identification, and instead we had to wait for EUVE’s spectrometers to make a pointed observation of each star. y Pup was observed in 1996, and the formal discovery of its hot white dwarf companion was reported by Vennes, Berghöfer and Christian (1997), and Burleigh and Barstow (1998). $\theta$ Hya was observed by EUVE in February 1998 and the EUV continuum distinctive of a hot white dwarf was detected (Figure 1). This spectrum is presented, analysed and discussed in this letter.
-----------------  --------------  -----------  ------------  -------------------  -------------------  ------------  -----------  -----------  -----------
ROSAT No.          Name            WFC S1       WFC S2        PSPC (0.1-0.4keV)    PSPC (0.4-2.4keV)    EUVE 100Å     EUVE 200Å    EUVE 400Å    EUVE 600Å
RE J0914$+$023     $\theta$ Hya    52$\pm$7     148$\pm$12    124$\pm$24           0.0                  122$\pm$15    0.0          0.0          0.0
-----------------  --------------  -----------  ------------  -------------------  -------------------  ------------  -----------  -----------  -----------
: Count rates for $\theta$ Hya from the ROSAT WFC, ROSAT PSPC and EUVE all-sky surveys.
White dwarf companions to B stars are of significant importance since they must have evolved from massive progenitors, perhaps close to the maximum mass for white dwarf progenitor stars, and they are likely themselves to be much more massive than the mean for white dwarfs in general (0.57M$_\odot$, Finley, Koester and Basri 1997). The value of the maximum mass feasible for producing a white dwarf is a long-standing astrophysical problem. Weidemann (1987) gives the upper limit as 8M$_\odot$ in his semi-empirical initial-final mass relation. Observationally, the limit is best set by the white dwarf companion to y Pup, which must have evolved from a progenitor more massive than B5 (6$-$6.5M$_\odot$).
The main sequence star $\theta$ Hya
===================================
$\theta$ Hya is a V$=$3.88 high proper motion star; [*Hipparcos*]{} measures the proper motion components as 112.57$\pm$1.41 and $-$306.07$\pm$1.20 milli-arcsecs. per year. $\theta$ Hya was originally classified as a $\lambda$ Boo chemically peculiar star, although from ultraviolet spectroscopy Faraggiani, Gerbaldi and Böhm (1990) later concluded that $\theta$ Hya was not in fact chemically peculiar, a finding backed up by Leone and Catanzaro (1998). Their derived abundances from high resolution optical spectroscopy are almost coincident with expected main sequence abundances. The [*SIMBAD*]{} database, Morgan, Harris and Johnson (1953) and Cowley et al. (1969) give the spectral type as B9.5V. VCT98 note that it is a fast rotator (v$_{rot}$sin i$\sim$100 km s$^{-1}$), and that the detection of HeI at 4471[Å]{} also suggests a B star classification.
Detection of EUV radiation from $\theta$ Hya in the ROSAT WFC and EUVE surveys
==============================================================================
The ROSAT EUV and X-ray all-sky surveys were conducted between July 1990 and January 1991; the mission and instruments are described elsewhere (e.g. Trümper 1992, Sims et al. 1990). $\theta$ Hya is associated with the relatively bright WFC source RE J0914$+$023. The same EUV source was later detected in the EUVE all-sky survey (conducted between July 1992 and January 1993). This source is also coincident with a ROSAT PSPC soft X-ray detection. The count rates from all three instruments are given in Table 1. The WFC count rates are taken from the revised 2RE Catalogue (Pye et al. 1995), which was constructed using improved methods for source detection and background screening. The EUVE count rates are taken from the revised Second EUVE Source Catalog (Bowyer et al. 1996). The PSPC count rate was obtained via the World Wide Web from the on-line ROSAT All Sky Survey Bright Source Catalogue maintained by the Max Planck Institute in Germany (Voges et al. 1996) [^1]. As with y Pup (RE J0729$-$388), the EUV and soft X-ray colours and count rate ratios are similar to known hot white dwarfs. The EUV radiation is too strong for it to be the result of UV leakage into the detectors (see the discussion in Burleigh and Barstow 1998). $\theta$ Hya is also only seen in the soft 0.1$-$0.4 keV PSPC band; only one (rather unusual) white dwarf has ever been detected at higher energies (KPD0005$+$5105, Fleming, Werner and Barstow 1993), while most active stars are also hard X-ray sources. Indeed, in a survey to find OB-type stars in the ROSAT X-ray catalogue by Berghöfer, Schmitt and Cassinelli (1996), only three of the detected B stars are not hard X-ray sources: y Pup (confirmed B5V$+$white dwarf), $\theta$ Hya and 16 Dra (B9.5V, see later). Therefore, Burleigh, Barstow and Fleming (1997) and VCT98 suggested that $\theta$ Hya, like y Pup and nearly twenty other bright, apparently normal stars in the EUV catalogues, might be hiding a hot white dwarf companion.
EUVE pointed observation and data reduction
===========================================
$\theta$ Hya was observed by EUVE in dither mode in four separate observations in 1998 February/March for a total exposure time of $\approx$210,000 secs. We have extracted the spectra from the images ourselves using standard IRAF procedures. Our general reduction techniques have been described in earlier work (e.g. Barstow et al. 1997).
The target was detected in both the short (70$-$190[Å]{}) and medium (140$-$380[Å]{}) wavelength spectrometers (albeit weakly), but not the long wavelength (280$-$760[Å]{}) spectrometer. To improve the signal/noise, we have co-added the four separate observations, binned the short wavelength data by a factor four, and the medium wavelength data by a factor 16. The resultant spectrum, shown in Figure 2, reveals the now familiar EUV continuum expected from a hot white dwarf in this spectral region.
The only stars other than white dwarfs whose photospheric EUV radiation has been detected by the ROSAT WFC and EUVE are the bright B giants $\beta$ CMa (B1II$-$III, Cassinelli et al. 1996) and $\epsilon$ CMa (B2II, Cohen et al. 1996). The photospheric continuum of $\epsilon$ CMa is visible down to $\sim$300[Å]{}, although no continuum flux from $\beta$ CMa is visible below the HeI edge at 504[Å]{}. Both stars also have strong EUV and X-ray emitting winds, and in $\epsilon$ CMa emission lines are seen in the short and medium wavelength spectrometers from e.g. high ionisation species of iron. Similarly, strong narrow emission features of e.g. oxygen, nickel and calcium are commonly seen in EUV spectra of active stars and RS CVn systems. Since no such features are visible in the $\theta$ Hya EUVE spectrum, we can categorically rule out a hot wind or a hidden active late-type companion to $\theta$ Hya as an alternative source of the EUV radiation.
Analysis of the hot white dwarf
===============================
--------- ----------- ----------- ----------------- --------------------------
log $g$ $M_{WD}$ $R_{WD}$ $R_{WD}$ ($R_{WD}$/$D$)$^2$
M$_\odot$ R$_\odot$ $\times$10$^6$m [*where D$=$39.5pc*]{}
7.5 0.30 0.017 11.832 9.4243$\times$10$^{-23}$
8.0 0.55 0.013 9.048 5.5111$\times$10$^{-23}$
8.5 0.83 0.009 6.264 2.6414$\times$10$^{-23}$
9.0 1.18 0.006 4.176 1.1740$\times$10$^{-23}$
--------- ----------- ----------- ----------------- --------------------------
: Hamada-Salpeter zero-temperature mass-radius relation
We have attempted to match the EUV spectrum of $\theta$ Hya with a grid of hot white dwarf$+$ISM model atmospheres, in order to constrain the possible atmospheric parameters (temperature and surface gravity) of the degenerate star and the interstellar column densities of HI, HeI and HeII. Unfortunately, there are no spectral features in this wavelength region to give us an unambiguous determination of [*T$_{eff}$*]{} and [*log g*]{}. However, by making a range of assumptions to reduce the number of free parameters in our models, we can place constraints on some of the white dwarf’s physical parameters. Our method is similar to that used in the analysis of the white dwarf companion to y Pup (Burleigh and Barstow 1998).
Firstly, we assume that the white dwarf has a pure-hydrogen atmosphere. This is a reasonable assumption to make, since Barstow et al. (1993) first showed that for [*T$_{eff}$*]{}$<$40,000K hot white dwarfs have an essentially pure-H atmospheric composition. We can then fit a range of models, each fixed at a value of the surface gravity [*log g*]{}. However, before we can do this we need to know the normalisation parameter of each model, which is equivalent to (Radius$_{\it WD}$$/$Distance)$^2$. We can use the [*Hipparcos*]{} parallax of 4.34$\pm$0.97 milli-arcsecs., translating to a distance of 39.5$\pm$1.5 parsecs, together with the Hamada-Salpeter zero-temperature mass-radius relation, to give us the radius of the white dwarf corresponding to each value of the surface gravity (see Table 2).
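As an illustration of this normalisation, a minimal numerical sketch reproducing the ($R_{WD}$/$D$)$^2$ column of Table 2 to within rounding might read as follows; the values of the solar radius and the parsec used below are standard constants, and the variable names are arbitrary.

```python
R_SUN_M = 6.96e8   # solar radius in metres
PC_M = 3.086e16    # one parsec in metres
D = 39.5 * PC_M    # adopted distance to theta Hya in metres

# (log g, R_WD in solar radii) pairs from the Hamada-Salpeter table
rows = [(7.5, 0.017), (8.0, 0.013), (8.5, 0.009), (9.0, 0.006)]

for logg, r_rsun in rows:
    r_m = r_rsun * R_SUN_M        # white dwarf radius in metres
    norm = (r_m / D) ** 2         # model normalisation (R_WD / D)^2
    print(f"log g = {logg}: R_WD = {r_m / 1e6:.3f}e6 m, (R_WD/D)^2 = {norm:.4e}")
```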
--------- -------------------------- ---------------------------- ------------------- -------------------
log $g$   $T_{eff}$ (K) (90% range)   $N_{HI}$ $\times$10$^{18}$ (90% range)   $N_{HeI}$ $\times$10$^{17}$   $N_{HeII}$ $\times$10$^{17}$
7.5 25,800 (25,500$-$26,200) 2.7 (1.1$-$4.9) 3.1 1.1
8.0 26,800 (26,400$-$27,100) 4.6 (3.2$-$6.3) 5.2 1.9
8.5 28,500 (28,200$-$28,900) 6.6 (5.3$-$7.9) 7.4 2.8
9.0 30,800 (30,400$-$31,200) 8.0 (7.0$-$9.1) 9.0 3.3
--------- -------------------------- ---------------------------- ------------------- -------------------
: WD parameters and interstellar column densities
We can also reduce the number of unknown free parameters in the ISM model. From EUVE spectroscopy, Barstow et al. (1997) measured the line-of-sight interstellar column densities of HI, HeI and HeII to a number of hot white dwarfs. They found that the mean H ionisation fraction in the local ISM was 0.35$\pm$0.1, and the mean He ionisation fraction was 0.27$\pm$0.04. From these estimates, and assuming a cosmic H/He abundance, we calculate the ratios in the local ISM to be $N_{HI}$/$N_{HeI}$$=$8.9 and $N_{HeI}$/$N_{HeII}$$=$2.7. We can then fix these column density ratios in our model, leaving us with just two free parameters - temperature and the HI column density.
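For clarity, these ratios follow directly from the quoted ionisation fractions if the cosmic abundance is taken to correspond to a helium-to-hydrogen number ratio of $\approx$0.1 (the value assumed here): $N_{HI}$/$N_{HeI}$$=$(1$-$0.35)/\[0.1$\times$(1$-$0.27)\]$\approx$8.9 and $N_{HeI}$/$N_{HeII}$$=$(1$-$0.27)/0.27$\approx$2.7.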
The model fits at a range of surface gravities from log $g=$7.5$-$9.0 are summarized in Table 3. Note that our range of fitted temperatures is in broad agreement with those of VCT98, who modelled the EUV and soft X-ray photometric data for $\theta$ Hya on the assumption that the source was indeed a hot white dwarf.
Discussion
==========
We have analysed the EUVE spectrum of the B9.5V star $\theta$ Hya which confirms that it has a hot white dwarf companion, and constrains the degenerate star’s temperature to lie between $\approx$25,500K and $\approx$31,000K. This is the second B star$+$hot white dwarf binary to be spectroscopically identified, following y Pup (HR2875), a B5 main sequence star. The white dwarf in the $\theta$ Hya system must have evolved from a progenitor more massive than B9.5V ($\approx$3.4M$_\odot$).
Although EUVE spectra provide us with little information with which to constrain a white dwarf’s surface gravity, and hence its mass, we can use a theoretical initial-final mass relation between main sequence stars and white dwarfs, e.g. that of Wood (1992), to calculate the mass of a white dwarf if the progenitor was only slightly more massive than $\theta$ Hya:
$M_{WD}$$=$Aexp(B$\times$$M_{MS}$)
where A$=$0.49M$_\odot$ and B$=$0.094M$_\odot$$^{-1}$.
For $M_{MS}$$=$3.4M$_\odot$, we find $M_{WD}$$=$0.68M$_\odot$. This would suggest that the surface gravity of the white dwarf is log g$>$8.0.
Data from [*Hipparcos*]{} indicates possible micro-variations in the proper motion of $\theta$ Hya across the sky, suggesting that the binary period may be $\sim$10 years or more. Indeed, VCT98 measured marginal variations in the B star’s radial velocity. Clearly, more measurements at regular intervals in future years might help to pin down the binary period.
A third B star$+$white dwarf binary in the EUV catalogues?
==========================================================
EUVE has now spectroscopically identified two B star$+$hot white dwarf binaries from the EUV all-sky surveys. As mentioned previously in Section 3, in a survey of X-ray detections of OB stars, Berghöfer, Schmitt and Cassinelli (1996) found just three B stars which were soft X-ray sources only: y Pup and $\theta$ Hya, which have hot white dwarf companions responsible for the EUV and soft X-ray emissions, and 16 Dra (B9.5V, $=$HD150100, $=$ADS10129C, V$=$5.53).
16 Dra is one member of a bright resolved triple system (with HD150118, A1V, and HD150117, B9V). Hipparcos parallaxes confirm all three stars lie at the same distance, $\approx$120 parsecs. 16 Dra is also a WFC and EUVE source (RE J1636$+$528), and it is so similar to y Pup and $\theta$ Hya that we predict it also has a hot white dwarf companion, most likely unresolved. Unfortunately, it is a much fainter EUV source than either y Pup or $\theta$ Hya, and would require a significant exposure time to be detected by EUVE’s spectrometers ($\sim$400$-$500 ksecs).
However, we can estimate the approximate temperature of this white dwarf, and the neutral hydrogen column density to 16 Dra, using the ROSAT photometric data points: WFC S1 count rate $=$12$\pm$4 c/ksec, S2$=$46$\pm$11 c/ksec, and PSPC soft band count rate $=$72$\pm$15 c/ksec. We adopt a similar method to the analysis of $\theta$ Hya described earlier, using the Hamada-Salpeter zero-temperature mass-radius relation and the Hipparcos parallax to constrain the normalisation (equivalent to ($R_{WD}$/$D$)$^2$). Although we cannot constrain the value of the surface gravity using this method, we find that the white dwarf’s temperature is likely to be between 25,000$-$37,000K, and the neutral hydrogen column density N$_{HI}$$<$4$\times$10$^{19}$ atoms cm$^{-2}$.
Matt Burleigh and Martin Barstow acknowledge the support of PPARC, UK. We thank Detlev Koester (Kiel) for the use of his white dwarf model atmosphere grids. This research has made use of the [*SIMBAD*]{} database operated by CDS, Strasbourg, France.
Barstow M.A., et al., 1993, MNRAS, 264, 16
Barstow M.A., et al., 1994, MNRAS, 270, 499
Barstow M.A., Dobbie P.D., Holberg J.B., Hubeny I., Lanz T., 1997, MNRAS, 286, 58
Berghöfer T.W., Schmitt J.H.M.M., Cassinelli J.P., 1997, A&AS, 118, 481
Bowyer S., et al., 1996, ApJS, 102, 129
Burleigh M.R., Barstow M.A., Fleming T.A., 1997, MNRAS, 287, 381
Burleigh M.R., 1998, in ‘Ultraviolet Astrophysics Beyond the IUE Final Archive’, ESA Publication SP-413, 229
Burleigh M.R., Barstow M.A., 1998, MNRAS, 295, L15
Cassinelli J.P., et al., 1996, ApJ, 460, 949
Cohen D.H., et al., 1996, ApJ, 460, 506
Cowley A., Cowley C., Jaschek M., Jaschek C., 1969, AJ, 74, 375
Faraggiani R., Gerbaldi M., Böhm C., 1990, A&A, 235, 311
Finley D.S., Koester D., Basri G., 1997, ApJ, 488, 375
Fleming T.A., Werner K., Barstow M.A., 1993, ApJ, 416, 79
Hamada T., Salpeter E.E., 1961, ApJ, 134, 683
Jeffries R.D., Burleigh M.R., Robb R.M., 1996, A&A, 305, L45
Leone F., Catanzaro G., 1998, A&A, 331, 627
Morgan W.W., Harris D.L., Johnson H.L., 1953, ApJ, 118, 92
Pye J.P., et al., 1995, MNRAS, 274, 1165
Sims M.R., et al., 1990, Opt. Eng., 29, 649
Trümper J., 1992, QJRAS, 33, 165
Vennes S., Berghöfer T., Christian D.J., 1997, ApJ, 491, L85
Vennes S., Christian D.J., Thorstensen, J.R., 1998, ApJ, 502, 763 (VCT98)
Voges W., et al., 1996, IAUC 6420
Weidemann V., 1987, A&A, 188, 74
Wood M.A., 1992, ApJ, 386, 539
[^1]: http://www.rosat.mpe-garching.mpg.de/survey/rass-bsc/cat.html
| {
"pile_set_name": "ArXiv"
} |
---
abstract: 'Matter exhibits phases and their transitions. These transitions are classified as first-order phase transitions (FOPTs) and continuous ones. While the latter have a well-established theory of the renormalization group, the former are only qualitatively accounted for by classical theories of nucleation, since their predictions often disagree with experiments by orders of magnitude. A theory to integrate FOPTs into the framework of the renormalization-group theory has been proposed but seems to contradict extant wisdom and lacks numerical evidence. Here we show that universal hysteresis scaling as predicted by the renormalization-group theory emerges unambiguously when the theory is combined intimately with the theory of nucleation and growth in the FOPTs of the paradigmatic two-dimensional Ising model driven by a linearly varying externally applied field below its critical point. This not only provides a new method to rectify the nucleation theories, but also unifies the theories for both classes of transitions, so that FOPTs can be studied using universality and scaling similar to their continuous counterpart.'
author:
- Fan Zhong
title: 'Universal scaling in first-order phase transitions mixed with nucleation and growth'
---
Matter as a many-body system exists in various phases and/or their coexistence and its diversity comes from phase changes. It thus exhibits just phases and their transitions. These transitions are classified as first-order phase transitions (FOPTs) and continuous ones [@Fisher67], the latter including second and higher orders. Whereas the phases can be studied by a well-established framework and the continuous phase transitions have a well-established theory of the renormalization group (RG) that has predicted precise results in good agreement with experiments [@Barmatz], the FOPTs gain a different status in statistical physics.
FOPTs proceed through either nucleation and growth or spinodal decomposition [@Gunton83; @Bray; @Binder2]. Although classical theories of nucleation [@Becker; @Becker1; @Becker2; @Zeldovich; @books; @books1; @books2; @Oxtoby92; @Oxtoby921; @Oxtoby922; @Oxtoby923] and growth [@Avrami; @Avrami1; @Avrami2] correctly account for the qualitative features of a transition, an agreement in the nucleation rate of even several orders of magnitude between theoretical predictions and experimental and numerical results is regarded as a feat [@Oxtoby92; @Oxtoby921; @Oxtoby922; @Oxtoby923; @Filion; @Filion1; @Filion2]. A lot of improvements have thus been proposed and tested in the two-dimensional (2D) Ising model whose exact solution is available. In a multidroplet regime [@Rikvold94] in which many droplets nucleate and grow, by combining with Avrami’s growth law [@Avrami; @Avrami1; @Avrami2], a field-theoretically-corrected nucleation theory [@Langer67; @Gunther; @Harris84; @Prestipino] was shown to produce quite well—with only one adjustable parameter—the results of hysteresis loop areas obtained from Monte Carlo simulations at temperatures below the critical temperature $T_c$ even in the case of a sinusoidally varying applied external field [@Sides98; @Sides99; @Ramos; @Zhongc].
However, it is well-known that classical nucleation theories are not applicable in spinodal decompositions in which the critical droplet for nucleation is of the size of the lattice constant and thus no nucleation is needed [@Gunton83]. In contrast to the mean-field case, for systems with short-range interactions, although sharply defined spinodals that divide the two regimes of the apparently different dynamic mechanisms do not exist [@Gunton83; @Bray; @Binder2], one can nevertheless assume the existence of fluctuation-shifted underlying spinodals called “instability” points. Expanding a usual $\phi^4$ theory for critical phenomena around them below $T_c$ then results in a $\phi^3$ theory for the FOPT due to the lack of the up–down symmetry in the expansion [@Zhongl05; @zhong16]. An RG theory for the FOPT can then be set up in parallel to that for the critical phenomena, giving rise to universality and dynamic scaling characterized by analogous “instability” exponents. The primary qualitative difference is that the nontrivial fixed points of such a theory are imaginary in value and are thus usually considered to be unphysical, though the instability exponents are real. Yet, counter-intuitively, imaginariness is physical in order for the $\phi^{3}$ theory to be mathematically convergent, since the system becomes unstable at the instability points upon renormalization and analytical continuation is necessary [@Zhonge12]. Moreover, the degrees of freedom that need finite free energy costs for nucleation are coarse-grained away with the costs, indicating irrelevancy of nucleation to the scaling [@Zhonge12]. Although no clear evidence of an overall power-law relationship was found for the magnetic hysteresis in a sinusoidally oscillating field in two dimensions [@Thomas; @Sides98; @Sides99], in contrast to previous work [@Char; @Liang16], recently, with proper logarithmic corrections, a dynamic scaling near a temperature other than the equilibrium transition point was again found numerically for the cooling FOPTs in the 2D Potts model [@Pelissetto16]. However, no theoretical explanation was offered for the scaling [@Liang16].
Here, we propose an idea that the instability point is reached when the time scale of the nucleation and growth matches that of the driving arising from the linearly varying field $H$ with a rate $R$. Integrating the theory of nucleation and growth with the $\phi^3$ RG theory of scaling for FOPTs, we are then able to construct a finite-time scaling form for the magnetization. It is found to describe remarkably well the numerical simulations of the 2D Ising model with universal instability exponents and scaling functions for two simulated temperatures below $T_c$ after allowing for a single additional universal logarithmic factor. Because the scaling form contains all essential elements of nucleation and growth including the Boltzmann factor that is the origin of the large discrepancy between nucleation theories and experiments, the scaling provides a method to rectify it. More importantly, our results offer unambiguous evidence for the $\phi^3$ theory and thus one can study the universality and scaling of FOPTs similar to their continuous counterpart.
Crucial in our analysis is the theory of finite-time scaling [@Gong; @Gong1; @Huang; @Feng], whose essence is a constant finite time scale $t_R=\zeta_RR^{-z/r}$ arising from the linear driving, where $z$ and $r$ are dynamic instability exponents and $\zeta_R$ denotes the proportionality coefficient independent of $R$. This single externally imposed time scale enables one to probe effectively a process in which a system takes a long time to equilibrate, as is the present case of nucleation and growth. This is because the system can then readily follow the short time scale instead of the long equilibration one. As a consequence, the system is controlled by the driving and exhibits finite-time scaling, similar to its spatial counterpart, finite-size scaling, in which a system has a smaller size than its correlation length. Moreover, even if crossover occurs when the equilibration time becomes shorter than $t_R$, finite-time scaling can still well describe the situation [@Huang; @Feng]. By contrast, a sinusoidal driving has two controlling parameters, the amplitude and frequency, and thus complicates the process [@Feng; @Zhonge; @Zhonge1].
Let’s start with the nucleation and growth of up spins in a sea of fluctuating down spins in the Ising model at a temperature $T$ below $T_c$. Upon applying a constant up field $H$ to the system, the magnetization $M$ at time $t$ is given by $$M(H,T,t)=M_{\rm eq}(T)-2 M_{\rm eq}(T)\exp\left[-(t/t_0)^{d+1}\right],\label{mt}$$ in the multidroplet regime [@Rikvold94], with a nucleation and growth time scale $t_0=\zeta_0H^{-(K+d)/(d+1)}\exp\{\Xi/[(d+1)H^{d-1}]\}$, where $M_{\rm eq}$ stands for the equilibrium spontaneous magnetization, $\zeta_0(T)=[\Omega_dv^dB/(d+1)]^{-1/(d+1)}$ is a temperature-dependent constant, $\Omega_d(T)$ is a shape factor, $B(T)$ is an adjustable parameter, $v(T)$ is the interface velocity of a growing droplet for a unit applied field in the Lifshitz-Allen-Cahn approximation [@Lifshitz; @Lifshitz1; @Gunton83], $K=3$ for the 2D kinetic Ising model, and $\Xi=\Omega_d\sigma_0^2/(2M_{\rm eq}k_{\rm B}T)$ with $k_{\rm B}$ being the Boltzmann constant and $\sigma_0$ the surface tension along a primitive lattice vector [@Gunther; @Rikvold94; @Harris84].
The central idea is that scaling emerges around the field $H_s$ that satisfies $t_0=t_R$. It divides regimes in which either nucleation and growth or the intrinsic fluctuations governed by the $\phi^3$ fixed point is dominant and thus is identified with the instability point of the theory, which was originally suggested to separate nucleation and growth from spinodal decomposition. This condition results in $$H_s^{d-1}(-\ln R+\kappa \ln H_s+b)=r\Xi/[(d+1)z],\label{hs}$$ with $\kappa\equiv r(K+d)/[z(d+1)]$ and $b\equiv r\ln(\zeta_R/\zeta_0)/z$, which is proportional to the ratio of the coefficients of the two time scales. For the field driven case, $r=z+\beta\delta/\nu$ with $\beta$, $\delta$, and $\nu$ being the instability exponents for the magnetization, the magnetic field, and the correlation length, respectively [@Zhongl05; @zhong16]. The corresponding $M_s$ is given by the magnetization at $H_s$ obtained from Eq. (\[mt\]) with $t$ replaced by $H/R$, i.e., $$M_s=M_{\rm eq}-2 M_{\rm eq}\exp\left[-(\zeta_0R)^{-(d+1)}H_s^{K+2d+1}e^{-\Xi/H_s^{d-1}}\right].\label{ms}$$ Why this is called $M_s$ will become clear shortly.
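To illustrate this matching condition, a minimal numerical sketch that solves $t_0=t_R$ for $H_s$ by simple root finding might read as follows; all parameter values below are assumed purely for illustration and are not the fitted values discussed later.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters only; not the fitted values discussed in the text
d, K = 2, 3                  # spatial dimension and the exponent K of the 2D kinetic Ising model
Xi = 0.51                    # barrier parameter Xi(T); assumed value
zeta0, zetaR = 40.0, 100.0   # coefficients of t_0 and t_R; assumed values
z_over_r = 0.5               # assumed value of z/r entering t_R = zetaR * R**(-z/r)

def log_t0(H):
    """Logarithm of the nucleation-and-growth time scale t_0(H) of Eq. (mt)."""
    return np.log(zeta0) - (K + d) / (d + 1) * np.log(H) + Xi / ((d + 1) * H ** (d - 1))

def H_s(R):
    """Field at which t_0(H) matches the driving time scale t_R."""
    log_tR = np.log(zetaR) - z_over_r * np.log(R)
    return brentq(lambda H: log_t0(H) - log_tR, 1e-4, 2.0)

for R in (1e-6, 1e-5, 1e-4):
    print(f"R = {R:.0e}:  H_s = {H_s(R):.4f}")   # H_s grows with the rate R
```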
Several remarks are in order. First, the instability points so obtained depend on the rate $R$. This is reasonable as they rely on the probing scales, as previous studies have shown [@Binder78; @Kawasaki; @Kaski]. Only in the case in which the first two terms on the left hand side of Eq. (\[hs\]) can be neglected can one arrive at a constant $H_s$. Second, as $R\rightarrow0$, $H_s\rightarrow0$ and $M_s\rightarrow M_{\rm eq}$, viz., the equilibrium transition point and magnetization, rather than the mean-field spinodal since the range of interactions is short. This is again reasonable in view of the new physical meaning of the instability point; because, at the equilibrium transition point, only nucleation and growth is possible though the transition may take a time longer than the age of the universe. Note that $M_s\rightarrow M_{\rm eq}$ instead of the initial state $-M_{\rm eq}$ as $R\rightarrow0$ since nucleation and growth have been considered. Third, if the last two terms on the left hand side of Eq. (\[hs\]) are neglected for sufficiently low $R$, one reaches [@Thomas] $H_s\sim (-\ln R)^{-1/(d-1)}$, which vanishes only for values of $R$ so extremely low that they are not feasible numerically or experimentally [@Sides99; @Zhongc]. As a result, $H_s$ is always finite practically [@Zhonge]. Fourth, the recently found logarithmic time factor [@Pelissetto16] should be an approximate form of Eq. (\[hs\]), as the scaling found there is peculiar in that the normalized energy is not rescaled both in curve crossing and in scaling collapses, different from the usual scaling in critical phenomena [@cumulant]. So should those found numerically in Ref. [@Zhongc].
In the $\phi^3$ theory, near the instability point, scaling exists similar to critical phenomena. Under a linearly varying external field, the finite-time scaling form is [@Zhongl05; @zhong16], $$M(H,T,R)-M_s=R^{\beta/r\nu}f\left((H-H_s)R^{-\beta\delta/r\nu}\right),\label{ftsu}$$ where $f$ is a universal scaling function. A salient feature is the finite $H_s$ and $M_s$. Now, because the instability points are determined by Eqs. (\[hs\]) and (\[ms\]), it is then natural to postulate that the scaling form changes to $$Y(H,T,R)= R^{\beta/r\nu}f\left(XR^{-\beta\delta/r\nu}(-\ln R)^{-3/2}\right),\label{ftsf}$$ with $X(H,T,R)\equiv H^{d-1}(-\ln R+\kappa \ln H+b)-r\Xi/[(d+1)z]$ and $Y(H,T,R)\equiv M(H,T,R)-M_{\rm eq}+2 M_{\rm eq}\exp\{-\zeta_0^{-(d+1)}H^{K+2d+1}R^{-(d+1)}e^{-\Xi/H^{d-1}}\}$. In (\[ftsf\]), we have included a logarithmic factor with an exponent $-3/2$. It may stem from either the $\phi^3$ theory or the neglected higher order terms in the nucleation rate [@Gunther]. At present, we have no definite theory to explain it. However, we find that this single factor is sufficient for good scaling collapses for both temperatures we simulate and is thus universal for at least the model in two dimensions too.
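As a sketch of how a measured magnetization curve would be rescaled according to this form, one might proceed as below; every numerical value in the snippet is a placeholder standing in for the universal exponents and fitted parameters, and the function names are arbitrary.

```python
import numpy as np

# Placeholder values standing in for the exponents and fitted parameters
d, K = 2, 3
Xi, zeta0 = 0.51, 40.0               # assumed
kappa, b = 1.0, 0.5                  # assumed values of kappa and b
rXi_term = 0.3 * Xi                  # assumed value of r*Xi/[(d+1)z]
beta_rnu, betadelta_rnu = 0.1, 0.6   # assumed beta/(r nu) and beta*delta/(r nu)
M_eq = 0.95                          # assumed equilibrium magnetization

def X(H, R):
    """Scaling variable X(H, T, R) of the finite-time scaling form."""
    return H ** (d - 1) * (-np.log(R) + kappa * np.log(H) + b) - rXi_term

def Y(M, H, R):
    """Scaling variable Y(H, T, R): magnetization minus the nucleation-and-growth part."""
    growth = np.exp(-zeta0 ** -(d + 1) * H ** (K + 2 * d + 1)
                    * R ** -(d + 1) * np.exp(-Xi / H ** (d - 1)))
    return M - M_eq + 2 * M_eq * growth

def rescale(M, H, R):
    """Collapsed abscissa and ordinate of one magnetization curve M(H) taken at rate R."""
    x = X(H, R) * R ** -betadelta_rnu * (-np.log(R)) ** -1.5
    y = Y(M, H, R) * R ** -beta_rnu
    return x, y
```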
From the scaling form (\[ftsf\]), at $X=0$, one recovers naturally $H_s$ that obeys Eq. (\[hs\]). Yet, the magnetization now satisfies $$M(H_s,T,R)=M_s(T,R)+ R^{\beta/r\nu}f(0)\label{ftsm}$$ similar to the one obtained from (\[ftsu\]) though $M_s$ defined in Eq. (\[ms\]) is rate dependent. However, at $M=M_s$ or $Y=0$, one can only find $X|_{Y=0}=aR^{\beta\delta/r\nu}(-\ln R)^{3/2}$ for $f(a)=0$, different from the usual form $H|_{M=M_s}=H_s+aR^{\beta\delta/r\nu}$ obtained from Eq. (\[ftsu\]).
We employ the 2D Ising model to verify the scaling form (\[ftsf\]). It contains only two unknown parameters, $\zeta_0$ and $\zeta_R$ or $b$. Although $v$ and $B$ that define $\zeta_0$ were estimated in the model at $T=0.8T_c$ [@Sides99], $B$ was found by adjusting it to match the data without considering scaling. We therefore regard $\zeta_0$ as an adjustable parameter. The other parameters, $\Xi$ and $K$, which crucially affect the nucleation and growth, are known for the same model. $M_{\rm eq}$, along with $\Xi$, $\Omega_d$, and $\sigma_0$, is even exactly known. For the universal exponents, $\delta$ is exactly known to be six [@Cardy85]. This gives $\nu=-5/2$ using the exact result $\beta=1$ for the $\phi^3$ theory [@zhong16]. The former two differ from their three-loop results by five per cent or so. In contrast, $z$ is only estimated to two loop orders [@zhong16]. Yet, it determines almost everything as seen from Eqs. (\[hs\]) and (\[ftsf\]). So, we have to adjust it to find the best results.
The procedure is as follows. Given a $z$, we guess a value of $b$ and solve for $H_s$ from Eq. (\[hs\]), find the corresponding $M(H_s)$ for a series of $R$ from the simulated magnetization curves, and then fit them according to Eqs. (\[ftsm\]) and (\[ms\]). The correct $b$ must lead to the right $\beta/r\nu$ for the given $z$, or to $d + 1$ if the power of $R$ in the exponent instead of $\beta/r\nu$ is regarded as a parameter to be fitted. The fitted $\zeta_0$ can then be plugged into Eq. (\[ftsf\]) to verify the results. Remarkably, the two time-scale coefficients can even be found.
![image](hm433e.eps){width="\linewidth"}
Figure \[rgt\] shows the results for $z = 1.5$. We use $\beta/r\nu$ and $d + 1$ to demonstrate the results of the fits to Eqs. (\[ftsm\]) and (\[ms\]) for two different temperatures. The other ones that are not displayed show similar behaviour. One sees from \[rgt\](a) and \[rgt\](e) that, as the data of large rates $R$ are omitted from the fits, the exponents correctly approach $-0.103$ and $3$. In \[rgt\](e), including smaller rates again drives the exponent away from $3$; we shall come back to this later on. Using the fitted results of $\zeta_0$ and the ranges of rates that produce the correct exponents, we rescale the magnetization curves shown in \[rgt\](b) and \[rgt\](f) according to the scaling form (\[ftsf\]) and plot the results in \[rgt\](d) and \[rgt\](h). The peaks of the rescaled curves stem from the competition between $M$ and the nucleation-and-growth part of $Y$ and lie in the late stages of the transition, as seen in \[rgt\](c) and \[rgt\](g). One sees that the rescaled curves collapse onto each other almost perfectly, even relatively far away from the instability point at $X = 0$ where the $\phi^3$ theory is developed [@Zhongl05; @zhong16], and even for rates beyond the fitted ranges. This strongly validates the scaling form.
Similar scaling collapses appear for $z$ larger than $1.5$, even up to $2.5$ and beyond. We choose $1.5$ because the scaling functions for the two temperatures are then nearly parallel. This can be seen in Figs. \[rgt\](d) and \[rgt\](h), where identical scales are employed. The two rescaled curves are displaced from each other by less than $0.01$ in $f(0)$. This slight difference may result from the neglected higher order terms in the nucleation rate [@Gunther], which may also be a source of the extra logarithmic factor as mentioned. We note, however, that without the logarithmic factor, the rescaled curves already cross at $X = 0$ and $Y(H_s,T,R)R^{-\beta/r\nu}=f(0)$ almost perfectly, in agreement with Eq. (\[ftsm\]), as demonstrated in Figs. \[rgt\](a) and \[rgt\](e). This indicates that at least the predominant contribution of the nucleation and growth has been taken into account in the present theory.
![\[trt0\](Color online) (a) The driving time scale $t_R$ versus $R$ and (b) the nucleation and growth time scale $t_0$ versus the field $H$ for the two nearly identical rates (filled black symbols) in (a) at $T = 0.8T_c$ (squares/orange) and $T \approx 0.6T_c$ (circles/olive). The two horizontal lines are $t_0(H_s) = t_R$, corresponding to the two filled symbols in (a). The two vertical arrows separate the intrinsic fluctuation regime on the left from the nucleation and growth regime on the right. Lines connecting symbols are only a guide to the eye.](trt0433.eps){width="\linewidth"}
Figure \[trt0\] displays the different time scales. From its definition, $t_R$ (or $t_0$) increases as $R$ (or $H$) decreases and diverges as $R$ (or $H$) vanishes. However, $t_0$ climbs exponentially faster than $t_R$, as can be seen in the figure. This implies that there always exists a finite-time regime in which the driving dominates the dynamics, no matter how small the driving rate is. From the dependences of $t_R$ and $t_0$ on $R$ and $H$, it is evident that $H_s$ increases with $R$. As a consequence, large rates drive the transition to take place at large fields, as exemplified in Figs. \[rgt\](b) and (f). In addition, because the free-energy cost for nucleation increases significantly as $T$ is lowered, $t_0$ increases rapidly as $T$ decreases, though $\zeta_0$ and $\zeta_R$ only change moderately, from respectively $36.3$ and $107.4$ at $T = 0.8T_c$ to $59.9$ and $80.5$ at $T \approx 0.6T_c$, with opposite temperature dependences, as Fig. \[trt0\] displays. Therefore, the transition occurs at a larger field and the hysteresis grows as $T$ is lowered, as can also be seen in Figs. \[rgt\](b) and (f).
![\[hsms\](Color online) The instability points and the magnetization at $H_s$ for (a) $T = 0.8T_c$ and (b) $T \approx 0.6T_c$. $M(H_s)$ (squares) lies on the magnetization curves (lines) whereas $M_s$ (circles) does not. In (a) and (b), the field rates $R$ increase from left to right. Three additional larger rates are drawn for both temperatures as compared to Figs. \[rgt\](b) and (f). Two additional smaller rates are also plotted in (b). The color codes are identical to those of Fig. \[rgt\]. Note that the instability point of the smallest rate in (b) exceeds the magnetization at $H_s$.](hsms433.eps){width="\linewidth"}
Figure \[hsms\] illustrates the magnetization at $H_s$, $M(H_s)$, and the instability points $(H_s, M_s)$. Their differences are just $f(0)R^{\beta/r\nu}$ from Eq. (\[ftsm\]). One sees that scaling and universality persist even though the instability points appear somewhat far away. It is clear that $H_s$ decreases while $M_s$ increases as $R$ is reduced, as expected. A unique feature is that, for the low temperature, $M(H_s)$ and $M_s$ rise sharply for low $R$, and thus low $H_s$, and cross each other. We believe this is the reason why the small rates deviate from scaling at the temperature shown in Fig. \[rgt\](e). There are two possible causes. One is that the nucleation barrier is large and the transition fluctuates a lot. Even though we have $20$ million samples, the average magnetization may still exhibit sizable fluctuations. Another possible limitation to the accuracy of $M$ may come from finite-size effects. Such a small shift in the $H$ position can thus give rise to a large deviation in $M(H_s)$ in comparison with the theoretical $H_s$. For the large rates, their magnetizations may be too far from equilibrium to show scaling. Indeed, they are found to depend slightly on the initial state.
In addition, owing to the difference between $M(H_s)$ and $M_s$, the time scale coefficient we have found is different from the value $5.59$ obtained from its definition at $T = 0.8T_c$. Conversely, the present value of $36.3$ leads back to $B = 0.000 092 0$, about $270$ times smaller than $0.025 15$. This is not unreasonable, as deviations of orders of magnitude are common in the field. For example, to fit the results at the same temperature in the single droplet regime, $B$ must be more than $2 000$ times larger [@Zhongc].
We have constructed and verified a theory for the dynamics of first-order phase transitions by integrating the theory of nucleation and growth with the $\phi^3$ RG theory for dynamic scaling and universality in first-order phase transitions and the theory of finite-time scaling. The theory relies on the time scale of nucleation and growth and the time scale of driving and offers a new physical interpretation of the instability points and different regimes in the dynamics. On the one hand, despite being interwoven with nonuniversal nucleation and growth, scaling and universality have been unambiguously verified in the 2D Ising model below its critical temperature. As a consequence, first-order phase transitions can be studied similar to their continuous counterpart and the theories for both kinds of transitions can be unified. On the other hand, the intimate relationship with scaling and universality provides a new way to accurately determine nucleation and growth.
I thank Xuanmin Cao and Weilun Yuan for their useful discussions. This work was supported by National Natural Science Foundation of China (Grant No. 11575297).
[99]{} M. E. Fisher, Physics [**3**]{}, 255 (1967). M. Barmatz, I. Hahn, J. A. Lipa, and R. V. Duncan, Rev. Mod. Phys. [**79**]{}, 1 (2007). J. D. Gunton, M. San Miguel, and P. S. Sahni, in [*Phase Transitions and Critical Phenomena*]{}, eds. C. Domb and J. L. Lebowitz Vol. 8, 267 (Academic, London, 1983). A. J. Bray, Adv. Phys. [**43**]{}, 357 (1994). K. Binder and P. Fratzl, in [*Phase Transformations in Materials*]{}, ed. G. Kostorz, 409 (Wiley, Weinheim, 2001). M. Volmer and A. Weber, Z. Phys. Chem. (Leipzig) [**119**]{}, 277 (1926). L. Farkas, [*ibid.*]{} [**125**]{}, 236 (1927). R. Becker and W. Döring, Ann. Phys. (Leipzig) [**24**]{}, 719 (1935). Ya. B. Zeldovich, Acta Physicochim. USSR [**18**]{}, 1 (1943). 251 (1990).F. F. Abraham, [*Homogeneous Nucleation Theory*]{} (Academic, New York, 1974). P. Debenedetti, [*Metastable Liquids*]{} (Princeton University Press, Princeton, NJ 1996). D. Kashchiev, [*Nucleation: Basic Theory with Applications*]{} (Butterworth-Heinemann, Oxford, 2000).
D. W. Oxtoby, Acc. Chem. Res. [**31**]{}, 91 (1998). J. D. Gunton, J. Stat. Phys. [**95**]{}, 903 (1999). S. Auer and D. Frenkel, Annu. Rev. Phys. Chem. [**55**]{}, 333 (2004). R. P. Sear, J. Phys.: Condens. Matter [**19**]{}, 033101 (2007).
A. N. Kolmogorov, Bull. Acad. Sci. USSR, Class Sci., Math. Nat. [**3**]{}, 355 (1937) W. A. Johnson and P. A. Mehl, Trans. Am. Inst. Min. Metall. Eng. [**135**]{}, 416 (1939). M. Avrami, J. Chem. Phys. [**7**]{}, 1103 (1939). S. Auer and D. Frenkel, Nature (London) [**409**]{}, 1020 (2001). T. Kawasaki and H. Tanaka, Proc. Nat. Acad. Sci. [**107**]{}, 14036 (2010). L. Filion, R. Ni, D. Frenkel, and M. Dijkstra, J. Chem. Phys. [**134**]{}, 134901 (2011). P. A. Rikvold, H. Tomita, S. Miyashita, and S. W. Sides, Phys. Rev. E [**49**]{}, 5080 (1994). J. S. Langer, Ann. Phys. (N. Y.) [**41**]{}, 108 (1967).
N. J. Günther, D. J. Wallace, and D. A. Nicole, J. Phys. A [**13**]{}, 1755 (1980). C. K. Harris, J. Phys. A [**17**]{}, L143 (1984). S. Prestipino, A. Laio, and E. Tosatti, Phys. Rev. Lett. [**108**]{}, 225701 (2012).
S. W. Sides, P. A. Rikvold, and M. A. Novotny, Phys. Rev. E [**57**]{}, 6512 (1998). S. W. Sides, P. A. Rikvold, and M. A. Novotny, Phys. Rev. E [**59**]{}, 2710 (1999). R. A. Ramos, P. A. Rikvold, and M. A. Novotny, Phys. Rev. B [**59**]{}, 9053 (1999). F. Zhong, arXiv:1704.03350 (2017).
F. Zhong and Q. Z. Chen, Phys. Rev. Lett. [**95**]{}, 175701 (2005). F. Zhong, Front. Phys. [**12**]{}, 126402 (2017) \[arXiv1205.1400 (2012)\].
F. Zhong, Phys. Rev. E [**86**]{}, 022104 (2012). P. B. Thomas and D. Dhar, J. Phys. A [**26**]{}, 3973 (1993). For a review, see K. Chakrabarti and M. Acharyya, Rev. Mod. Phys. [**71**]{}, 847 (1998). N. Liang and F. Zhong, Phys. Rev. E [**95**]{}, 032124 (2017). A. Pelissetto and E. Vicari, Phys. Rev. Lett. [**118**]{}, 030602 (2017).
S. Gong, F. Zhong, X. Huang, and S. Fan, New J. Phys.[**12**]{}, 043036 (2010). F. Zhong, in [*Applications of Monte Carlo Method in Science and Engineering*]{}. ed. S. Mordechai, 469 (Intech, 2011). Available at http://www.intechopen.com/books/applications-of-monte-carlo-method-in-science-and-engineering/finite-time-scaling-and-its-applications-to-continuous-phase-transitions. Y. Huang, S. Yin, B. Feng, and F. Zhong, Phys. Rev. B [**90**]{}, 134108 (2014). B. Feng, S. Yin, and F. Zhong, Phys. Rev. B [**94**]{}, 144103 (2016). Zhong Fan, Zhang Jinxiu, and Liu Xiao, Phys. Rev. E [**52**]{}, 1399 (1995). F. Zhong, Phys. Rev. B [**66**]{}, 060401(R) (2002).
I. Lifshitz, Sov. Phys. JETP [**15**]{}, 939-942 (1962) \[Zh. Eksp. Teor. Fiz. [**42**]{}, 1354 (1962)\]. S. Allen and J. Cahn, Acta Metall. [**27**]{}, 1084 (1979).
C. Billoted and K. Binder, Z. Phys. B [**32**]{}, 195 (1979). K. Kawasaki, T. Imaeda, and J. D. Gunton, in [*Perspectives in Statistical Physics*]{}, ed. H. J. Raveché, 201 (North Holland, Amsterdam, 1981). K. Kaski, K. Binder, and J. D. Gunton, Phys. Rev. B [**29**]{}, 3996 (1984). K. Binder, Phys. Rev. Lett. [**47**]{}, 693 (1981); Z. Phys. B [**43**]{}, 119 (1981). J. L. Cardy, Phys. Rev. Lett. [**54**]{}, 1354 (1985).
---
author:
-
title: Higgs Physics
---
Introduction
============
A major goal of the particle physics program at the high-energy frontier, currently being pursued at the CERN Large Hadron Collider (LHC), is to unravel the nature of electroweak symmetry breaking (EWSB). While the existence of the massive electroweak gauge bosons ($W^\pm,Z$), together with the successful description of their behavior by non-abelian gauge theory, requires some form of EWSB to be present in nature, the underlying dynamics remained unknown for several decades. An appealing theoretical suggestion for such dynamics is the Higgs mechanism [@higgs-mechanism], which implies the existence of one or more Higgs bosons (depending on the specific model considered). Therefore, the search for a Higgs boson was considered a major cornerstone of the physics program of the LHC.
The spectacular discovery of a Higgs-like particle with a mass around $\MH \simeq \simMH \gev$, which has been announced by ATLAS [@ATLASdiscovery] and CMS [@CMSdiscovery], marks a milestone of an effort that has been ongoing for almost half a century and opens up a new era of particle physics. Both ATLAS and CMS reported a clear excess in the two-photon channel, as well as in the $ZZ^{(*)}$ channel. The discovery was further corroborated, though not with high significance, by the $WW^{(*)}$ channel and by the final Tevatron results [@TevHiggsfinal]. The latest ATLAS/CMS results can be found in .
Many theoretical models employing the Higgs mechanism in order to account for electroweak symmetry breaking have been studied in the literature, of which the most popular ones are the Standard Model (SM) [@sm] and the Minimal Supersymmetric Standard Model (MSSM) [@mssm]. The newly discovered particle can be interpreted as the SM Higgs boson. The MSSM has a richer Higgs sector, containing two neutral $\cp$-even, one neutral $\cp$-odd and two charged Higgs bosons. The newly discovered particle can also be interpreted as the light (or the heavy) $\cp$-even state [@Mh125]. Among the alternative theoretical models beyond the SM and the MSSM, the most prominent are the Two Higgs Doublet Model (THDM) [@thdm], non-minimal supersymmetric extensions of the SM (e.g. extensions of the MSSM by an extra singlet superfield [@NMSSM-etc]), little Higgs models [@lhm] and models with more than three spatial dimensions [@edm].
We will discuss the Higgs boson sector in the SM and the MSSM. This includes their agreement with the recently discovered particle at $\sim \simMH \gev$ and the searches for supersymmetric (SUSY) Higgs bosons at the LHC. While the LHC, after the discovery of a Higgs-like boson, will be able to measure some of its properties, a “cleaner” experimental environment, such as at the ILC, will be needed to measure all Higgs boson characteristics [@lhcilc; @lhc2fc; @lhc2tsp].
The SM Higgs
==============
Higgs: Why and How?
----------------
We start by looking at one of the simplest Lagrangians, that of QED: $$\begin{aligned}
\cL_{\rm QED} &= -\ed{4} F_{\mu\nu} F^{\mu\nu}
+ \bar\psi (i \ga^\mu D_\mu - m) \psi~.\end{aligned}$$ Here $D_\mu$ denotes the covariant derivative $$\begin{aligned}
D_\mu &= \partial_\mu + i\,e\,A_\mu~.\end{aligned}$$ $\psi$ is the electron spinor, and $A_\mu$ is the photon vector field. The QED Lagrangian is invariant under the local $U(1)$ gauge symmetry, $$\begin{aligned}
\psi &\to e^{-i\al(x)}\psi~, \\
A_\mu &\to A_\mu + \ed{e} \partial_\mu \al(x)~.
\label{gaugeA}\end{aligned}$$ Introducing a mass term for the photon, $$\begin{aligned}
\cL_{\rm photon~mass} &= \edz m_A^2 A_\mu A^\mu~,\end{aligned}$$ however, is not gauge-invariant. Applying the gauge transformation (\[gaugeA\]) yields $$\begin{aligned}
\edz m_A^2 A_\mu A^\mu &\to \edz m_A^2 \KKL
A_\mu A^\mu + \frac{2}{e} A^\mu \partial_\mu \al
+ \ed{e^2} \partial_\mu \al \, \partial^\mu \al \KKR~.\end{aligned}$$
A way out is the Higgs mechanism [@higgs-mechanism]. The simplest implementation uses one elementary complex scalar Higgs field $\Phi$ that has a vacuum expectation value (vev) $v$ that is constant in space and time. The Lagrangian of the new Higgs field reads $$\begin{aligned}
\cL_\Phi &= \cL_{\Phi, {\rm kin}} + \cL_{\Phi, {\rm pot}}\end{aligned}$$ with $$\begin{aligned}
\cL_{\Phi, {\rm kin}} &= (D_\mu \Phi)^* \, (D^\mu \Phi)~, \\
-\cL_{\Phi, {\rm pot}} &= V(\Phi) = \mu^2 |\Phi|^2 + \la |\Phi|^4~.\end{aligned}$$ Here $\la$ has to be chosen positive to have a potential bounded from below. $\mu^2$ can be either positive or negative; as will be shown below, $\mu^2 < 0$ yields the desired vev. The complex scalar field $\Phi$ can be parametrized by two real scalar fields $\phi$ and $\eta$, $$\begin{aligned}
\Phi(x) &= \ed{\wz} \phi(x) e^{i \eta(x)}~,\end{aligned}$$ yielding $$\begin{aligned}
%V(\Phi) &\sim
V(\phi) &= \frac{\mu^2}{2} \phi^2 + \frac{\la}{4} \phi^4~.\end{aligned}$$ Minimizing the potential one finds $$\begin{aligned}
\frac{{\rm d}V}{{\rm d}\phi}_{\big| \phi = \phi_0} &=
\mu^2 \phi_0 + \la \phi_0^3 \stackrel{!}{=} 0~.\end{aligned}$$ Only for $\mu^2 < 0$ does this yield the desired non-trivial solution $$\begin{aligned}
\phi_0 &= \sqrt{\frac{-\mu^2}{\la}} \KL = \langle \phi \rangle =: v \KR~.\end{aligned}$$ The picture simplifies further by going to the “unitary gauge”, $\al(x) = -\eta(x)/v$, which yields a real-valued $\Phi$ everywhere. The kinetic term now reads $$\begin{aligned}
(D_\mu \Phi)^* \, (D^\mu \Phi) &\to
\edz (\partial_\mu \phi)^2 + \edz e^2 q^2 \phi^2 A_\mu A^\mu~,
\label{LphiA}\end{aligned}$$ where $q$ is the charge of the Higgs field, which can now be expanded around its vev, $$\begin{aligned}
\phi(x) &= v \; + \; H(x)~.
\label{vH}\end{aligned}$$ The remaining degree of freedom, $H(x)$, is a real scalar boson, the Higgs boson. The Higgs boson mass and self-interactions are obtained by inserting (\[vH\]) into the Lagrangian (neglecting a constant term), $$\begin{aligned}
-\cL_{\rm Higgs} &= \edz \mH^2 H^2 + \frac{\kappa}{3!} H^3
+ \frac{\xi}{4!} H^4~,\end{aligned}$$ with $$\begin{aligned}
\mH^2 = 2 \la v^2, \quad
\kappa = 3 \frac{\mH^2}{v}, \quad
\xi = 3 \frac{\mH^2}{v^2}~.\end{aligned}$$ Similarly, (\[vH\]) can be inserted into (\[LphiA\]), yielding (neglecting the kinetic term for $\phi$), $$\begin{aligned}
\cL_{\rm Higgs-photon} &= \edz m_A^2 A_\mu A^\mu + e^2 q^2 v H A_\mu A^\mu
+ \edz e^2 q^2 H^2 A_\mu A^\mu\end{aligned}$$ where the second and third terms describe the interaction between the photon and one or two Higgs bosons, respectively, and the first term is the photon mass term, $$\begin{aligned}
\mA^2 &= e^2 q^2 v^2~.
\label{mA}\end{aligned}$$ Another important feature can be observed: the coupling of the photon to the Higgs boson is proportional to its own mass squared.
Similarly a gauge invariant Lagrangian can be defined to give mass to the chiral fermion $\psi = (\psi_L, \psi_R)^T$, $$\begin{aligned}
\cL_{\rm fermion~mass} &= y_\psi \psi_L^\dagger \, \Phi \, \psi_R + {\rm c.c.}~,\end{aligned}$$ where $y_\psi$ denotes the dimensionless Yukawa coupling. Inserting $\Phi(x) = (v + H(x))/\wz$ one finds $$\begin{aligned}
\cL_{\rm fermion~mass} &= m_\psi \psi_L^\dagger \psi_R
+ \frac{m_\psi}{v} H\, \psi_L^\dagger \psi_R
+ {\rm c.c.}~,\end{aligned}$$ with $$\begin{aligned}
m_\psi &= y_\psi \frac{v}{\wz}~.\end{aligned}$$ Again an important feature can be observed: by construction the coupling of the fermion to the Higgs boson is proportional to its own mass $m_\psi$.
The “creation” of a mass term can be viewed from a different angle (see also ). The interaction of the gauge field or the fermion field with the scalar background field, i.e. the vev, shifts the masses of these fields from zero to non-zero values. This is shown graphically below for the gauge boson (a) and the fermion (b) field.
*\[Diagrams: repeated insertions of the vev $v$ into (a) the gauge boson propagator $V$ and (b) the fermion propagator $f$, generating the mass terms.\]*
The shift in the propagators reads (with $p$ being the external momentum and $g = e q$): $$\begin{aligned}
&(a) \; &\ed{p^2} \; \to \; \ed{p^2} +
\sum_{k=1}^{\infty} \ed{p^2} \KKL \KL \frac{g v}{2} \KR \ed{p^2} \KKR^k
&= \ed{p^2 - m_V^2} %\quad
{\rm ~with~} m_V^2 = g^2 \frac{v^2}{4} ~, \\
&(b) \; &\ed{\pslash} \; \to \; \ed{\pslash} +
\sum_{k=1}^{\infty} \ed{\pslash} \KKL \KL \frac{y_\psi v}{2} \KR
\ed{\pslash} \KKR^k
&= \ed{\pslash - m_\psi} %\quad
{\rm ~with~} m_\psi = y_\psi \frac{v}{\wz} ~.\end{aligned}$$
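The resummation in (a) and (b) is just a geometric series; as a quick cross-check (an illustration only, with the net mass insertion written generically as $m^2$), it can be verified symbolically:

```python
import sympy as sp

p2, m2 = sp.symbols('p2 m2', positive=True)

# geometric series of repeated mass insertions m2 into the massless propagator:
# 1/p2 * sum_k (m2/p2)^k = 1/p2 * 1/(1 - m2/p2)
resummed = (1 / p2) / (1 - m2 / p2)
print(sp.simplify(resummed - 1 / (p2 - m2)))   # -> 0, i.e. the massive propagator
```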
SM Higgs Theory
---------------
We now turn to the electroweak sector of the SM, which is described by the gauge symmetry $SU(2)_L \times U(1)_Y$. The bosonic part of the Lagrangian is given by $$\begin{aligned}
\cL_{\rm bos} &= -\ed{4} B_{\mu\nu} B^{\mu\nu}
- \ed{4} W_{\mu\nu}^a W^{\mu\nu}_a
+ |D_\mu \Phi|^2 - V(\Phi), \\
V(\Phi) &= \mu^2 |\Phi|^2 + \la |\Phi|^4~.\end{aligned}$$ $\Phi$ is a complex scalar doublet with charges $(2, 1)$ under the SM gauge groups, $$\begin{aligned}
\Phi &= \VL \phi^+ \\ \phi^0 \VR~,\end{aligned}$$ and the electric charge is given by $Q = T^3 + \edz Y$, where $T^3$ is the third component of the weak isospin. We furthermore have $$\begin{aligned}
D_\mu &= \partial_\mu + i g \frac{\tau^a}{2} W_{\mu\,a}
+ i g^\prime \frac{Y}{2} B_\mu ~, \\
B_{\mu\nu} &= \partial_\mu B_\nu - \partial_\nu B_\mu ~, \\
W_{\mu\nu}^a &= \partial_\mu W_\nu^a - \partial_\nu W_\mu^a
- g f^{abc} W_{\mu\,b} W_{\nu\,c}~.\end{aligned}$$ $g$ and $g^\prime$ are the $SU(2)_L$ and $U(1)_Y$ gauge couplings, respectively, $\tau^a$ are the Pauli matrices, and $f^{abc}$ are the $SU(2)$ structure constants.
Choosing $\mu^2 < 0$, the minimum of the Higgs potential is found at $$\begin{aligned}
\langle \Phi \rangle &= \ed{\wz} \VL 0 \\ v \VR
\quad {\rm with} \quad
v:= \sqrt{\frac{-\mu^2}{\la}}~.\end{aligned}$$ $\Phi(x)$ can now be expressed through the vev, the Higgs boson and three Goldstone bosons $\phi_{1,2,3}$, $$\begin{aligned}
\Phi(x) &= \ed{\wz} \VL \phi_1(x) + i \phi_2(x) \\
v + H(x) + i \phi_3(x) \VR~.\end{aligned}$$ Diagonalizing the mass matrices of the gauge bosons, one finds that the three massless Goldstone bosons are absorbed as the longitudinal components of the three massive gauge bosons, $W_\mu^\pm, Z_\mu$, while the photon $A_\mu$ remains massless, $$\begin{aligned}
W_\mu^\pm &= \ed{\wz} \KL W_\mu^1 \mp i W_\mu^2 \KR ~,\\
Z_\mu &= \cw W_\mu^3 - \sw B_\mu ~,\\
A_\mu &= \sw W_\mu^3 + \cw B_\mu ~.\end{aligned}$$ Here we have introduced the weak mixing angle $\theta_W = \arctan(g^\prime/g)$, $\sw := \sin \theta_W$, $\cw := \cos \theta_W$. The Higgs–gauge boson interaction Lagrangian reads, $$\begin{aligned}
\cL_{\rm Higgs-gauge} &= \KKL \MW^2 W_\mu^+ W^{-\,\mu}
+ \edz \MZ^2 Z_\mu Z^\mu \KKR
\KL 1 + \frac{H}{v} \KR^2 \non \\
&\quad - \edz \MH^2 H^2 - \frac{\kappa}{3!} H^3 - \frac{\xi}{4!} H^4~,\end{aligned}$$ with $$\begin{aligned}
\MW &= \edz g v, \quad
\MZ = \edz \sqrt{g^2 + g^{\prime 2}} \; v, \\
(\MHSM := ) \; \MH &= \sqrt{2 \la}\; v, \quad
\kappa = 3 \frac{\MH^2}{v}, \quad
\xi = 3 \frac{\MH^2}{v^2}~.\end{aligned}$$ From the measurement of the gauge boson masses and couplings one finds $v \approx 246 \gev$. Furthermore, the two massive gauge boson masses are related via $$\begin{aligned}
\frac{\MW}{\MZ} &= \frac{g}{\sqrt{g^2 + g^{\prime 2}}} \; = \; \cw~.\end{aligned}$$
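For orientation, the quoted value of $v$ follows numerically from the Fermi constant via the standard relation $v = (\sqrt{2}\,G_F)^{-1/2}$ (not derived here); a short check, with rounded input values taken as assumptions:

```python
import math

G_F = 1.1663787e-5                    # Fermi constant in GeV^-2
v = (math.sqrt(2) * G_F) ** -0.5
print(f"v  = {v:.1f} GeV")            # ~246.2 GeV

M_W, M_Z = 80.4, 91.19                # GeV
print(f"cw = {M_W / M_Z:.3f}")        # tree-level relation M_W/M_Z = cos(theta_W)
```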
We now turn to the fermion masses, where we take the top- and bottom-quark masses as a representative example. The Higgs–fermion interaction Lagrangian reads $$\begin{aligned}
\label{SMfmass}
\cL_{\rm Higgs-fermion} &= y_b Q_L^\dagger \, \Phi \, b_R \; + \;
y_t Q_L^\dagger \, \Phi_c \, t_R + {\rm ~h.c.}\end{aligned}$$ $Q_L = (t_L, b_L)^T$ is the left-handed $SU(2)_L$ doublet. Going to the “unitary gauge”, the Higgs field can be expressed as $$\begin{aligned}
\Phi(x) &= \ed{\wz} \VL 0 \\ v + H(x) \VR~,
\label{SMPhi}\end{aligned}$$ and it is obvious that this doublet can give masses only to the bottom(-type) fermion(s). A way out is the definition of $$\begin{aligned}
\Phi_c &= i \si^2 \Phi^* \; = \; \ed{\wz} \VL v + H(x) \\ 0 \VR~,
\label{SMPhic}\end{aligned}$$ which is employed to generate the top(-type) mass(es) in (\[SMfmass\]). Inserting (\[SMPhi\]), (\[SMPhic\]) into (\[SMfmass\]) yields $$\begin{aligned}
\cL_{\rm Higgs-fermion} &= \mb \bar b b \KL 1 + \frac{H}{v} \KR
+ \mt \bar t t \KL 1 + \frac{H}{v} \KR\end{aligned}$$ where we have used $\bar \psi \psi = \psi_L^\dagger \psi_R + \psi_R^\dagger \psi_L$ and $\mb = y_b v/\wz$, $\mt = y_t v/\wz$.
The mass of the SM Higgs boson, $\MHSM$, is in principle a free parameter of the model. However, it is possible to derive bounds on $\MHSM$ from theoretical considerations [@RGEla1; @RGEla2; @RGEla3] and from experimental precision data [@lepewwg; @gfitter].
Evaluating loop diagrams of the type shown below yields the renormalization group equation (RGE) for $\la$, $$\begin{aligned}
\frac{{\rm d}\la}{{\rm d} t} &=
\frac{3}{8 \pi^2} \KKL \la^2 + \la y_t^2 - y_t^4
+ \ed{16} \KL 2 g^4 + (g^2 + g^{\prime 2})^2 \KR \KKR~,
\label{RGEla}\end{aligned}$$ with $t = \log(Q^2/v^2)$, where $Q$ is the energy scale.
*\[Diagrams: one-loop contributions to the running of $\la$ between external Higgs legs $H$ — the quartic self-coupling $\la$, a Higgs loop, and a top-quark loop $t$.\]*
For large $\MH^2 \propto \la$, Eq. (\[RGEla\]) reduces to $$\begin{aligned}
\frac{{\rm d} \la}{{\rm d} t} &= \frac{3}{8 \pi^2} \la^2 \quad
\Rightarrow \quad \la(Q^2) = \frac{\la(v^2)}
{1 - \frac{3 \la(v^2)}{8 \pi^2} \log \KL \frac{Q^2}{v^2} \KR}~.\end{aligned}$$ For $\frac{3 \la(v^2)}{8 \pi^2} \log \KL \frac{Q^2}{v^2} \KR = 1$ one finds that $\la$ diverges (it runs into a “Landau pole”). Requiring $\la(\La) < \infty$ yields an upper bound on $\MH^2$, depending on the scale $\La$ up to which the Landau pole should be avoided, $$\begin{aligned}
\la(\La) < \infty \quad \Rightarrow \quad
\MH^2 \le \frac{8 \pi^2 v^2}{3 \log \KL \frac{\La^2}{v^2} \KR}~.
\label{MHup}\end{aligned}$$
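As a rough numerical illustration of (\[MHup\]) (one-loop running only, with $v = 246 \gev$ assumed), the upper bound can be evaluated for a few cut-off scales:

```python
import math

v = 246.0                                        # GeV

def MH_upper_bound(Lam):
    """Eq. (MHup): MH^2 <= 8 pi^2 v^2 / (3 log(Lam^2 / v^2))."""
    return math.sqrt(8 * math.pi**2 * v**2 / (3 * math.log(Lam**2 / v**2)))

for Lam in (1e4, 1e10, 1e16):                    # cut-off scale in GeV
    print(f"Lambda = {Lam:.0e} GeV  ->  MH <~ {MH_upper_bound(Lam):.0f} GeV")
```

For a cut-off at the GUT scale this crude one-loop estimate gives roughly $160 \gev$, consistent with the upper edge of the window quoted below.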
For small $\MH^2 \propto \la$, on the other hand, Eq. (\[RGEla\]) reduces to $$\begin{aligned}
\frac{{\rm d} \la}{{\rm d} t} &= \frac{3}{8 \pi^2}
\KKL -y_t^4 + \ed{16} \KL 2 g^4 + (g^2 + g^{\prime 2})^2 \KR \KKR \\
\Rightarrow \quad \la(Q^2) &= \la(v^2) \frac{3}{8 \pi^2}
\KKL -y_t^4 + \ed{16} \KL 2 g^4 + (g^2 + g^{\prime 2})^2 \KR \KKR
\log\KL\frac{Q^2}{v^2}\KR~.\end{aligned}$$ Demanding $V(v) < V(0)$, corresponding to $\la(\La) > 0$ one finds a lower bound on $\MH^2$ depending on $\La$, $$\begin{aligned}
\la(\La) > 0 \quad \Rightarrow \quad
\MH^2 \; > \; \frac{v^2}{4 \pi^2}
\KKL -y_t^4 + \ed{16} \KL 2 g^4 + (g^2 + g^{\prime 2})^2 \KR \KKR
\log\KL\frac{\La^2}{v^2}\KR~.
\label{MHlow}\end{aligned}$$
The combination of the upper bound (\[MHup\]) and the lower bound (\[MHlow\]) on $\MH$ is shown in the figure below. Requiring validity of the SM up to the GUT scale yields a limit on the SM Higgs boson mass of $130 \gev \lsim \MHSM \lsim 180 \gev$.
![image](higgs_sm_bounds1){height="6cm"} ![image](higgs_sm_bounds2){height="5cm"}
Predictions for a SM Higgs-boson at LHC {#sec:SMHiggs}
----------------------------------------
In order to search efficiently for the SM Higgs boson at the LHC, precise predictions for the production cross sections and decay branching ratios are necessary. To provide the most up-to-date predictions, the “LHC Higgs Cross Section Working Group” [@lhchxswg] was founded in 2010. Two of its main results are shown below; see for an extensive list of references. The left plot shows the SM theory predictions for the main production cross sections, where the colored bands indicate the theoretical uncertainties. The right plot shows the branching ratios (BRs), again with the colored bands indicating the theory uncertainty (see for more details). Results of this type are constantly updated and refined by the Working Group.
![image](Higgs_XS_8TeV_LM200){width=".45\textwidth"} ![image](Higgs_BR_LM){width=".45\textwidth" height=".45\textwidth"}
Discovery of an SM Higgs-like particle at LHC {#sec:SMHiggsLHC}
----------------------------------------------
On the 4th of July 2012 both ATLAS [@ATLASdiscovery] and CMS [@CMSdiscovery] announced the discovery of a new boson with a mass of $\sim 125.5 \gev$. This discovery marks a milestone of an effort that has been ongoing for almost half a century and opens up a new era of particle physics. In the plots below one can see the $p_0$ values of the search for the SM Higgs boson (with all search channels combined) as presented by ATLAS (left) and CMS (right) in July 2012. The $p_0$ value gives the probability that the experimental results observed can be caused by background only, i.e. in this case assuming the absence of a Higgs boson at each given mass. While the $p_0$ values are close to $\sim 0.5$ for nearly all hypothetical Higgs boson masses (as would be expected in the absence of a Higgs boson), both experiments show a very low $p_0$ value of $p_0 \sim 10^{-6}$ around $\MH \sim 125.5 \gev$. This corresponds to a discovery of a new particle at the $5\,\si$ level by each experiment individually.
Another step in the analysis is a comparison of the measured production cross sections times branching ratios with the respective SM predictions. Two examples, using LHC data of about $5\,\ifb$ at $7 \tev$ and about $20\,\ifb$ at $8 \tev$, are shown below. Here ATLAS [@ATLAS_5p20ifb] (left) and CMS [@CMS-Higgs-WWW] (right) compare their experimental results with the SM prediction in various channels. It can be seen that all channels are, within the theoretical and experimental uncertainties, in agreement with the SM.
![image](CombinedResults-ATLAS-040712){width=".48\textwidth" height=".45\textwidth"} ![image](Combined-CMS-040712){width=".45\textwidth"}
![image](ATLAS_5p20ifb_allchannels){width=".45\textwidth" height=".45\textwidth"} ![image](CMS_5p20ifb_allchannels){width=".45\textwidth"}
In this discussion it must be kept in mind that a measurement of the total width, and thus of individual couplings, is not possible at the LHC (see, e.g., and references therein). In the SM, for a fixed value of $\MH$, all Higgs couplings to the other (SM) particles are specified. Consequently, it is in general not possible to perform a fit to the experimental data within the SM in which the Higgs couplings are treated as free parameters. Therefore, in order to test the compatibility of the predictions for the SM Higgs boson with the (2012) experimental data, the LHC Higgs Cross Section Working Group proposed several benchmark scenarios for “coupling scale factors” [@HiggsRecommendation; @YR3] (see for a recent review on Higgs coupling extractions). Effectively, the predicted SM Higgs cross sections and partial decay widths are dressed with scale factors $\kappa_i$ (and $\kappa_i=1$ corresponds to the SM). Several assumptions are made for this $\kappa$-framework: there is only one state at $\simMH \gev$ responsible for the signal, the coupling structure is the same as for the SM Higgs (i.e. it is a $\cp$-even scalar), and the zero-width approximation is assumed to be valid, allowing for a clear separation and simple handling of the production and decay of the Higgs particle. The most relevant coupling strength modifiers are $\kappa_t$, $\kappa_b$, $\kappa_\tau$, $\kappa_W$, $\kappa_Z$, $\kappa_\gamma$, $\kappa_g$, …[^1]
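In the zero-width approximation the $\kappa$-framework amounts to a simple rescaling of the predicted rates. A minimal sketch (with purely illustrative, approximate SM branching ratios at $\MH \sim 125 \gev$, and assuming no additional invisible decays) reads:

```python
def mu(kappa_prod, kappa_decay, kappas, br_sm):
    """Signal strength mu = kappa_prod^2 * kappa_decay^2 / kappa_H^2, where the
    total-width scale factor is kappa_H^2 = sum_f kappa_f^2 * BR_SM(f)."""
    kappa_H2 = sum(kappas[f] ** 2 * br_sm[f] for f in br_sm)
    return kappa_prod ** 2 * kappa_decay ** 2 / kappa_H2

# rough SM branching ratios, for illustration only
br_sm  = {'bb': 0.58, 'WW': 0.21, 'gg': 0.09, 'tautau': 0.06,
          'cc': 0.03, 'ZZ': 0.026, 'gamgam': 0.002}
kappas = {f: 1.0 for f in br_sm}                           # SM limit
print(mu(kappas['gg'], kappas['gamgam'], kappas, br_sm))   # -> ~1
```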
One limitation at the LHC (but not at the ILC) is the fact that the total width cannot be determined experimentally without additional theory assumptions. In the absence of a total width measurement, only ratios of $\kappa$’s can be determined from the experimental data. An assumption often made is $\kappa_{W,Z} \le 1$ [@Duhrssen:2004cv]. A recent analysis from CMS using Higgs decays to $ZZ$ far off-shell yielded an upper limit on the total width of about four times the SM width [@CMS:2014ala]. However, here the assumption of the equality of the on-shell and off-shell couplings of the Higgs boson plays a crucial role. It was pointed out that this equality is violated in particular in the presence of new physics in the Higgs sector [@Englert:2014aca].
In the left plot of the figure below we compare the results estimated for the HL-LHC (with $3 \iab$ and an assumed improvement of 50% in the theoretical uncertainties) with various stages of the ILC under the theory assumption $\kappa_{W,Z} \le 1$ [@HiggsCouplings]. This most general fit includes $\kappa_{W,Z}$ for the gauge bosons, $\kappa_{u,d,l}$ for up-type quarks, down-type quarks and charged leptons, respectively, as well as $\kappa_\gamma$ and $\kappa_g$ for the loop-induced couplings of the Higgs to photons and gluons. Also the (possibly invisible) branching ratio of the Higgs boson to new physics ($\br(H \to {\rm NP})$) is included. One can observe that the HL-LHC and ILC250 yield comparable results. However, going to higher ILC energies yields substantially higher precision in the fit for the coupling scale factors. In the final stage of the ILC (ILC1000 LumiUp), precisions at the per-mille level in $\kappa_{W,Z}$ are possible. The $1-2\%$ range is reached for all other $\kappa$’s. The branching ratio to new physics can be restricted to the per-mille level.
Using ILC data the theory assumption $\kappa_{W,Z} \le 1$ can be dropped, since the “$Z$-recoil method” (see and references therein) allows for a model-independent determination of the $HZZ$ coupling. The corresponding results are shown in the right plot of the figure below, where the HL-LHC results are combined with various stages of the ILC. The results from the HL-LHC alone extend to very large values of the $\kappa$’s, since the fit cannot be performed without theory assumptions. Including the ILC measurements (where the first line corresponds to the inclusion of [*only*]{} the $\si_{ZH}^{\rm total}$ measurement at the ILC) yields a converging fit. In the final ILC stage $\kappa_{W,Z}$ are determined to better than one per cent, whereas the other coupling scale factors are obtained in the $1-2\%$ range. The branching ratio to new physics is restricted to be smaller than one per cent.
![image](HLLHC_vs_ILC_KVle1){width="45.00000%"} ![image](HLLHC_ILC_mod_indep_withLHConly){width="45.00000%"}
The Higgs in Supersymmetry
==========================
Why SUSY?
---------
Theories based on Supersymmetry (SUSY) [@mssm] are widely considered the theoretically most appealing extension of the SM. They are consistent with the approximate unification of the gauge coupling constants at the GUT scale and provide a way to cancel the quadratic divergences in the Higgs sector, hence stabilizing the huge hierarchy between the GUT and the electroweak (EW) scale. Furthermore, in SUSY theories the breaking of the electroweak symmetry is naturally induced at the EW scale, and the lightest supersymmetric particle can be neutral, weakly interacting and absolutely stable, therefore providing a natural solution for the dark matter problem.
The Minimal Supersymmetric Standard Model (MSSM) constitutes, hence its name, the minimal supersymmetric extension of the SM. The number of SUSY generators is $N=1$, the smallest possible value. In order to keep anomaly cancellation, contrary to the SM a second Higgs doublet is needed [@glawei]. All SM multiplets, including the two Higgs doublets, are extended to supersymmetric multiplets, resulting in scalar partners for quarks and leptons (“squarks” and “sleptons”) and fermionic partners for the SM gauge bosons and Higgs bosons (“gauginos”, “higgsinos” and “gluinos”). So far, the direct search for SUSY particles has not been successful. One can only set lower bounds on their masses [@SUSYMoriond14; @SUSYMoriond14-QCD].
The MSSM Higgs sector
---------------------
An excellent review on this subject is given in .
### The Higgs boson sector at tree-level {#sec:Higgstree}
Contrary to the SM, in the MSSM two Higgs doublets are required. The Higgs potential [@hhg] $$\begin{aligned}
V &= m_{1}^2 |\cHe|^2 + m_{2}^2 |\cHz|^2
- m_{12}^2 (\epsilon_{ab} \cHe^a\cHz^b + \mbox{h.c.}) \non \\
& + \frac{1}{8}(g^2+g^{\prime 2}) \left[ |\cHe|^2 - |\cHz|^2 \right]^2
+ \frac{1}{2} g^2|\cHe^{\dag} \cHz|^2~,
\label{higgspot}\end{aligned}$$ contains $m_1, m_2, m_{12}$ as soft SUSY-breaking parameters; $g, g'$ are as before the $SU(2)$ and $U(1)$ gauge couplings, and $\epsilon_{12} = -1$.
The doublet fields $\cHe$ and $\cHz$ are decomposed in the following way: $$\begin{aligned}
\cHe &= \VL \cHe^0 \\[0.5ex] \cHe^- \VR \; = \; \VL v_1
+ \frac{1}{\sqrt2}(\phi_1^0 - i\chi_1^0) \\[0.5ex] -\phi_1^- \VR~,
\non \\
\cHz &= \VL \cHz^+ \\[0.5ex] \cHz^0 \VR \; = \; \VL \phi_2^+ \\[0.5ex]
v_2 + \frac{1}{\sqrt2}(\phi_2^0 + i\chi_2^0) \VR~.
\label{higgsfeldunrot}\end{aligned}$$ $\cHe$ gives mass to the down-type fermions, while $\cHz$ gives masses to the up-type fermions. The potential (\[higgspot\]) can be described with the help of two independent parameters (besides $g$ and $g'$): $\Tb = v_2/v_1$ and $M_A^2 = -m_{12}^2(\Tb+\CTb)$, where $M_A$ is the mass of the $\cp$-odd Higgs boson $A$.
Which values can be expected for $\tb$? One natural choice would be $\tb \approx 1$, i.e. both vevs are about the same. On the other hand, one can argue that $v_2$ is responsible for the top quark mass, while $v_1$ gives rise to the bottom quark mass. One may assume that their mass difference comes largely from the vevs, while their Yukawa couplings could be about the same. The natural value for $\tb$ would then be $\tb \approx \mt/\mb$. Consequently, one can expect $$\begin{aligned}
\label{tbrange}
1 \lsim \tb \lsim 50~.\end{aligned}$$
The diagonalization of the bilinear part of the Higgs potential, i.e. of the Higgs mass matrices, is performed via the orthogonal transformations $$\begin{aligned}
\label{hHdiag}
\VL H^0 \\[0.5ex] h^0 \VR &= \ML \Ca & \Sa \\[0.5ex] -\Sa & \Ca \MR
\VL \phi_1^0 \\[0.5ex] \phi_2^0~, \VR \\
\label{AGdiag}
\VL G^0 \\[0.5ex] A^0 \VR &= \ML \Cb & \Sbe \\[0.5ex] -\Sbe & \Cb \MR
\VL \chi_1^0 \\[0.5ex] \chi_2^0 \VR~, \\
\label{Hpmdiag}
\VL G^{\pm} \\[0.5ex] H^{\pm} \VR &= \ML \Cb & \Sbe \\[0.5ex] -\Sbe &
\Cb \MR \VL \phi_1^{\pm} \\[0.5ex] \phi_2^{\pm} \VR~.\end{aligned}$$ The mixing angle $\al$ is determined through $$\begin{aligned}
\al = {\rm arctan}\KKL
\frac{-(\MA^2 + \MZ^2) \Sbe \Cb}
{\MZ^2 \CQb + \MA^2 \SQb - m^2_{h,{\rm tree}}} \KKR~, ~~
-\frac{\pi}{2} < \al < 0
\label{alphaborn}\end{aligned}$$ with $m_{h, {\rm tree}}$ defined below in (\[mhtree\]).\
One obtains the following Higgs spectrum: $$\begin{aligned}
\mbox{2 neutral bosons},\, {\cal CP} = +1 &: h, H \non \\
\mbox{1 neutral boson},\, {\cal CP} = -1 &: A \non \\
\mbox{2 charged bosons} &: H^+, H^- \non \\
\mbox{3 unphysical Goldstone bosons} &: G, G^+, G^- .\end{aligned}$$
At tree level the mass matrix of the neutral $\cp$-even Higgs bosons is given in the $\Pe$-$\Pz$ basis in terms of $\MZ$, $\MA$ and $\Tb$ by $$\begin{aligned}
M_{\rm Higgs}^{2, {\rm tree}} &= \ML \mpe^2 & \mpez^2 \\
\mpez^2 & \mpz^2 \MR
= \ML \MA^2 \SQb + \MZ^2 \CQb & -(\MA^2 + \MZ^2) \Sbe \Cb \\
-(\MA^2 + \MZ^2) \Sbe \Cb & \MA^2 \CQb + \MZ^2 \SQb \MR,
\label{higgsmassmatrixtree}\end{aligned}$$ which by diagonalization according to (\[hHdiag\]) yields the tree-level Higgs boson masses $$\begin{aligned}
M_{\rm Higgs}^{2, {\rm tree}}
\stackrel{\al}{\longrightarrow}
\ML m_{H,{\rm tree}}^2 & 0 \\ 0 & m_{h,{\rm tree}}^2 \MR\end{aligned}$$ with $$\begin{aligned}
m_{H,h, {\rm tree}}^2 &=
\edz \KKL \MA^2 + \MZ^2
\pm \sqrt{(\MA^2 + \MZ^2)^2 - 4 \MZ^2 \MA^2 \CQZb} \KKR ~.
\label{mhtree}\end{aligned}$$ From this formula the famous tree-level bound $$\begin{aligned}
m_{h, {\rm tree}} \le \mbox{min}\{\MA, \MZ\} \cdot |\CZb| \le \MZ\end{aligned}$$ can be obtained. The charged Higgs boson mass is given by $$\begin{aligned}
\label{rMSSM:mHp}
\mHp^2 = \MA^2 + \MW^2~.\end{aligned}$$ The masses of gauge bosons are given in analogy to SM: $$\begin{aligned}
M_W^2 = \frac{1}{2} g^2 (v_1^2+v_2^2) ;\qquad
M_Z^2 = \frac{1}{2}(g^2+g^{\prime 2})(v_1^2+v_2^2) ;\qquad M_\gamma=0.\end{aligned}$$
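As an illustration of Eqs. (\[mhtree\]) and (\[rMSSM:mHp\]) (tree level only, with rounded gauge boson masses as input), the Higgs spectrum can be evaluated directly:

```python
import math

M_Z, M_W = 91.19, 80.40            # GeV

def tree_level_masses(MA, tb):
    """Tree-level MSSM Higgs masses from Eqs. (mhtree) and (rMSSM:mHp)."""
    c2b = math.cos(2.0 * math.atan(tb))
    root = math.sqrt((MA**2 + M_Z**2)**2 - 4.0 * M_Z**2 * MA**2 * c2b**2)
    mh = math.sqrt(0.5 * (MA**2 + M_Z**2 - root))
    mH = math.sqrt(0.5 * (MA**2 + M_Z**2 + root))
    mHp = math.sqrt(MA**2 + M_W**2)
    return mh, mH, mHp

for MA in (100.0, 200.0, 500.0):
    mh, mH, mHp = tree_level_masses(MA, tb=5.0)
    print(f"MA = {MA:5.0f}: mh = {mh:5.1f}, mH = {mH:5.1f}, mH+ = {mHp:5.1f} GeV")
```

In all cases $m_{h, {\rm tree}}$ stays below the tree-level bound given above.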
The couplings of the Higgs bosons are modified with respect to the corresponding SM couplings already at tree level. Some examples are $$\begin{aligned}
g_{hVV} &= \sin(\be - \al) \; g_{HVV}^{\rm SM}, \quad V = W^{\pm}, Z~, \\
g_{HVV} &= \cos(\be - \al) \; g_{HVV}^{\rm SM} ~,\\
g_{h b\bar b}, g_{h \tau^+\tau^-} &= - \frac{\sin\al}{\cos\be} \;
g_{H b\bar b, H \tau^+\tau^-}^{\rm SM} ~, \\
g_{h t\bar t} &= \frac{\cos\al}{\sin\be} \; g_{H t\bar t}^{\rm SM} ~, \\
g_{A b\bar b}, g_{A \tau^+\tau^-} &= \ga_5\tb \;
g_{H b\bar b, H \tau^+\tau^-}^{\rm SM}~.\end{aligned}$$ The following can be observed: the coupling of a $\cp$-even Higgs boson to SM gauge bosons is always suppressed with respect to the SM coupling. However, if $g_{hVV}^2$ is close to zero, $g_{HVV}^2$ is close to $(g_{HVV}^{\rm SM})^2$ and vice versa, i.e. it is not possible to decouple both $\cp$-even Higgs bosons from the SM gauge bosons. The coupling of $h$ to down-type fermions can be suppressed [*or enhanced*]{} with respect to the SM value, depending on the size of $\Sa/\Cb$. Especially for not too large values of $\MA$ and large $\tb$ one finds $|\Sa/\Cb| \gg 1$, leading to a strong enhancement of this coupling. The same holds, in principle, for the coupling of $h$ to up-type fermions. However, for large parts of the MSSM parameter space the additional factor is found to be $|\Ca/\Sbe| < 1$. For the $\cp$-odd Higgs boson an additional factor $\tb$ is found. According to (\[tbrange\]) this can lead to a strongly enhanced coupling of the $A$ boson to bottom quarks or $\tau$ leptons, resulting in new search strategies at the LHC for the $\cp$-odd Higgs boson, see below.
For $\MA \gsim 150 \gev$ the “decoupling limit” is reached. The couplings of the light Higgs boson become SM-like, i.e. the additional factors approach 1. The couplings of the heavy neutral Higgs bosons become similar, $g_{Axx} \approx g_{Hxx}$, and the masses of the heavy neutral and charged Higgs bosons fulfill $\MA \approx \MH \approx \MHp$. As a consequence, the search strategies for the $A$ boson can also be applied to the $H$ boson, and both are hard to disentangle at hadron colliders (see also below).
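The onset of the decoupling limit can be illustrated numerically with the tree-level relations (\[mhtree\]) and (\[alphaborn\]); the following sketch (tree level only, illustrative input values) shows how $\sin(\be-\al)$ approaches one and the $h b\bar b$ enhancement factor $-\Sa/\Cb$ approaches one as $\MA$ grows:

```python
import math

M_Z = 91.19    # GeV

def tree_couplings(MA, tb):
    """sin(beta-alpha) and -sin(alpha)/cos(beta) from Eqs. (mhtree), (alphaborn)."""
    b = math.atan(tb)
    c2b = math.cos(2.0 * b)
    root = math.sqrt((MA**2 + M_Z**2)**2 - 4.0 * M_Z**2 * MA**2 * c2b**2)
    mh2 = 0.5 * (MA**2 + M_Z**2 - root)                      # m_h,tree^2
    num = -(MA**2 + M_Z**2) * math.sin(b) * math.cos(b)
    den = M_Z**2 * math.cos(b)**2 + MA**2 * math.sin(b)**2 - mh2
    alpha = math.atan(num / den)                             # -pi/2 < alpha < 0
    return math.sin(b - alpha), -math.sin(alpha) / math.cos(b)

for MA in (130.0, 200.0, 500.0):
    sba, hbb = tree_couplings(MA, tb=5.0)
    print(f"MA = {MA:5.0f}: g_hVV/SM = {sba:.4f},  g_hbb/SM = {hbb:.3f}")
```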
### The scalar quark sector {#sec:squark}
Since the most relevant squarks for the MSSM Higgs boson sector are the $\Stop$ and $\Sbot$ particles, here we explicitly list their mass matrices in the basis of the gauge eigenstates $\StopL, \StopR$ and $\SbotL, \SbotR$: $$\begin{aligned}
\label{stopmassmatrix}
{\cal M}^2_{\Stop} &=
\ML \MstL^2 + \mt^2 + \CZb (\edz - \frac{2}{3} \sw^2) \MZ^2 &
\mt \Xt \\
\mt \Xt &
\MstR^2 + \mt^2 + \frac{2}{3} \CZb \sw^2 \MZ^2
\MR, \\
& \non \\
\label{sbotmassmatrix}
{\cal M}^2_{\Sbot} &=
\ML \MsbL^2 + \mb^2 + \CZb (-\edz + \frac{1}{3} \sw^2) \MZ^2 &
\mb \Xb \\
\mb \Xb &
\MsbR^2 + \mb^2 - \frac{1}{3} \CZb \sw^2 \MZ^2
\MR.\end{aligned}$$ $\MstL$, $\MstR$, $\MsbL$ and $\MsbR$ are the (diagonal) soft SUSY-breaking parameters. We furthermore have $$\begin{aligned}
\mt \Xt = \mt (\At - \mu \CTb) , \quad
\mb\, \Xb = \mb\, (\Ab - \mu \Tb) .
\label{eq:Xtb}\end{aligned}$$ The soft SUSY-breaking parameters $\At$ and $\Ab$ denote the trilinear Higgs–stop and Higgs–sbottom couplings, and $\mu$ is the Higgs mixing parameter. $SU(2)$ gauge invariance requires the relation $$\begin{aligned}
\MstL = \MsbL .\end{aligned}$$ Diagonalizing ${\cal M}^2_{\Stop}$ and ${\cal M}^2_{\Sbot}$ with the mixing angles $\tst$ and $\tsb$, respectively, yields the physical $\Stop$ and $\Sbot$ masses: $\mste$, $\mstz$, $\msbe$ and $\msbz$.
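For illustration, the $\Stop$ mass matrix (\[stopmassmatrix\]) can be diagonalized numerically; the input values below are arbitrary examples, and the sign convention of the mixing angle depends on the ordering of the eigenvectors:

```python
import numpy as np

M_Z, m_t, sw2 = 91.19, 173.2, 0.23      # GeV, GeV, sin^2(theta_W) (illustrative)

def stop_masses(MstL, MstR, Xt, tb):
    """Eigenvalues and mixing angle of the stop mass matrix (stopmassmatrix)."""
    c2b = np.cos(2.0 * np.arctan(tb))
    M2 = np.array([
        [MstL**2 + m_t**2 + c2b * (0.5 - 2.0 / 3.0 * sw2) * M_Z**2, m_t * Xt],
        [m_t * Xt, MstR**2 + m_t**2 + 2.0 / 3.0 * c2b * sw2 * M_Z**2],
    ])
    (m1sq, m2sq), vecs = np.linalg.eigh(M2)        # ascending eigenvalues
    theta = np.arctan2(vecs[1, 0], vecs[0, 0])     # one convention for theta_stop
    return np.sqrt(m1sq), np.sqrt(m2sq), theta

print(stop_masses(MstL=1000.0, MstR=1000.0, Xt=2000.0, tb=5.0))
```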
### Higher-order corrections to Higgs boson masses
A review about this subject can be found in . In the Feynman-diagrammatic (FD) approach the higher-order corrected $\cp$-even Higgs boson masses in the MSSM are derived by finding the poles of the $(h,H)$-propagator matrix. The inverse of this matrix is given by $$\left(\Delta_{\rm Higgs}\right)^{-1}
= - i \ML p^2 - \mHtree^2 + \hSi_{HH}(p^2) & \hSi_{hH}(p^2) \\
\hSi_{hH}(p^2) & p^2 - \mhtree^2 + \hSi_{hh}(p^2) \MR~.
\label{higgsmassmatrixnondiag}$$ Determining the poles of the matrix $\Delta_{\rm Higgs}$ in (\[higgsmassmatrixnondiag\]) is equivalent to solving the equation $$\left[p^2 - \mhtree^2 + \hSi_{hh}(p^2) \right]
\left[p^2 - \mHtree^2 + \hSi_{HH}(p^2) \right] -
\left[\hSi_{hH}(p^2)\right]^2 = 0\,.
\label{eq:proppole}$$ The very leading one-loop correction to $\Mh^2$ is given by $$\begin{aligned}
\De\Mh^2 &\sim \GF \mt^4 \log\KL\frac{\mste \mstz}{\mt^2}\KR~,
\label{DeltaMhmt4}\end{aligned}$$ where $\GF$ denotes the Fermi constant. Eq. (\[DeltaMhmt4\]) shows two important aspects: First, the leading loop corrections go with $\mt^4$, which is a “very large number”. Consequently, the loop corrections can strongly affect $\Mh$ and push its mass beyond the reach of LEP [@LEPHiggsSM; @LEPHiggsMSSM], into the mass regime of the newly discovered boson at $\sim 125.5 \gev$. Second, the scalar fermion masses (in this case the scalar top masses) appear in the logarithm entering the loop corrections (acting as a “cut-off” where the new physics enters). In this way the light Higgs boson mass depends on all other sectors via loop corrections. This dependence is particularly pronounced for the scalar top sector, due to the large mass of the top quark, and can be used to constrain the masses and mixings in the scalar top sector [@Mh125], see below.
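To get a feeling for the size of (\[DeltaMhmt4\]), one can evaluate it with the standard prefactor $3/(\sqrt{2}\pi^2)$ of the leading $\mt^4$ term (this coefficient, valid for $\Sbe \to 1$ and vanishing stop mixing, is an assumption here, as only the proportionality is quoted above):

```python
import math

G_F, m_t = 1.1663787e-5, 173.2          # GeV^-2, GeV

def delta_mh2(mst1, mst2):
    """Leading one-loop top/stop correction, Eq. (DeltaMhmt4) with an assumed
    prefactor 3/(sqrt(2)*pi^2), i.e. the zero-mixing, large-tan(beta) limit."""
    return (3.0 * G_F * m_t**4 / (math.sqrt(2.0) * math.pi**2)
            * math.log(mst1 * mst2 / m_t**2))

mh_tree = 91.0                           # close to the tree-level bound ~ M_Z
mh = math.sqrt(mh_tree**2 + delta_mh2(1000.0, 1500.0))
print(f"mh ~ {mh:.0f} GeV")              # lifted far above the tree-level bound
```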
The status of the available results for the self-energy contributions can be summarized as follows. The complete one-loop result within the MSSM is known [@ERZ; @mhiggsf1lA; @mhiggsf1lB; @mhiggsf1lC]. The by far dominant one-loop contribution is the term due to top and stop loops ($\alt \equiv h_t^2 / (4 \pi)$, $h_t$ being the top-quark Yukawa coupling). The computation of the two-loop corrections has meanwhile reached a stage where all presumably dominant contributions are available [@mhiggsletter; @mhiggslong; @mhiggsFD2; @bse; @mhiggsEP0; @mhiggsEP1; @mhiggsEP1b; @mhiggsEP2; @mhiggsEP3; @mhiggsEP4; @mhiggsEP4b; @mhiggsRG1; @mhiggsRG1a]. In particular, contributions to the self-energies – evaluated in the FD as well as in the effective potential (EP) method – as well as further contributions – evaluated in the EP approach – are known for vanishing external momenta. An evaluation of the momentum dependence at the two-loop level in a pure calculation was presented in . A (nearly) full two-loop EP calculation, including even the leading three-loop corrections, has also been published [@mhiggsEP5]. The calculation presented in Ref. [@mhiggsEP5] is, however, not publicly available as a computer code for Higgs-mass calculations. Subsequently, another leading three-loop calculation, depending on various SUSY mass hierarchies, has been performed [@mhiggsFD3l], resulting in the code [H3m]{} (which adds the three-loop corrections to the [FeynHiggs]{} result). Most recently, a combination of the full one-loop result, supplemented with the leading and subleading two-loop corrections evaluated in the Feynman-diagrammatic/effective-potential method and a resummation of the leading and subleading logarithmic corrections from the scalar-top sector, has been published [@Mh-logresum] in the latest version of the code [@feynhiggs; @mhiggslong; @mhiggsAEC; @mhcMSSMlong; @Mh-logresum] (also including the leading $p^2$-dependent two-loop corrections [@mhiggs2Lp2]). While previous to this combination the remaining theoretical uncertainty on the lightest $\cp$-even Higgs boson mass had been estimated to be about $3 \gev$ [@mhiggsAEC; @PomssmRep], the combined result was roughly estimated to yield an uncertainty of about $2 \gev$ [@Mh-logresum; @ehowp]; however, more detailed analyses will be necessary to yield a more solid result. Taking the available loop corrections into account, the upper limit on $\Mh$ is shifted to [@mhiggsAEC], $$\begin{aligned}
\label{Mh135}
\Mh \le 135 \gev~\end{aligned}$$ (as obtained with the code [FeynHiggs]{} [@feynhiggs; @mhiggslong; @mhiggsAEC; @mhcMSSMlong; @Mh-logresum]). This limit takes into account the experimental uncertainty of the top quark mass as well as the intrinsic uncertainties from unknown higher-order corrections. Consequently, a Higgs boson with a mass of $\sim 125.5 \gev$ can naturally be explained by the MSSM.
The charged Higgs boson mass is obtained by solving the equation $$\begin{aligned}
\label{rMSSM:mHpHO}
p^2 - \mHp^2 - \ser{H^-H^+}(p^2) = 0~.\end{aligned}$$ For the charged Higgs boson self-energy the full one-loop corrections are known [@chargedmhiggs; @markusPhD], as well as the leading two-loop corrections [@chargedmhiggs2L].
MSSM Higgs bosons at LHC {#sec:MSSMHiggsLHC}
-------------------------
The “decoupling limit” has been discussed above for the tree-level couplings and masses of the MSSM Higgs bosons. This limit also persists when taking into account radiative corrections. The corresponding Higgs boson masses are shown in the figure below for $\tb = 5$ in the benchmark scenario of [@benchmark2; @benchmark4], obtained with [FeynHiggs]{}. For $\MA \gsim 180 \gev$ the lightest Higgs boson mass approaches its upper limit (depending on the SUSY parameters), and the heavy Higgs boson masses are nearly degenerate. Furthermore, also the light Higgs boson couplings including loop corrections approach their SM values. Consequently, for $\MA \gsim 180 \gev$ an SM-like Higgs boson (below $\sim 135 \gev$) can naturally be explained by the MSSM. On the other hand, deviations from a SM-like behavior can be described in the MSSM by deviating from the full decoupling limit.
![image](decoupling){width=".99\textwidth"}
$\phantom{0}$
An example of the various production cross sections at the LHC is shown below (for $\sqrt{s} = 14 \tev$). For low masses the light Higgs cross sections are visible, for $\MH \gsim 130 \gev$ the heavy $\cp$-even Higgs cross section is displayed, while the cross sections for the $\cp$-odd $A$ boson are given for the whole mass range. As discussed above, the $g_{Abb}$ coupling is enhanced by $\tb$ with respect to the corresponding SM value. Consequently, the $b\bar b A$ cross section is the largest or second largest cross section for all $\MA$, despite the relatively small value of $\tb = 5$. For larger $\tb$ this cross section can become even more dominant. Furthermore, the coupling of the heavy $\cp$-even Higgs boson becomes very similar to the one of the $A$ boson, and the two production cross sections, $b \bar b A$ and $b \bar b H$, are indistinguishable in the plot for $\MA > 200 \gev$.
![image](mhmax_tb05_LHC.cl.eps){width=".85\textwidth" height="8cm"}
More precise results in the most important channels, $gg \to \phi$ and $b \bar b \to \phi$ ($\phi = h, H, A$), have been obtained by the LHC Higgs Cross Section Working Group [@lhchxswg]; see also the references therein. Most recently a new code, [SusHi]{} [@sushi], for the $gg \to \phi$ (and $bb\phi$) production mode(s), including the full MSSM one-loop contributions as well as higher-order SM and MSSM corrections, has been presented; see for more details.
Of particular interest is the “LHC wedge” region, i.e. the region in which only the light $\cp$-even MSSM Higgs boson, but none of the heavy MSSM Higgs bosons, can be detected at the LHC. It appears for $\MA \gsim 200 \gev$ at intermediate $\tb$ and widens to larger $\tb$ values for larger $\MA$. Consequently, in the “LHC wedge” only a SM-like light Higgs boson can be discovered at the LHC, and part of the LHC wedge (depending on the explicit choice of SUSY parameters) can be in agreement with $\Mh \sim 125.5 \gev$. This region, bounded from above by the 95% CL exclusion contours for the heavy neutral MSSM Higgs bosons, can be seen in the figure below [@CMS-HAtautau].
![image](CMS_5p20ifb_MSSM_Phitautau){width=".99\textwidth"}
$\phantom{0}$
Agreement of MSSM Higgs sector with a Higgs at [$\sim 125.5 \gev$]{} {#sec:125MSSM}
---------------------------------------------------------------------
Many investigations have been performed analyzing the agreement of the MSSM with a Higgs boson at $\sim 125.5 \gev$. In a first step only the mass information can be used to test the model, while in a second step also the rate information of the various Higgs search channels can be taken into account. Here we briefly discuss results in two of the new benchmark scenarios [@benchmark4], devised for the search for heavy MSSM Higgs bosons. In the left plot of the figure below the $\mhmax$ scenario is shown. The red area is excluded by LHC searches for the heavy MSSM Higgs bosons, the blue area is excluded by LEP Higgs searches, and the light shaded red area is excluded by LHC searches for a SM-like Higgs boson. The bounds have been obtained with [HiggsBounds]{} [@higgsbounds] (where an extensive list of original references can be found). The green area yields $\Mh = 125 \pm 3 \gev$, i.e. the region allowed by the experimental data, taking into account the theoretical uncertainty in the $\Mh$ calculation as discussed above. Since the scenario maximizes the light $\cp$-even Higgs boson mass, it is possible to extract lower (one-parameter) limits on $\MA$ and $\tb$ from the edges of the green band. By choosing the parameters entering via radiative corrections such that those corrections yield a maximum upward shift to $\Mh$, the lower bounds on $\MA$ and $\tb$ that can be obtained are general in the sense that they (approximately) hold for [*any*]{} values of the other parameters. To address the (small) residual $\msusy (:= \MstL = \MstR = \MsbR)$ dependence of the lower bounds on $\MA$ and $\tb$, the limits have been extracted for three different values $\msusy=\{0.5, 1, 2\}\tev$, see [@Mh125]. For comparison also the previous limits derived from the LEP Higgs searches [@LEPHiggsMSSM] are shown, i.e. before the incorporation of the Higgs discovery at the LHC. The bounds on $\MA$ translate directly into lower limits on $\MHp$, which are also given in the table. More recent experimental Higgs exclusion bounds shift these limits to even higher values, see the left plot in the figure. Consequently, the experimental result of $\Mh \sim \simMH \pm 3 \gev$ requires $\MHp \gsim \mt$, with important consequences for charged Higgs boson phenomenology.
In the right plot of the figure we show the $\mh^{\rm mod+}$ scenario, which differs from the $\mhmax$ scenario in the choice of $\Xt$. While in the $\mhmax$ scenario $\Xt/\msusy = +2$ had been chosen to maximize $\Mh$, in the $\mh^{\rm mod+}$ scenario $\Xt/\msusy = +1.5$ is used to yield a “good” $\Mh$ value over nearly the entire $\MA$-$\tb$ plane, which is visible as the extended green region.
![image](benchmark_402-hb400-det){width=".48\textwidth"} ![image](benchmark_414-hb400){width=".48\textwidth"}
---------------- ------- ------------- -------------- ------- ------------- --------------
$\msusy$ (GeV) $\tb$ $\MA$ (GeV) $\MHp$ (GeV) $\tb$ $\MA$ (GeV) $\MHp$ (GeV)
500 $2.7$ $95$ $123$ $4.5$ $140$ $161$
1000 $2.2$ $95$ $123$ $3.2$ $133$ $155$
2000 $2.0$ $95$ $123$ $2.9$ $130$ $152$
---------------- ------- ------------- -------------- ------- ------------- --------------
![image](h_Xt_mstop1){width=".48\textwidth"} ![image](h_mstop1_mstop2){width=".48\textwidth"}
It is also possible to investigate what can be inferred from the assumed Higgs signal about the higher-order corrections in the Higgs sector. A scan over seven relevant MSSM parameters has been performed: $\MA$, $\tb$, $\mu$, $\msusy$, $M_{\tilde l_3}$ (the soft SUSY-breaking parameter for the third generation of scalar leptons), $A_f$ (a “universal” trilinear coupling), and $M_2$ (the soft SUSY-breaking parameter for gauginos). The measurement of the Higgs boson mass as well as (the then current) results for the Higgs boson production and decay rates were taken into account. In the figure below we show the results in the $\Xt/\msusy$-$\mste$ plane (left, with $\msusy \equiv M_{\tilde q_3}$) and in the $\mste$-$\mstz$ plane (right). Blue (gray) points were accepted (rejected) by [HiggsBounds]{}. Red (yellow) points have $\De\chi^2 \le 2.30 (5.99)$, and thus constitute the “favored” part of the parameter space. One can see that values of $\Xt/\msusy \approx +2$ are preferred. The light scalar top mass can be as low as $\sim 200 \gev$, while the heavier scalar top mass starts at $\sim 650 \gev$. A clear correlation between the two masses can be observed (where the scan stopped around $\msusy \sim 1.5 \tev$). While no absolute value for any of the stop masses can be obtained, very light masses are in agreement with $\Mh \sim \simMH$, with interesting prospects for the LHC and the ILC.
### Acknowledgments {#acknowledgments .unnumbered}
I thank the organizers for creating a stimulating environment and for their hospitality, in particular when the hotel bar ran out of beer.
[^1]: We do not discuss the triple Higgs coupling here; see for a recent review.
Introduction
============
We recently introduced a class of ${\mathbb{Z}}_N$ graded discrete Lax pairs and studied the associated discrete integrable systems [@f14-3; @f17-2]. Many well known examples belong to that scheme for $N=2$, so, for $N\geq 3$, some of our systems may be regarded as generalisations of these.
In this paper we give a short review of our considerations and discuss the general framework for the derivation of continuous flows compatible with our discrete Lax pairs. These derivations lead to differential-difference equations which define generalised symmetries of our systems [@f14-3]. Here we are interested in the particular subclass of self-dual discrete integrable systems, which exist only for $N$ odd [@f17-2], and derive their lowest order generalised symmetries which are of order one. We also derive corresponding master symmetries which allow us to construct infinite hierarchies of symmetries of increasing orders.
These self-dual systems also have the interesting property that they can be [*reduced*]{} from $N-1$ to $\frac{N-1}{2}$ components, still with an $N\times N$ Lax pair. However not all symmetries of our original systems are compatible with this reduction. From the infinite hierarchies of generalised symmetries only the even indexed ones are reduced to corresponding symmetries of the reduced systems. Thus the lowest order symmetries of our reduced systems are of order two.
Another interesting property of these differential-difference equations is that they can be brought to a polynomial form through a Miura transformation. In the lowest dimensional case ($N=3$) this polynomial equation is directly related to the Bogoyavlensky equation (see (\[eq:Bog\]) below), whilst the higher dimensional cases can be regarded as multicomponent generalisations of the Bogoyavlensky lattice.
Our paper is organised as follows. Section \[sec:ZN-LP\] contains a short review of our framework, the fully discrete Lax pairs along with the corresponding systems of difference equations, and Section \[continuous-defs\] discusses continuous flows and symmetries. The following section discusses the self-dual case and the reduction of these systems. It also presents the systems and corresponding reductions for $N=3$, $5$ and $7$. Section \[sec:Miura\] presents the Miura transformations for the reduced systems in $N=3$ and $5$, and discusses the general formulation of these transformations for any dimension $N$.
$\boldsymbol{{\mathbb{Z}}_N}$-graded Lax pairs {#sec:ZN-LP}
==============================================
We now consider the specific discrete Lax pairs, which we introduced in [@f14-3; @f17-2]. Consider a pair of matrix equations of the form
\[eq:dLP-gen\] $$\begin{gathered}
\Psi_{m+1,n} = L_{m,n} \Psi_{m,n} \equiv \big( U_{m,n} + \lambda \Omega^{\ell_1}\big) \Psi_{m,n}, \label{eq:dLP-gen-L} \\
\Psi_{m,n+1} = M_{m,n} \Psi_{m,n} \equiv \big( V_{m,n} + \lambda \Omega^{\ell_2}\big) \Psi_{m,n}, \label{eq:dLP-gen-M}\end{gathered}$$ where $$\begin{gathered}
\label{eq:A-B-entries}
U_{m,n} = \operatorname{diag}\big(u^{(0)}_{m,n},\dots,u^{(N-1)}_{m,n}\big) \Omega^{k_1},\qquad
V_{m,n} = \operatorname{diag}\big(v^{(0)}_{m,n},\dots,v^{(N-1)}_{m,n}\big) \Omega^{k_2},\end{gathered}$$ and $$\begin{gathered}
(\Omega)_{i,j} = \delta_{j-i,1} + \delta_{i-j,N-1}.\end{gathered}$$
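For instance, for $N=3$, with indices taken in ${\mathbb{Z}}_3$, this definition gives the cyclic shift matrix $$\begin{gathered}
\Omega = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \qquad \Omega^3 = I,\end{gathered}$$ so multiplying a diagonal matrix by $\Omega^{k}$ on the right places its entries on the $k$-th cyclic off-diagonal, i.e., in the positions $(i,i+k \bmod N)$.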
The matrix $\Omega$ defines a grading and the four matrices of (\[eq:dLP-gen\]) are said to be of respective levels $k_i$, $\ell_i$, with $\ell_i\neq k_i$ (for each $i$). The Lax pair is characterised by the quadruple $(k_1,\ell_1;k_2,\ell_2)$, which we refer to as [*the level structure*]{} of the system, and for consistency, we require $$\begin{gathered}
k_1 + \ell_2 \equiv k_2 + \ell_1 \quad (\bmod N).\end{gathered}$$ Since matrices $U$, $V$ and $\Omega$ are independent of $\lambda$, the compatibility condition of (\[eq:dLP-gen\]), $$\begin{gathered}
L_{m,n+1} M_{m,n} = M_{m+1,n} L_{m,n},\end{gathered}$$ splits into the system
\[eq:dLP-gen-scc\] $$\begin{gathered}
U_{m,n+1} V_{m,n} = V_{m+1,n} U_{m,n} , \label{eq:dLP-gen-scc-1}\\
U_{m,n+1} \Omega^{\ell_2} - \Omega^{\ell_2} U_{m,n} = V_{m+1,n} \Omega^{\ell_1} - \Omega^{\ell_1} V_{m,n}, \label{eq:dLP-gen-scc-2}\end{gathered}$$
which can be written explicitly as
\[eq:dLP-ex-cc\] $$\begin{gathered}
u^{(i)}_{m,n+1} v_{m,n}^{(i+k_1)} = v^{(i)}_{m+1,n} u^{(i+k_2)}_{m,n} , \label{eq:dLP-ex-cc-1}\\
u^{(i)}_{m,n+1} - u_{m,n}^{(i+\ell_2)} = v^{(i)}_{m+1,n} - v^{(i+\ell_1)}_{m,n} , \label{eq:dLP-ex-cc-2}\end{gathered}$$
for $i \in {\mathbb{Z}}_N$.
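To see the splitting explicitly, note that $$\begin{gathered}
L_{m,n+1} M_{m,n} = U_{m,n+1} V_{m,n} + \lambda \big( U_{m,n+1} \Omega^{\ell_2} + \Omega^{\ell_1} V_{m,n} \big) + \lambda^2 \Omega^{\ell_1+\ell_2},\end{gathered}$$ and similarly for $M_{m+1,n} L_{m,n}$, with the same $\lambda^2$ term $\Omega^{\ell_1+\ell_2}$ on both sides; equating the coefficients of $\lambda^0$ and $\lambda^1$ gives (\[eq:dLP-gen-scc-1\]) and (\[eq:dLP-gen-scc-2\]), respectively.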
Quotient potentials {#sect:coprime-quotient}
-------------------
Equations (\[eq:dLP-ex-cc-1\]) hold identically if we set $$\begin{gathered}
\label{eq:dLP-gen-ph-1}
u^{(i)}_{m,n} = \alpha \frac{\phi^{(i)}_{m+1,n}}{\phi^{(i+k_1)}_{m,n}} ,\qquad v^{(i)}_{m,n} = \beta \frac{\phi^{(i)}_{m,n+1}}{\phi^{(i+k_2)}_{m,n}} ,\qquad i \in {\mathbb{Z}}_N,\end{gathered}$$ after which (\[eq:dLP-ex-cc-2\]) takes the form $$\begin{gathered}
\label{eq:dLP-gen-sys-1}
\alpha \left(\frac{\phi^{(i)}_{m+1,n+1}}{\phi^{(i+k_1)}_{m,n+1}} - \frac{\phi^{(i+\ell_2)}_{m+1,n}}{\phi^{(i+\ell_2+k_1)}_{m,n}} \right) =
\beta \left(\frac{\phi^{(i)}_{m+1,n+1}}{\phi^{(i+k_2)}_{m+1,n}} - \frac{\phi^{(i+\ell_1)}_{m,n+1}}{\phi^{(i+\ell_1+k_2)}_{m,n}} \right) ,
\qquad i \in {\mathbb{Z}}_N,\end{gathered}$$ defined on a square lattice. These equations can be explicitly solved for the variables on any of the four vertices and, in particular, $$\begin{gathered}
\label{eq:dLP-gen-sys-1-a}
\phi^{(i)}_{m+1,n+1} = \frac{\phi_{m,n+1}^{(i+k_1)} \phi_{m+1,n}^{(i+k_2)}}{\phi_{m,n}^{(i+k_1+\ell_2)}} \left(
\frac{\alpha \phi_{m+1,n}^{(i+\ell_2)}- \beta \phi_{m,n+1}^{(i+\ell_1)}}{\alpha \phi_{m+1,n}^{(i+k_2)}- \beta \phi_{m,n+1}^{(i+k_1)}}
\right) ,\qquad i \in {\mathbb{Z}}_N.\end{gathered}$$ In this potential form, the Lax pair (\[eq:dLP-gen\]) can be written
\[eq:LP-ir-g-rat\] $$\begin{gathered}
\Psi_{m+1,n} = \big( \alpha {\boldsymbol{\phi}}_{m+1,n} \Omega^{k_1} {\boldsymbol{\phi}}_{m,n}^{-1} + \lambda \Omega^{\ell_1}\big) \Psi_{m,n},\nonumber\\
\Psi_{m,n+1} = \big( \beta {\boldsymbol{\phi}}_{m,n+1} \Omega^{k_2} {\boldsymbol{\phi}}_{m,n}^{-1} + \lambda \Omega^{\ell_2}\big) \Psi_{m,n}, \label{eq:LP-ir-g-rat-1}\end{gathered}$$ where $$\begin{gathered}
\label{eq:LP-ir-g-rat-2}
{\boldsymbol{\phi}}_{m,n} := \operatorname{diag}\big(\phi^{(0)}_{m,n},\dots,\phi^{(N-1)}_{m,n}\big) \qquad {\mbox{and}} \qquad \det\left({\boldsymbol{\phi}}_{m,n}\right) = \prod_{i=0}^{N-1}\phi^{(i)}_{m,n}=1.\end{gathered}$$
We can then show that the Lax pair (\[eq:LP-ir-g-rat\]) is compatible if and only if the system (\[eq:dLP-gen-sys-1\]) holds.
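For completeness, we note why the relations (\[eq:dLP-ex-cc-1\]) hold identically under the substitution (\[eq:dLP-gen-ph-1\]): both sides reduce to the same quotient, $$\begin{gathered}
u^{(i)}_{m,n+1} v^{(i+k_1)}_{m,n} = \alpha\beta\, \frac{\phi^{(i)}_{m+1,n+1}}{\phi^{(i+k_1+k_2)}_{m,n}} = v^{(i)}_{m+1,n} u^{(i+k_2)}_{m,n},\end{gathered}$$ since the intermediate factors $\phi^{(i+k_1)}_{m,n+1}$ and $\phi^{(i+k_2)}_{m+1,n}$ cancel.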
Differential-difference equations as symmetries {#continuous-defs}
===============================================
Here we briefly outline the construction of [*continuous*]{} isospectral flows of the Lax equations (\[eq:dLP-gen\]), since these define continuous symmetries for the systems (\[eq:dLP-ex-cc\]). The most important formula for us is (\[eq:phi-sys-sym\]), which gives the explicit form of the symmetries in potential form.
We seek continuous time evolutions of the form $$\begin{gathered}
\label{psit}
\partial_{t} \Psi_{m,n} = S_{m,n} \Psi_{m,n},\end{gathered}$$ which are compatible with each of the discrete shifts defined by (\[eq:dLP-gen\]), if $$\begin{gathered}
\partial_t L_{m,n} = S_{m+1,n} L_{m,n} - L_{m,n}S_{m,n}, \nonumber\\
\partial_t M_{m,n} = S_{m,n+1} M_{m,n} - M_{m,n}S_{m,n}.\end{gathered}$$ Since $$\begin{gathered}
\partial_t (L_{m,n+1} M_{m,n} - M_{m+1,n} L_{m,n}) \\
\qquad {}= S_{m+1,n+1} (L_{m,n+1} M_{m,n} - M_{m+1,n} L_{m,n}) - (L_{m,n+1} M_{m,n} - M_{m+1,n} L_{m,n}) S_{m,n},\end{gathered}$$ we have compatibility on solutions of the fully discrete system (\[eq:dLP-gen-scc\]).
If we define $S_{mn}$ by $$\begin{gathered}
\label{Q=LS}
S_{m,n} = L_{m,n}^{-1}Q_{m,n},\qquad\mbox{where}\quad Q_{m,n}=\operatorname{diag}\big(q^{(0)}_{m,n},q^{(1)}_{m,n},\dots ,q^{(N-1)}_{m,n}\big) \Omega^{k_1},\end{gathered}$$ then $$\begin{gathered}
Q_{m,n}U_{m-1,n} - U_{m,n} \Omega^{-\ell_1} Q_{m,n}\Omega^{\ell_1} = 0, \qquad \partial_{t} U_{m,n} = \Omega^{-\ell_1} Q_{m+1,n}\Omega^{\ell_1} - Q_{m,n},\end{gathered}$$ which are written explicitly as
\[X1\] $$\begin{gathered}
q^{(i)}_{m,n} u^{(i+k_1)}_{m-1,n} = u^{(i)}_{m,n} q^{(i+k_1-\ell_1)}_{m,n}, \label{q-eqs} \\
\partial_t u^{(i)}_{m,n} = q^{(i-\ell_1)}_{m+1,n} - q^{(i)}_{m,n}. \label{eq:gen-eq-sym}\end{gathered}$$ It is also possible to prove (see [@f14-3]) that $$\begin{gathered}
\label{tracecon1}
\sum_{i=0}^{N-1} \frac{q_{m,n}^{(i)}}{u_{m,n}^{(i)}} = \frac{1}{\alpha^N},\end{gathered}$$
for an autonomous symmetry.
Equations (\[q-eqs\]) and (\[tracecon1\]) fully determine the functions $q^{(i)}_{m,n}$ in terms of ${\bf u}_{m,n}$ and ${\bf u}_{m-1,n}$.
The formula (\[Q=LS\]) defines a symmetry which (before prolongation) only involves shifts in the $m$-direction. There is an analogous symmetry in the $n$-direction, defined by $$\begin{gathered}
\label{n-sym}
\partial_s \Psi_{m,n} = \big(V_{m,n}+\lambda \Omega^{\ell_2}\big)^{-1} R_{m,n} \Psi_{m,n},\qquad\mbox{with}\quad R_{m,n} = \operatorname{diag}\big(r^{(0)}_{m,n},\dots,r^{(N-1)}_{m,n}\big) \Omega^{k_2}.\!\!\!\end{gathered}$$
### Master symmetry and the hierarchy of symmetries {#master-symmetry-and-the-hierarchy-of-symmetries .unnumbered}
The vector field $X^M$, defined by the evolution $$\begin{gathered}
\partial_{\tau} u^{(i)}_{m,n} = (m+1) q^{(i-\ell)}_{m+1,n} - m q^{(i)}_{m,n},\qquad \partial_\tau \alpha = 1/\big(N \alpha^{N-1}\big),\end{gathered}$$ with $q^{(i)}$ being the solution of (\[q-eqs\]) and (\[tracecon1\]), is a [*master symmetry*]{} of $X^1$, satisfying $$\begin{gathered}
\big[\big[X^M,X^1\big],X^1\big]=0, \qquad\mbox{with}\quad \big[X^M,X^1\big]\neq 0.\end{gathered}$$ We then define $X^k$ recursively by $X^{k+1}=[X^M,X^k]$. We have
Given the sequence of vector fields $X^k$, defined above, we suppose that, for some $\ell\ge 2$, $\{X^1,\dots ,X^\ell\}$ pairwise commute. Then $[X^i,X^{\ell+1}]=0$, for $1\leq i\leq \ell-1$.
This follows from an application of the Jacobi identity.
We [*cannot*]{} deduce that $[[X^M,X^\ell],X^\ell]=0$ by using the Jacobi identity. Since we are [*given*]{} this equality for $\ell=2$, we [*can*]{} deduce that $[X^1,X^3]=0$ (see the discussion around Theorem 19 of [@Y]). Nevertheless it [*is*]{} possible to check this by hand for low values of $\ell$, for all the examples given in this paper.
Symmetries in potential variables
----------------------------------
If we write (\[X1\]) and the corresponding $n$-direction symmetry in the potential variables (\[eq:dLP-gen-ph-1\]), we obtain
\[eq:phi-sys-sym\] $$\begin{gathered}
\partial_t \phi^{(i)}_{m,n} = \alpha^{-1} q^{(i-\ell_1)}_{m,n} \phi_{m-1,n}^{(i+k_1)} -\frac{\phi^{(i)}_{m,n}}{N\alpha^{N}} , \label{eq:phi-sys-sym-1} \\
\partial_s \phi^{(i)}_{m,n} = \beta^{-1} q^{(i-\ell_2)}_{m,n} \phi_{m,n-1}^{(i+k_2)} -\frac{\phi^{(i)}_{m,n}}{N\beta^{N}}. \label{eq:phi-sys-sym-2}\end{gathered}$$
The symmetry (\[eq:phi-sys-sym-1\]) is a combination of the “generalised symmetry” (\[X1\]) and a simple scaling symmetry, with coefficient chosen so that the vector field is [*tangent*]{} to the level surfaces $\prod\limits_{i=0}^{N-1} \phi^{(i)}_{m,n}=\mbox{const}$, so this symmetry survives the reduction to $N-1$ components, which we always make in our examples. The symmetry (\[eq:phi-sys-sym-2\]) is similarly related to (\[n-sym\]) and also survives the reduction to $N-1$ components.
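To make the tangency explicit, note that (\[eq:dLP-gen-ph-1\]) gives $\alpha^{-1}\phi^{(i+k_1)}_{m-1,n}/\phi^{(i)}_{m,n} = 1/u^{(i)}_{m-1,n}$, so by (\[eq:phi-sys-sym-1\]), (\[q-eqs\]) (with the index shifted by $k_1$) and (\[tracecon1\]), $$\begin{gathered}
\sum_{i=0}^{N-1} \frac{\partial_t \phi^{(i)}_{m,n}}{\phi^{(i)}_{m,n}} = \sum_{i=0}^{N-1} \frac{q^{(i-\ell_1)}_{m,n}}{u^{(i)}_{m-1,n}} - \frac{1}{\alpha^{N}} = \sum_{i=0}^{N-1} \frac{q^{(i)}_{m,n}}{u^{(i)}_{m,n}} - \frac{1}{\alpha^{N}} = 0,\end{gathered}$$ so indeed $\partial_t \prod\limits_{i=0}^{N-1} \phi^{(i)}_{m,n}=0$.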
The [*master symmetries*]{} are similarly adjusted, to give
\[eq:phi-sys-msym\] $$\begin{gathered}
\partial_{\tau} \phi^{(i)}_{m,n} = m \alpha^{-1} q^{(i-\ell_1)}_{m,n} \phi_{m-1,n}^{(i+k_1)}
-\frac{m \phi^{(i)}_{m,n}}{N\alpha^{N}} , \label{dtauphim} \\
\partial_{\sigma} \phi^{(i)}_{m,n} = n \beta^{-1} q^{(i-\ell_2)}_{m,n} \phi_{m,n-1}^{(i+k_2)}
-\frac{n \phi^{(i)}_{m,n}}{N\beta^{N}}, \label{dsigmaphim}\end{gathered}$$ where $\partial_\tau \alpha = 1/\big(N \alpha^{N-1}\big)$ and $\partial_\sigma \beta = 1/\big(N \beta^{N-1}\big)$.
The self-dual case {#sect:selfdual}
==================
In [@f17-2] we give a number of equivalence relations for our general discrete system. For the case with $(k_2,\ell_2)=(k_1,\ell_1)=(k,\ell)$ the mapping
\[sd\] $$\begin{gathered}
\label{sd-kl}
(k,\ell)\mapsto \big(\tilde k,\tilde \ell\big) = (N-\ell,N-k)\end{gathered}$$ is an involution on the parameters, so we refer to such systems as [*dual*]{}. The [*self-dual*]{} case is when $(\tilde k,\tilde \ell)=(k,\ell)$, giving $k+\ell=N$. In particular, we consider the case with $$\begin{gathered}
\label{s-dual}
k+\ell=N,\qquad \ell-k =1 \quad\Rightarrow\quad N=2k+1,\end{gathered}$$ so we require that $N$ is [*odd*]{}. In this case, we have that Equations (\[eq:dLP-gen-sys-1\]) are invariant under the change $$\begin{gathered}
\label{sd-phi}
\big(\phi^{(i)}_{m,n},\alpha,\beta\big) \mapsto \big(\widetilde\phi^{(i)}_{m,n},\widetilde\alpha,\widetilde\beta\big),\qquad\mbox{where}\quad \widetilde\alpha \alpha =1,\quad \widetilde\beta \beta =1,\quad \widetilde{\phi}^{(i)}_{m,n} \phi^{(2k-1-i)}_{m,n} = 1.\end{gathered}$$
The self-dual case admits the reduction $\widetilde{\phi}^{(i)}_{m,n}=\phi^{(i)}_{m,n}$, when $\alpha=-\beta$ ($=1$, without loss of generality), which we write as $$\begin{gathered}
\phi^{(i+k)}_{m,n} \phi^{(k-1-i)}_{m,n} = 1,\qquad i=0,\dots ,k-1.\end{gathered}$$ The condition $\prod\limits_{i=0}^{N-1} \phi^{(i)}_{m,n} = 1$ then implies $\phi^{(N-1)}_{m,n}=1$. Therefore the matrices $U_{m,n}$ and $V_{m,n}$ are built from $k$ components: $$\begin{gathered}
U_{m,n} = \operatorname{diag}\Bigg(\phi^{(0)}_{m+1,n}\phi^{(k-1)}_{m,n},\dots,\phi^{(k-1)}_{m+1,n}\phi^{(0)}_{m,n},
\frac{1}{\phi^{(k-1)}_{m+1,n}},\frac{1}{\phi^{(0)}_{m,n}\phi^{(k-2)}_{m+1,n}},\dots \\
\hphantom{U_{m,n} = \operatorname{diag}\bigg(}{}\dots ,\frac{1}{\phi^{(k-2)}_{m,n}\phi^{(0)}_{m+1,n}}, \frac{1}{\phi^{(k-1)}_{m,n}}\Bigg)\Omega^k ,\end{gathered}$$ with $V_{m,n}$ given by the same formula, but with $(m+1,n)$ replaced by $(m,n+1)$. In this case the system (\[eq:dLP-gen-sys-1-a\]) reduces to $$\begin{gathered}
\label{self-dual-equn}
\phi^{(i)}_{m+1,n+1} \phi^{(i)}_{m,n} = \frac{1}{\phi^{(k-i-2)}_{m+1,n}\phi^{(k-i-2)}_{m,n+1}} \left(\frac{\phi^{(k-i-2)}_{m+1,n}+\phi^{(k-i-2)}_{m,n+1}}{\phi^{(k-i-1)}_{m+1,n}+\phi^{(k-i-1)}_{m,n+1}}\right),
\quad\mbox{for}\quad i=0,1,\dots , k-1.\!\!\!\!\end{gathered}$$
This reduction has $\frac{N-1}{2}$ components and is represented by an $N\times N$ Lax pair, but is [*not*]{} $3D$ consistent.
### Symmetries {#symmetries .unnumbered}
Below we give the explicit forms of the self-dual case for $N=3$, $N=5$ and $N=7$. In each case, we give the lowest order symmetry $X^1$. However, this symmetry does [*not*]{} reduce to the case of (\[self-dual-equn\]), but the second symmetry, $X^2$, of the hierarchy generated by the master symmetries (\[eq:phi-sys-msym\]), is a symmetry of the reduced system.
The case $\boldsymbol{N=3}$, with level structure $\boldsymbol{(1,2;1,2)}$
--------------------------------------------------------------------------
After the transformation $\phi^{(0)}_{m,n} \rightarrow 1/\phi^{(0)}_{m,n}$, this system becomes
\[eq:3D-1212\] $$\begin{gathered}
\phi^{(0)}_{m+1,n+1} = \frac{\alpha \phi_{m+1,n}^{(1)} - \beta \phi^{(1)}_{m,n+1}}{\alpha \phi_{m+1,n}^{(0)}\phi^{(1)}_{m,n+1} - \beta \phi_{m,n+1}^{(0)}\phi^{(1)}_{m+1,n}} \frac{1}{\phi^{(0)}_{m,n}} ,\\
\phi^{(1)}_{m+1,n+1} = \frac{\alpha \phi_{m,n+1}^{(0)} - \beta \phi^{(0)}_{m+1,n}}{\alpha \phi_{m+1,n}^{(0)}\phi^{(1)}_{m,n+1} - \beta \phi_{m,n+1}^{(0)}\phi^{(1)}_{m+1,n}} \frac{1}{\phi^{(1)}_{m,n}} .
\end{gathered}$$
System (\[eq:3D-1212\]) admits two point symmetries generated by $$\begin{gathered}
\begin{cases} \partial_\epsilon \phi^{(0)}_{m,n} = \omega^{n+m} \phi^{(0)}_{m,n}, \\ \partial_\epsilon \phi^{(1)}_{m,n} = 0,\end{cases} \qquad \begin{cases} \partial_\eta \phi^{(0)}_{m,n} =0,\\ \partial_\eta \phi^{(1)}_{m,n} = \omega^{n+m} \phi^{(1)}_{m,n},\end{cases}\qquad \omega^2+\omega+1=0,\end{gathered}$$ and two local generalized symmetries. Here we present the symmetry for the $m$-direction whereas the ones in the $n$-direction follow by changing $\phi^{(i)}_{m+j,n} \rightarrow \phi^{(i)}_{m,n+j}$ $$\begin{gathered}
\partial_{t_1} \phi^{(0)}_{m,n} = \phi^{(0)}_{m,n} \frac{1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} - 2 \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}}{1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}} ,\\
\partial_{t_1} \phi^{(1)}_{m,n} = -\phi^{(1)}_{m,n} \frac{1-2 \phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}}{1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}} .
\end{gathered}$$ We also have the master symmetry (\[eq:phi-sys-msym\]), which can be written $$\begin{gathered}
\partial_\tau \phi^{(0)}_{m,n} = m \partial_{t^1} \phi^{(0)}_{m,n} ,\qquad \partial_\tau \phi^{(1)}_{m,n} = m \partial_{t^1} \phi^{(1)}_{m,n} ,\qquad \partial_\tau \alpha = \alpha,\end{gathered}$$ which allows us to construct a hierarchy of symmetries of system (\[eq:3D-1212\]) in the $m$-direction. For instance, the second symmetry is
\[eq:3D-1212-sym-2\] $$\begin{gathered}
\partial_{t_2} \phi^{(0)}_{m,n} = \frac{\phi^{(0)}_{m,n} \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n}}{{\cal{F}}_{m,n}} ({\cal{S}}_m+1)\left( \frac{({\cal{S}}_m- 1)\big(\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} \phi^{(0)}_{m-2,n} \big)}{{\cal{F}}_{m,n} {\cal{F}}_{m-1,n}} \right),\\
\partial_{t_2} \phi^{(1)}_{m,n} = \frac{\phi^{(1)}_{m,n} \phi^{(0)}_{m+1,n} \phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n}}{{\cal{F}}_{m,n}} ({\cal{S}}_m+1)\left( \frac{({\cal{S}}_m- 1)\big(\phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n} \phi^{(1)}_{m-2,n} \big)}{{\cal{F}}_{m,n} {\cal{F}}_{m-1,n}} \right),\end{gathered}$$ where $$\begin{gathered}
{\cal{F}}_{m,n} := 1+\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n} \phi^{(0)}_{m-1,n} + \phi^{(1)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m-1,n} ,\end{gathered}$$ and ${\cal{S}}_m$ denotes the shift operator in the $m$-direction.
### The reduced system {#the-reduced-system .unnumbered}
The reduced system (\[self-dual-equn\]) takes the explicit form (first introduced in [@14-6]) $$\begin{gathered}
\label{eq:MX2}
\phi_{m,n} \phi_{m+1,n+1} ( \phi_{m+1,n} + \phi_{m,n+1} ) = 2,\end{gathered}$$ where $$\begin{gathered}
\phi^{(0)}_{m,n} = \phi^{(1)}_{m,n} = \frac{1}{\phi_{m,n}} ,\qquad \beta = - \alpha.\end{gathered}$$ With this coordinate, the second symmetry (\[eq:3D-1212-sym-2\]) takes the form $$\begin{gathered}
\partial_{t_2} \phi_{m,n} = \phi_{m,n} \frac{1}{P^{(1)}_{m,n}} \left( \frac{1}{P^{(1)}_{m+1,n}}-\frac{1}{P^{(1)}_{m-1,n}}\right),\end{gathered}$$ where $$\begin{gathered}
\label{eq:N3-P-G}
P^{(0)}_{m,n} = \phi_{m+1,n} \phi_{m,n} \phi_{m-1,n} , \qquad P^{(1)}_{m,n} = 2 + P^{(0)}_{m,n},\end{gathered}$$ first given in [@14-6]. Despite the $t_2$ notation, this is the [*first*]{} of the hierarchy of symmetries of the reduction (\[eq:MX2\]).
The case $\boldsymbol{N=5}$, with level structure $\boldsymbol{(2,3;2,3)}$
--------------------------------------------------------------------------
In this case, equations (\[eq:dLP-gen-sys-1-a\]) take the form
\[eq:N5-sys\] $$\begin{gathered}
\phi ^{(0)}_{m+1,n+1}= \frac{\phi ^{(2)}_{m+1,n} \phi ^{(2)}_{m,n+1}}{\phi^{(0)}_{m,n}} \frac{\alpha \phi ^{(3)}_{m+1,n}-\beta \phi ^{(3)}_{m,n+1}}{\alpha \phi ^{(2)}_{m+1,n}- \beta \phi ^{(2)}_{m,n+1}}, \\
\phi ^{(1)}_{m+1,n+1} = \frac{1}{\phi ^{(1)}_{m,n} \big(\alpha \phi ^{(3)}_{m+1,n}-\beta \phi ^{(3)}_{m,n+1}\big)} \left(\frac{\alpha \phi ^{(3)}_{m,n+1}}{\phi ^{(0)}_{m+1,n}\phi ^{(1)}_{m+1,n} \phi^{(2)}_{m+1,n}} -\frac{\beta \phi ^{(3)}_{m+1,n}}{\phi ^{(0)}_{m,n+1} \phi ^{(1)}_{m,n+1} \phi ^{(2)}_{m,n+1}}\right), \nonumber \\
\phi ^{(2)}_{m+1,n+1} = \frac{\alpha \phi ^{(0)}_{m+1,n}-\beta \phi ^{(0)}_{m,n+1}}{\phi ^{(2)}_{m,n}
\big(\alpha \phi ^{(0)}_{m,n+1} \phi ^{(1)}_{m,n+1} \phi ^{(2)}_{m,n+1} \phi ^{(3)}_{m,n+1}-\beta \phi ^{(0)}_{m+1,n} \phi ^{(1)}_{m+1,n} \phi ^{(2)}_{m+1,n} \phi^{(3)}_{m+1,n}\big)}, \!\!\!\\
\phi ^{(3)}_{m+1,n+1} = \frac{\phi ^{(0)}_{m+1,n} \phi ^{(0)}_{m,n+1}}{ \phi ^{(3)}_{m,n}} \frac{\alpha \phi^{(1)}_{m+1,n} - \beta \phi ^{(1)}_{m,n+1}}{\alpha \phi ^{(0)}_{m+1,n}-\beta\phi ^{(0)}_{m,n+1}}.\end{gathered}$$
Under the transformation (\[sd\]), the first and last of these interchange, as do the middle pair.
The lowest order generalised symmetry in the $m$-direction is generated by $$\begin{gathered}
\partial_{t_1} \phi^{(i)}_{m,n} = \phi^{(i)}_{m,n} \left(\frac{5 A^{(i)}_{m,n}}{B_{m,n}}-1\right),\qquad i=0,\ldots,3,\end{gathered}$$ where, if we denote $F^{(i)}_{m,n} =\phi^{(i)}_{m-1,n} \phi^{(i)}_{m,n} \phi^{(i)}_{m+1,n} $, $$\begin{gathered}
A^{(0)}_{m,n} = F^{(0)}_{m,n} F^{(1)}_{m,n} F^{(2)}_{m,n} \phi^{(1)}_{m,n}\phi^{(2)}_{m,n}\phi^{(3)}_{m,n} ,\qquad
A^{(1)}_{m,n} = F^{(0)}_{m,n} F^{(1)}_{m,n} F^{(2)}_{m,n} F^{(3)}_{m,n} \frac{\phi^{(2)}_{m,n}}{\phi^{(0)}_{m,n}},\\
A^{(2)}_{m,n} = \phi^{(2)}_{m,n} \phi^{(3)}_{m,n},\qquad A^{(3)}_{m,n} = \phi^{(0)}_{m-1,n} \phi^{(0)}_{m+1,n},\\
B_{m,n} = \sum_{j=0}^{3} A^{(j)}_{m,n} + F^{(0)}_{m,n} F^{(1)}_{m,n}\phi^{(2)}_{m,n}.\end{gathered}$$ The corresponding master symmetry is $$\begin{gathered}
\partial_\tau \phi^{(i)}_{m,n} = m \partial_{t_1} \phi^{(i)}_{m,n}, \qquad i= 0,\ldots,3,\end{gathered}$$ along with $\partial_\tau \alpha = 1$. This is used to construct a hierarchy of symmetries for the system (\[eq:N5-sys\]). We omit here the second symmetry as the expressions become cumbersome for the unreduced case.
### The reduced system {#the-reduced-system-1 .unnumbered}
The reduction (\[self-dual-equn\]) now has components $\phi^{(0)}_{m,n}$, $\phi^{(1)}_{m,n}$, with $\phi^{(2)}_{m,n}=\frac{1}{\phi^{(1)}_{m,n}}$, $\phi^{(3)}_{m,n}=\frac{1}{\phi^{(0)}_{m,n}}$, $\phi^{(4)}_{m,n}= 1$, and the $2$-component system takes the form
\[eq:sys-5-red\] $$\begin{gathered}
\phi^{(0)}_{m+1,n+1} \phi^{(0)}_{m,n} = \frac{1}{\phi^{(0)}_{m+1,n}\phi^{(0)}_{m,n+1}} \left(\frac{\phi^{(0)}_{m+1,n}+\phi^{(0)}_{m,n+1}}{\phi^{(1)}_{m+1,n}+\phi^{(1)}_{m,n+1}}\right),\\[3mm]
\phi^{(1)}_{m+1,n+1} \phi^{(1)}_{m,n} (\phi^{(0)}_{m+1,n}+\phi^{(0)}_{m,n+1}) = 2.\end{gathered}$$
Only the even indexed generalised symmetries of the system (\[eq:N5-sys\]) are consistent with this reduction. This means that the lowest order generalised symmetry is
\[eq:sym-self-dual-5a\] $$\begin{gathered}
\partial_{t_2} \phi^{(0)}_{m,n} = \phi^{(0)}_{m,n} \frac{P^{(0)}_{m,n}}{ P^{(2)}_{m,n}} \left( \frac{1}{P^{(2)}_{m+1,n}} - \frac{1}{P^{(2)}_{m-1,n}}\right),\\
\partial_{t_2} \phi^{(1)}_{m,n} = \phi^{(1)}_{m,n} \frac{1}{ P^{(2)}_{m,n}} \left(\frac{1 + P^{(0)}_{m+1,n}}{P^{(2)}_{m+1,n}} - \frac{1+P^{(0)}_{m-1,n}}{P^{(2)}_{m-1,n}} \right),\end{gathered}$$
where
\[eq:N5-P-G\] $$\begin{gathered}
P^{(0)}_{m,n} = \phi^{(0)}_{m-1,n} \phi^{(0)}_{m,n} \phi^{(0)}_{m+1,n} \phi^{(1)}_{m,n}, \\
P^{(1)}_{m,n} = \phi^{(0)}_{m-1,n} \big(\phi^{(0)}_{m,n}\big)^2 \phi^{(0)}_{m+1,n} \phi^{(1)}_{m-1,n} \phi^{(1)}_{m,n} \phi^{(1)}_{m+1,n},\\
P^{(2)}_{m,n} = 2 +2 P^{(0)}_{m,n} + P^{(1)}_{m,n}.\label{eq:G-5}\end{gathered}$$
The case $\boldsymbol{N=7}$, with level structure $\boldsymbol{(3,4;3,4)}$
--------------------------------------------------------------------------
The fully discrete system (\[eq:dLP-gen-sys-1-a\]) and its lower order symmetries (\[eq:phi-sys-sym\]) and master symmetries (\[eq:phi-sys-msym\]) can be easily adapted to our choices $N=7$ and $(k,\ell)=(3,4)$. In the same way the corresponding reduced system follows from (\[self-dual-equn\]) with $k=3$. Thus we omit all these systems here and present only the lowest order symmetry of the reduced system which takes the following form $$\begin{gathered}
\partial_{t_2} \phi^{(0)}_{m,n} = \phi^{(0)}_{m,n} \frac{ P^{(1)}_{m,n}}{P^{(3)}_{m,n}} \left(\frac{1}{P^{(3)}_{m+1,n}} - \frac{1}{P^{(3)}_{m-1,n}}\right), \nonumber \\
\partial_{t_2} \phi^{(1)}_{m,n} = \phi^{(1)}_{m,n} \frac{P^{(0)}_{m,n}}{P^{(3)}_{m,n}} \left(\frac{1+ P^{(0)}_{m+1,n}}{P^{(3)}_{m+1,n}} - \frac{1+ P^{(0)}_{m-1,n}}{P^{(3)}_{m-1,n}}\right), \label{eq:N7-red-dd} \\
\partial_{t_2} \phi^{(2)}_{m,n}= \phi^{(2)}_{m,n} \frac{1}{P^{(3)}_{m,n}} \left(\frac{1+ P^{(0)}_{m+1,n}+ P^{(1)}_{m+1,n}}{P^{(3)}_{m+1,n}} - \frac{1+ P^{(0)}_{m-1,n}+ P^{(1)}_{m-1,n}}{P^{(3)}_{m-1,n}}\right), \nonumber\end{gathered}$$ where
\[eq:N7-P-G\] $$\begin{gathered}
P^{(0)}_{m,n} = \phi^{(0)}_{m-1,n} \phi^{(0)}_{m+1,n} \phi^{(1)}_{m,n} \phi^{(2)}_{m,n}, \\
P^{(1)}_{m,n} = \phi^{(0)}_{m-1,n} \phi^{(0)}_{m,n} \phi^{(0)}_{m+1,n} \phi^{(1)}_{m-1,n} \big(\phi^{(1)}_{m,n}\big)^2 \phi^{(1)}_{m+1,n} \phi^{(2)}_{m,n}, \\
P^{(2)}_{m,n} =\phi^{(0)}_{m-1,n} \big(\phi^{(0)}_{m,n}\big)^2 \phi^{(0)}_{m+1,n} \phi^{(1)}_{m-1,n} \big(\phi^{(1)}_{m,n}\big)^2 \phi^{(1)}_{m+1,n} \phi^{(2)}_{m-1,n} \phi^{(2)}_{m,n} \phi^{(2)}_{m+1,n},\\
P^{(3)}_{m,n} = 2 + 2 P^{(0)}_{m,n} + 2 P^{(1)}_{m,n}+ P^{(2)}_{m,n}.
\end{gathered}$$
Miura transformations and relation to Bogoyavlensky lattices {#sec:Miura}
============================================================
In this section we discuss Miura transformations for the reduced systems and their symmetries which bring the latter to polynomial form. In the lowest dimensional case ($N=3$) the polynomial system is directly related to the Bogoyavlensky lattice (see (\[eq:Bog\]) below), whereas the higher dimensional ones result in systems which generalise (\[eq:Bog\]) to $k$ component systems.
The reduced system in $\boldsymbol{N=3}$ {#the-reduced-system-in-boldsymboln3 .unnumbered}
----------------------------------------
The Miura transformation [@14-6] $$\begin{gathered}
\psi_{m,n} = \frac{P^{(0)}_{m,n}}{P^{(1)}_{m,n}} - 1,\end{gathered}$$ where $P^{(0)}_{m,n}$ and $P^{(1)}_{m,n}$ are given in (\[eq:N3-P-G\]), maps equation (\[eq:MX2\]) to $$\begin{gathered}
\label{eq:MX2a}
\frac{\psi_{m+1,n+1}+1}{\psi_{m,n}+\psi_{m,n+1}+1} + \frac{\psi_{m+1,n}}{\psi_{m,n+1}} = 0,\end{gathered}$$ and its symmetry to $$\begin{gathered}
\label{eq:Bog}
\partial_{t_2} \psi_{m,n} = \psi_{m,n} (\psi_{m,n}+1) (\psi_{m+2,n} \psi_{m+1,n} - \psi_{m-1,n} \psi_{m-2,n}),\end{gathered}$$ which is related to the Bogoyavlensky lattice [@B] $$\begin{gathered}
\partial_{t_2} \chi_{m,n} = \chi_{m,n}(\chi_{m+2,n} + \chi_{m+1,n} - \chi_{m-1,n} - \chi_{m-2,n}),\end{gathered}$$ through the Miura transformation $$\begin{gathered}
\chi_{m,n} = \psi_{m+1,n} \psi_{m,n} (\psi_{m-1,n}+1).\end{gathered}$$
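This last relation can be checked symbolically. The following minimal sketch (Python/SymPy, with illustrative variable names; it works on a fixed $n$-slice and treats the $\psi_{m+j,n}$ as formal symbols) verifies that the flow (\[eq:Bog\]) is indeed mapped to the Bogoyavlensky lattice:

```python
import sympy as sp

# p[j] stands for psi_{m+j,n} on a fixed n-slice
p = {j: sp.Symbol('p%d' % j) for j in range(-3, 4)}

def dpsi(j):
    # right-hand side of (eq:Bog) at site m+j
    return p[j] * (p[j] + 1) * (p[j + 2] * p[j + 1] - p[j - 1] * p[j - 2])

def chi(j):
    # Miura transformation: chi_{m+j} = psi_{m+j+1} psi_{m+j} (psi_{m+j-1} + 1)
    return p[j + 1] * p[j] * (p[j - 1] + 1)

# chain rule applied to chi_m
dchi = (dpsi(1) * p[0] * (p[-1] + 1)
        + p[1] * dpsi(0) * (p[-1] + 1)
        + p[1] * p[0] * dpsi(-1))

# Bogoyavlensky lattice flow for chi_m
bog = chi(0) * (chi(2) + chi(1) - chi(-1) - chi(-2))

print(sp.expand(dchi - bog) == 0)   # prints True
```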
The reduced system in $\boldsymbol{N=5}$ {#the-reduced-system-in-boldsymboln5 .unnumbered}
----------------------------------------
The Miura transformation $$\begin{gathered}
\psi_{m,n}^{(0)} = \frac{2 P^{(0)}_{m,n}}{P^{(2)}_{m,n}},\qquad \psi_{m,n}^{(1)} = \frac{P^{(1)}_{m,n}}{P^{(2)}_{m,n}} - 1,\end{gathered}$$ where $P^{(i)}_{m,n}$ are given in (\[eq:N5-P-G\]), maps system (\[eq:sys-5-red\]) to $$\begin{gathered}
\frac{\psi^{(0)}_{m,n+1}\psi^{(0)}_{m+1,n+1}+\psi^{(0)}_{m+1,n} \big(\psi^{(0)}_{m,n}+\psi^{(1)}_{m,n+1}\big)}{\psi^{(1)}_{m,n}+1} = \frac{\psi^{(0)}_{m,n+1}\psi^{(1)}_{m+1,n}-\psi^{(0)}_{m+1,n}\psi^{(1)}_{m,n+1}}{\psi^{(0)}_{m,n+1} + \psi^{(1)}_{m,n+1}}, \nonumber\\
\frac{\psi^{(1)}_{m+1,n+1}+1}{\psi^{(1)}_{m,n}+\psi^{(1)}_{m,n+1}+\psi^{(0)}_{m,n+1}+1} + \frac{\psi^{(0)}_{m+1,n}+\psi^{(1)}_{m+1,n}}{\psi^{(0)}_{m,n+1}+\psi^{(1)}_{m,n+1}} = 0,\label{eq:M-red-sys-5}\end{gathered}$$ and its symmetry (\[eq:sym-self-dual-5a\]) to the following system of polynomial equations in which we have suppressed the dependence on the second index $n$: $$\begin{gathered}
\frac{\partial_{t_2} \psi^{(0)}_m}{\psi^{(0)}_m} = \big(\psi^{(0)}_m+\psi^{(1)}_m\big) \big(\psi^{(0)}_{m+2} \psi^{(0)}_{m+1} - \psi^{(0)}_{m-1} \psi^{(0)}_{m-2} + \psi^{(0)}_{m+1}-\psi^{(0)}_{m-1} + \psi^{(1)}_{m+1}-\psi^{(1)}_{m-1}\big) \nonumber \\
\hphantom{\frac{\partial_{t_2} \psi^{(0)}_m}{\psi^{(0)}_m} =}{} - \big(\psi^{(1)}_m+1\big) \big(\psi^{(1)}_{m+2} \psi^{(1)}_{m+1} - \psi^{(1)}_{m-1} \psi^{(1)}_{m-2}\big) + \big(\psi^{(0)}_m-1\big) \big(\psi^{(1)}_{m+2} \psi^{(0)}_{m+1}-\psi^{(0)}_{m-1} \psi^{(1)}_{m-2}\big), \nonumber\\
\frac{\partial_{t_2} \psi^{(1)}_m}{\psi^{(1)}_m+1} = \big(\psi^{(0)}_m+\psi^{(1)}_m\big) \big(\psi^{(0)}_{m+2} \psi^{(0)}_{m+1} - \psi^{(0)}_{m-1} \psi^{(0)}_{m-2}\big) - \psi^{(1)}_m \big(\psi^{(1)}_{m+2} \psi^{(1)}_{m+1} - \psi^{(1)}_{m-1} \psi^{(1)}_{m-2}\big) \nonumber \\
\hphantom{\frac{\partial_{t_2} \psi^{(1)}_m}{\psi^{(1)}_m+1} =}{} + \psi^{(0)}_m \big(\psi^{(1)}_{m+2} \psi^{(0)}_{m+1}-\psi^{(0)}_{m-1} \psi^{(1)}_{m-2}\big).\label{eq:M-sys-5}
\end{gathered}$$ The above system and its symmetry can be considered as a two-component generalisation of the equation (\[eq:MX2a\]) and its symmetry (\[eq:Bog\]) in the following sense. If we set $\psi^{(0)}_{m,n} = 0$ and $\psi^{(1)}_{m,n} = \psi_{m,n}$ in (\[eq:M-red-sys-5\]) and (\[eq:M-sys-5\]), then they will reduce to equations (\[eq:MX2a\]) and (\[eq:Bog\]), respectively.
The reduced systems for $\boldsymbol{N>5}$ {#the-reduced-systems-for-boldsymboln5 .unnumbered}
------------------------------------------
It can be easily checked that for each $k$ ($N=2 k+1$), the lowest order symmetry of the reduced system (\[self-dual-equn\]) involves certain functions $P^{(i)}_{m,n}$, $i=0,\ldots,k$, with $$\begin{gathered}
P^{(k)}_{m,n} = 2 + 2 \sum_{i=0}^{k-2} P^{(i)}_{m,n} + P^{(k-1)}_{m,n},\end{gathered}$$ which are given in terms of $\phi^{(i)}_{m,n}$ and their shifts (see relations (\[eq:N5-P-G\]) and (\[eq:N7-P-G\])). Then, the Miura transformation $$\begin{gathered}
\label{eq:gen-Miura}
\psi^{(i)}_{m,n} = \frac{2 P^{(i)}_{m,n}}{P^{(k)}_{m,n}}, \qquad i=0,\ldots,k-2, \qquad
\psi^{(k-1)}_{m,n} = \frac{P^{(k-1)}_{m,n}}{P^{(k)}_{m,n}} - 1,\end{gathered}$$ brings the symmetries of the reduced system to polynomial form. One could derive the polynomial system corresponding to $N=7$ ($k=3$) starting with system (\[eq:N7-red-dd\]), the functions given in (\[eq:N7-P-G\]) and using the corresponding Miura transformation (\[eq:gen-Miura\]). The system of differential-difference equations is omitted here because of its length but it can be easily checked that if we set $\psi^{(0)}_{m,n}=0$ and then rename the remaining two variables as $\psi^{(i)}_{m,n} \mapsto \psi^{(i-1)}_{m,n}$, then we will end up with system (\[eq:M-sys-5\]).
This indicates that every $k$ component system is a generalisation of all the lower order ones, and thus of the Bogoyavlensky lattice (\[eq:Bog\]). To be more precise, if we consider the case $N=2 k+1$ along with the $k$-component system, set variable $\psi^{(0)}_{m,n}=0$ and then rename the remaining ones as $\psi^{(i)}_{m,n} \mapsto \psi^{(i-1)}_{m,n}$, then the resulting $(k-1)$-component system is the reduced system corresponding to $N = 2 k-1$. Recursively, this means that it also reduces to the $N=3$ system, i.e., equation (\[eq:Bog\]). Other systems with similar behaviour have been presented in [@BW].
Acknowledgements {#acknowledgements .unnumbered}
----------------
PX acknowledges support from the EPSRC grant [*Structure of partial difference equations with continuous symmetries and conservation laws*]{}, EP/I038675/1.
[99]{}
Bogoyavlensky O.I., Integrable discretizations of the [K]{}d[V]{} equation, [*Phys. Lett. A*](https://doi.org/10.1016/0375-9601(88)90542-7) **134** (1988), 34–38.
Fordy A.P., Xenitidis P., [$\mathbb{Z}_N$]{} graded discrete [Lax]{} pairs and discrete integrable systems, [arXiv:1411.6059](https://arxiv.org/abs/1411.6059).
Fordy A.P., Xenitidis P., [${\mathbb Z}_N$]{} graded discrete [L]{}ax pairs and integrable difference equations, [*J. Phys. A: Math. Theor.*](https://doi.org/10.1088/1751-8121/aa639a) **50** (2017), 165205, 30 pages.
Marì Beffa G., Wang J.P., Hamiltonian evolutions of twisted polygons in [${\mathbb{RP}}^n$]{}, [*Nonlinearity*](https://doi.org/10.1088/0951-7715/26/9/2515) **26** (2013), 2515–2551, [arXiv:1207.6524](https://arxiv.org/abs/1207.6524).
Mikhailov A.V., Xenitidis P., Second order integrability conditions for difference equations: an integrable equation, [*Lett. Math. Phys.*](https://doi.org/10.1007/s11005-013-0668-8) **104** (2014), 431–450, [arXiv:1305.4347](https://arxiv.org/abs/1305.4347).
Yamilov R., Symmetries as integrability criteria for differential difference equations, [*J. Phys. A: Math. Gen.*](https://doi.org/10.1088/0305-4470/39/45/R01) **39** (2006), R541–R623.
---
abstract: 'Pseudoconvexity of a domain in $\Bbb C^n$ is described in terms of the existence of a locally defined plurisubharmonic/holomorphic function near any boundary point that is unbounded at the point.'
address:
- |
Institute of Mathematics and Informatics\
Bulgarian Academy of Sciences\
Acad. G. Bonchev 8, 1113 Sofia, Bulgaria
- |
Carl von Ossietzky Universität Oldenburg\
Institut für Mathematik\
Postfach 2503\
D-26111 Oldenburg, Germany
- |
Institut de Mathématiques\
Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex 9, France
- 'Instytut Matematyki, Uniwersytet Jagielloński, Reymonta 4, 30-059 Kraków, Poland'
author:
- 'Nikolai Nikolov, Peter Pflug, Pascal J. Thomas, Wlodzimierz Zwonek'
title: On a local characterization of pseudoconvex domains
---
[This paper was written during the stay of the third named author at the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences supported by a CNRS grant and the stay of the fourth named author at the Universität Oldenburg supported by a DFG grant No 436 POL 113/106/0-2 (July 2008).]{}
Introduction and results
========================
It is well-known that a domain $D\subset\Bbb C^n$ is pseudoconvex if and only if any of the following conditions holds:
\(i) there is a smooth strictly plurisubharmonic function $u$ on $D$ with $\lim_{z\to\partial D}u(z)=\infty;$
\(ii) for any $a\in\partial D$ there is a $u_a\in\PSH(D)$ with $\lim_{z\to a}u_a(z)=\infty;$
\(iii) there is an $f\in\mathcal O(D)$ such that for any $a\in\partial D$ and any neighborhood $U_a$ of $a$ one has that $\limsup_{G\ni z\to a}|f(z)|=\infty$ for any connected component $G$ of $D\cap U_a$ with $a\in\partial G;$
\(iv) for any $a\in\partial D$ there is a neighborhood $U_a$ of $a$ and an $f_a\in\mathcal O(D\cap U_a)$ such that for any neighborhood $V_a\subset U_a$ of $a$ and any connected component $G$ of $D\cap V_a$ with $a\in\partial G$ one has $\limsup_{G\ni
z\to a}|f_a(z)|=\infty$ (see Corollary 4.1.26 in [@Hor]).
If $D$ is $C^1$-smooth, we may assume that $D\cap U_a$ is connected in (iii) and (iv).
Our first aim is to show that in (i), in general, ’lim’ cannot be weakened to ’limsup’, even if $D$ is $C^1$-smooth.
\[1\] For any $\varepsilon\in(0,1)$ there is a non-pseudoconvex bounded domain $D\subset\Bbb C^2$ with $C^{1,1-\varepsilon}$-smooth boundary and a negative function $u\in\PSH(D)$ with $\limsup_{z\to a}u(z)=0$ for any $a\in\partial
D.$
In particular, $v:=-\log(-u)\in\PSH(D)$ with $\limsup_{z\to
a}v(z)=\infty$ for any $a\in\partial D.$
If we do not require smoothness of $D,$ following the idea presented in the proof, we may just take $D=\{z\in\Bbb
C^n:\min\{||z||,||z-a||\}<1\},$ $0<||a||<2,$ $n\ge 2.$
On the other hand, this cannot happen if $D$ is $C^2$-smooth.
\[2\] Let $D\subset\Bbb C^n$ be a $C^2$-smooth domain with the following property: for any boundary point $a\in\partial D$ there is a neighborhood $U_a$ of $a$ and a function $u_a\in\PSH(D\cap U_a)$ such that $\limsup_{z\to a}u_a(z)=\infty.$ Then $D$ is pseudoconvex.
However, if we replace ’limsup’ by ’lim’, we may remove the hypothesis about smoothness of the boundary.
\[3\] Let $D\subset\Bbb C^n$ be a domain with the following property: for any boundary point $a\in\partial
D$ there is a neighborhood $U_a$ of $a$ and a function $u_a\in\PSH(D\cap U_a)$ such that $\lim_{z\to a}u_a(z)=\infty.$ Then $D$ is pseudoconvex.
Note that the assumption in Proposition \[3\] is formally weaker than assuming that $D$ is locally pseudoconvex.
[*Remark.*]{} The three propositions above have real analogues replacing (non)pseudoconvex domains by (non)convex domains and plurisubharmonic functions by convex functions (for the analogue of Proposition 3 use e.g. Theorem 2.1.27 in [@Hor] which implies that if $D$ is a nonconvex domain in $\Bbb R^n,$ then there exists a segment $[a,b]$ such that $c=\frac{a+b}{2}\in\partial D$ but $[a,b]\setminus\{c\}\subset D$). The details are left to the reader.
Recall now that a domain $D\subset\Bbb C^n$ is called [*locally weakly linearly convex*]{} if for any boundary point $a\in\partial D$ there is a complex hyperplane $H_a$ through $a$ and a neighborhood $U_a$ of $a$ such that $H_a\cap D\cap U_a=\varnothing.$ D. Jacquet asked whether a locally weakly linearly convex domain is already pseudoconvex (see [@Jac], page 58). The answer to this question is affirmative by Proposition \[3\]. The next proposition shows that such a domain has to be even taut[^1] if it is bounded.
\[4\] Let $D\subset\Bbb C^n$ be a bounded domain with the following property: for any boundary point $a\in\partial D$ there is a neighborhood $U_a$ of $a$ and a function $f_a\in\mathcal O(D\cap
U_a)$ such that $\lim_{z\to a}|f_a(z)|=\infty.$ Then $D$ is taut.
Let $D\subset\Bbb C^n$ be a domain and let $K_D(z)$ denote the Bergman kernel of $D$ on the diagonal. It is well-known that $\log
K_D\in\PSH(D).$ Recall that
\(v) if $D$ is bounded and pseudoconvex, and $\limsup_{z\to
a}K_D(z)=\infty$ for any $a\in\partial D,$ then $D$ is an $L_h^2$-domain of holomorphy ($L_h^2(D):=L^2(D)\cap\mathcal O(D)$) (see [@Pfl-Zwo]).
We show that the assumption of pseudoconvexity is essential.
\[10\] There is a non-pseudoconvex bounded domain $D\subset\Bbb C^2$ such that $\limsup_{z\to a}K_D(z)=\infty$ for any $a\in\partial D$.
Note that the domain $D$ with $u=\log K_D$ presents a similar kind of example as that in Proposition \[1\] (however, the domain has weaker regularity properties).
The example given in Proposition \[10\] is a domain with non-schlicht envelope of holomorphy. This is not accidental as the following result shows.
\[11\] Let $D\subset\Bbb C^n$ be a domain such that $\limsup_{z\to a}K_D(z)=\infty$ for any $a\in\partial D$. Assume that one of the following conditions is satisfied:
– the envelope of holomorphy $\hat D$ of $D$ is a domain in $\CC^n$,
– for any $a\in\partial D$ and for any neighborhood $U_a$ of $a$ there is a neighborhood $V_a\subset U_a$ of $a$ such that $V_a\cap
D$ is connected (this is the case when e.g. $D$ is a $C^1$-smooth domain).
Then $D$ is pseudoconvex.
[*Remark.*]{} Note that the domain in the example is not fat. We do not know what will happen if $D$ is assumed to be fat.
Making use of the reasoning in [@Irg2] we shall see how Proposition \[10\] implies that the domain from this proposition admits a function $f\in L_h^2(D)$ satisfying the property $\limsup_{z\to a}|f(z)|=\infty$ for any $a\in\partial D.$
\[12\] Let $D$ be the domain from Proposition \[10\]. Then there is a function $f\in L_h^2(D)$ such that $\limsup_{z\to a}|f(z)|=\infty$ for any $a\in\partial D.$
Proof of Proposition 1
======================
First, we shall prove two lemmas.
\[5\] For any $\eps \in (0,1)$ and $C_1$, $C_2>0$, there exists an $F\in
\mathcal C^{1, 1-\eps}(\RR)$ such that:
[(i)]{} $\supp F\subset [-1,+1]$, $0\le F(x)\le C_1$ for all $x
\in \RR$;
[(ii)]{} there is a dense open set $\mathcal U \subset [-1,+1]$ such that $F''(x)$ exists and $F''(x) \le -C_2<0$ for all $x \in
\mathcal U$;
[(iii)]{} $F$ vanishes on a Cantor subset of $[-1,+1].$
An elementary construction yields an even non-negative smooth function $b$ supported on $[-3/4,+3/4]$, decreasing on $[0,
3/4]$, such that $b(x) = 1-4x^2$ for $|x|\le 1/4$, $|b'(x)| \le
C_3$, $-8 \le b''(x) \le C_4$ for all $x\in\Bbb R,$ where $C_3,C_4>0.$
For any $a,p >0$, we set $b_{a,p}(x):= ab(x/p)$, $x\in\RR$.
We shall construct two decreasing sequences of positive numbers $(a_n)_{n\geq 0}$ and $(p_n)_{n\geq 0}$, and intervals $\{
I_{n,i}, J_{n,i}, n \ge 0, 1\le i \le 2^n\}$.
Set $I_{0,1}:= (-1,+1)$ and $J_{0,1}:=[-p_0/4,p_0/4]$, where $p_0<1$. Then $I_{1,1}:= (-1,-p_0)$ and $I_{1,2}:= (p_0,1)$.
In general, if the intervals of the n-th “generation” $ I_{n,i}$ are known, we require $$\label{condpn} p_{n} < \frac{| I_{n,i}|}{2},$$ where $|J|$ denotes the length of an interval $J$. Denote by $c_{n,i}$ the center of $ I_{n,i}$ and put $J_{n,i}:=
[c_{n,i}-p_{n}/4, c_{n,i}+p_{n}/4]$. Denote respectively by $I_{n+1,
2i-1}$ and $I_{n+1, 2i}$ the first and second component of $I_{n,i}
\setminus J_{n,i}$.
Now we write $$f_n (x) := \sum_{i=1}^{2^n} b_{a_n,p_n}(x-c_{n,i}),\ x\in\Bbb R,
\quad F_n:= \sum_{m=0}^n f_m.$$ Note that the terms in the sum defining $f_n$ have disjoint supports contained in $[c_{n,i}-3p_{n}/4, c_{n,i}+3p_{n}/4]\subset
I_{n,i},$ ($J_{n,i}$ does not contain the support of the corresponding term in $f_n;$ it is only a place, where that term coincides with a quadratical polynomial) so that $|f'_n(x)| \le
C_3a_n/p_n$. The function $F=\lim_{n\to\infty} F_n$ will be of class $\mathcal C^1$ if $$\label{condc1} \sum_{n=0}^\infty\frac{a_n}{p_n} < \infty.$$ Also, note that $$|F''_n(x)|\le |F''_{n-1}(x)| + C_4 \frac{a_n}{p_n^2}\mbox{, so } \sup
|F''_n| \le C_4 \sum_{m=1}^n \frac{a_m}{p_m^2}.$$
From now on we choose $$\label{geom} \frac{a_n}{p_n^2} = BA^n\mbox{, for some }A>1, B>0
\mbox{ to be determined.}$$ We then have $\sup |F''_n| \le C_4 B A^{n+1}/(A-1)$.
All the successive terms $f_m, m>n,$ are supported on intervals of the form $I_{m,j}$, thus vanish on the interval $J_{n,i}$, so on those intervals $F$ is a smooth function and $$F''=F''_n=F''_{n-1}-8\frac{a_n}{p_n^2}\le C_4\frac{BA^{n}}{A-1}-8 B A^n;$$ therefore, if we choose $$\label{condA} A>1+\frac{C_4}4,$$ we have $F''(x)\le -4B A^n$ for all $x\in J_{n,i}$, and $1\le i
\le 2^n$.
Set $\mathcal U := \bigcup_{n,i} J_{n,i}^\circ$. We have seen that $| I_{n+1,i}|<|I_{n,j}|/2$ (and those quantities do not depend on $i$ or $j$), so that the complement of $\mathcal U$ has empty interior. This proves claim (ii), by choosing $B=C_2/4$. The other claims are clear from the form of the function $F$, once we provide the sequences $(a_n)$ and $(p_n)$ satisfying (\[condpn\]), (\[condc1\]), (\[geom\]), and (\[condA\]).
Let $a_n:= a_0 \gamma^n$, $p_n=p_0\delta^n$. Then (\[geom\]) is satisfied by construction and $a_0 = Bp_0^2$. Fix $\delta,p_0\in(0,1/2).$ It follows that $p_{n}<| I_{n,i}|/4$ for all $n$ (by an easy induction). Hence, (\[condpn\]) holds.
By our explicit form, (\[condA\]) means that $\gamma \delta^{-2}
> 1+\frac{C_4}4$, while (\[condc1\]) means $\gamma \delta^{-1} <
1$, so with $\delta^{-1} > 1+\frac{C_4}4$, it is easy to choose $\gamma$. Finally $\|F\|_\infty \le a_0 (1-\gamma)^{-1} < C_1$ for
Given any $\varepsilon >0$, we can modify the choices of $\delta$ and $\gamma$ to obtain that $F' \in \Lambda_{1-\eps}$ (the Hölder class of order $1-\eps$). Given any two points $x,y \in [-1,+1]$ and any integer $n \ge 1$, $$|F'(x) - F'(y)| \le |x-y| \| F''_n \|_\infty + 2 \sum_{m\ge n} \|
f'_m\|_\infty$$ $$\le C \left( (\gamma \delta^{-2})^n |x-y| +
(\gamma \delta^{-1})^n \right),$$ where $C>0$ is a positive constant depending on the parameters we have chosen. Take $n$ such that $\delta |x-y| \le \delta^n \le
|x-y| $. Then $$\frac{|F'(x) - F'(y)|}{|x-y|^{1-\eps}} \le C' (\gamma
\delta^{-2+\eps})^n,$$ and it will be enough to choose $\delta$ and $\gamma$ so that $\gamma \delta^{-2+\eps}\le 1$ and $\gamma \delta^{-2} >
1+\frac{C_4}4$, which can be achieved once we pick $\delta$ small enough. The rest of the parameters are then chosen as above.
[*Remark.*]{} It is clear that $F$ cannot be of class $\mathcal C^2(\Bbb R)$. We do not know if our argument can be pushed to get $F\in \mathcal C^{1,1}(\Bbb R).$
\[6\] For any $\eps\in(0,1)$ there exists a non-pseudoconvex bounded domain $D\subset \CC^2$ with $\mathcal C^{1, 1-\eps}$-smooth boundary such that $\partial D$ contains a dense subset of points of strict pseudoconvexity.
We start with the unit ball and cave it in somewhat at the North Pole to get an open set of points of strict pseudoconcavity on the boundary. Let $r_0 < 1/3$ and for $x \in [0,1)$, $$\psi_0 (x) = \min\{\log (1-x^2), x^2-r_0^2\}. \footnote{Note that
the graphs of both functions cut inside the interval
$(r_0/2,r_0).$ Indeed, $x^2-r_0^2>\log(1-x^2)$ for $x\ge r_0^2$
and $x^2-r_0^2<\log(1-x^2)$ for $x\le r_0^2/2.$}$$ We take $\psi$ a $\mathcal C^\infty$ regularization of $\psi_0$ such that $\psi=\psi_0$ outside of $(r_0/2,r_0)$. Consider the Hartogs domain $$D_0 := \left\{ (z,w) \in \CC^2 : |z|<1, \log |w| < \frac12 \psi
(|z|) \right\}.$$ Notice that $D_0 \setminus \{ |z| \le r_0 \} = \BB_2 \setminus \{
|z| \le r_0 \}$, so that $\partial D$ is smooth near $|z|=1$.
Now define $\Phi(z)=\Phi (x+iy) = F(x/r_0) \chi (y/r_0)$, where $F$ is the function obtained in Lemma \[5\], and $\chi$ is a smooth, even cut-off function on $\RR$ such that $0\le \chi \le 1$, $\mbox{supp}\, \chi \subset (-2,2)$, and $\chi\equiv 1$ on $[-1,1]$. We define $$D:= \left\{ (z,w) \in \CC^2 : |z|<1, \log |w| < \frac12 \psi (|z|)
+ \Phi (z) \right\}.$$ Recall that for a Hartogs domain $\{\log|w|<\phi(z), |z|<1\},$ if $\phi$ is of class $\mathcal C^2$ at $z_0$, a boundary point $(z_0,w_0)$ with $|z_0|<1$ is strictly pseudoconvex (respectively, strictly pseudoconcave) if and only if $\Delta \phi (z_0) <0$ (respectively, $\Delta \phi (z_0)>0$). Choosing an appropriate regularization (convolution by a smooth positive kernel of small enough support), we may get that:
- $\Delta \psi (|z|)\le-4$ for $|z|\ge r_0$,
- $\Delta \psi (|z|)=4$ for $|z|\le r_0/2$, and is always $\le 4$.
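The pseudoconvexity criterion for Hartogs domains quoted above can be seen by a direct computation: near a boundary point with $|z_0|<1$, $w_0\neq 0$, one may take $r(z,w)=\log|w|-\phi(z)$ as a local defining function, so that $$r_{w\bar w}=r_{z\bar w}=0, \qquad r_{z\bar z}=-\tfrac{1}{4}\Delta\phi(z),$$ and since $r_w=1/(2w)\neq 0$, the complex tangent space at $(z_0,w_0)$ is spanned by a vector with nonzero $z$-component, on which the Levi form equals $-\tfrac{1}{4}\Delta\phi(z_0)$ times a positive factor.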
We consider points $z_0=x+iy$. If $|x|>r_0$, $\Phi (z_0)=0$ and we have pseudoconvex points (the boundary is a portion of the boundary of the ball).
On the other hand, when $x \in r_0 \mathcal U$ (where $\mathcal U$ is the dense open set defined in Lemma \[5\]), $$\Delta \Phi (z_0) = \frac1{r_0^2} \Bigl( F''(x/r_0) \chi (y/r_0) +
F(x/r_0) \chi'' (y/r_0) \Bigr).$$ The only values of $z_0$ for which $F(x/r_0) \chi'' (y/r_0) \neq
0$ or $\chi (y/r_0) <1$ verify $|z_0|>r_0$, and at those points we have, using the fact that $F''(x/r_0) <0$, $$\frac12 \Delta \psi (|z_0|) + \Delta \Phi (z_0) \le -4 +
\frac1{r_0^2} C_1 \|\chi''\|_\infty \le -1$$ if we choose $C_1$ small enough. Hence we have strict pseudoconvexity again.
So we may restrict attention to $|y|\le r_0$ and $\Delta \Phi (z_0)
= F''(x/r_0) /r_0^2.$ Therefore $$\frac12 \Delta \psi (|z_0|) + \Delta \Phi (z_0) \le 2 - C_2/r_0^2
< - 2$$ for a $C_2$ chosen large enough.
Finally, notice that points $(z_0,w_0)$ with $|z_0|<r_0/2$ and $F(x)=0$ verify $(z_0,w_0) \in \partial D_0 \cap
\partial D$, $ D_0 \subset D$, and $D_0$ is strictly pseudoconcave at $(z_0,w_0)$, so $D$ is as well.
[*Proof of Proposition \[1\].*]{} Let $D$ be the domain from Lemma \[6\]. We may choose a dense countable subset $(a_j)\subset\partial D$ of points of strict pseudoconvexity. For any $j,$ there is a negative function $u_j\in\PSH(D)$ with $\lim_{z\to a_j}u_j(z)=0.$ If $(D_j)$ is an exhaustion of $D$ such that $D_j\Subset
D_{j+1}$ and $m_j=-\sup_{D_j}{u_j},$ then it is enough to take $u$ to be the upper semicontinuous regularization of $\sup_j u_j/m_j.$
Proofs of Propositions 2, 3 and 4
=================================
[*Proof of Proposition \[2\].*]{} We may assume that $D$ has a global defining function $r:U\to\RR$ with $U=U(\partial
D)$, $r\in\mathcal{C}^2(U)$, and $\grad r\neq 0$ on $U$, such that $D\cap U=\{z\in U:r(z)<0\}$.
Now assume the contrary. Then we may find a point $z^0\in\partial
D$ such that the Levi form of $r$ at $z^0$ is not positive semidefinite on the complex tangent hyperplane to $\partial D$ at $z_0.$ Therefore, there is a complex tangent vector $a$ with $\L
r(z_0,a)\le-2c<0$, where $\L r(z_0,a)$ denotes its Levi form at $z^0$ in direction of $a$. Moreover, we may assume that $|\frac{\partial r}{\partial z_1}(z_0)|\geq 2c.$
Now choose $V=V(z^0)\subset U$ and $u\in\PSH(D\cap V)$ with $$\limsup_{D\cap V\owns z\to z_0}u(z)=\infty;$$ in particular, there is a sequence of points $D\cap V\ni b^j\to z_0$ such that $u(b^j)\to\infty$.
By the $\mathcal{C}^2$-smooth assumption, there is an $\eps_0>0$ such that for all $z\in\BB(z_0,\eps_0)\subset V$ and all $\tilde
a\in\BB(a,\eps_0)$ we have
$$\L r(z,\tilde a)\leq -c,\quad |\frac{\partial r}{\partial
z_1}(z)|\geq c.$$
Now fix an arbitrary boundary point $z\in\partial
D\cap\BB(z_0,\eps_0)$. Define $$a(z):=a+(-\frac{\sum_{j=1}^n a_j\frac{\partial r}{\partial
z_j}(z)}{\frac{\partial r}{\partial z_1}(z)},0,\dots,0).$$ Observe that this vector is a complex tangent vector at $z$ and $a(z)\in\BB(a,\eps_0)$ if $z\in\BB(z_0,\eps_1)$ for a sufficiently small $\eps_1<\eps_0$.
Now, let $z\in\partial D\cap\BB(z_0,\eps_1)$. Put $$b_1(z):=\frac{\L r(z,a(z))}{2\frac{\partial r}{\partial z_1}(z)}$$ and $$\phi_z(\lambda)=z+\lambda a+(\lambda a_1(z)+\lambda^2
b_1(z),0,\dots,0),\quad \lambda\in\CC.$$
Moreover, if $\eps_1$ is sufficiently small, we may find $\delta,
t_0>0$ such that for all $z\in\partial D\cap\BB(z_0,\eps_1)$ we have $$\overline D\cap \BB(z,\delta)-t\nu(z)\subset D,\quad 0<t\leq t_0,$$ where $\nu(z)$ denotes the outer unit normal vector of $D$ at $z$.
Next using the Taylor expansion of $\phi_z$, $z\in\partial
D\cap\BB(z_0,\eps_1)$, $\eps_1$ sufficiently small, we get $$r\circ\phi_z(\lambda)=|\lambda|^2\Bigl(\L
r(z,a(z))+\eps(z,\lambda)\Bigr),$$ where $|\eps(z,\lambda)|\leq \eps(\lambda)\to 0$ if $\lambda\to
0$.
In particular, $\phi_z(\lambda)\in \BB(z,\delta)\cap D\subset
V\cap D$ when $0<|\lambda|\leq \delta_0$ for a certain positive $\delta_0$ and $r\circ\phi_z(\lambda)\leq -\delta_0^2c/2$ when $|\lambda|=\delta_0$.
Hence, $K:=\bigcup_{z\in\partial D\cap \BB(z_0,\eps_1),
|\lambda|=\delta_0}\phi_z(\lambda)\Subset D\cup V$. Choose an open set $W=W(K)\Subset D\cap V$. Then $u\leq M$ on $W$ for a positive $M$.
Finally, choose a $j_0$ such that $b^j=z^j-t_j\nu(z^j)$, $j\geq
j_0$, where $z^j\in\partial D\cap\BB(z_0,\eps_1)$, $0<t_j\leq
t_0$, and $\phi_{z^j}(\lambda)\in W$ when $|\lambda|=\delta_0$. Therefore, by construction, $u(b^j)\leq M$, which contradicts the assumption.
[*Proof of Proposition \[3\].*]{} Assume that $D$ is not pseudoconvex. Then, by Corollary 4.1.26 in [@Hor], there is $\varphi\in\mathcal O(\Bbb D, D)$ such that $\dist(\varphi(0),\partial D)<\dist(\varphi(\zeta),\partial D)$ for any $\zeta\in\Bbb D_\ast.$ To get a contradiction, it remains to use arguments similar to those in the previous proof, and we skip the details.
[*Proof of Proposition \[4\].*]{} It is enough to show that if $\mathcal O(\Bbb D,D)\ni\psi_j\to\psi$ and $\psi(\zeta)\in\partial D$ for some $\zeta\in\Bbb D,$ then $\psi(\Bbb D)\subset\partial D.$ Suppose the contrary. Then it is easy to find points $\eta_k\to\eta\in\Bbb D$ such that $\psi(\eta_k)\in D$ but $a=\psi(\eta)\in\partial D.$ We may assume that $\eta=0$ and $g_a=\frac{1}{f_a}$ is bounded on $D\cap U_a.$ Let $r\in(0,1)$ be such that $\psi(r\Bbb D)\Subset U_a.$ Then $\psi_j(r\Bbb D)\subset U_a$ for any $j\ge j_0.$ Hence $|g_a\circ\psi_j|<1$ and we may assume that $g_a\circ\psi_j\to
h_a\in\mathcal O(r\Bbb D,\Bbb C).$ Since $h_a(\eta)=0,$ it follows by the Hurwitz theorem that $h_a=0.$ This contradicts the fact that $h_a(\eta_k)=g_a\circ\psi(\eta_k)\neq 0$ for $|\eta_k|<r.$
Proofs of Propositions 5, 6 and 7
=================================
[*Proof of Proposition \[10\].*]{} Our aim is to construct a non-pseudoconvex bounded domain $D\subset\CC^2$ such that $\limsup_{z\to a}K_D(z)=\infty$ for any $a\in\partial D$.
Let us start with the domain $P\times\Bbb D$, where $P=\{\lambda\in\Bbb C:\frac{1}{2}<|\lambda|<\frac{3}{2}\}$. Let $$S:=\{(z_1,z_2)=(x_1+iy_1,z_2)\in
P\times\DD:(x_1-1)^2+\frac{1+|z_2|^2}{1-|z_2|^2}y_1^2=\frac{1}{4},y_1>0\}.$$ Define $D:=(P\times\DD)\setminus S$. Note that $D$ is a domain. Its envelope of holomorphy is non-schlicht and consists of the union of $D$ and one additional ’copy’ of the set $$D_1:=\{(z_1,z_2)\in
P\times\DD:(x_1-1)^2+\frac{1+|z_2|^2}{1-|z_2|^2}y_1^2\leq\frac{1}{4},y_1>0\}.$$ In particular, $D$ is not pseudoconvex. Note that convexity of the interior $D^0$ of $D_1$ implies that $\lim_{z\to\partial
D_1}K_{D^0}(z)=\infty$. Therefore, it follows from the localization result for the Bergman kernel due to Diederich-Fornaess-Herbort formulated for Riemann domains in the paper [@Irg1] that for all $a\in S\subset\partial D_1$ the following property holds: $\lim_{D\cap D_1\owns z \to a}K_D(z)=\infty$ (on the other hand while tending to the points from $S$ from the ’other side’ of the domain $D$ the Bergman kernel is bounded from above). Obviously $P\times\DD$ is Bergman exhaustive, so for any $a\in\partial(P\times\DD)$ the following equality holds $\lim_{z\to
a}K_D(z)=\infty$.
[*Proof of Proposition \[11\].*]{} Recall the following facts that follow from [@Bre].
If the envelope of holomorphy $\hat D$ of the domain $D$ is a domain in $\Bbb C^n$ (is schlicht) then the Bergman kernel $K_D$ extends to a real analytic function $\tilde K_D$ defined on $\hat D$.
Let $\emptyset\neq P_0\subset D$, $P_0\subset P$, $P\setminus
D\neq\emptyset$ and $\bar P_0\cap (\Bbb C^n\setminus
D)\neq\emptyset$, where $P_0,P$ are polydiscs, and the following property is satisfied: for any $f\in\O(D)$ there is a function $\tilde f\in\O(P)$ such that $f=\tilde f$ on $P_0$. Then the Bergman kernel $K_D$ extends to a real analytic function on $P$. More precisely, there is a real analytic function $\tilde K_D$ defined on $P$ such that $\tilde K_D(z)=K_D(z)$, $z\in P_0$.
Both facts above complete the proof of Proposition \[11\].
The proof of Proposition \[12\] is essentially contained in [@Irg2]. However, this PhD Thesis is not publicly accessible. Therefore we repeat it here. The idea is the following: if $\limsup_{z\to a}K_D(z)=\infty$ for some $a\in\partial D,$ then there is an $f\in L_h^2(D)$ such that $\limsup_{z\to
a}|f(z)|=\infty$.
[*Proof of Proposition \[12\].*]{} In view of Proposition 5, $\limsup_{z\to a}K_D(z)=\infty$ for any $a\in\partial D$.
Let $a\in\partial D$. We claim that there is an $L_h^2(D)$-function $h$ which is unbounded near $a$.
Assume the contrary. Hence for any $f\in L_h^2(D)$ there exists a neighborhood $U_f$ of $a$ and a number $M_f$ such that $|f|\leq M_f$ on $D\cap U_f.$
Denote by $L$ the unit ball in $L_h^2(D)$ and set $c:=\pi^n$.
Let $K_1:=\{z\in D:\dist(z,\partial D)\geq 1\}$ (if this is empty take a smaller number than $1$). By the mean value inequality we have for any $f\in L$ that $|f|\leq c$ on $K_1$. By assumption, there are $z_1\in D$ and $f_1\in L$ such that $|z_1-a|<1$ and $|f_1(z_1)|>c$.
Set $g_1:=f_1/c$. Then $g_1\in L$ and therefore there are a neighborhood $U_1$ of $a$ and a number $M_1>1$ such that $|g_1|\leq
M_1$ on $D\cap U_1$.
Set $K_2:=\{z\in D:\dist(z,\partial D)\geq\dist(z_1,\partial D)\}$ and $d=c\dist(z_1,\partial D).$ Then $K_1\subset K_2$. Choose $z_2\in U_1\cap D$, $z_2\notin K_2,$ $|z_2-a|<1/2,$ and $f_2\in L$ with $|f_2(z_2)|\geq d(1^3+1^2M_1)$. Moreover, $|f_2|\leq d$ on $K_2$. Put $g_2:=f_2/d$. Then $g_2\in L$. Choose now a neighborhood $U_2$ of $a$ and a number $M_2$ such that $|g_2|\leq M_2$ on $D\cap
U_2$.
Then we continue this process.
So we have points $z_k\in U_{k-1}\cap D$, $z_k\notin K_{k}$, $|z_k-a|<1/k,$ and functions $f_k\in L$ with $$|f_k(z_k)|\geq c\dist(z_{k-1},\partial
D)^n(k^3+k^2\sum_{j=1}^{k-1}M_j).$$
Setting $g_k:=f_k/d$ and $h:=\sum_{j=1}^\infty g_j/j^2,$ it is clear that $h\in L_h^2(D)$.
Fix now $k\ge 2$. Then $$|h(z_k)|\geq
\frac{|g_k(z_k)|}{k^2}-\sum_{j=1}^{k-1}\frac{|g_j(z_k)|}{j^2}-
\sum_{j=k+1}^\infty\frac{|g_j(z_k)|}{j^2}$$ $$\ge
k+\sum_{j=1}^{k-1}M_j-\sum_{j=1}^{k-1}\frac{M_j}{j^2}-\sum_{j=k+1}^\infty
\frac{1}{j^2}>k-\frac{1}{6}.$$ In particular, $h$ is unbounded at $a$ which is a contradiction.
It remains to choose a dense countable sequence $(a_j)\subset\partial D$ such that any term repeats infinitely many times and to copy the proof of the Cartan-Thullen theorem.
H. J. Bremermann, [*Holomorphic continuation of the kernel function and the Bergman metric in several complex variables*]{} in “Lectures on functions of several complex variables”, Univ. of Mich. Press, 1955, pp. 349–383.
L. Hörmander, [*Notions of convexity*]{}, Birkhäuser, Basel–Boston–Berlin, 1994.
M. Irgens, [*Extension properties of square integrable holomorphic functions*]{}, PhD Thesis, Michigan University, 2002.
M. Irgens, [*Continuation of $L^2$-holomorphic functions*]{}, Math. Z. 247 (2004), 611–617.
D. Jacquet, [*On complex convexity,*]{} Doctoral Dissertation, University of Stockholm, 2008.
P. Pflug, W. Zwonek, [*$L\sp 2\sb h$-domains of holomorphy and the Bergman kernel*]{}, Studia Math. 151 (2002), 99–108.
[^1]: This means that $\mathcal O(\Bbb D,D)$ is a normal family, where $\Bbb D\subset\Bbb C$ is the open unit disc. Note that any taut domain is pseudoconvex and any bounded pseudoconvex domain with $C^1$-smooth boundary is taut.
---
abstract: 'The Solar and Heliospheric Observatory ([*[SOHO]{}*]{}) was launched in December 1995 with a suite of instruments designed to answer long-standing questions about the Sun’s internal structure, its extensive outer atmosphere, and the solar wind. This paper reviews the new understanding of the physical processes responsible for the solar wind that have come from the past 8 years of [*SOHO*]{} observations, analysis, and theoretical work. For example, the UVCS instrument on [*SOHO*]{} has revealed the acceleration region of the fast solar wind to be far from simple thermal equilibrium. Evidence for preferential acceleration of ions, 100 million K ion temperatures, and marked departures from Maxwellian velocity distributions all point to specific types of collisionless heating processes. The slow solar wind, typically associated with bright helmet streamers, has been found to share some of the nonthermal characteristics of the fast wind. Abundance measurements from spectroscopy and visible-light coronagraphic movies from LASCO have led to a better census of the plasma components making up the slow wind. The origins of the solar wind in the photosphere and chromosphere have been better elucidated with disk spectroscopy from the SUMER and CDS instruments. Finally, the impact of the solar wind on spacecraft systems, ground-based technology, and astronauts has been greatly aided by having continuous solar observations at the Earth-Sun L1 point, and [*SOHO*]{} has set a strong precedent for future studies of space weather.'
author:
- 'S. R. Cranmer'
title: New insights into Solar Wind Physics from SOHO
---
Introduction
============
If the Sun is a benchmark for stellar astrophysics, then the solar wind is even more of a necessary reference for the study of stellar winds. The Sun is special because of its proximity; the solar wind is [*unique*]{} because we are immersed in it and its plasma can be accessed directly by space probes. Despite all that has been learned by the [*in situ*]{} detection of particles and fields, though, we have learned the most about how the solar wind is produced from good, old-fashioned astronomical imaging and spectroscopy. This paper summarizes the most recent results of this kind from the past decade of observations with the [*SOHO*]{} spacecraft. (A top-ten list of reasons “Why stellar astronomers should be interested in the Sun” is given in an article with that title by Schmelz 2003).
Brief History (pre-SOHO)
========================
Sightings of the solar corona and the shimmering aurora (i.e., the beginning and end points of the solar wind that encounters the Earth) go back into antiquity. The first scientific understanding of the outer solar atmosphere came as spectroscopy began to be applied to heavenly light in the late 19th century. In 1869, Harkness and Young first observed the 5303 [å]{} coronal green line during a total eclipse. Soon after, in the very first issue of [*Nature,*]{} J. Norman Lockyer (the discoverer of helium) reported that these observations were “...[*[bizarre]{}*]{} and puzzling to the last degree!” The chemical element responsible for the green line went unidentified for 70 years, after which Grotrian and Edlén applied new advances in atomic physics to show that several of the coronal emission lines are produced by very high ionization stages of iron, calcium, and nickel. The puzzle then shifted to explaining what causes the outer solar atmosphere to be heated to temperatures of more than 10$^6$ K. Despite many proposed theoretical processes, this “coronal heating problem” is still with us today because there are no clear observational constraints that can determine which if any of the competing mechanisms is dominant.
Knowledge about an outflow of particles from the Sun was starting to coalesce at the beginning of the 20th century. Researchers came to notice strong correlations between sunspot activity, geomagnetic storms, auroral appearances, and motions in comet tails. Parker (1958, 1963) combined these empirical clues with the earlier discovery of a hot corona and postulated a theoretical model of a steady-state outward expansion of gas from the solar surface. Figure 1 shows analytic solutions to Parker’s isothermal solar wind equation, as well as other solutions including the case of spherical accretion found earlier by Bondi (1952). Traditionally this equation was believed to be unsolvable explicitly for the outflow speed, but Cranmer (2004) showed how a new transcendental function (the Lambert $W$ function) can be used to produce exact closed-form solutions of this equation.
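To make this closed-form result concrete, the short sketch below evaluates the transonic (critical) solution of the isothermal Parker equation using the Lambert $W$ function, in the spirit of Cranmer (2004). The coronal temperature adopted here is only an illustrative assumption and is not a value taken from this paper; the branch choice (principal branch below the critical radius, lower branch above it) follows from the algebraic rearrangement noted in the comments.

```python
import numpy as np
from scipy.special import lambertw

# Illustrative assumption (not a value from this paper): an isothermal corona
# at T = 1.5e6 K with mean particle mass m_p/2 (fully ionized hydrogen).
k_B, m_p = 1.381e-23, 1.673e-27
G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
T = 1.5e6                                   # K
c_s = np.sqrt(2.0 * k_B * T / m_p)          # isothermal sound speed, m/s
r_c = G * M_sun / (2.0 * c_s**2)            # Parker critical (sonic) radius, m

def parker_speed(r):
    """Transonic solution of Parker's isothermal wind equation via Lambert W.

    With w = (v/c_s)^2, the critical solution satisfies
        w - ln w = 4 ln(r/r_c) + 4 r_c/r - 3 =: F(r),
    which rearranges to w = -W_k(-exp(-F)); branch k=0 gives the subsonic
    part (r <= r_c) and branch k=-1 the supersonic part (r >= r_c).
    """
    r = np.atleast_1d(np.asarray(r, dtype=float))
    F = 4.0 * np.log(r / r_c) + 4.0 * r_c / r - 3.0
    w = np.array([-lambertw(-np.exp(-f), k=(0 if ri <= r_c else -1)).real
                  for f, ri in zip(F, r)])
    return c_s * np.sqrt(w)

radii = np.array([1.5, 5.0, 10.0, 215.0]) * R_sun   # 215 R_sun ~ 1 AU
for r, v in zip(radii, parker_speed(radii)):
    print(f"r = {r / R_sun:6.1f} R_sun : v = {v / 1e3:6.1f} km/s")
```

With these assumptions the solution passes smoothly through $v = c_s$ at $r = r_c$ and reaches several hundred km/s near 1 AU, qualitatively reproducing the transonic behavior described above.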
Parker’s key insight was that the high temperature of the corona provides enough energy per particle to overcome gravity and produce a natural transition from a subsonic (bound, negative total energy) state near the Sun to a supersonic (outflowing, positive total energy) state in interplanetary space. An initial controversy over whether Parker’s “transonic” solution or Chamberlain’s always-subsonic “breeze” solutions were most physically relevant was dispelled when [*Mariner 2*]{} confirmed the existence of a continuous, supersonic solar wind in interplanetary space (Snyder and Neugebauer 1964; see also Neugebauer 1997). Also, Velli (2001) showed that the subsonic breeze solutions are formally unstable, and that the truly stable solution for the solar wind case is the one that goes through the sonic point.
In the years since [*Mariner 2,*]{} many other deep-space missions have added to our understanding of the solar wind. The turbulent inner heliosphere was probed by the two [*Helios*]{} spacecraft, which measured particle and field properties between 0.29 and 1 AU (Marsch 1991). In the 1990s, [*Ulysses*]{} became the first probe to venture far from the ecliptic plane and soar over the solar poles to measure the solar wind in three dimensions (Marsden 2001). The [*Voyager*]{} probes are still sending back data on the outer reaches of the solar wind, and one of them may have passed through the termination shock separating the heliosphere from the interstellar medium (e.g., Krimigis et al. 2003).
It has not been possible to send space probes closer to the Sun than about the orbit of Mercury, so our understanding of the corona is limited to remote-sensing observations. Ultraviolet and X-ray studies of the “lower” corona (i.e., within about 0.1 to 0.3 $R_{\odot}$ from the surface) began in the 1960s and kept improving in spatial resolution from [*Skylab*]{} in the 1970s (Vaiana 1976) to [*Yohkoh*]{} in the 1990s (Martens and Cauffman 2002). Until the 20th century, total solar eclipses were the only means of seeing any hint of the dim “extended” corona (heights above $\sim$0.3 $R_{\odot}$) where most of the solar wind’s acceleration occurs. However, with the invention of the disk-occulting coronagraph by Lyot in 1930 and the development of rocket-borne ultraviolet coronagraph spectrometers in the 1970s (Kohl et al. 1978; Withbroe et al. 1982), a continuous detailed exploration of coronal plasma physics became possible.
The SOHO Mission
================
The [*Solar and Heliospheric Observatory*]{} ([*[SOHO]{}*]{}) is the most extensive space mission ever dedicated to the study of the Sun and its surrounding environment. [*SOHO*]{} resulted from international collaborations between ESA and NASA dating back to the early 1980s (see, e.g., Huber et al. 1996) and became a cornerstone mission in the International Solar-Terrestrial Physics (ISTP) program. The [*SOHO*]{} spacecraft was launched on December 2, 1995 and entered a halo orbit around the Earth-Sun L1 point two months later. This orbit allows an uninterrupted view of the Sun, essential for helioseismology, but the distance (about 4 Earth-Moon distances) also puts limits on the amount of data telemetry that can be received. [*SOHO*]{} hosts 12 instruments that study the solar interior, solar atmosphere, particles and fields in the solar wind, and the distant heliosphere. Early results from the first year of operations were presented by Fleck and Švestka (1997), and a more up-to-date summary of the mission—along with details about how to access data and analysis software—is given by Domingo (2002).
This paper presents results mainly from the 5 instruments designed to observe the hot, outer atmosphere of the Sun (excluding the photosphere) where the solar wind is accelerated. These instruments are listed below in alphabetic order.
1. [**CDS**]{} (Coronal Diagnostic Spectrometer) is a pair of extreme ultraviolet spectrometers that view the solar disk and low off-limb corona in the wavelength range 150–785 [Å]{} with spectral resolution $\lambda / \Delta\lambda \sim 700$–4500, and with 2–$3''$ spatial resolution (Harrison et al. 1995).
2. [**EIT**]{} (Extreme-ultraviolet Imaging Telescope) is a full-disk imager with $5''$ spatial resolution that obtains narrow-bandpass images of the Sun at 304, 171, 195, and 284 [Å]{} (Delaboudinière et al. 1995). EIT images and movies are probably the most reproduced data products from [*SOHO*]{} (see, e.g., the covers of many popular magazines).
3. [**LASCO**]{} (Large Angle Spectroscopic Coronagraph) is a package of 3 visible-light coronagraphs with overlapping annular fields of view (Brueckner et al. 1995). The C1 coronagraph observes from 1.1 to 3 $R_{\odot}$ with 5 Fabry-Perot filter bandpasses. The C2 and C3 coronagraphs observe the radii 2–6 $R_{\odot}$ and 4–30 $R_{\odot}$, respectively, in either linearly polarized or unpolarized light. C1 was fully operational from launch until the 3-month [*SOHO*]{} interruption in June 1998.
4. [**SUMER**]{} (Solar Ultraviolet Measurements of Emitted Radiation) is an ultraviolet spectrometer that observes the solar disk and low corona in the wavelength range 330–1610 [Å]{} with a spectral resolution of about 12000, and with $\sim$1.5$''$ spatial resolution (Wilhelm et al. 1995). SUMER observations have been constrained recently by the need to conserve the remaining count potential of the detectors.
5. [**UVCS**]{} (Ultraviolet Coronagraph Spectrometer) is a combination of an ultraviolet spectrometer and a linearly occulted coronagraph that observes a 2.5 $R_{\odot}$ long swath of the extended corona, oriented tangentially to the solar radius, at heliocentric heights ranging between 1.3 and 12 $R_{\odot}$ (Kohl et al. 1995). The spectrometer slit can be rotated around the Sun. UVCS observes the wavelength range 470–1360 [Å]{} with $\sim$10$^4$ spectral resolution and $7''$ spatial resolution.
SOHO Solar Wind Results
=======================
The bulk of this paper describes results from the above [*SOHO*]{} instruments (and associated theoretical work) concerning the physics of solar wind acceleration and heating. Figure 2 is an illustrative summary of the topics covered by the following subsections.
Wind Origins in Open Magnetic Regions
-------------------------------------
Most of the plasma that eventually becomes the time-steady solar wind seems to originate in thin magnetic flux tubes (with observed sizes of order 100–200 km) observed mainly in the dark lanes between granular cells and concentrated most densely in the supergranular network. These strong-field (1–2 kG) flux tubes have been known as G-band bright points, network bright points, or in groups as “solar filigree” (e.g., Dunn and Zirker 1973; Spruit 1984; Berger and Title 2001). Somewhere in the low chromosphere, the thin flux tubes expand laterally to the point where they merge with one another into a more-or-less homogeneous network field distribution of order $\sim$100 G. At a larger height in the chromosphere, these network flux bundles are thought to merge again into a large-scale “canopy” (Gabriel 1976; Dowdy et al. 1986). This second stage of merging is accompanied by further lateral expansion into “funnels” that may be the lowest sites of observable solar wind acceleration.
SUMER has added substantially to earlier observations of Doppler shifts and nonthermal line broadening in the chromosphere, transition region, and low corona (see H. Peter, these proceedings). Observations of blueshifts in supergranular network lanes and vertices, especially in coronal holes that host the fastest solar wind, may be evidence for either the solar wind itself or upward-going waves that are linked to wind acceleration processes (e.g., Hassler et al. 1999; Peter and Judge 1999; Aiouaz et al. 2004). These interpretations are still not definitive, though, because there are other observational diagnostics that imply more of a blueshift in the supergranular cell-centers [*between*]{} funnels (e.g., He I 10830 [å]{} in coronal holes; Dupree et al. 1996; Malanushenko and Jones 2004).
Off-limb measurements with SUMER and CDS have also provided constraints on plasma temperatures at very low heights in the corona. In coronal holes, ion temperatures exceed electron temperatures even for $r \sim 1.1 \, R_{\odot}$, where densities were presumed to be so high as to ensure rapid collisional coupling and thus equal temperatures for all species (Tu et al. 1998; Moran 2003). Spectroscopic evidence is also mounting for the presence of transverse Alfvén waves propagating into the corona (e.g., Banerjee et al. 1998). Electron temperatures derived from line ratios (David et al. 1998; Doschek et al. 2001) exhibit surprisingly small values in the low off-limb corona (300,000 to 800,000 K) that are not in agreement with higher temperatures derived from “frozen-in” [*in situ*]{} charge states. The only way to reconcile these observations seems to be some combination of non-Maxwellian electron velocity distributions or differential flow between different ion species even very near the Sun (see Esser and Edgar 2002).
[*SOHO*]{} has made new inroads into our understanding of the basic coronal heating problem, but a full review of these results is outside the scope of this paper. The nature of coronal heating must be closely related to the evolution of the Sun’s magnetic field, which changes on rapid time scales and becomes organized into progressively smaller stochastic structures (see, e.g., Priest and Schrijver 1999; Aschwanden et al. 2001; C. Schrijver, these proceedings). As computer power increases, the direct [*ab initio*]{} simulation of time-dependent coronal heating is becoming possible (Gudiksen and Nordlund 2004).
The Fast Solar Wind
-------------------
It has been known for more than three decades that dark [*coronal holes*]{} coincide with regions of open magnetic field and the highest-speed solar wind streams (Wilcox 1968; Krieger et al. 1973). At the minimum of the Sun’s 11-year magnetic cycle the coronal magnetic field is remarkably axisymmetric (e.g., Banaszkiewicz et al. 1998), with large coronal holes at the north and south poles giving rise to fast wind ($v_{\infty} > 600$ km/s) that fills most of the heliosphere. It was fortunate that the first observations of [*SOHO*]{} were during this comparatively simple phase, thus minimizing issues of interpretation for lines of sight passing through the optically thin outer corona.
One of the most surprising early results from the UVCS instrument concerned the widths of emission line profiles of the O VI 1032, 1037 [å]{} doublet in coronal holes. These lines were an order of magnitude [*broader*]{} than expected, indicating kinetic temperatures exceeding 100 million K at $r > 2 \, R_{\odot}$ (Kohl et al. 1997). Because the observational line of sight passes perpendicularly through the nearly-radial magnetic field in large coronal holes, this kinetic temperature is a good proxy for the local ion $T_{\perp}$. Further analysis of the O$^{5+}$ velocity distribution was made possible by use of the “Doppler dimming/pumping” effect; i.e., by exploiting the sensitivity to the radial velocity distribution when the coronal scattering profile is substantially Doppler shifted away from the stationary profile(s) of solar-disk photons. This technique allowed the ion temperature anisotropy ratio $T_{\perp} / T_{\parallel}$ to be constrained to values of at least 10, and possibly as large as 100. Temperatures for both O$^{5+}$ and Mg$^{9+}$ were found to be significantly greater than mass-proportional when compared to protons (the latter measured by proxy with neutral hydrogen via H I Ly$\alpha$ 1216 [å]{}), and outflow speeds for O$^{5+}$ may exceed those of hydrogen by as much as a factor of two (see also Kohl et al. 1998, 1999; Li et al. 1998; Cranmer et al. 1999b).
Figure 3 shows a summary of the solar-minimum coronal hole temperatures. Note that when $T_{\rm ion} / T_{p}$ exceeds the mass ratio $m_{\rm ion} / m_{p}$, it is not possible to interpret the measurements as a combination of thermal equilibrium and a species-independent “nonthermal speed.” The UVCS and SUMER data (as well as [*Helios*]{} particle data at 0.3 AU) have thus been widely interpreted as a truly “preferential” heating of heavy ions in the fast solar wind. Because of the low particle densities in coronal holes, the observed collection of ion properties has often been associated with [*collisionless*]{} wave damping. The most natural wave mode that may be excited and damped is the ion cyclotron resonant wave; i.e., an Alfvén wave with a frequency $\omega$ approaching the Larmor frequency of the ions $\Omega_{\rm ion}$. The [*SOHO*]{} observations discussed above have given rise to a resurgence of interest in ion cyclotron waves as a potentially important mechanism in the acceleration region of the fast wind (e.g., McKenzie et al. 1995; Tu and Marsch 1995, 1997, 2001; Hollweg 1999, 2000; Axford et al. 1999; Cranmer et al. 1999a; Li et al. 1999; Cranmer 2000, 2001, 2002; Galinsky and Shevchenko 2000; Hollweg and Isenberg 2002; Vocks and Marsch 2002; Gary et al. 2003; Markovskii and Hollweg 2004).
There remains some controversy over whether ion cyclotron waves generated solely at the coronal base can heat the extended corona, or if a more gradual and extended generation of these waves is needed. If the latter case occurs, there is also uncertainty concerning the origin of such extended wave generation. MHD turbulence has long been proposed as a likely means of transforming fluctuation energy from low frequencies (e.g., periods of a few minutes; believed to be emitted copiously by the Sun) to the high frequencies required by cyclotron resonance theories (e.g., 10$^2$ to 10$^4$ Hz). However, both numerical simulations and analytic descriptions of turbulence indicate that the [*cascade*]{} from large to small scales occurs most efficiently for modes that do not increase in frequency. In the corona, the expected type of turbulent cascade would tend to most rapidly increase electron $T_{\parallel}$, not the ion $T_{\perp}$ as observed. Cranmer and van Ballegooijen (2003) discussed this issue at length and surveyed possible solutions.
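For orientation, the following snippet evaluates the cyclotron frequency $\Omega_{\rm ion}/2\pi = Z e B / (2 \pi m_{\rm ion})$ for protons and for O$^{5+}$ at a few assumed field strengths. The field values are purely illustrative (they are not measurements quoted here), but they show why frequencies of order $10^{2}$ to $10^{4}$ Hz are the relevant range for cyclotron resonance in the extended corona.

```python
import numpy as np

e, m_p, amu = 1.602e-19, 1.673e-27, 1.661e-27     # SI units
m_O = 16.0 * amu                                  # oxygen ion mass (~16 u)

def cyclotron_freq_hz(Z, m_ion, B_tesla):
    """Ion cyclotron frequency f = Z e B / (2 pi m_ion) in Hz."""
    return Z * e * B_tesla / (2.0 * np.pi * m_ion)

# Purely illustrative coronal field strengths (in gauss), not measured values.
for B_gauss in (10.0, 1.0, 0.1):
    B = B_gauss * 1e-4                            # gauss -> tesla
    f_p = cyclotron_freq_hz(1, m_p, B)
    f_O = cyclotron_freq_hz(5, m_O, B)
    print(f"B = {B_gauss:5.1f} G : f_p = {f_p:9.1f} Hz, f_O5+ = {f_O:9.1f} Hz")
```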
At times other than solar minimum, coronal holes appear at all solar latitudes and exhibit a variety of properties. UVCS has been used to measure the heating and acceleration of the fast solar wind in a variety of large coronal holes from 1996 to 2004 (Miralles et al. 2001, 2002; Poletto et al. 2002). A pattern is beginning to emerge, in that coronal holes with lower densities at a given heliocentric height tend to exhibit faster ion outflow and higher ion temperatures (Kohl et al. 2001). However, all of the coronal holes observed by both UVCS and [*in situ*]{} instruments were found to have roughly similar outflow speeds and mass fluxes in interplanetary space. Thus, the densities and ion temperatures measured in the extended corona seem to be indicators of the range of heights where the solar wind acceleration takes place.
The Slow Solar Wind
-------------------
The slow, high-density component of the solar wind is a turbulent, chaotic plasma that flows at about 300–500 km/s in interplanetary space. Before the late 1970s, the slow wind was believed to be the “ambient” background state of the solar wind, occasionally punctuated by transient high-speed streams. This idea came from the limited perspective of spacecraft that remained in or near the ecliptic plane, and it gradually became apparent that the fast wind is indeed the more quiet and basic state (e.g., Feldman et al. 1976; Axford 1977).
In the corona, the slow wind is believed to originate mainly from the bright [*streamers*]{} seen in coronagraph images. However, since these structures are thought to be mainly closed magnetic loops or arcades, it is uncertain how the wind “escapes” into a roughly time-steady flow. Does the slow wind flow mainly along the open-field edges of these closed regions, or do the closed fields occasionally open up and release plasma into the heliosphere? [*SOHO*]{} has provided evidence that both processes occur, but an exact census or mass budget of slow-wind source regions has not yet been constructed.
Figure 4 summarizes several UVCS results concerning streamers and the slow solar wind. A comparison of the raster images built up from multiple scans with the UVCS slit at many heights shows that streamers appear differently in H I Ly$\alpha$ and O VI 1032 [å]{}. The Ly$\alpha$ intensity pattern is similar to that seen in LASCO visible-light images; i.e., the streamer is brightest along its central axis. In O VI, though, there is a diminution in the core whose only interpretation can be a substantial abundance depletion. At solar minimum, large equatorial streamers showed an oxygen abundance of 1/3 the photospheric value along the streamer edges, or “legs,” and 1/10 the photospheric value in the core (Raymond et al. 1997). Low FIP (first ionization potential) elements such as Si and Fe were enhanced by a relative factor of 3 in both cases (Raymond 1999; see also Uzzo et al. 2004). Abundances observed in the legs are consistent with abundances measured in the slow wind [*in situ.*]{} This is a strong indication that the majority of the slow wind originates along the open-field edges of streamers. The very low abundances in the streamer core, on the other hand, are evidence for gravitational settling of the heavy elements in long-lived closed regions, a result that was confirmed by SUMER (Feldman et al. 1998).
Also shown in Figure 4 are outflow speeds derived using the Doppler dimming method (Strachan et al. 2002). Note that a null result (i.e., zero flow speed) was found at various locations inside the closed-field core region. Outflow speeds consistent with the slow solar wind were only found along the higher-latitude edges and above the probable location of the magnetic “cusp” between about 3.6 and 4.1 $R_{\odot}$. Frazin et al. (2003) used UVCS to determine that O$^{5+}$ ions in the legs of a similar streamer have significantly higher kinetic temperatures than hydrogen and exhibit anisotropic velocity distributions with $T_{\perp} > T_{\parallel}$, much like coronal holes. However, the oxygen ions in the core exhibit neither this preferential heating nor the temperature anisotropy. The analysis of UVCS data has thus led to evidence that the fast and slow wind share the same physical processes.
Evidence for another kind of slow wind in streamers came from visible-light coronagraph movies. The increased photon sensitivity of LASCO over earlier instruments revealed an almost continual release of low-contrast density inhomogeneities, or “blobs,” from the cusps of streamers (Sheeley et al. 1997). These features are seen to accelerate to speeds of order 300–400 km/s by the time they reach $\sim$30 $R_{\odot}$, the outer limit of LASCO’s field of view. Wang et al. (2000) reviewed three proposed scenarios for the production of these blobs: (1) “streamer evaporation” as the loop-tops are heated to the point where magnetic tension is overcome by high gas pressure; (2) plasmoid formation as the distended streamer cusp pinches off the gas above an X-type neutral point; and (3) reconnection between one leg of the streamer and an adjacent open field line, transferring some of the trapped plasma from the former to the latter and allowing it to escape. Wang et al. (2000) concluded that all three mechanisms might be acting simultaneously, but the third one seems to be dominant. Because of their low contrast, though (i.e., only about 10% brighter than the rest of the streamer), the blobs cannot comprise a large fraction of the mass flux of the slow solar wind. This is in general agreement with the above abundance results from UVCS.
Despite these new observational clues, the overall energy budget in coronal streamers is still not well understood, nor is their temporal MHD stability. Recent models run the gamut from simple, but insightful, analytic studies (Suess and Nerney 2002) to time-dependent multidimensional simulations (e.g., Wiegelmann et al. 2000; Lionello et al. 2001; Ofman 2004). Notably, a two-fluid study by Endeve et al. (2004) showed that the stability of streamers may be closely related to the kinetic partitioning of heat to protons versus electrons. When the bulk of the heating goes to the protons, the modeled streamers become unstable to the ejection of massive plasmoids; when the electrons are heated more strongly, the streamers are stable. It is possible that the observed (small) mass fraction of LASCO blobs can give us an observational “calibration” of the relative amounts of heat deposited in the proton and electron populations.
Finally, [*SOHO*]{} has given us a much better means of answering the larger question: [*“Why is the fast wind fast, and why is the slow wind slow?”*]{} A simple, but probably wrong, answer would be that coronal holes could be heated more strongly than streamers, so it would be natural for a pressure-driven wind to be accelerated faster in regions of greater heating. Even though UVCS has shown that heavy ions in coronal holes are hotter than in streamers, the proton temperatures between the two regions may not be that different, and the electrons are definitely [*cooler*]{} in coronal holes.
Traditionally, the higher speeds in coronal holes have been attributed to Alfvén wave pressure acceleration; i.e., the net work done on the plasma by repeated pummeling from outward-propagating waves in the inhomogeneous plasma (e.g., Leer et al. 1982). This is likely to be a major contributor, but it does not address the question of why the wave pressure terms would be [*stronger*]{} in coronal holes compared to streamers. A key empirical clue came from Wang and Sheeley (1990), who noticed that the eventual wind speed at 1 AU is inversely correlated with the amount of transverse flux-tube expansion between the solar surface and the mid-corona. In other words, the field lines in the central regions of coronal holes undergo a relatively low degree of “superradial” expansion, but the more distorted field lines at the hole/streamer boundaries undergo more expansion. Wang and Sheeley (1991) also proposed that the observed anticorrelation is a natural by-product of [*equal*]{} amounts of Alfvén wave flux emitted at the bases of all flux tubes (see also earlier work by Kovalenko 1978, 1981).
The Wang/Sheeley/Kovalenko hypothesis can be summarized as follows. (Any misconceptions are mine!) In the low corona, the Alfvén wave flux $F_A$ is proportional to $\rho V_{A} \langle \delta v_{\perp}^{2} \rangle$. The density dependence in the product of Alfvén speed $V_A$ and the squared wave amplitude $\langle \delta v_{\perp}^{2} \rangle$ cancels almost exactly with the linear factor of $\rho$ in the wave flux, thus leaving $F_A$ proportional only to the radial magnetic field strength $B$. The ratio of $F_A$ at the solar wind sonic point (in the mid-corona) to its value at the photosphere thus scales as the ratio of $B$ at the sonic point to its value at the photosphere. The latter ratio of field strengths is proportional to $1/f$, where $f$ is the superradial expansion factor as defined by Wang and Sheeley. For equal wave fluxes at the photosphere for all regions, coronal holes (with low $f$) will thus have a larger flux of Alfvén waves at and above the sonic point compared to streamers (that have high $f$). In other words, for streamers, more of the energy flux will have been deposited [*below*]{} the sonic point. It is worthwhile to recall that the response of the solar wind plasma to extended acceleration and heating depends on whether the energy is deposited in the subsonic or supersonic wind (Leer and Holzer 1980; Pneuman 1980). Adding energy in the subsonic, i.e., nearly hydrostatic, corona raises the density scale height and decreases the asymptotic outflow speed. Adding energy above the sonic point results mainly in a larger outflow speed because the “supply” of material through the sonic point has already been set. This dichotomy seems to agree with the observed differences between coronal holes and streamers.
(For other simulations showing how the wind speed depends on the flux tube divergence, see Chen and Hu 2002; Vásquez et al. 2003. For a recent summary of low-frequency Alfvén wave propagation, reflection, and damping from the photosphere to the interplanetary medium, see Cranmer and van Ballegooijen 2004.)
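A minimal numerical sketch of the scaling argument above is given below; the superradial expansion factors assigned to the two flux tubes are hypothetical values chosen only to illustrate the $1/f$ dependence, not values derived from the observations discussed here.

```python
# Minimal sketch of the flux-ratio scaling described above.  Assumptions (not
# values from this paper): the same Alfven wave flux F_base is launched at the
# photosphere of every flux tube, and the coronal wave flux scales only with
# the radial field strength B, so F_A(sonic point)/F_A(photosphere) ~ 1/f,
# where f is the superradial expansion factor.  The f values are hypothetical.
F_base = 1.0                                   # arbitrary units

expansion_factors = {
    "coronal-hole center (low f)": 5.0,
    "hole/streamer boundary (high f)": 15.0,
}

for name, f in expansion_factors.items():
    print(f"{name:33s}: F_A(sonic)/F_A(photosphere) ~ {F_base / f:.3f}")

ratio = (expansion_factors["hole/streamer boundary (high f)"]
         / expansion_factors["coronal-hole center (low f)"])
print(f"The low-f tube delivers ~{ratio:.0f}x more wave flux past its sonic point.")
```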
Space Weather and CMEs
----------------------
In addition to the relatively time-steady solar wind, the Sun exhibits periodic eruptions of plasma and magnetic energy in the forms of flares, eruptive prominences, and coronal mass ejections (CMEs). These “space weather” events have the potential to interrupt satellite communications, disrupt ground-based power grids, and threaten the safety of orbiting astronauts (see, e.g., Feynman and Gabriel 2000; Song et al. 2001).
[*SOHO*]{} observations of CMEs have demonstrated this mission’s capability to combine high resolution imaging with sensitive spectral measurements to obtain the morphology, evolution, and plasma parameters of the ejected material. As the rate of CME events increased from solar minimum to solar maximum, many unprecedented observations were obtained. Specifically, EIT, SUMER, and CDS observations contained information about CME initiation; LASCO constructed a huge catalog of sizes, morphologies, and expansion speeds of CMEs; and UVCS provided plasma densities, temperatures, ionization states, and Doppler shift velocities of dozens of CMEs in the extended corona (see reviews by Forbes 2000; Raymond 2002; Webb 2002; Lin et al. 2003). UVCS spectra have provided the first real diagnostics of the physical conditions in CME plasma in the corona, and they have helped elucidate the roles of shock fronts (Mancuso and Raymond 2004), thin current sheets driven by reconnection (Lin et al. 2004), and helicity conservation (Ciaravella et al. 2000).
Conclusions
===========
[*SOHO*]{} has made significant progress toward identifying and characterizing the processes that heat the corona and accelerate the solar wind. Most of the [*SOHO*]{} instruments are expected to continue performing at full scientific capability for many more years, hopefully surviving to have some overlap with upcoming solar missions such as [*SDO, STEREO,*]{} and [*Solar-B.*]{} Unfortunately, none of these missions continue the UVCS-type coronagraph spectroscopy of the extended corona; a next-generation instrument of this type would provide much tighter constraints on, e.g., specific departures from Maxwellian and bi-Maxwellian velocity distributions that would nail down the physics conclusively. NASA’s [*Solar Probe,*]{} if ever funded fully, would also make uniquely valuable [*in situ*]{} measurements of the solar wind acceleration region. Observations have guided theorists to a certain extent, but [*ab initio*]{} models are still required before we can claim a full understanding of the physics. To make further progress, the lines of communication must be kept open between theorists and observers, and also between the solar and stellar physics communities.
The author would like to thank John Kohl, John Raymond, and Andrea Dupree for many valuable discussions. The groundbreaking [*SOHO*]{} results would not have been possible without the tireless efforts of the instrument teams and operations personnel at the Experimenters’ Operations Facility (EOF) at Goddard. This work is supported by the National Aeronautics and Space Administration (NASA) under grants NAG5-11913, NAG5-10996, NNG04GE77G, NNG04GE84G to the Smithsonian Astrophysical Observatory, by Agenzia Spaziale Italiana, and by the Swiss contribution to ESA’s PRODEX program.
[**Discussion**]{}
[*Bob Barber:*]{} The Sun is a G5 star. For what other types or classes of star do you think that your models hold?
[*Cranmer:*]{} Late-type stars with hot coronae and solar-type winds probably extend up the main sequence at least into the mid-F spectral type and down to M. Evolved stars between the main sequence and the various “dividing lines” in the upper-right H-R diagram also exhibit coronal signatures and probably have solar-like mass loss rates (see B. Wood, these proceedings). Even the outer atmospheres of the “hybrid chromosphere” stars seem to show some similarities to the solar case.
[*Andrea Dupree:*]{} Any time you have strong magnetic fields in a reasonably high-gravity atmosphere, solar-type coronae and winds seem to be a natural by-product. Even a low-gravity supergiant such as Betelgeuse may have a surface field strength of order 500 G (see B. Dorch, these proceedings) and thus magnetic activity and heating in its outer atmosphere.
[*Jürgen Schmitt:*]{} You pointed out that the slow wind originates from individual helmet streamers. Why is it, then, that the slow wind at the Earth shows such uniform properties?
[*Cranmer:*]{} Well, the slow wind is intrinsically more variable than the fast wind, but taking this variability into account, it is true that the slow wind from one streamer looks very much like the slow wind from another streamer. This uniformity may be related to the overall uniformity in the solar wind mass loss rate, which varies only by about 50% for [*all*]{} types of solar wind (e.g., Galvin 1998; Wang 1998). Older solar wind models could not account for this near-constancy of $\dot{M}$; indeed they predicted that tiny changes in the coronal temperature would result in exponentially amplified changes in $\dot{M}$. Hammer (1982) and Hansteen and Leer (1995) explained this by modeling the complex [*negative feedback*]{} that is set up between thermal conduction, radiative cooling, and mechanical heating. This explained the similarity between slow-wind and fast-wind mass loss rates, so I would assume that it even better explains the eventual similarity between different source regions of the slow wind.
[*Manfred Cuntz:*]{} Can you comment on the significance of “polar plumes” regarding the acceleration of the solar wind?
[*Cranmer:*]{} Plumes are bright ray-like features in coronal holes that seem to trace out the superradial expansion of these open-field regions. Plumes are denser and cooler than the ambient “interplume” plasma, but there is still some controversy about whether the solar wind inside them is slower (Giordano et al. 2000; Teriaca et al. 2003) or faster (Gabriel et al. 2003) than the flow between plumes. Wang (1994) presented a model of polar plumes as the extensions of concentrated bursts of added coronal heating at the base—presumably via microflare-like reconnection events in X-ray bright points. This idea still seems to hold up well in the post-[*[SOHO]{}*]{} era. EIT and UVCS made observations of compressive MHD waves channeled along polar plumes (DeForest and Gurman 1998; Ofman et al. 1999), and if the oscillations are slow-mode magnetosonic waves they should steepen into shocks at relatively low coronal heights (Cuntz and Suess 2001). Somewhere between about 30 and 100 $R_{\odot}$, plumes seem to blend in with the interplume plasma, but it is not yet clear whether this is a gradual approach to transverse pressure balance or the result of something like a Kelvin-Helmholtz mixing instability.
Aiouaz, T., Peter, H., Lemaire, P., Keppens, R., 2004, in [*SOHO–13: Waves, Oscillations, and Small-scale Events in the Solar Atmosphere,*]{} ESA SP–547, 375
Aschwanden, M. J., Poland, A. I., Rabin, D. M., 2001, Ann. Rev. Astron. Astrophys., 39, 175
Axford, W. I., 1977, in [*Study of Travelling Interplanetary Phenomena,*]{} ed. M. A. Shea, D. F. Smart, S. T. Wu (Reidel), 145
Axford, W. I., McKenzie, J. F., Sukhorukova, G. V., Banaszkiewicz, M., Czechowski, A., Ratkiewicz, R., 1999, Space Sci. Rev., 87, 25
Banaszkiewicz, M., Axford, W. I., McKenzie, J. F., 1998, A&A, 337, 940
Banerjee, D., Teriaca, L., Doyle, J. G., Wilhelm, K., 1998, A&A, 339, 208
Berger, T. E., Title, A. M., 2001, ApJ, 553, 449
Bondi, H., 1952, MNRAS, 112, 195
Brueckner, G. E., Howard, R. A., Koomen, M. J., et al., 1995, Solar Phys., 162, 357
Chen, Y., Hu, Y.-Q., 2002, Astrophys. Space Sci., 282, 447
Ciaravella, A., Raymond, J. C., Thompson, B. J., et al., 2000, ApJ, 529, 575
Cranmer, S. R., 2000, ApJ, 532, 1197
Cranmer, S. R., 2001, JGR, 106, 24937
Cranmer, S. R., 2002, Space Sci. Rev., 101, 229
Cranmer, S. R., 2004, American J. Phys., in press (arXiv astro–ph/0406176)
Cranmer, S. R., Field, G. B., Kohl, J. L., 1999a, ApJ, 518, 937
Cranmer, S. R., Kohl, J. L., Noci, G., et al., 1999b, ApJ, 511, 481
Cranmer, S. R., van Ballegooijen, A. A., 2003, ApJ, 594, 573
Cranmer, S. R., van Ballegooijen, A. A., 2004, ApJ Suppl., submitted
Cuntz, M., Suess, S. T., 2001, ApJ, 549, L143
David, C., Gabriel, A. H., Bely-Dubau, F., Fludra, A., Lemaire, P., Wilhelm, K., 1998, A&A, 336, L90
DeForest, C. E., Gurman, J. B., 1998, ApJ, 501, L217
Delaboudinière, J.-P., Artzner, G. E., Brunaud, J., et al., 1995, Solar Phys., 162, 291
Domingo, V., 2002, Astrophys. Space Sci., 282, 171
Doschek, G. A., Feldman, U., Laming, J. M., Schühle, U., Wilhelm, K., 2001, ApJ, 546, 559
Dowdy, J. F., Jr., Rabin, D., Moore, R. L., 1986, Solar Phys., 105, 35
Dunn, R. B., Zirker, J. B., 1973, Solar Phys., 33, 281
Dupree, A. K., Penn, M. J., Jones, H. P., 1996, ApJ, 467, L121
Endeve, E., Holzer, T. E., Leer, E., 2004, ApJ, 603, 307
Esser, R., Edgar, R. J., 2002, Adv. Space Res., 30 (3), 481
Feldman, U., Schühle, U., Widing, K. G., Laming, J. M., 1998, ApJ, 505, 999
Feldman, W. C., Asbridge, J. R., Bame, S. J., Gosling, J. T., 1976, JGR, 81, 5054
Feynman, J., Gabriel, S. B., 2000, JGR, 105, 10543
Fleck, B., Švestka, Z. (eds.), 1997 [*The First Results from SOHO*]{} (Kluwer)
Forbes, T. G., 2000, JGR, 105, 23153
Frazin, R. A., Cranmer, S. R., Kohl, J. L., 2003, ApJ, 597, 1145
Gabriel, A. H., 1976, Phil. Trans. Roy. Soc. A, 281, 339
Gabriel, A. H., Bely-Dubau, F., Lemaire, P., 2003, ApJ, 589, 623
Galinsky, V. L., Shevchenko, V. I., 2000, Phys. Rev. Lett., 85, 90
Galvin, A. B., 1998, in [*Cool Stars 10,*]{} ed. R. A. Donahue and J. A. Bookbinder, ASP Conf. Ser. 154, 341–342
Gary, S. P., Yin, L., Winske, D., Ofman, L., Goldstein, B. E., Neugebauer, M., 2003, JGR, 108 (A2), 1068, 10.1029/2002JA009654
Giordano, S., Antonucci, E., Noci, G., Romoli, M., Kohl, J. L., 2000, ApJ, 531, L79
Gudiksen, B. V., Nordlund, [Å]{}., 2004, ApJ, submitted (arXiv astro–ph/0407266)
Hammer, R., 1982, ApJ, 259, 767
Hansteen, V. H., Leer, E., 1995, JGR, 100, 21577
Hansteen, V. H., Leer, E., Holzer, T. E., 1997, ApJ, 482, 498
Harrison, R. A., Sawyer, E. C., Carter, M. K., et al., 1995, Solar Phys., 162, 233
Hassler, D. M., Dammasch, I. E., Lemaire, P., Brekke, P., Curdt, W., Mason, H. E., Vial, J.-C., Wilhelm, K., 1999, Science, 283, 810
Hassler, D. M., Wilhelm, K., Lemaire, P., Schühle, U., 1997, Solar Phys., 175, 375
Hollweg, J. V., 1999, JGR, 104, 505
Hollweg, J. V., 2000, JGR, 105, 15699
Hollweg, J. V., Isenberg, P. A., 2002, JGR, 107 (A7), 1147, 10.1029/2001JA000270
Huber, M. C. E., Bonnet, R. M., Dale, D. C., Arduini, M., Fröhlich, C., Domingo, V., Whitcomb, G., 1996, ESA Bulletin, 86, 25, online at http://esapub.esrin.esa.it/
Kohl, J. L., Cranmer, S. R., Frazin, R. A., Miralles, M., Strachan, L., 2001, Eos Trans. AGU, 82 (20), Spring Meeting Suppl., S302 (SH21B-08)
Kohl, J. L., Esser, R., Cranmer, S. R., et al., 1999, ApJ, 510, L59
Kohl, J. L., Esser, R., Gardner, L. D., et al., 1995, Solar Phys., 162, 313
Kohl, J. L., Noci, G., Antonucci, E., et al., 1997, Solar Phys., 175, 613
Kohl, J. L., Noci, G., Antonucci, E., et al., 1998, ApJ, 501, L127
Kohl, J. L., Reeves, E. M., Kirkham, B., 1978, in [*New Instrumentation for Space Astronomy,*]{} ed. K. van der Hucht, G. Vaiana (Pergamon), 91
Kovalenko, V. A., 1978, Geomag. Aeronom., 18, 529
Kovalenko, V. A., 1981, Solar Phys., 73, 383
Krieger, A. S., Timothy, A. F., Roelof, E. C., 1973, Solar Phys., 29, 505
Krimigis, S. M., Decker, R. B., Hill, M. E., et al., 2003, Nature, 426, 45
Leer, E., Holzer, T. E., 1980, JGR, 85, 4631
Leer, E., Holzer, T. E., [Flå]{}, T., 1982, Space Sci. Rev., 33, 161
Li, X., Habbal, S. R., Hollweg, J. V., Esser, R., 1999, JGR, 104, 2521
Li, X., Habbal, S. R., Kohl, J. L., Noci, G., 1998, ApJ, 501, L133
Lin, J., Raymond, J. C., van Ballegooijen, A. A., 2004, ApJ, 602, 422
Lin, J., Soon, W., Baliunas, S. L., 2003, New Astron. Reviews, 47, 53
Lionello, R., Linker, J. A., Mikić, Z., 2001, ApJ, 546, 542
Malanushenko, O. V., Jones, H. P., 2004, Solar Phys., 222, 43
Mancuso, S., Raymond, J. C., 2004, A&A, 413, 363
Markovskii, S. A., Hollweg, J. V., 2004, ApJ, 609, 1112
Marsch, E., 1991, in [*Physics of the Inner Heliosphere,*]{} vol. 2, ed. R. Schwenn, E. Marsch (Springer-Verlag), 45
Marsden, R. G., 2001, Astrophys. Space Sci., 277, 337
Martens, P. C. H., Cauffman, D. P., 2002, [*Multi-Wavelength Observations of Coronal Structure and Dynamics,*]{} Yohkoh 10th Anniv. Meeting, COSPAR Colloq. Ser. 13 (Pergamon)
McKenzie, J. F., Banaszkiewicz, M., Axford, W. I., 1995, A&A, 303, L45
Miralles, M. P., Cranmer, S. R., Kohl, J. L., 2001, ApJ, 560, L193
Miralles, M. P., Cranmer, S. R., Kohl, J. L., 2002, in [*SOHO–11: From Solar Minimum to Solar Maximum,*]{} ESA SP–508, 351
Moran, T. G., 2003, ApJ, 598, 657
Neugebauer, M., 1997, JGR, 102, 26887
Ofman, L., 2004, Adv. Space Res., 33 (5), 681
Ofman, L., Nakariakov, V. M., DeForest, C. E., 1999, ApJ, 514, 441
Parker, E. N., 1958, ApJ, 128, 664
Parker, E. N., 1963, [*Interplanetary Dynamical Processes*]{} (Interscience)
Peter, H., Judge, P. G., 1999, ApJ, 522, 1148
Pneuman, G. W., 1980, A&A, 81, 161
Poletto, G., Suess, S. T., Biesecker, D. A., et al., 2002, JGR, 107 (A10), 1300, 10.1029/2001JA000275
Priest, E. R., Schrijver, C. J., 1999, Solar Phys., 190, 1
Raymond, J. C., 1999, Space Sci. Rev., 87, 55
Raymond, J. C., 2002, in [*SOHO–11: From Solar Minimum to Solar Maximum,*]{} ESA SP–508, 421
Raymond, J. C., Kohl, J. L., Noci, G., et al., 1997, Solar Phys., 175, 645
Schmelz, J. T., 2003, Adv. Space Res., 32 (6), 895
Sheeley, N. R., Jr., Wang, Y.-M., Hawley, S. H., et al., 1997, ApJ, 484, 472
Snyder, C. W., Neugebauer, M., 1964, Space Res., 4, 89
Song, P., Singer, H. J., Siscoe, G. L. (eds.), 2001, [*Space Weather,*]{} Geophys. Monograph Ser., 125 (AGU)
Spruit, H. C., 1984, in [*Small-scale Dynamical Processes in Quiet Stellar Atmospheres,*]{} ed. S. L. Keil (NSO), 249
Strachan, L., Suleiman, R., Panasyuk, A. V., Biesecker, D. A., Kohl, J. L., 2002, ApJ, 571, 1008
Suess, S.T., Nerney, S. F., 2002, ApJ, 565, 1275
Teriaca, L., Poletto, G., Romoli, M., Biesecker, D. A., 2003, ApJ, 588, 566
Tu, C.-Y., Marsch, E., 1995, Space Sci. Rev., 73, 1
Tu, C.-Y., Marsch, E., 1997, Solar Phys., 171, 363
Tu, C.-Y., Marsch, E., 2001, JGR, 106, 8233
Tu, C.-Y., Marsch, E., Wilhelm, K., Curdt, W., 1998, ApJ, 503, 475
Uzzo, M., Ko, Y.-K., Raymond, J. C., 2004, ApJ, 603, 760
Vaiana, G. S., 1976, Phil. Trans. Roy. Soc. A, 281, 365
Vásquez, A. M., van Ballegooijen, A. A., Raymond, J. C., 2003, ApJ, 598, 1361
Velli, M., 2001, Astrophys. Space Sci., 277, 157
Vocks, C., Marsch, E., 2002, ApJ, 568, 1030
Wang, Y.-M., 1994, ApJ, 435, L153
Wang, Y.-M., 1998, in [*Cool Stars 10,*]{} ed. R. A. Donahue and J. A. Bookbinder, ASP Conf. Ser. 154, 147
Wang, Y.-M., Sheeley, N. R., Jr., 1990, ApJ, 355, 726
Wang, Y.-M., Sheeley, N. R., Jr., 1991, ApJ, 372, L45
Wang, Y.-M., Sheeley, N. R., Jr., Socker, D. G., Howard, R. A., Rich, N. B., 2000, JGR, 105, 25133
Webb, D. F., 2002, in [*SOHO–11: From Solar Minimum to Solar Maximum,*]{} ESA SP–508, 409
Wiegelmann, T., Schindler, K., Neukirch, T., 2000, Solar Phys., 191, 391
Wilcox, J. M., 1968, Space Sci. Rev., 8, 258
Wilhelm, K., Curdt, W., Marsch, E., et al., 1995, Solar Phys., 162, 189
Withbroe, G. L., Kohl, J. L., Weiser, H., Munro, R. H., 1982, Space Sci. Rev., 33, 17
Introduction {#sec:intro}
============
Background {#sec:background}
==========
Learning Variable Importance with Theoretical Guarantee {#sec:theory}
=======================================================
Experiment Analysis {#sec:exp}
===================
Posterior Concentration and Uncertainty Quantification {#sec:exp_learning}
------------------------------------------------------
Effectiveness in High-dimensional Variable Selection {#sec:exp_selection}
----------------------------------------------------
Discussion and Future Directions {#sec:discussion}
================================
In this work, we investigate the theoretical basis underlying the deep Bayesian neural network’s (BNN’s) ability to achieve rigorous uncertainty quantification in variable selection. Using the square integrated gradient $\psi_p(f)=||\deriv{x_p}f||_n^2$ as the measure of variable importance, we established two new Bayesian nonparametric results on the BNN’s ability to learn and quantify uncertainty about variable importance. Our results suggest that the neural network can learn variable importance effectively in high dimensions (Theorem \[thm:post\_convergence\]), at a speed that in some cases “breaks” the curse of dimensionality (Proposition \[thm:f\_post\_converg\]). Moreover, it can generate rigorous and calibrated uncertainty estimates in the sense that its $(1-q)$-level credible intervals for variable importance cover the true parameter $100(1-q)\%$ of the time (Theorem \[thm:bvm\]-\[thm:bvm\_mvn\]). The simulation experiments confirmed these theoretical findings, and revealed the interesting fact that the BNN can learn variable importance $\psi_p(f)$ at a rate much faster than learning predictions for $f^*$ (Figure \[fig:learning\]). The comparative study illustrates the effectiveness of the proposed approach for the purpose of variable selection in high dimensions, which is a scenario where the existing methods experience difficulties due to model misspecification, the curse of dimensionality, or the issue of multiple comparisons. Consistent with classic Bayesian nonparametric and deep learning literature [@castillo_bernsteinvon_2015; @rockova_theory_2018; @barron_universal_1993; @barron_approximation_2018], the theoretical results developed in this work assume a well-specified scenario where $f^* \in \Fsc$ and the noise distribution is known. Furthermore, computing exact credible intervals under a BNN model requires the use of MCMC procedures, which can be infeasible for large datasets. Therefore, two important future directions of this work are to investigate the BNN’s ability to learn variable importance under model misspecification, and to identify posterior inference methods (e.g., particle filters [@dai_provable_2016] or structured variational inference [@pradier_projected_2018]) that are scalable to large datasets and can also achieve rigorous uncertainty quantification.
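As a purely illustrative companion to the definition above, the following sketch computes the empirical squared integrated gradient $\psi_p(f) = ||\deriv{x_p}f||_n^2$ by finite differences for a collection of hypothetical posterior draws of a small network, and forms naive credible intervals from the resulting samples. The two-layer architecture, the random draws (which merely stand in for genuine posterior samples), and the finite-difference step size are all assumptions made for the sketch and are not the model or inference procedure studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P = 200, 5                                    # sample size, input dimension
X = rng.uniform(-1.0, 1.0, size=(n, P))

def make_draw(width=16):
    """One hypothetical 'posterior draw': a random two-layer tanh network."""
    W1 = rng.normal(size=(P, width))
    b1 = rng.normal(scale=0.5, size=width)
    w2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)
    return lambda Z: np.tanh(Z @ W1 + b1) @ w2

def psi(f, X, h=1e-4):
    """Empirical squared integrated gradient psi_p(f) = ||d f/d x_p||_n^2,
    estimated with central finite differences at the observed inputs."""
    out = np.zeros(X.shape[1])
    for p in range(X.shape[1]):
        E = np.zeros_like(X)
        E[:, p] = h
        grad_p = (f(X + E) - f(X - E)) / (2.0 * h)
        out[p] = np.mean(grad_p ** 2)
    return out

draws = np.array([psi(make_draw(), X) for _ in range(200)])
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)   # naive 95% intervals
for p in range(P):
    print(f"x_{p}: mean psi = {draws[:, p].mean():.3f}, "
          f"95% interval = [{lo[p]:.3f}, {hi[p]:.3f}]")
```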
Acknowledgements {#acknowledgements .unnumbered}
================
I would like to thank Dr. Rajarshi Mukherjee and my advisor Dr. Brent Coull at Harvard Biostatistics for the helpful discussions and generous support. This publication was made possible by USEPA grant RD-83587201. Its contents are solely the responsibility of the grantee and do not necessarily represent the official views of the USEPA. Further, USEPA does not endorse the purchase of any commercial products or services mentioned in the publication.
[10]{}
M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng. : [A]{} system for large-scale machine learning. , May 2016. arXiv: 1605.08695.
R. A. Adams and J. J. F. Fournier. . Academic Press, Amsterdam, 2 edition edition, July 2003.
A. Altmann, L. Toloşi, O. Sander, and T. Lengauer. Permutation importance: a corrected feature importance measure. , 26(10):1340–1347, May 2010.
U. Anders and O. Korn. Model selection in neural networks. , 12(2):309–323, Mar. 1999.
C. Andrieu and J. Thoms. A tutorial on adaptive [MCMC]{}. , 18(4):343–373, Dec. 2008.
F. Bach. Breaking the [Curse]{} of [Dimensionality]{} with [Convex]{} [Neural]{} [Networks]{}. , Dec. 2014. arXiv: 1412.8690.
R. F. Barber and E. J. Candès. Controlling the false discovery rate via knockoffs. , 43(5):2055–2085, Oct. 2015.
A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. , 39(3):930–945, May 1993.
A. R. Barron and J. M. Klusowski. Approximation and [Estimation]{} for [High]{}-[Dimensional]{} [Deep]{} [Learning]{} [Networks]{}. , Sept. 2018. arXiv: 1809.03090.
Y. Benjamini and Y. Hochberg. Controlling the [False]{} [Discovery]{} [Rate]{}: [A]{} [Practical]{} and [Powerful]{} [Approach]{} to [Multiple]{} [Testing]{}. , 57(1):289–300, 1995.
L. Breiman. Random [Forests]{}. , 45(1):5–32, Oct. 2001.
P. Bühlmann. Statistical significance in high-dimensional linear models. , 19(4):1212–1242, Sept. 2013.
E. Candès, Y. Fan, L. Janson, and J. Lv. Panning for gold: ‘model-[X]{}’ knockoffs for high dimensional controlled variable selection. , 80(3), June 2018.
G. Castellano and A. M. Fanelli. Variable selection using neural-network models. , 31(1):1–13, Mar. 2000.
I. Castillo and J. Rousseau. A [Bernstein]{}–von [Mises]{} theorem for smooth functionals in semiparametric models. , 43(6):2353–2383, Dec. 2015.
B. Dai, N. He, H. Dai, and L. Song. Provable [Bayesian]{} [Inference]{} via [Particle]{} [Mirror]{} [Descent]{}. In [*Artificial [Intelligence]{} and [Statistics]{}*]{}, pages 985–994, May 2016.
J. Feng and N. Simon. Sparse [Input]{} [Neural]{} [Networks]{} for [High]{}-dimensional [Nonparametric]{} [Regression]{} and [Classification]{}. , Nov. 2017. arXiv: 1711.07592.
A. Gelman, J. Hill, and M. Yajima. Why [We]{} ([Usually]{}) [Don]{}’t [Have]{} to [Worry]{} [About]{} [Multiple]{} [Comparisons]{}. , 5(2):189–211, Apr. 2012.
S. Ghosal and A. van der Vaart. Convergence rates of posterior distributions for noniid observations. , 35(1):192–223, Feb. 2007.
S. Ghosh and F. Doshi-Velez. Model [Selection]{} in [Bayesian]{} [Neural]{} [Networks]{} via [Horseshoe]{} [Priors]{}. , May 2017. arXiv: 1705.10388.
F. Giordano, M. L. Rocca, and C. Perna. Input [Variable]{} [Selection]{} in [Neural]{} [Network]{} [Models]{}. , 43(4):735–750, Feb. 2014.
R. Gribonval, G. Kutyniok, M. Nielsen, and F. Voigtlaender. Approximation spaces of deep neural networks. , May 2019. arXiv: 1905.01208.
I. Guyon and A. Elisseeff. An [Introduction]{} to [Variable]{} and [Feature]{} [Selection]{}. , 3(Mar):1157–1182, 2003.
K. He, X. Zhang, S. Ren, and J. Sun. Deep [Residual]{} [Learning]{} for [Image]{} [Recognition]{}. In [*2016 [IEEE]{} [Conference]{} on [Computer]{} [Vision]{} and [Pattern]{} [Recognition]{} ([CVPR]{})*]{}, pages 770–778, June 2016.
X. He, J. Wang, and S. Lv. Scalable kernel-based variable selection with sparsistency. , Feb. 2018. arXiv: 1802.09246.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. with [Deep]{} [Convolutional]{} [Neural]{} [Networks]{}. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, [*Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 25*]{}, pages 1097–1105. Curran Associates, Inc., 2012.
M. La Rocca and C. Perna. Variable selection in neural network regression models with dependent data: a subsampling approach. , 48(2):415–429, Feb. 2005.
Y. LeCun, J. S. Denker, and S. A. Solla. Optimal [Brain]{} [Damage]{}. In D. S. Touretzky, editor, [*Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 2*]{}, pages 598–605. Morgan-Kaufmann, 1990.
H. K. Lee. Consistency of posterior distributions for neural networks. , 13(6):629–642, July 2000.
F. Liang, Q. Li, and L. Zhou. Bayesian [Neural]{} [Networks]{} for [Selection]{} of [Drug]{} [Sensitive]{} [Genes]{}. , 113(523):955–972, July 2018.
R. Lockhart, J. Taylor, R. Tibshirani, and R. Tibshirani. A significance test for the [LASSO]{}. , 42, Jan. 2013.
C. Louizos, M. Welling, and D. P. Kingma. Learning [Sparse]{} [Neural]{} [Networks]{} through [L]{}\_0 [Regularization]{}. Feb. 2018.
Y. Y. Lu, Y. Fan, J. Lv, and W. S. Noble. : reproducible feature selection in deep neural networks. Sept. 2018.
R. May, G. Dandy, and H. Maier. Review of [Input]{} [Variable]{} [Selection]{} [Methods]{} for [Artificial]{} [Neural]{} [Networks]{}. , Apr. 2011.
H. Montanelli and Q. Du. New error bounds for deep networks using sparse grids. , Dec. 2017. arXiv: 1712.08688.
R. M. Neal. . Lecture [Notes]{} in [Statistics]{}. Springer-Verlag, New York, 1996.
M. F. Pradier, W. Pan, J. Yao, S. Ghosh, and F. Doshi-velez. Projected [BNNs]{}: [Avoiding]{} weight-space pathologies by learning latent representations of neural network weights. , Nov. 2018. arXiv: 1811.07006.
V. Rockova and N. Polson. Posterior [Concentration]{} for [Sparse]{} [Deep]{} [Learning]{}. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, [*Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 31*]{}, pages 930–941. Curran Associates, Inc., 2018.
V. Rockova and E. Saha. On [Theory]{} for [BART]{}. , Oct. 2018. arXiv: 1810.00787.
L. Rosasco, S. Villa, S. Mosci, M. Santoro, and A. Verri. Nonparametric [Sparsity]{} and [Regularization]{}. , 14:1665–1714, 2013.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. . , 115(3):211–252, Dec. 2015.
S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. , 241:81–89, June 2017.
J. Schmidt-Hieber. Nonparametric regression using deep neural networks with [ReLU]{} activation function. , Aug. 2017. arXiv: 1708.06633.
K. Simonyan and A. Zisserman. Very [Deep]{} [Convolutional]{} [Networks]{} for [Large]{}-[Scale]{} [Image]{} [Recognition]{}. , Sept. 2014. arXiv: 1409.1556.
T. Suzuki. Adaptivity of deep [ReLU]{} network for learning in [Besov]{} and mixed smooth [Besov]{} spaces: optimal rate and curse of dimensionality. Sept. 2018.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the [Inception]{} [Architecture]{} for [Computer]{} [Vision]{}. , pages 2818–2826, 2015.
R. Tibshirani. Regression [Shrinkage]{} and [Selection]{} via the [Lasso]{}. , 58(1):267–288, 1996.
A. W. van der Vaart. . Cambridge University Press, Cambridge, June 2000.
G. Wahba. . SIAM, Sept. 1990. Google-Books-ID: ScRQJEETs0EC.
H. White and J. Racine. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates. , 12(4):657–673, July 2001.
L. Yang, S. Lv, and J. Wang. Model-free [Variable]{} [Selection]{} in [Reproducing]{} [Kernel]{} [Hilbert]{} [Space]{}. , 17(82):1–24, 2016.
D. Yarotsky. Error bounds for approximations with deep [ReLU]{} networks. , Oct. 2016. arXiv: 1610.01145.
M. Ye and Y. Sun. Variable [Selection]{} via [Penalized]{} [Neural]{} [Network]{}: a [Drop]{}-[Out]{}-[One]{} [Loss]{} [Approach]{}. In [*International [Conference]{} on [Machine]{} [Learning]{}*]{}, pages 5620–5629, July 2018.
Statement of Multivariate Theorem {#sec:bvm_mvn}
=================================
For $f \in \Fsc(L, W, S, B)$, assume the posterior distribution $\Pi_n(f)$ contracts around $f_0$ at rate $\epsilon_n$. Denote $\hat{\psi}^c=[\hat{\psi}^c_{1}, \dots, \hat{\psi}^c_P]$ for $\hat{\psi}^c_p$ as defined in Theorem \[thm:bvm\]. Also recall that $P=O(1)$, i.e., the data dimension does not grow with sample size.

Then $\hat{\psi}^c$ is an unbiased estimator of $\psi(f_0)=[\psi_1(f_0), \dots, \psi_P(f_0)]$, and the posterior distribution of $\psi^c(f)$ asymptotically converges toward a multivariate normal distribution centered at $\hat{\psi}^c$, i.e.,
$$\Pi\Big(\sqrt{n}(\psi^c(f) - \hat{\psi}^c) \,\Big|\, \{\bx_i, y_i\}_{i=1}^n \Big) \leadsto MVN(0, V_0),$$
where $V_0$ is a $P \times P$ matrix such that $(V_0)_{p_1, p_2} = 4\inner{H_{p_1}f_0}{H_{p_2}f_0}_n$. \[thm:bvm\_mvn\]
Proof is in Supplementary Section \[sec:bvm\_mvn\_proof\].
Table and Figures
=================
---
abstract: 'We consider the problem of maintaining a maximal independent set (MIS) in a dynamic graph subject to edge insertions and deletions. Recently, Assadi, Onak, Schieber and Solomon (STOC 2018) showed that an MIS can be maintained in *sublinear* (in the dynamically changing number of edges) amortized update time. In this paper we significantly improve the update time for *uniformly sparse graphs*. Specifically, for graphs with arboricity $\alpha$, the amortized update time of our algorithm is $O(\alpha^2 \cdot \log^2 n)$, where $n$ is the number of vertices. For low arboricity graphs, which include, for example, minor-free graphs as well as some classes of “real world” graphs, our update time is polylogarithmic. Our update time improves the result of Assadi [*et al.*]{} for all graphs with arboricity bounded by $m^{3/8 - \eps}$, for any constant $\eps > 0$. This covers much of the range of possible values for arboricity, as the arboricity of a general graph cannot exceed $m^{1/2}$.'
author:
- 'Krzysztof Onak[^1]'
- 'Baruch Schieber[^2]'
- 'Shay Solomon[^3]'
- 'Nicole Wein[^4]'
bibliography:
- 'icalp18.bib'
title: 'Fully Dynamic MIS in Uniformly Sparse Graphs[^5]'
---
Introduction
============
The importance of the maximal independent set (MIS) problem is hard to overstate. In general, MIS algorithms constitute a useful subroutine for locally breaking symmetry between several choices. The MIS problem has intimate connections to a plethora of fundamental combinatorial optimization problems such as maximum matching, minimum vertex cover, and graph coloring. As a prime example, MIS is often used in the context of graph coloring, as all vertices in an independent set can be assigned the same color. As another important example, Hopcroft and Karp [@HopcroftKarp] gave an algorithm to compute a large matching (approximating the maximum matching to within a factor arbitrarily close to 1) by applying maximal independent sets of longer and longer augmenting paths. The seminal papers of Luby [@Luby86] and Linial [@Linial87] discuss additional applications of MIS. A non-exhaustive list of further direct and indirect applications of MIS includes resource allocation [@YuWHL14], leader election [@DaumGKN12], the construction of network backbones [@KuhnMW04; @JurdzinskiK12], and sublinear-time approximation algorithms [@NguyenO08].
In the 1980s, questions concerning the computational complexity of the MIS problem spurred a line of research that led to the celebrated parallel algorithms of Luby [@Luby86], and Alon, Babai, and Itai [@AlonBI86]. These algorithms find an MIS in $O(\log n)$ rounds without global coordination. More recently, Fischer and Noever [@FischerN18] gave an even simpler greedy algorithm that considers vertices in random order and takes $O(\log n)$ rounds with high probability (see also an earlier result of Blelloch, Fineman, and Shun [@BlellochFS12]).
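For concreteness, the sketch below is a sequential version of the random-order greedy rule underlying these algorithms: vertices are scanned in a uniformly random order, and a vertex joins the independent set exactly when none of its neighbors has joined earlier. It illustrates only the greedy rule itself, not the parallel round structure analyzed in the cited works.

```python
import random

def greedy_mis(adj, seed=None):
    """Maximal independent set via greedy insertion over a random vertex order.

    adj: dict mapping each vertex to the set of its neighbors (undirected).
    """
    order = list(adj)
    random.Random(seed).shuffle(order)
    in_mis = set()
    for v in order:
        if not (adj[v] & in_mis):        # no neighbor was chosen earlier
            in_mis.add(v)
    return in_mis

# Tiny usage example on a 5-cycle.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(greedy_mis(cycle5, seed=1))
```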
In this work we continue the study of the MIS problem by considering the *dynamic setting*, where the underlying graph is not fixed, but rather evolves over time via edge updates. Formally, a *dynamic graph* is a graph sequence ${\cal G} = (G_0,G_1,\dots,G_M)$ on $n$ fixed vertices, where the initial graph is $G_0 = (V,\emptyset)$ and each graph $G_i = (V,E_i)$ is obtained from the previous graph $G_{i-1}$ in the sequence by either adding or deleting a single edge. The basic goal in this context is to maintain an MIS in time significantly faster than it takes to recompute it from scratch following every edge update. In STOC’18, Assadi, Onak, Schieber, and Solomon [@AOSS2018] gave the first fully dynamic algorithm for maintaining an MIS with sub-linear (amortized) update time. Their update time is $\min\{m^{3/4},\Delta\}$, where $m$ is the (dynamically changing) number of edges and $\Delta$ is a fixed bound on the maximum degree of the graph. Achieving an update time of $O(\Delta)$ is simple, and the main technical contribution of [@AOSS2018] is in further reducing the update time to $O(m^{3/4})$, which improves over the simple $O(\Delta)$ bound for sufficiently sparse graphs.
Our contribution
----------------
We focus on graphs that are “uniformly sparse” or “sparse everywhere”, as opposed to the previous work by Assadi et al. [@AOSS2018] that considers unrestricted sparse graphs. We aim to improve the update time of [@AOSS2018] as a function of the “uniform sparsity” of the graph. This fundamental property of graphs has been studied in various contexts and under various names over the years, one of which is *arboricity* [@NashW61; @NashW64; @Tutte61]:
\[d1\] The arboricity $\alpha$ of a graph $G=(V,E)$ is defined as $\alpha=\max_{U\subset V} \lceil\frac{|E(U)|}{|U|-1}\rceil$, where $E(U)=\left\{(u,v)\in E\mid u,v\in U\right\}$.
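To make Definition \[d1\] concrete, the following brute-force evaluation (our own code, not part of the paper) computes the maximum over all vertex subsets directly; it is exponential in the number of vertices and is only meant as a sanity check on toy graphs.

```python
from itertools import combinations
from math import ceil

def arboricity_bruteforce(vertices, edges):
    """Evaluate Definition [d1] directly: max over subsets U with |U| >= 2 of
    ceil(|E(U)| / (|U| - 1)).  Exponential time -- toy graphs only."""
    best = 0
    for k in range(2, len(vertices) + 1):
        for U in combinations(vertices, k):
            U = set(U)
            e_U = sum(1 for (x, y) in edges if x in U and y in U)
            best = max(best, ceil(e_U / (len(U) - 1)))
    return best

# The 4-cycle has arboricity ceil(4/3) = 2; any tree has arboricity 1.
assert arboricity_bruteforce([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```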
Thus a graph has bounded arboricity if all its induced subgraphs have bounded density. The family of bounded arboricity graphs contains bounded-degree graphs, all minor-closed graph classes (e.g., planar graphs, bounded-genus graphs, and graphs with bounded treewidth), and randomly generated preferential attachment graphs. Moreover, it is believed that many real-world graphs such as the world wide web graph and social networks also have bounded arboricity [@GoelG06].
A dynamic graph of *arboricity* $\alpha$ is a dynamic graph such that all graphs $G_i$ have arboricity bounded by $\alpha$. We prove the following result.
\[main\] For any dynamic $n$-vertex graph of arboricity $\alpha$, an MIS can be maintained deterministically in $O(\alpha^2\log^2 n)$ amortized update time.
Theorem \[main\] improves the result of Assadi [*et al.*]{} for all graphs with arboricity bounded by $m^{3/8 - \eps}$, for any constant $\eps > 0$. This covers much of the range of possible values for arboricity, as the arboricity of a general graph cannot exceed $\sqrt{m}$. Furthermore, our algorithm has polylogarithmic update time for graphs of polylogarithmic arboricity; in particular, for the family of constant arboricity graphs the update time is $O(\log^2 n)$.
Our and previous techniques
---------------------------
### Arboricity and the dynamic edge orientation problem {#eop}
Our algorithm utilizes two properties of arboricity $\alpha$ graphs:
1. Every subgraph contains a vertex of degree at most $2\alpha$.
2. There exists an *$\alpha$ out-degree orientation*, i.e., an orientation of the edges such that every vertex has out-degree at most $\alpha$.
The first property follows from Definition \[d1\] and the second property is due to a well-known alternate definition by Nash-Williams [@NashW64]. It is known that a bounded out-degree orientation of bounded arboricity graphs can be maintained efficiently in dynamic graphs. Brodal and Fagerberg [@BrodalF99] initiated the study of the dynamic edge orientation problem and gave an algorithm that maintains an $O(\alpha)$ out-degree orientation with an amortized update time of $O(\alpha+\log n)$ in any dynamic $n$-vertex graph of arboricity $\alpha$; our algorithm uses the algorithm of [@BrodalF99] as a *black box*. (Refer to [@Kowalik07; @HeTZ14; @KopelowitzKPS14; @BerglinB17] for additional results on the dynamic edge orientation problem.)
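For intuition, here is a static counterpart of property 2 (our own sketch, not the dynamic algorithm of [@BrodalF99]): peeling vertices of degree at most $2\alpha$, which exist in every subgraph by property 1, yields an orientation with out-degree at most $2\alpha$, a factor-2 relaxation of the Nash-Williams bound. The sketch assumes the input graph indeed has arboricity at most $\alpha$.

```python
def static_orientation(n, edges, alpha):
    """Peel a vertex of degree <= 2*alpha from the surviving subgraph and orient
    its remaining edges away from it; repeat until the graph is exhausted.
    Every vertex ends up with out-degree at most 2*alpha."""
    adj = {v: set() for v in range(n)}
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)
    alive = set(range(n))
    out = {v: [] for v in range(n)}
    while alive:
        v = next(u for u in alive if len(adj[u] & alive) <= 2 * alpha)
        out[v] = list(adj[v] & alive)   # edges oriented v -> w for each surviving neighbor w
        alive.remove(v)
    return out
```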
### A comparison to Assadi [*et al.*]{}
As noted already by Assadi [*et al.*]{} [@AOSS2018], a central obstacle in maintaining an MIS in a dynamic graph is to maintain detailed information about the 2-hop neighborhood of each vertex. Suppose an edge is added to the graph and as a result, one of its endpoints $v$ is removed from the MIS. In this case, to maintain the maximality of the MIS, we need to identify and add to the MIS all neighbors of $v$ that are not adjacent to a vertex in the MIS. This means that for each of $v$’s neighbors, we need to know whether it has a neighbor in the MIS (other than $v$). Dynamically maintaining *complete* information about the 2-hop neighborhood of each vertex is prohibitive.
To overcome this hurdle, we build upon the approach of Assadi [*et al.*]{} [@AOSS2018] and maintain *incomplete* information on the 2-hop neighborhoods of vertices. Consequently, we may err by adding vertices to the MIS even though they are adjacent to MIS vertices. To restore independence, we need to remove vertices from the MIS. The important property that we maintain, following [@AOSS2018], is that the number of vertices added to the MIS following a single update operation will be significantly higher than the number of removed vertices. Like [@AOSS2018], we analyze our algorithm using a potential function defined as the number of vertices not in the MIS. The underlying principle behind achieving a low amortized update time is to ensure that if the process of updating the MIS takes a long time, then the size of the MIS increases substantially as a result. On the other hand, the size of the MIS may decrease by at most one following each update. Our algorithm deviates significantly from Assadi [*et al.*]{} in several respects, described next.
##### I: An underlying bounded out-degree orientation.
We apply the algorithm of [@BrodalF99] for efficiently maintaining a bounded out-degree orientation. Given such an orientation, we can maintain complete information about the *2-hop out-neighborhoods* of vertices (i.e., the 2-hop neighborhoods restricted to the *outgoing edges* of vertices), which helps significantly in the maintenance of the MIS. More specifically, the usage of a bounded out-degree orientation enables us to reduce the problem to a single nontrivial case, described in Section \[sec:triv\].
##### II: An intricate chain reaction.
To handle the nontrivial case efficiently, we develop an intricate “chain reaction” process, initiated by adding vertices to the MIS that violate the independence property, which then forces the removal of other vertices from the MIS, which in turn forces the addition of other vertices, and so forth. Such a process may take a long time. However, by employing a novel partition of a subset of the in-neighborhood of each vertex into a logarithmic number of “buckets” (see point III) together with a careful analysis, we limit this chain reaction to a logarithmic number of steps; upper bounding the number of steps is crucial for achieving a low update time. We note that such an intricate treatment was not required in [@AOSS2018], where the chain reaction was handled by a simple recursive procedure.
##### III: A precise bucketing scheme.
In order to achieve a sufficient increase in the size of the MIS, we need to carefully choose which vertices to add to the MIS so as to guarantee that at every step of the aforementioned [chain reaction]{}, even steps far in the future, we will be able to add enough vertices to the MIS. We achieve this by maintaining a precise *bucketing* of a carefully chosen subset of the in-neighborhood of each vertex, where vertices in larger-indexed buckets are capable of setting off longer and more fruitful chain reactions. At the beginning of the chain reaction, we add vertices in the larger-indexed buckets to the MIS, and gradually allow for vertices in smaller-indexed buckets to be added.
##### IV: A tentative set of MIS vertices.
In contrast to the algorithm of Assadi [*et al.*]{}, here we cannot iteratively choose which vertices to add to the MIS. Instead, we need to build a *tentative* set of all vertices that may potentially be added to the MIS, and only later prune this set to obtain the set of vertices that are actually added there. If we do not select the vertices added to the MIS in such a careful manner, we may not achieve a sufficient increase in the size of the MIS. To guarantee that the number of vertices actually added to the MIS (after the pruning of the tentative set) is sufficiently large, we make critical use of the first property of bounded arboricity graphs mentioned in Section \[eop\].
### A comparison to other previous work
Censor-Hillel, Haramaty, and Karnin [@CHK16] consider the problem of maintaining an MIS in the dynamic distributed setting. They show that there is a randomized algorithm that requires only a constant number of rounds in expectation to update the maintained MIS, as long as the sequence of graph updates does not depend on the algorithm’s randomness. This assumption is often referred to as *oblivious adversary*. As noted by Censor-Hillel [*et al.*]{}, it is unclear whether their algorithm can be implemented with low total work in the centralized setting. This shortcoming is addressed by Assadi [*et al.*]{} [@AOSS2018] and by the current paper.
Kowalik and Kurowski [@KowalikK03] employ a dynamic bounded out-degree orientation to answer shortest path queries in (unweighted) planar graphs. Specifically, given a planar graph and a constant $k$, they maintain a data structure that determines in constant time whether two vertices are at distance at most $k$, and if so, produces a path of such length between them. This data structure is fully dynamic with polylogarithmic amortized update time. They show that the case of distance $k$ queries (for a general $k \ge 2$) reduces in a clean way to the case of distance 2 queries. Similarly to our approach, Kowalik and Kurowski maintain information on the 2-hop neighborhoods of vertices, but the nature of the 2-hop neighborhood information necessary for the two problems is inherently different. For answering distance 2 queries it is required to maintain *complete* information on the 2-hop neighborhoods, whereas for maintaining an MIS one may do with partial information. On the other hand, the 2-hop neighborhood information needed for maintaining an MIS must be more detailed, so as to capture the information *as it pertains to the 1-hop neighborhood*. Such detailed information is necessary for determining the vertices to be added to the MIS following the removal of a vertex from the MIS.
Dynamic MIS vs. dynamic maximal matching
----------------------------------------
In the maximal matching problem, the goal is to compute a matching that cannot be extended by adding more edges to it. The problem is equivalent to finding an MIS in the line graph of the input graph. Despite this intimate relationship between the MIS and maximal matching problems, (efficiently) *maintaining* an MIS appears to be inherently harder than maintaining a maximal matching. As potential evidence, there is a significant gap in the performance of the naive dynamic algorithms for these problems. (The naive algorithms just maintain for each vertex whether it is matched or in the MIS.) For the maximal matching problem, the naive algorithm has an update time of $O(\Delta)$. This is because the naive algorithm just needs to inspect the neighbors of a vertex $v$, when $v$ becomes unmatched, to determine whether it can be matched. For the MIS problem, when a vertex $v$ is removed from the MIS, the naive algorithm needs to inspect not only the neighbors of $v$, but also the neighbors of $v$’s neighbors to determine which vertices need to be added to the MIS; as a result the update time is $O(\min\{m,\Delta^2\})$. Furthermore, the worst-case number of MIS changes (by any algorithm) may be as large as $\Omega(\Delta)$ [@CHK16; @AOSS2018], whereas the worst-case number of changes to the maximal matching maintained by the naive algorithm is $O(1)$. Lastly, the available body of work on the dynamic MIS problem is significantly sparser than that on the dynamic maximal matching problem.
It is therefore plausible that it may be hard to obtain, for the dynamic MIS problem, a bound better than the best bounds known for the dynamic maximal matching problem. In particular, the state-of-the-art dynamic *deterministic* algorithm for maintaining a maximal matching has an update time of $O(\sqrt{m})$ [@NS13], even in the amortized sense. Hence, in order to obtain deterministic update time bounds sub-polynomial in $m$, one may have to exploit the structure of the graph, and bounded arboricity graphs are a natural candidate. A maximal matching can be maintained in graphs of arboricity bounded by $\alpha$ with amortized update time $O(\alpha +\sqrt{\alpha \log n})$ [@NS13; @HeTZ14]; as long as the arboricity is polylogarithmic in $n$, the amortized update time is polylogarithmic. In this work we show that essentially the same picture applies to the seemingly harder problem of dynamic MIS.
Algorithm overview {#overview}
==================
Using a bounded out-degree orientation of the edges, a very simple algorithm suffices to handle edge updates that fall into certain cases. The nontrivial case occurs when we remove a vertex $v$ from the MIS and need to determine which vertices in $v$’s in-neighborhood have no neighbors in the MIS, and thus need to be added to the MIS. The in-neighborhood of $v$ could be very large and it would be costly to spend even constant time per in-neighbor of $v$. Furthermore, it would be costly to maintain a data structure that stores for each vertex in the MIS which of its in-neighbors have no other neighbors in the MIS. Suppose we stored such a data structure for a vertex $v$. Then the removal of a vertex $u$ from the MIS could cause the entirety of the common neighborhood of $u$ and $v$ to change their status in $v$’s data structure. If this common neighborhood is large, then this operation is costly.
To address this issue, our algorithm does not even attempt to determine the exact set of neighbors of $v$ that need to be added to the MIS. Instead, we maintain *partial* information about which vertices will need to be added to the MIS. Then, when we are unsure about whether we need to add a specific vertex to the MIS, we simply add it to the MIS and remove its conflicting neighbors from the MIS, which triggers a chain reaction of changes to the MIS. Despite the fact that this chain reaction may take a long time to resolve, we obtain an amortized time bound by using a potential function: the number of vertices not in the MIS. That is, we ensure that if we spend a lot of time processing an edge update, then the size of the MIS increases substantially as a result.
The core of our algorithm is to carefully choose which vertices to add to the MIS at each step of the chain reaction to ensure that the size of the MIS increases sufficiently. To accomplish this, we store an intricate data structure for each vertex which includes a partition of a subset of its in-neighborhood into numbered buckets. The key idea is to ensure that whenever we remove a vertex from the MIS, it has at least one full bucket of vertices, which we add to the MIS.
When we remove a vertex from the MIS whose top bucket is full of vertices, we begin the chain reaction by adding these vertices to the MIS (and removing the conflicting vertices from the MIS). In each subsequent step of the chain reaction, we process more vertices, and for each processed vertex, we add to the MIS the set of vertices in its *topmost full bucket*. To guarantee that every vertex that we process has at least one full bucket, we utilize an invariant (the “Main Invariant”) which says that for all $i$, when we process a vertex whose bucket $i$ is full, then in the next iteration of the chain reaction we will only process vertices whose bucket $i-1$ is full. This implies that if the number of iterations in the chain reaction is at most the number of buckets, then we only process vertices with at least one full bucket. To bound the number of iterations of the chain reaction, we prove that the number of processed vertices *doubles* at every iteration. This way, there cannot be more than a logarithmic number of iterations. Thus, by choosing the number of buckets to be logarithmic, we only process vertices with at least one full bucket, which results in the desired increase in the size of the MIS.
Algorithm setup
===============
Our algorithm uses a dynamic edge orientation algorithm as a black box. For each vertex $v$, let $N(v)$ denote the neighborhood of $v$, let $N^+(v)$ denote the out-neighborhood of $v$, and let $N^-(v)$ denote the in-neighborhood of $v$.
The trivial cases {#sec:triv}
-----------------
Let $\cal{M}$ be the MIS that we maintain. For certain cases of edge updates, there is a simple algorithm to update the $\cal{M}$. Here, we introduce this simple algorithm and then describe the case that this algorithm does not cover.
We say that a vertex $v$ is *resolved* if either $v$ is in $\cal M$ or a vertex in $N^-(v)$ is in $\cal{M}$. Otherwise we say that $v$ is *unresolved*.
The data structure is simply that each vertex $v$ stores (i) the set $M^-(v)$ of $v$’s in-neighbors that are in $\cal{M}$, and (ii) a partition of its in-neighborhood into resolved vertices and unresolved vertices. To maintain this data structure, whenever a vertex $v$ enters or exits $\cal{M}$, $v$ notifies its 2-hop out-neighborhood. Additionally, following each update to the edge orientation, each affected vertex notifies its 2-hop out-neighborhood.
<span style="font-variant:small-caps;">Delete(u,v)</span>:
- It cannot be the case that both $u$ and $v$ are in $\cal M$ since $\cal M$ is an independent set.
- If neither $u$ nor $v$ is in $\cal M$ then both must already have neighbors in $\cal M$ and we do nothing.
- If $u\in \cal{M}$ and $v\not\in \cal{M}$, then we may need to add $v$ to $\cal{M}$. If $v$ is resolved, we do not add $v$ to $\cal{M}$. Otherwise, we scan $N^+(v)$ and if no vertex in $N^+(v)$ is in $\cal{M}$, we add $v$ to $\cal{M}$.
<span style="font-variant:small-caps;">Insert(u,v)</span>:
- If it is not the case that both $u$ and $v$ are in $\cal{M}$, then we do nothing and $\cal M$ remains maximal.
- If both $u$ and $v$ are in $\cal M$ we remove $v$ from $\cal{M}$. Now, some of $v$’s neighbors may need to be added to $\cal{M}$, specifically, those with no neighbors in $\cal{M}$. For each unresolved vertex $w \in N^+(v)$, we scan $N^+(w)$ and if $N^+(w) \cap \cal M=\emptyset$, then we add $w$ to $\cal{M}$. For each resolved vertex $w \in N^-(v)$, we know not to add $w$ to $\cal{M}$. On the other hand, for each unresolved vertex $w \in N^-(v)$, we do not know whether to add $w$ to $\cal M$ and it could be costly to scan $N^+(w)$ for all such $w$. This simple algorithm does not handle the case where $v$ has many unresolved in-neighbors.
In summary, the nontrivial case occurs when we delete a vertex $v$ from $\cal M$ and $v$ has many unresolved in-neighbors.
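A rough sketch of the trivial-case handlers follows (our own code and names, not the paper's; notifying the 2-hop out-neighborhoods is elided, and the resolved test is expressed through the counter $M^-(\cdot)$ kept by the data structure).

```python
def is_resolved(x, M, M_minus):
    """x is resolved iff x is in the MIS or some in-neighbor of x is; the
    counter M_minus[x] = |N^-(x) intersect M| is kept by the data structure."""
    return x in M or M_minus[x] > 0

def delete_edge_trivial(u, v, M, N_plus, M_minus):
    """Delete(u, v) for the case u in M, v not in M (the symmetric case swaps the
    roles); assumes the edge removal and the M_minus counters are already applied."""
    if u in M and v not in M:
        if not is_resolved(v, M, M_minus) and all(w not in M for w in N_plus[v]):
            M.add(v)
            for w in N_plus[v]:
                M_minus[w] += 1        # v is now an MIS in-neighbor of each w

def insert_edge_trivial(u, v, M, N_plus, M_minus):
    """Insert(u, v) when both endpoints are in M: remove v and re-maximalize over
    N^+(v).  Unresolved vertices in N^-(v) are the nontrivial case handled later."""
    if u in M and v in M:
        M.remove(v)
        for w in N_plus[v]:
            M_minus[w] -= 1            # v no longer contributes to the resolvedness of w
        for w in N_plus[v]:
            if not is_resolved(w, M, M_minus) and all(x not in M for x in N_plus[w]):
                M.add(w)
                for x in N_plus[w]:
                    M_minus[x] += 1
```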
Data structure
--------------
As in the trivial cases, each vertex $v$ maintains $M^-(v)$ and a partition of $N^-(v)$ into resolved vertices and unresolved vertices. In addition, we further refine the set of unresolved vertices. One important subset of the unresolved vertices in $N^-(v)$ is the *active set of v*, denoted $A_v$. As motivated in the algorithm overview, $A_v$ is partitioned into $b$ buckets $A_v(1),\dots,A_v(b)$ each of size at most $s$. We will set $b$ and $s$ so that $b=\Theta(\log n)$ and $s=\Theta(\alpha)$.
The purpose of maintaining $A_v$ is to handle the event that $v$ is removed from $\cal{M}$. When $v$ is removed from $\cal{M}$, we use the partition of $A_v$ into buckets to carefully choose which neighbors of $v$ to add to $\cal{M}$ to begin a chain reaction of changes to $\cal{M}$. For the rest of the vertices in $A_v$, we scan through them and update the data structure to reflect the fact that $v\not\in \cal{M}$. This scan of $A_v$ is why it is important that each active set is small (size $O(\alpha\log n)$).
One important property of active sets is that each vertex is in the active set of at most one vertex. For each vertex $v$, let $a(v)$ denote the vertex whose active set contains $v$. Let $B(v)$ denote the bucket of $A_{a(v)}$ that contains $v$.
For each vertex $v$ the data structure maintains the following partition of $N^-(v)$:
- $Z_v$ is the set of resolved vertices in $N^-(v)$.
- $A_v$ (*the active set*) is a subset of the unresolved vertices in $N^-(v)$ partitioned into $b=\Theta(\log n)$ buckets $A_v(1),...,A_v(b)$ each of size at most $s=\Theta(\alpha)$. $A_v$ is empty if $v\not\in\cal{M}$.
- $P_v$ (*the passive set*) is the set of unresolved vertices in $N^-(v)$ in the active set of some vertex other than $v$. $P_v$ is partitioned into $b$ buckets $P_v(1),...,P_v(b)$ such that each vertex $u\in P_v$ is in the set $P_v(i)$ if and only if $B(u)=i$.
- $R_v$ (*the residual set*) is the set of unresolved vertices in $N^-(v)$ not in the active set of any vertex.
We note that while $Z_v$ depends only on $\cal{M}$ and the orientation of the edges, the other three sets depend on internal choices made by the algorithm. In particular, for each vertex $v$, the algorithm picks at most one vertex $a(v)$ for which $v\in A_{a(v)}$ and this choice uniquely determines for every vertex $u\in N^+(v)$, which set ($A_u$, $P_u$, or $R_u$) $v$ belongs to.
We now outline the purpose of the passive set and the residual set. Suppose a vertex $v$ is removed from $\cal{M}$. We do not need to worry about the vertices in $P_v$ because we know that all of these vertices are in the active set of a vertex in $\cal{M}$ and thus none of them need to be added to $\cal{M}$. On the other hand, we do not know whether the vertices in $R_v$ need to be added to $\cal{M}$. We cannot afford to scan through them all and we cannot risk not adding them since this might cause $\cal{M}$ to not be maximal. Thus, we add them all to $\cal{M}$ and set off a chain reaction of changes to $\cal{M}$. That is, even though our analysis requires that we carefully choose which vertices of $A_v$ to add to $\cal{M}$ during the chain reaction, it suffices to simply add every vertex in $R_v$ to $\cal{M}$ (except for those with edges to other vertices we are adding to $\cal{M}$).
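As a rough sketch, the per-vertex bookkeeping described above might be organized as follows (the field names and container layout are ours, not the paper's; buckets are indexed $1,\dots,b$ as in the text).

```python
class VertexState:
    """Per-vertex bookkeeping: the partition of N^-(v) into Z_v, A_v, P_v and R_v,
    with the active and passive sets split into b buckets of capacity s."""
    def __init__(self, b, s):
        self.b, self.s = b, s                         # b = Theta(log n), s = Theta(alpha)
        self.M_minus = set()                          # in-neighbors currently in the MIS
        self.Z = set()                                # resolved in-neighbors
        self.A = {i: set() for i in range(1, b + 1)}  # active set A_v(1..b); empty if v not in MIS
        self.P = {i: set() for i in range(1, b + 1)}  # passive set: P_v(i) holds u with B(u) = i
        self.R = set()                                # residual set: unresolved, in no active set
```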
Invariants
----------
We maintain several invariants of the data structure. The invariant most central to the overall argument is the Main Invariant (Invariant \[inv:main\]), whose purpose is outlined in the algorithm overview. The first four invariants follow from the definitions.
\[inv:res\] *(Resolved Invariant)*. For all resolved vertices $v$, for all $u \in N^+(v)$, $v$ is in $Z_u$.
\[inv:or\] *(Orientation Invariant)*. For all $v$, $Z_v \cup A_v \cup P_v \cup R_v =N^-(v)$.
\[inv:em\] *(Empty Active Set Invariant)*. For all $v \not \in \cal{M}$, $A_v$ is empty.
\[inv:con\] *(Consistency Invariant)*.
- If $v$ is resolved, then for all vertices $u\in N^+(v)$, $v \not \in A_u$.
- If $v$ is unresolved then $v$ is in $A_u$ for at most one vertex $u\in N^+(v)$.
- If $v$ is in the active set of some vertex $u$, then for all $w \in N^+(v)\setminus\{u\}$, $v$ is in $P_w(i)$ where $i$ is such that $B(v)=i$.
- If $v$ is in the residual set for some vertex $u$, then for all $w \in N^+(v)$, $v$ is in $R_w$.
The next invariant says that the active set of a vertex is filled from lowest bucket to highest bucket and only then is the residual set filled.
We say that a bucket $A_v(i)$ is *full* if $|A_v(i)|=s$. We say $A_v$ is *full* if all $b$ of its buckets are full.
\[inv:full\] *(Full Invariant)*. For all vertices $v$ and all $i<b$, if $A_v(i)$ is not full then $A_v(i+1)$ is empty. Also, if $A_v$ is not full then $R_v$ is empty.
The next invariant, the Main Invariant, says that if we were to move $v$ from $A_{a(v)}$ to the active set of a different vertex $u$ by placing $v$ in the lowest non-full bucket of $A_u$, then $B(v)$ would not decrease.
\[inv:main\] *(Main Invariant)*. For all $v$, if $B(v)=i>1$ then for all $u \in N^+(v) \cap \cal{M}$, $A_u(i-1)$ is full.
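Using the `VertexState` sketch above, the Main Invariant can be spelled out as a debug-style check (our own code; `a[v]` is the owner of $v$'s active set and `B[v]` its bucket index, as in the text).

```python
def check_main_invariant(vertices, N_plus, M, state, a, B):
    """Assert the Main Invariant: if v occupies bucket i > 1 of some active set,
    then every MIS out-neighbor u of v has a full bucket A_u(i - 1)."""
    for v in vertices:
        if a.get(v) is not None and B[v] > 1:
            for u in N_plus[v]:
                if u in M:
                    assert len(state[u].A[B[v] - 1]) == state[u].s
    return True
```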
Algorithm phases {#sec:setup}
----------------
The algorithm works in four phases.
1. Update $\cal{M}$.
2. Update the data structure.
3. Run a black box dynamic edge orientation algorithm.
4. Update the data structure.
During all phases, for each vertex $v$ we maintain $M^-(v)$. Otherwise, the data structure is completely static during phases 1 and 3.
Algorithm for updating $\cal{M}$ {#sec:updating_M}
================================
When an edge $(u,v)$ is deleted, we run the procedure $\textsc{Delete(u,v)}$ specified in the trivial cases section. When an edge $(u,v)$ is inserted and it is not the case that both $u$ and $v$ are in $\cal{M}$, then we do nothing and $\cal M$ remains maximal. In the case that both $u$ and $v$ are in $\cal{M}$, we need to remove either $u$ or $v$ from $\cal M$ which may trigger many changes to $\cal{M}$. For the rest of this section we consider this case of an edge insertion $(u,v)$.
The procedure of updating $\cal M$ happens in two stages. In the first stage, we iteratively build two sets of vertices, $S^+$ and $S^-$. Intuitively, $S^+$ is a subset of vertices that we intend to add to $\cal M$ and $S^-$ is the set of vertices that we intend to delete from $\cal{M}$. The aforementioned chain reaction of changes to $\cal{M}$ is captured in the construction of $S^+$ and $S^-$. In the second stage we make changes to $\cal{M}$ according to $S^+$ and $S^-$. In particular, the set of vertices that we add to $\cal{M}$ contains a large subset of $S^+$ as well as some additional vertices, and the set of vertices that we remove from $\cal{M}$ is a subset of $S^-$. In accordance with our goal of increasing the size of $\cal M$ substantially, we ensure that $S^+$ is much larger than $S^-$. Why is it important to build $S^+$ before choosing which vertices to add to $\cal{M}$? The answer is that it is important that we add a *large* subset of $S^+$ to $\cal{M}$ since our goal is to increase the size of $\cal{M}$ substantially. We find this large subset of $S^+$ by finding a large MIS in the graph induced by $S^+$, which exists (and can be found in linear time) because the graph has bounded arboricity. Suppose that instead of iteratively building $S^+$, we tried to iteratively add vertices directly to $\cal{M}$ in a greedy fashion. This could result in only very few vertices successfully being added to $\cal{M}$. For example, if we begin by adding the center of a star graph to $\cal{M}$ and subsequently try to add the leaves of the star, we will not succeed in adding any of the leaves to $\cal{M}$. On the other hand, if we first add the vertices of the star to $S^+$ then we can find a large MIS in the star (the leaves) and add it to $\cal{M}$.
Stage 1: Constructing $S^+$ and $S^-$
-------------------------------------
The crux of the algorithm is the construction of $S^+$ and $S^-$. A key property of the construction is that $S^+$ is considerably larger than $S^-$:
\[lem:bigs+\] If $|S^-|>1$ then $|S^+|\geq4 \alpha |S^-|$.
After constructing $S^+$ and $S^-$ we will add at least $\frac{|S^+|}{2 \alpha}$ vertices to $\cal M$ and remove at most $|S^-|$ vertices from $\cal{M}$. Thus, Lemma \[lem:bigs+\] implies that $\cal{M}$ increases by $\Omega(\frac{|S^+|}{\alpha})$.
To construct $S^+$ and $S^-$, we define a recursive procedure <span style="font-variant:small-caps;">Process</span>($w$) which adds at least one full bucket of $A_w$ to $S^+$. A key idea in the analysis is to guarantee that for every call to <span style="font-variant:small-caps;">Process</span>($w$), $A_w$ indeed has at least one full bucket.
### Algorithm description
We say that a vertex $w \in S^-$ has been *processed* if <span style="font-variant:small-caps;">Process</span>($w$) has been called and otherwise we say that $w$ is *unprocessed*. We maintain a partition of $S^-$ into the processed set and the unprocessed set and we maintain a partition of the set of unprocessed vertices $w$ into two sets based on whether $A_w$ is full or not. We also maintain a queue $\cal{Q}$ of vertices to process, which is initially empty. Recall that $(u,v)$ is the inserted edge and both $u$ and $v$ are in $\cal{M}$. The algorithm is as follows.
First, we add $v$ to $S^-$. Then, if $A_v$ is not full, we terminate the construction of $S^+$ and $S^-$. Otherwise, we call <span style="font-variant:small-caps;">Process</span>($v$).
<span style="font-variant:small-caps;">Process</span>($w$):
1. If $A_w$ is full, then add all vertices in $A_w(b)\cup R_w$ to $S^+$. If $A_w$ is not full, then let $i$ be the index of the topmost full bucket of $A_w$ and add all vertices in $A_w(i)$ and $A_w(i+1)$ to $S^+$. We will claim that such an $i$ exists (Lemma \[lem:full\]).
2. For all vertices $x$ added to $S^+$ in this call to $\textsc{Process}$, we add $N^+(x) \cap \cal M$ to $S^-$.
3. If $S^-$ contains an unprocessed vertex $x$ with full $A_x$, we call <span style="font-variant:small-caps;">Process</span>($x$).
When a call to $\textsc{Process}$ terminates, including the recursive calls, we check whether Lemma \[lem:bigs+\] is satisfied (that is, whether $|S^+|\geq4 \alpha |S^-|$), and if so, we terminate. Otherwise, if $\cal{Q}$ is not empty, we let $w$ be the next vertex in $\cal{Q}$ and call <span style="font-variant:small-caps;">Process</span>($w$). If $\cal{Q}$ is empty we enqueue a new batch of vertices to $\cal{Q}$. This batch consists of the set of all unprocessed vertices in $S^-$. We will claim that such vertices exist (Lemma \[lem:full\]).
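The construction just described can be condensed into the following sketch (our own code, reusing the `VertexState` layout above; maintaining the processed/unprocessed partition and all per-vertex data-structure repairs are elided).

```python
from collections import deque

def build_S(v, M, N_plus, state, alpha):
    """Stage-1 sketch: construct S+ and S- after an insertion whose endpoints
    are both in M, with v the endpoint chosen for removal."""
    S_plus, S_minus = set(), set()
    processed = set()
    Q = deque()

    def is_full(w):
        return all(len(state[w].A[i]) == state[w].s for i in range(1, state[w].b + 1))

    def topmost_full(w):
        for i in range(state[w].b, 0, -1):
            if len(state[w].A[i]) == state[w].s:
                return i
        return None

    def process(w):
        processed.add(w)
        if is_full(w):                               # step 1: topmost full bucket (and R_w) joins S+
            added = set(state[w].A[state[w].b]) | set(state[w].R)
        else:
            i = topmost_full(w)                      # exists by Lemma [lem:full]
            added = set(state[w].A[i]) | set(state[w].A[i + 1])
        S_plus.update(added)
        for x in added:                              # step 2: their MIS out-neighbors join S-
            S_minus.update(y for y in N_plus[x] if y in M)
        for x in list(S_minus - processed):          # step 3: recurse on full active sets
            if x not in processed and is_full(x):
                process(x)

    S_minus.add(v)
    if not is_full(v):
        return S_plus, S_minus                       # R_v is empty: terminate immediately
    process(v)
    while len(S_plus) < 4 * alpha * len(S_minus):    # termination test of Lemma [lem:bigs+]
        if not Q:
            Q.extend(S_minus - processed)            # new batch; nonempty by Lemma [lem:full]
        w = Q.popleft()
        if w not in processed:
            process(w)
    return S_plus, S_minus
```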
The reason we terminate without calling <span style="font-variant:small-caps;">Process</span>($v$) if $A_v$ is not full (i.e. $R_v$ is empty) is because $R_v$ is the only set for which we cannot afford to determine whether or not each vertex has another neighbor in $\cal{M}$ (besides $v$): We know that each vertex $w\in Z_v\cup P_v$ has another neighbor in $\cal{M}$, and the set $A_v$ is small enough to scan. For the same reason, step 3 of <span style="font-variant:small-caps;">Process</span> is necessary because it ensures that for every vertex $w$ in $S^-$, all vertices in $R_w$ are in $S^+$. If this weren’t the case and we removed a vertex $w$ in $S^-$ from $\cal{M}$, we might be left in the “hard case” of needing to deal with $R_w$.
Lemma \[lem:bigs+\] follows from the algorithm specification: either the algorithm terminates immediately with $S^-=\{v\}$ or the algorithm terminates according to the termination condition, which is that Lemma \[lem:bigs+\] is satisfied.
Several steps in the algorithm (Step 2 of <span style="font-variant:small-caps;">Process</span>($w$) and the last sentence of the algorithm specification) rely on Lemma \[lem:full\]:
\[lem:full\]
1. If we call <span style="font-variant:small-caps;">Process</span>($w$), then $A_w$ has at least one full bucket.
2. Every batch of vertices that we enqueue to $\cal{Q}$ is nonempty.
### Proof of Lemma \[lem:full\]
Let epoch 1 denote the period of time until the first batch of vertices has been enqueued to $\cal{Q}$. For all $i>1$, let epoch $i$ denote the period of time from the end of epoch $i-1$ to when the $i^{th}$ batch of vertices has been enqueued to $\cal{Q}$.
To prove Lemma \[lem:full\], we prove a collection of lemmas that together show that (i) Lemma \[lem:full\] holds for all calls to <span style="font-variant:small-caps;">Process</span> before epoch $b$ ends (recall that $b$ is the number of buckets) and (ii) the algorithm terminates before the end of epoch $b$.
For all $i$, let $p_i$ and $u_i$ be the number of processed and unprocessed vertices in $S^-$ respectively, when epoch $i$ ends. Let $S^+_i$ and $S^-_i$ be the sets $S^+$ and $S^-$ respectively when epoch $i$ ends. Recall that $s$ is the size of a full bucket. Let $s = 8 \alpha$ and let $b = \log_2 n + 1$.
\[lem:fullj\] For all $1\leq j\leq b$, every time we call <span style="font-variant:small-caps;">Process</span>($w$) during epoch $j$, $A_w(b-j+1)$ is full.
We proceed by induction on $j$.
[*Base case.*]{} If $j=1$ then the algorithm only calls <span style="font-variant:small-caps;">Process</span>($w$) on vertices $w$ with full $A_w$ and thus full $A_w(b)$.
[*Inductive hypothesis.*]{} Suppose that during epoch $j$, all of the processed vertices have full $A_w(b-j+1)$.
[*Inductive step.*]{} We will show that during epoch $j+1$, all of the processed vertices have full $A_w(b-j)$. We first note that during <span style="font-variant:small-caps;">Process</span>($w$), the algorithm only adds the vertices in the topmost full bucket of $A_w$ to $S^+$. Thus, the inductive hypothesis implies that for all vertices $x \in S^+_j$, $B(x)\geq b-j+1$.
Then, by the Main Invariant, for all $x \in S^+_j$ and all $y \in N^+(x) \cap \cal{M}$, $A_y(b-j)$ is full. By construction, the only vertices in $S^-_j$ other than $v$ are those in $N^+(x) \cap \cal M$ for some $x \in S^+_j$. Thus, for all vertices $y \in S^-_j$, $A_y(b-j)$ is full. During epoch $j+1$, the set of vertices that we process consists only of vertices $w$ that are either in $S^-_j$ or have full $A_w$. We have shown that all of these vertices $w$ have full $A_w(b-j)$.
\[lem:sp\] For all $1\leq j\leq b$, $|S^+_j|\geq p_j s$.
By Lemma \[lem:fullj\], for all calls to <span style="font-variant:small-caps;">Process</span>($w$) until the end of epoch $j$, $A_w$ has at least one full bucket. During each call to <span style="font-variant:small-caps;">Process</span>($w$), the algorithm adds at least one full bucket (of size $s$) of $A_w$ to $S^+$. By the Consistency Invariant, (i) every vertex is in the active set of at most one vertex and (ii) if a vertex $w$ appears in the active set of some vertex, then $w$ is not in the residual set of any vertex. The only vertices added to $S^+$ are those in some active set or some residual set, so every vertex in some active set that is added to $S^+$, is added at most once. Thus, for each processed vertex, there are at least $s$ distinct vertices in $S^+$.
\[lem:pu\] For all $1\leq j\leq b$, $p_j<u_j$. That is, there are more unprocessed vertices than processed vertices.
At the end of epoch $j$, Lemma \[lem:bigs+\] is not satisfied because if it were then the algorithm would have terminated. That is, $|S^+_j|<4 \alpha |S^-_j|$. Combining this with Lemma \[lem:sp\] and the fact that $p_j+u_j=|S^-_j|$, we have $p_j s<4 \alpha (p_j+u_j)$. Choosing $s=8 \alpha$ completes the proof.
\[lem:double\] For all $1< j\leq b$, $p_j>2 p_{j-1}$. That is, the number of processed vertices more than doubles during each epoch.
At the end of epoch $j-1$, we add all unprocessed vertices to $\cal{Q}$. As a result of calling $\textsc{Process}$ on each vertex in $ \cal Q$, the number of processed vertices increases by $u_{j-1}$ by the end of epoch $j$. That is, $p_j\geq p_{j-1}+u_{j-1}$. By Lemma \[lem:pu\], $p_{j-1}<u_{j-1}$, so $p_j>2 p_{j-1}$.
We apply these lemmas to complete the proof of Lemma \[lem:full\]:
1. In epoch 1 we process at least one vertex, so $p_1\geq1$. By Lemma \[lem:double\], $p_j>2 p_{j-1}$. Thus, $p_j\geq2^{j-1}$. If $j=b=\log_2 n+1$, then $p_j>n$, a contradiction. Thus, the algorithm never reaches the end of epoch $b$. Then, by Lemma \[lem:fullj\], every time we call <span style="font-variant:small-caps;">Process</span>($w$), $A_w$ has at least one full bucket.
2. Suppose by way of contradiction that we enqueue no vertices to $\cal{Q}$ at the end of some epoch $1\leq j\leq b$. Then, $u_j=0$. By Lemma \[lem:pu\], $p_j<u_j$, so $p_j<0$, a contradiction.
Stage 2: Updating $\cal M$ given $S^+$ and $S^-$ {#sec:MS}
------------------------------------------------
### Algorithm description
A brief outline for updating $\cal{M}$ given $S^+$ and $S^-$ is as follows: We begin by finding a large MIS $\cal{M'}$ in the graph induced by $S^+$ and adding the vertices in $\cal{M'}$ to $\cal{M}$. This may cause $\cal{M}$ to no longer be an independent set, so we remove from $\cal{M}$ the vertices adjacent to those in $\cal{M'}$. By design, all of these removed vertices are in $S^-$. Now, $\cal{M}$ may no longer be maximal, so we add to $\cal{M}$ the appropriate neighbors of the removed set. The details follow.
\[lem:M\] Given $S^+$ and $S^-$, there is an algorithm for updating $\cal M$ so that
1. at least $\frac{|S^+|}{2 \alpha}$ vertices are added to $\cal M$ and at most $|S^-|$ vertices are deleted from $\cal{M}$.
2. the set of vertices added to $\cal{M}$ is disjoint from the set of vertices removed from $\cal{M}$.
3. after updating, $\cal M$ is a valid MIS.
The first step of the algorithm for updating $\cal M$ is to find an MIS $\cal{M'}$ in the graph induced by $S^+$ and add the vertices in $\cal{M'}$ to $\cal{M}$. The following lemma ensures that we can find a sufficiently large $\cal{M'}$.
\[lem:MIS\] If $G'$ is a graph on $n'$ nodes of arboricity $\alpha$, then there is an $O(n' \alpha)$ time algorithm to find an MIS $\cal M'$ in $G'$ of size at least $\frac{n'}{2 \alpha}$.
Since $G'$ has arboricity $\alpha$, every subset of $G'$ must have a vertex of degree at most $2 \alpha$. The algorithm is simple. Until $G'$ is empty, we repeatedly find a vertex $w$ of degree at most $2 \alpha$, add $w$ to $\cal M'$, and then remove $w$ and $N(w)$ from $G'$. For every vertex we add to $\cal{M'}$, we remove at most $2 \alpha$ other vertices from the graph so $|\cal{M'}|\geq \frac{n'}{2 \alpha}$.
To implement this algorithm, we simply use a data structure that keeps track of the degree of each vertex and maintains a partition of the vertices into two sets: one containing the vertices of degree at most $2\alpha$ and the other containing the rest. Then the removal of each edge takes $O(1)$ time. $G'$ has at most $n' \alpha$ edges so the runtime of the algorithm is $O(n' \alpha)$.
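A sketch of this greedy procedure follows (our own code; the plain scan for a low-degree vertex is quadratic in the worst case, whereas the degree-bucketed implementation described above gives $O(n' \alpha)$).

```python
def greedy_mis(vertices, adj, alpha):
    """Lemma [lem:MIS] sketch: while vertices remain, pick one of degree at most
    2*alpha in the surviving subgraph, add it to M', and delete it together with
    its neighbors.  adj maps a vertex to its set of neighbors; restricting to
    'alive' induces the subgraph on the given vertices."""
    alive = set(vertices)
    mis = set()
    while alive:
        w = next(u for u in alive if len(adj[u] & alive) <= 2 * alpha)
        mis.add(w)
        alive -= adj[w] & alive
        alive.discard(w)
    return mis
```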
The algorithm consists of four steps.
1. We find an MIS $\cal M'$ in the graph induced by $S^+$ using the algorithm of Lemma \[lem:MIS\]. Then we add to $\cal M$ all vertices in $\cal M'$.
2. For all vertices $w \in \cal M'$, we remove from $\cal M$ all vertices in $N^+(w) \cap \cal{M}$. We note that $N^-(w)\cap \cal{M}=\emptyset$ because otherwise $w$ would be resolved, and $S^+$ contains no resolved vertices.
3. If $u$ and $v$ are both in $\cal{M}$ (recall that we are considering an edge insertion $(u,v)$), we remove $v$ from $\cal{M}$. We note that this step is necessary because $v$ needs to be removed from $\cal{M}$ and this may not have happened in step 2.
4. Note that for each vertex $w$ removed from $\cal{M}$ in the previous steps, only $R_w$ and some vertices in $A_w$ are in $S^+$. Hence we need to evaluate whether $N^+(w)$ and the remaining vertices in $A_w$ need to enter $\cal{M}$. For each vertex $x \in N^+(w) \cup A_w$, we check whether $x$ has a neighbor in $\cal{M}$. To do this, we scan $N^+(x)$ and if $N^+(x) \cap \cal M=\emptyset$ and $|M^-(x)|=0$, then we add $x$ to $\cal{M}$. (Recall that the data structure maintains $M^-(w)=N^-(w)\cap \cal{M}$.)
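A compact sketch of these four steps (our own code, reusing the `greedy_mis` sketch above; the counter $M^-$ is assumed to be kept current and all other data-structure maintenance is elided).

```python
def update_mis_stage2(u, v, M, S_plus, N_plus, M_minus, adj, state, alpha):
    """Stage-2 sketch: update M given S+.  M_minus[x] = |N^-(x) intersect M|
    is assumed current; counter updates during this routine are elided."""
    M_prime = greedy_mis(S_plus, adj, alpha)         # step 1: large MIS inside S+
    M |= M_prime
    removed = set()
    for w in M_prime:                                # step 2: MIS out-neighbors leave M
        for x in N_plus[w]:
            if x in M and x not in M_prime:
                M.discard(x)
                removed.add(x)
    if u in M and v in M:                            # step 3: drop v if the new edge is still violated
        M.discard(v)
        removed.add(v)
    for w in removed:                                # step 4: re-maximalize around removed vertices
        A_w = set().union(*state[w].A.values())
        for x in set(N_plus[w]) | A_w:
            if x not in M and M_minus[x] == 0 and all(y not in M for y in N_plus[x]):
                M.add(x)
    return M
```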
### Proof of Lemma \[lem:M\]
While we build $S^+$ and $S^-$, every vertex in $\cal{M}$ that is the out-neighbor of a vertex in $S^+$ is added to $S^-$. The only vertices we remove from $\cal M$ are $v$ and out-neighbors of vertices in $S^+$. Thus, we have the following observation.
\[obs\] Every vertex that we remove from $\cal{M}$ (including $v$) is in $S^-$.
1. At least $\frac{|S^+|}{2 \alpha}$ vertices are added to $\cal M$ and at most $| S^-|$ vertices are deleted from $\cal{M}$.
By Lemma \[lem:MIS\], in step 1 we add at least $\frac{|S^+|}{2 \alpha}$ vertices to $\cal{M}$. Observation \[obs\] implies that we delete at most $|S^-|$ vertices from $\cal{M}$.
1. The set of vertices added to $\cal{M}$ is disjoint from the set of vertices removed from $\cal{M}$.
We will show that the set of vertices added to $\cal{M}$ were not in $\cal{M}$ when the edge update arrived and the set of vertices removed from $\cal{M}$ were in $\cal{M}$ when the edge update arrived.
By Observation \[obs\], every vertex that we remove from $\cal{M}$ is in $S^-$. By construction, a vertex is only added to $S^-$ if it is in $\cal{M}$.
Every vertex that is added to $\cal{M}$ is added in either step 1 or step 4. We claim that every vertex that is added to $\cal{M}$ is the neighbor of a vertex in $S^-$, and thus was not in $\cal{M}$ when the edge update arrived. With Observation \[obs\], this claim is clear for vertices added to $\cal{M}$ during step 4. If a vertex $w$ is added to $\cal{M}$ in step 1 then $w\in S^+$. By construction, a vertex is only added to $S^+$ if it is a neighbor of a vertex in $S^-$.
1. $\cal{M}$ is an MIS.
$\cal{M}$ is an MIS because our algorithm is designed so that when we add a vertex $w$ to $\cal{M}$ we remove all of $w$’s neighbors from $\cal{M}$, and when we remove a vertex $w$ from $\cal{M}$ we add to $\cal{M}$ all of $w$’s neighbors that have no neighbors in $\cal{M}$. A formal proof follows.
Suppose by way of contradiction that $\cal M$ is not an independent set. Let $(w, x)$ be an edge such that $w$ and $x$ are both in $\cal{M}$. Due to step 3, $(w, x)$ cannot be $(u, v)$ or $(v, u)$. Since $\cal M$ was an independent set before being updated, at least one of $w$ or $x$ was added to $\cal M$ during the update. Without loss of generality, suppose $w$ was added to $\cal M$ during the update and that the last time $w$ was added to $\cal M$ was after the last time $x$ was added to $\cal{M}$. This means that when $w$ was added to $\cal{M}$, $x$ was already in $\cal{M}$.
[**Case 1:**]{} *w was last added to $\cal M$ during step 4.* In step 4, only vertices with no neighbors in $\cal M$ are added to $\cal M$ so this is impossible.
[**Case 2:**]{} *w was last added to $\cal M$ during step 1 and $x$ was last added to $\cal M$ before the current update to $\cal{M}$.* After $w$ is added to $\cal M$ in step 1, we remove $N^+(w) \cap \cal M$ from $\cal{M}$. Thus, $x \not\in N^+(w)$ so $x$ must be in $N^-(w)$. But $x$ was in $\cal M$ right before the current update so $w$ is resolved. Since $w$ was added to $\cal{M}$ during step 1, $w \in S^+$ and by construction the only vertices added to $S^+$ are those in the active set or residual set of some vertex. By the Consistency Invariant, because $w$ is resolved, it is not in the active set or residual set of any vertex, a contradiction.
[**Case 3:**]{} *$w$ was last added to $\cal M$ during step 1 and $x$ was last added to $\cal M$ during the current update during step 1.* Both $x$ and $w$ are in $S^+$. An MIS in the graph induced by $S^+$ is added to $\cal M$ so $x$ and $w$ could not both have been added to $\cal{M}$, a contradiction.\
Now, suppose by way of contradiction that $\cal M$ is not maximal. Let $w$ be a vertex not in $\cal M$ such that no vertex in $N(w)$ is in $\cal{M}$. Since $\cal M$ was maximal before the current update, during this update, either $w$ or one of its neighbors was removed from $\cal{M}$. Let $x$ be the last vertex in $N(w) \cup \{w\}$ to be removed from $\cal{M}$. Every vertex that is removed from $\cal M$ (either in step 2 or step 3) has a neighbor in $\cal M$ at the time of its removal. Thus, $x$ cannot be $w$ because if $x$ were $w$ then $w$ would be removed from $\cal{M}$ with no neighbors in $\cal{M}$. Thus, $x$ must be in $N(w)$. Since $x$ is removed from $\cal{M}$ during the current update, in step 4 we add to $\cal M$ all vertices in $N^+(x) \cup A_x$ with no neighbors in $\cal{M}$. Thus $w$ is not in $N^+(x) \cup A_x$ so $w$ must be in $Z_x$, $P_x$, or $R_x$.
[**Case 1:**]{} *$w \in Z_x$.* By the definition of resolved, right before the current update, either $w$ was in $\cal{M}$ or there was a vertex $y\in N^-(w)$ in $\cal{M}$. If $w$ was in $\cal{M}$ then $w$ was removed from $\cal{M}$ during the current update. Every vertex that is removed from $\cal{M}$ (either in step 2 or step 3) has a neighbor that was added to $\cal{M}$ during the current update. By part 2 of the Lemma, the set of vertices that is added to $\cal{M}$ is disjoint from the set of vertices that is removed from $\cal{M}$. Thus, after processing the current update, $w$ has a neighbor in $\cal{M}$, a contradiction.
On the other hand, if $y\in N^-(w)$ was in $\cal{M}$ right before the current update, then during the current update, $y$ was removed from $\cal{M}$. Then in step 4, all vertices in $N^+(y)$ with no neighbors in $\cal M$ are added to $\cal{M}$. This includes $w$, a contradiction.
[**Case 2:**]{} *$w\in P_x$.* Right before the current update, $w$ was in $A_y$ for some $y$. During the current update, $y$ was removed from $\cal{M}$. In step 4, all vertices in $A_y$ with no neighbors in $\cal M$ are added to $\cal{M}$. This includes $w$, a contradiction.
[**Case 3:**]{} *$w\in R_x$.* By Observation \[obs\], every vertex removed from $\cal{M}$ during step 2 is in $S^-$. By construction, for all vertices $y \in S^-$, if $R_y$ is nonempty then we add every vertex in $R_y$ to $S^+$. Thus, if $x$ was removed from $\cal M$ in step 2, then $w$ was added to $S^+$ in step 1. Also, if $x$ was removed from $\cal M$ in step 3, then $x=v$ and <span style="font-variant:small-caps;">Process(x)</span> was called, which adds $w$ to $S^+$. So, in either case $w$ is in $S^+$. We add an MIS in the graph induced by $S^+$ to $\cal{M}$, so either $w$ or a neighbor of $w$ must be in $\cal{M}$, a contradiction.
Runtime analysis of updating $\cal{M}$ {#sec:Mrun}
======================================
Let $T$ be the amortized update time of the black box dynamic edge orientation algorithm and let $D$ be the out-degree of the orientation. Ultimately, we will apply the algorithm of Brodal and Fagerberg [@BrodalF99] which achieves $D=O(\alpha)$ and $T=O(\alpha+\log n)$. In this section we show that updating $\cal M$ takes amortized time $O(\alpha D)$.
Let $U$ be the total number of updates and let $\Delta^+$ and $\Delta^-$ be the total number of additions of vertices to $\cal{M}$ and removals of vertices from $\cal{M}$, respectively, over the whole computation.
Let $S^+_i$ and $S^-_i$ be the sets $S^+$ and $S^-$, respectively, constructed while processing the $i^{th}$ update (note that this is a different definition of $S^+_i$ than the definition in earlier sections).
\[lem:delta\] The amortized number of changes to $\cal M$ is $O(1)$ per update.
The proof is based on the observation that each edge update that causes $|\cal{M}|$ to decrease, decreases $|\cal{M}|$ by 1. Thus, on average $|\cal{M}|$ can only increase by at most 1 per update. Then, we apply Lemma \[lem:bigs+\] to argue that the total number of changes to $\cal{M}$ per update is within a constant factor of the net change in $|\cal{M}|$ per update. The details follow. For every edge deletion, at most 1 vertex is added to $\cal M$ and no vertices are removed from $\cal M$. For every edge insertion, Lemma \[lem:M\] implies that at most $|S^-|$ vertices are removed from $\cal{M}$ and at least $\frac{|S^+|}{2 \alpha}$ vertices are added to $\cal{M}$. Thus, the net increase in $|\cal{M}|$ is at least $\frac{|S^+|}{2 \alpha}-|S^-|$, which by Lemma \[lem:bigs+\] is at least $|S^-|$ if $|S^-|>1$. Otherwise, $|S^-|\leq 1$ so the net decrease in $|\cal{M}|$ is at most 1. Thus, the increase in $|\cal{M}|$ over the whole computation, which is equivalent to $\Delta^+-\Delta^-$, is at least $\sum_{i=1}^U |S^-_i|-U$. For every edge update at most $|S^-|$ vertices are removed from $\cal{M}$ (Lemma \[lem:M\]), so $\sum_{i=1}^U |S^-_i|\geq \Delta^-$. Thus, we have $$\label{eqn:deltas}
\Delta^+-\Delta^-\geq \sum_{i=1}^U |S^-_i|-U \geq \Delta^--U.$$
We now show that $\Delta^- = O(U)$ and $\Delta^+ = O(U)$. Because the graph is initially empty, $\cal{M}$ initially contains every vertex. Thus, over the whole computation, the number of additions to $\cal{M}$ cannot exceed the number of removals from $\cal{M}$. That is, $\Delta^+\leq \Delta^-$.
Combining this with Equation \[eqn:deltas\], we have $\Delta^- = O(U)$ and $\Delta^+ = O(U)$.
\[lem:sumS\]$\sum_{i=1}^U |S^+_i| =O(\alpha U)$.
By Lemma \[lem:M\], $\Delta^+\geq\sum_{i=1}^U \frac{|S^+_i|}{2 \alpha}$. By Lemma \[lem:delta\], $\Delta^+= O(U)$. Thus, $\sum_{i=1}^U |S^+_i| =O(\alpha U)$.
Constructing $S^+$ and $S^-$ takes amortized time $O(\alpha D)$.
By construction, it takes constant time to decide to add a single vertex to $S^+$; however, the same vertex could be added to $S^+$ multiple times. A vertex $w$ can be added to $S^+$ only when <span style="font-variant:small-caps;">Process</span>($x$) is called for some $x\in N^+(w)$. <span style="font-variant:small-caps;">Process</span> is called at most once per vertex (per update), so each vertex is added to $S^+$ a maximum of $|N^+(w)|\leq D$ times. Therefore, over the whole computation it takes a total of $O(D \sum_{i=1}^U |S^+_i|)$ time to build $S^+$.
To build $S^-$, we simply scan the out-neighborhood of every vertex in $S^+$. Thus, over the whole computation it takes $O(D \sum_{i=1}^U |S^+_i|)$ time to build $S^-$.
By Lemma \[lem:sumS\], $\sum_{i=1}^U (|S^+_i|) =O(\alpha U)$ so the amortized time to construct $S^+$ and $S^-$ is $O(\alpha D)$.
Updating $\cal M$ given $S^+$ and $S^-$ takes amortized time $O(D^2\log n)$.
In step 1 we begin by constructing the graph induced by $S^+$. To do this, it suffices to scan the out-neighborhood of each vertex in $S^+$ since every edge in the graph induced by $S^+$ must be outgoing of some vertex in $S^+$. This takes time $O(|S^+| D)$. Next, we find $\cal M'$, which takes time $O(|S^+|)$ by Lemma \[lem:MIS\].
In step 2, for each vertex $w$ in $\cal M'$, we scan $N^+(w)$, which takes time $O(|S^+| D)$. By Lemma \[lem:sumS\], steps 1 and 2 together take amortized time $O(\alpha D)$.
Step 3 takes constant time.
In step 4, we scan the 2-hop out-neighborhood of each vertex $w$ that has been removed from $\cal{M}$, as well as the out-neighborhood of some vertices in $A_w$. Over the whole computation, this takes time $O(\Delta^-(D^2+\alpha D\log n))$, which is amortized time $O(D^2+\alpha D\log n)$ by Lemma \[lem:delta\].
In total the amortized time is $O(D^2+\alpha D\log n)=O(D^2\log n)$.
Updating the data structure
===========================
Satisfying the Main Invariant
-----------------------------
The most interesting part of the runtime analysis involves correcting violations to the Main Invariant. This is also the bottleneck of the runtime. The Main Invariant is violated when a vertex $v$ is added to $\cal{M}$. In this case, $A_v$ is empty and needs to be populated.
We begin by analyzing the runtime of two basic processes that happen while updating the data structure: adding a vertex to some active set (Lemma \[lem:add\]) and removing a vertex from some active set (Lemma \[lem:rem\]).
Recall that $T$ is the amortized update time of the black box dynamic edge orientation algorithm and $D$ is the out-degree of the orientation.
\[lem:add\] Suppose vertex $v$ is not in any active set. Adding $v$ to some active set and updating the data structure accordingly takes time $O(D)$.
When we add a vertex $v$ to some $A_u(i)$, for all $w \in N^+(v)$ this could cause a violation of the Consistency Invariant. To remedy this, it suffices to remove $v$ from whichever set it was previously in with respect to $w$ (which is not $A_w$) and add it to $P_w(i)$.
\[lem:rem\] Removing a vertex $v$ from some active set $A_u$ and updating the data structure accordingly takes time $O(D \log n)$.
When we remove a vertex $v$ from some $A_u(i)$, this leaves bucket $A_u(i)$ not full so the Full Invariant might be violated. To remedy this, we move a vertex, the *replacement vertex*, from a higher bucket or the residual set to $A_u(i)$. That is, the replacement vertex $w$ is chosen to be any arbitrary vertex from $R_u\cup A_u(i+1)\cup\dots\cup A_u(b)\cup P_u(i+1)\cup \dots \cup P_u(b)$. We can choose a vertex from this set in constant time by maintaining a list of all non-empty $P_x(i)$ and $A_x(i)$ for each vertex $x$. If $w$ is chosen from $P_u$, we remove $w$ from $A_{a(w)}$ before adding $w$ to $A_u(i)$.
The removal of $w$ from its previous bucket in its previous active set may leave this bucket not full, so again the Full Invariant might be violated and again we remedy this as described above, which sets off a chain reaction. The chain reaction terminates when either there does not exist a viable replacement vertex or until the replacement vertex comes from the residual set. Since the index of the bucket that we choose the replacement vertex from increases at every step of this process, the length of this chain reaction is at most $b$.[^6]
For each vertex $v$ that we add to an active set, we have already removed $v$ from its previous active set, so Lemma \[lem:add\] applies. Overall, we move at most $b$ vertices to a new bucket and by Lemma \[lem:add\], for each of these $b$ vertices we spend time $O(D)$. Thus, the runtime is $O(b D)=O(D \log n)$.
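The cascade in the proof above can be sketched as follows (our own code, tracking only active-set membership and bucket occupancy; the $P$-, $Z$- and $R$-set updates of out-neighbors, i.e. Lemma \[lem:add\], are elided).

```python
def remove_from_active(v, state, a, B):
    """Lemma [lem:rem] sketch: v leaves A_{a(v)}(B(v)); the hole is repaired by a
    replacement from the residual set, a higher bucket of the same active set, or
    the passive set (i.e. another vertex's active set).  The hole's bucket index
    strictly increases, so there are at most b repair steps."""
    u, i = a[v], B[v]
    state[u].A[i].discard(v)
    a[v] = B[v] = None
    while True:
        st = state[u]
        if st.R:                                   # replacement from R_u: the chain stops here
            w = st.R.pop()
            st.A[i].add(w); a[w] = u; B[w] = i
            return
        cand = next(((j, w) for j in range(i + 1, st.b + 1)
                     for w in list(st.A[j]) + list(st.P[j])), None)
        if cand is None:
            return                                 # no viable replacement; the bucket stays short
        j, w = cand
        prev = a[w]                                # owner of the bucket w vacates (may be u itself)
        state[prev].A[j].discard(w)
        st.P[j].discard(w)                         # no-op if w came from A_u(j) rather than P_u(j)
        st.A[i].add(w); a[w] = u; B[w] = i
        u, i = prev, j                             # the hole has moved to a higher bucket; keep repairing
```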
\[lem:mtime\] The time to update the data structure in response to a violation of the Main Invariant triggered by the addition of a single vertex to $\cal M$ is $O(D \alpha \log^2 n)$.
To satisfy the Main Invariant, we need to populate $A_v$. We fill $A_v$ in order from bucket 1 to bucket $b$. First, we add the vertices in $R_v$ until either $A_v$ is full or $R_v$ becomes empty. If $R_v$ becomes empty, then we start adding the vertices of $P_v(i)$ in order from $i=b$ to $i=1$; however, we only add vertex $u$ to $A_v$ if this causes $B(u)$ to decrease. Once we reach a vertex $u$ in $P_v$ where moving $u$ to the lowest numbered non-full bucket of $A_v$ does not cause $B(u)$ to decrease, then we stop populating $A_v$. Each time we add a vertex $u$ to $A_v$ from $P_v$, we first remove $u$ from $A_{a(u)}$ and apply Lemma \[lem:rem\]. Also, each time we add a vertex to $A_v$, we apply Lemma \[lem:add\]. We note that this method of populating $A_v$ is consistent with the Main Invariant.
We add at most $s b=O(\alpha \log n)$ vertices to $A_v$ and for each one we could apply Lemmas \[lem:rem\] and \[lem:add\] in succession. Thus, the total time is $O(\alpha D \log^2 n)$.
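The repopulation step can be sketched as follows (our own code; the repairs of Lemmas \[lem:rem\] and \[lem:add\] triggered along the way are elided).

```python
def populate_active_set(v, state, a, B):
    """When v joins the MIS, fill A_v from bucket 1 upward: first from R_v, then
    from P_v(b) down to P_v(1), moving a vertex only if its bucket index decreases."""
    st = state[v]
    def lowest_nonfull():
        return next((i for i in range(1, st.b + 1) if len(st.A[i]) < st.s), None)
    while st.R and lowest_nonfull() is not None:     # drain the residual set first
        i = lowest_nonfull()
        w = st.R.pop()
        st.A[i].add(w); a[w] = v; B[w] = i           # P/Z updates of N^+(w) elided (Lemma [lem:add])
    for j in range(st.b, 0, -1):                     # then steal passive vertices, top bucket first
        for w in list(st.P[j]):
            i = lowest_nonfull()
            if i is None or i >= j:
                return                               # moving w would not decrease B(w): stop populating
            state[a[w]].A[j].discard(w)              # repairing that hole (Lemma [lem:rem]) elided
            st.P[j].discard(w)
            st.A[i].add(w); a[w] = v; B[w] = i
```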
Satisfying the rest of the invariants
-------------------------------------
Recall the 4 phases of the algorithm from Section \[sec:setup\]. We update the data structure in phases 2 and 4.
### Phase 2 of the algorithm
In phase 2 we need to update the data structure to reflect the changes to $\cal M$ that occur in phase 1. We note that updating $\cal M$ could cause immediate violations to only the Resolved Invariant, the Empty Active Set Invariant, and the Main Invariant. That is, in the process of satisfying these invariants, we will violate other invariants but the above three invariants are the only ones directly violated by changes to $\cal{M}$. We prove one lemma for each of these three invariants (the lemma for the Main Invariant was already proven as Lemma \[lem:mtime\]). As a technicality, in phase 2 of the algorithm we update the data structure ignoring the newly added edge and instead wait until the data structure update in phase 4 of the algorithm to insert and orient this edge. This means that during the data structure update in phase 2 of the algorithm, $\cal M$ may not be maximal with respect to the graph that we consider.
The time bounds in the following Lemmas (and Lemma \[lem:mtime\]) are stated with respect to a single change to $\cal{M}$. By Lemma \[lem:delta\], the amortized number of changes to $\cal M$ is $O(1)$ per update, so the time spent per change to $\cal M$ is the same as the amortized time per update.
\[lem:restime\] The time to update the data structure in response to a violation of the Resolved Invariant triggered by a single change to $\cal M$ is $O(D^2 \log n)$.
Suppose $v$ is removed from $\cal{M}$. Previously every vertex in $N^+(v)\cup\{v\}$ was resolved and now some of these vertices might be unresolved. To address the Resolved Invariant, for each newly unresolved vertex $u\in N^+(v)\cup\{v\}$, for all $w \in N^+(u)$, we need to remove $u$ from $Z_w$. To accomplish this, we do the following. For all $u \in N^+(v)$, if $|M^-(u)|=0$, then we know that $u$ has become unresolved. In this case, we scan $N^+(u)$ and for each $w \in N^+(u)$, we remove $u$ from $Z_w$. Now, for each such $w$, we need to add $u$ to either $A_w$, $P_w$, or $R_w$. To do this, we find the vertex $x \in N^+(u) \cap \cal M$ with the smallest $|A_x|$. If $A_x$ is not full, then we add $u$ to the lowest bucket of $A_x$ that’s not full and apply Lemma \[lem:add\]. If $A_x$ is full then for all $w \in N^+(u)$ we add $u$ to $R_w$.
There are at most $D+1$ vertices in $N^+(v)\cup\{v\}$ and for each $u\in N^+(v)$ we spend $O(D)$ time scanning $N^+(u)$. Then, for each $u$, we may apply Lemma \[lem:add\]. Thus, the runtime is $O(D^2)$.
Suppose $v$ is added to $\cal{M}$. Some vertices in $N^+(v)\cup\{v\}$ may have previously been unresolved, but now they are all resolved. We scan $N^+(v)\cup\{v\}$ and for all $u \in N^+(v)$ and all $w \in N^+(u)$, we remove $u$ from whichever set it is in with respect to $w$ and add $u$ to $Z_w$. If we removed $u$ from $A_w$, then we update the data structure according to Lemma \[lem:rem\].
There are $D+1$ vertices in $N^+(v)\cup\{v\}$ and each $u\in N^+(v)$ can only be removed from a single $A_w$ because the Consistency Invariant says that each vertex is in at most one active set. Thus, the runtime is $O(D^2 \log n)$.
\[lem:emtime\]The time to update the data structure in response to a violation of the Empty Active Set Invariant triggered by a single change to $\cal M$ is $O(D \alpha \log n)$.
The Empty Active Set Invariant can be violated only upon removal of a vertex $v$ from $\cal{M}$. To satisfy the invariant, we need to empty $A_v$. For each vertex $u$ in $A_v$, if $|M^-(u)|>0$, then $u$ is resolved and otherwise $u$ is unresolved. First, we address the case where $u$ is resolved. For each vertex $w \in N^+(u)$ we remove $u$ from whichever set it is in with respect to $w$ (which is not $A_w$ since $u$ is in $A_v$) and add $u$ to $Z_w$.
There are $s b=O(\alpha \log n)$ vertices $u$ in $A_v$ and for each $u$ we spend $O(D)$ time scanning $N^+(u)$. Thus, the runtime for dealing with the resolved vertices in $A_v$ is $O(D \alpha \log n)$.
Now, we address the case where $u$ in $A_v$ is unresolved. For each $u$ in $A_v$ that is unresolved, and for each $w \in N^+(u)$, we need to add $u$ to either $A_w$, $P_w$, or $R_w$. To do this, we perform the same procedure as in Lemma \[lem:restime\]: We find the vertex $x$ in $N^+(u) \cap \cal M$ with the smallest $|A_x|$. If $A_x$ is not full, then we add $u$ to the lowest bucket of $A_x$ that’s not full and apply Lemma \[lem:add\]. If $A_x$ is full then for all $w \in N^+(u)$ we add $u$ to $R_w$.
There are $s b=O(\alpha \log n)$ vertices $u$ in $A_v$ and for each such $u$ we spend $O(D)$ time scanning $N^+(u)$. Then, for each $u$, we may apply Lemma \[lem:add\]. Thus, the runtime for dealing with unresolved vertices in $A_v$ is $O(D \alpha \log n)$.
### Phase 4 of the algorithm
In phase 4 of the algorithm, we need to update the data structure to reflect the edge reorientations that occur in phase 3. If the update was an insertion, we also need to update the data structure to reflect this new edge. We note that reorienting edges could cause immediate violations only to the Resolved Invariant and the Orientation Invariant. That is, in the process of satisfying these invariants, we will violate other invariants but the above two invariants are the only ones directly violated by edge reorientations and insertions. Recall that $T$ is the amortized update time of the black box dynamic edge orientation algorithm.
\[lem:edgetime\]The total time to update the data structure in response to edge reorientations and insertions is $O(T D \log n)$.
Edge reorientations and insertions can cause violations only to the Resolved Invariant and the Orientation Invariant. Consider an edge $(u,v)$ that is flipped or inserted so that it is now oriented towards $v$. If the edge was flipped, then previously $v$ was in either $Z_u$, $A_u$, $P_u$, or $R_u$, and now we remove $v$ from this set. If $v$ was in $A_u$, then we process the removal of $v$ from $A_u$ according to Lemma \[lem:rem\] and add $v$ to the active set of another vertex if possible. To do this, we perform the same procedure as in Lemma \[lem:restime\]: we find the vertex $x \in N^+(v) \cap \cal M$ with the smallest $|A_x|$. If $A_x$ is not full, then we add $v$ to the lowest bucket of $A_x$ that’s not full and apply Lemma \[lem:add\]. If $A_x$ is full then for all $w \in N^+(v)$ we add $v$ to $R_w$.
In the above procedure we apply Lemma \[lem:rem\], spend $O(D)$ time scanning $N^+(v)$, and apply Lemma \[lem:add\]. This takes time $O(D \log n)$.
Before the edge $(u,v)$ was flipped or inserted, $u$ was not in $Z_v$, $A_v$, $P_v$, or $R_v$ and now we need to add $u$ to one of these sets.
- If $u$ is resolved, we add $u$ to $Z_v$.
- If $u$ is in an active set and the number of the lowest non-full bucket of $A_v$ is less than $B(u)$, then we remove $u$ from $A_{a(u)}$, applying Lemma \[lem:rem\], and add $u$ to the lowest non-full bucket of $A_v$, applying Lemma \[lem:add\].
- If $A_v$ is not full and $u$ is not in any active set then we add $u$ to the lowest non-full bucket of $A_v$, applying Lemma \[lem:add\].
- Otherwise, if $u$ is in the active set of some vertex, we add $u$ to $P_v$ in the appropriate bucket.
- Otherwise, $u$ is not in any active set and we add $u$ to $R_v$.
The slowest of the above options is to apply Lemmas \[lem:rem\] and \[lem:add\] in succession which takes time $O(D \log n)$.
We have shown that updating the data structure takes time $O(D \log n)$ per edge flip, which is amortized time $O(D T \log n)$ per update.
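The case analysis for placing $u$ with respect to $v$ can likewise be sketched as follows, using the same hypothetical container layout as in the earlier sketch; `is_resolved(u)` is a placeholder for the test $|M^-(u)|>0$.

```python
# Sketch of the placement of u with respect to v after the edge (u, v) is
# flipped or inserted toward v; container names are illustrative placeholders.

def place_endpoint(u, v, A, P, R, Z, a, B, s, b, lemma_add, lemma_rem, is_resolved):
    def lowest_nonfull_bucket():
        for i in range(1, b + 1):
            if len(A[v][i]) < s:
                return i
        return None

    if is_resolved(u):                                  # case 1: u is resolved
        Z[v].add(u)
        return
    j = lowest_nonfull_bucket()
    in_active_set = a.get(u) is not None
    if in_active_set and j is not None and j < B[u]:    # case 2: move lowers B(u)
        lemma_rem(u, a[u])
        A[v][j].add(u); a[u] = v; B[u] = j
        lemma_add(u, v, j)
    elif not in_active_set and j is not None:           # case 3: A_v has room
        A[v][j].add(u); a[u] = v; B[u] = j
        lemma_add(u, v, j)
    elif in_active_set:                                  # case 4: park u in P_v
        P[v][B[u]].add(u)                                # bucket B(u) assumed
    else:                                                # case 5: reserve set
        R[v].add(u)
```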
Combining the runtime analyses of updating $\cal{M}$, running the edge orientation algorithm, and updating the data structure, the total runtime of the algorithm is $O(\alpha D \log^2 n + D^2 \log n + T D \log n)$. The edge orientation algorithm of Brodal and Fagerberg [@BrodalF99] achieves $D=O(\alpha)$ and $T=O(\alpha+\log n)$. Thus, the runtime of our algorithm is $O(\alpha^2 \log^2 n)$.
[^1]: IBM Research, TJ Watson Research Center, Yorktown Heights, New York, USA
[^2]: IBM Research, TJ Watson Research Center, Yorktown Heights, New York, USA
[^3]: IBM Research, TJ Watson Research Center, Yorktown Heights, New York, USA. Supported by the IBM Herman Goldstine Postdoctoral Fellowship.
[^4]: Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. Supported by an NSF Graduate Fellowship.
[^5]: A preliminary version of this paper appeared in the proceedings of ICALP’18.
[^6]: We note that the length of the chain reaction can be shortened by choosing the replacement vertex from the highest possible bucket. However, we could be forced to choose from bucket $i+1$ (if all buckets higher than $A_u(i+1)$ or $P_u(i+1)$ are empty).
---
abstract: 'We perform three-body model calculations for an $sd$-shell hypernucleus $^{19}_{\ \Lambda}$F ($^{17}_{\ \Lambda}{\rm O}+p+n$) and its core nucleus $^{18}$F ($^{16}{\rm O}+p+n$), employing a density-dependent contact interaction between the valence proton and neutron. We find that the $B(E2)$ value from the first excited state (with spin and parity of $I^\pi=3^+$) to the ground state ($I^\pi=1^+$) is slightly decreased by the addition of a $\Lambda$ particle, which exhibits the so-called shrinkage effect of the $\Lambda$ particle. We also show that the excitation energy of the $3^+$ state is reduced in $^{19}_{\ \Lambda}$F compared to $^{18}$F, as is observed in the $p$-shell nucleus $^{6}$Li. We discuss the origin of this reduction of the excitation energy, pointing out that it is caused by a different mechanism from that in $^{7}_{\Lambda}$Li.'
author:
- 'Y. Tanimura'
- 'K. Hagino'
- 'H. Sagawa'
title: 'Impurity effect of $\Lambda$ particle on the structure of $^{18}$F and $^{19}_{~\Lambda}$F'
---
[INTRODUCTION]{} It has been of great interest in hypernuclear physics to investigate how a $\Lambda$ particle affects the core nucleus when it is added to a normal nucleus. A $\Lambda$ particle may change various nuclear properties, e.g., nuclear size and shape [@MBI83; @MHK11; @LZZ11; @IsKDO11p; @IsKDO12], cluster structure [@IsKDO11sc], the neutron drip line [@VPLR98; @ZPSV08], the fission barrier [@MiCH09], and the collective excitations [@Yao11; @MH12]. Such effects of the $\Lambda$ on nuclear properties are referred to as an impurity effect. Because a $\Lambda$ particle can penetrate deeply into a nucleus without being hindered by the Pauli principle from the nucleons, the response of the core nucleus to the addition of a $\Lambda$ may be essentially different from its response to non-strange probes. That is, the $\Lambda$ particle can serve as a unique probe of nuclear structure that cannot be investigated by ordinary reactions.
The low-lying spectra and electromagnetic transitions have been measured systematically in $p$-shell hypernuclei by high precision $\gamma$-ray spectroscopy [@HT06]. The experimental data have indicated a shrinkage of nuclei due to the attraction of $\Lambda$. A well-known example is $^7_{\Lambda}$Li, for which the electric quadrupole transition probability, $B(E2)$, from the first excited state ($3^+$) to the ground state ($1^+$) of $^6{\rm Li}$ is considerably reduced when a $\Lambda$ particle is added [@Ta01; @HiKMM99]. This reduction of the $B(E2)$ value has been interpreted as a shrinkage of the distance between $\alpha$ and $d$ clusters in $^{6}$Li. On the other hand, a change of excitation energy induced by a $\Lambda$ particle depends on nuclides. If one naively regards a di-cluster nucleus as a classical rigid rotor, shrinkage of nuclear size would lead to a reduction of the moment of inertia, increasing the rotational excitation energy. However, $^6$Li and $^8$Be show a different behavior from this naive expectation. That is, the spin averaged excitation energy decreases in $^6$Li [@Uk06] while it is almost unchanged in $^8$Be [@Ak02].
Recently Hagino and Koike [@HK11] have applied a semi-microscopic cluster model to $^6$Li, $^{7}_{\Lambda}$Li, $^8$Be, and $^9_{\Lambda}$Be to successfully account for the relation between the shrinkage effect and the rotational spectra of the two nuclei simultaneously. They argue that a Gaussian-like potential between two clusters leads to a stability of excitation spectrum against an addition of a $\Lambda$ particle, even though the intercluster distance is reduced. This explains the stabilization of the spectrum in $^8$Be. In the case of lithium one has to consider also the spin-orbit interaction between $^4$He/$^5_{\Lambda}$He and the deuteron cluster. Because of the shrinkage effect of $\Lambda$, the overlap between the relative wave function and the spin-orbit potential becomes larger in $^7_{\Lambda}$Li than in $^6$Li. This effect lowers the $3^+\otimes\Lambda_{s_{1/2}}$ state more than the 3$^+$ state in $^6$Li, making the rotational excitation energy in $^7_{\Lambda}$Li smaller than in $^6$Li.
These behaviors of the spectra may be specific to the two-body cluster structure. $^6$Li and $^8$Be have in their ground states well-developed $\alpha$ cluster structure. In heavier nuclei, on the other hand, cluster structure appears in their excited states while the ground and low-lying states have a mean-field-like structure. In this respect, it is interesting to investigate the impurity effect on a $sd$-shell nucleus $^{18}$F, in which the mean-field structure and $^{16}{\rm O}+d$ cluster structure may be mixed[@SNN76; @LMTT76; @ISHA67; @BMR79; @BFP77; @M83]. Notice that the ground and the first excited states of $^{18}$F are $1^+$ and $3^+$, respectively, which are the same as $^6$Li. We mention that a $\gamma$-ray spectroscopy measurement for $^{19}_{~\Lambda}$F is planned at J-PARC facility as the first $\gamma$-ray experiment for $sd$-shell hypernuclei [@JPARC; @Sendai08].
In this paper we employ a three-body model of $^{16}{\rm O}+p+n$ for $^{18}$F and of $^{17}_{\ \Lambda}{\rm O}+p+n$ for $^{19}_{\ \Lambda}$F, and study the structure change of $^{18}$F caused by the impurity effect of a $\Lambda$ particle. This model enables us to describe both mean-field and cluster like structures of these nuclei. We discuss how $\Lambda$ particle affects the electric transition probability $B(E2,3^+\to1^+)$, the density distribution of the valence nucleons, and the excitation energy. Of particular interest is whether the excitation energy increases or decreases due to the $\Lambda$ particle. We discuss the mechanism of its change in comparison with the lithium nuclei.
The paper is organized as follows. In Sec. \[sec:model\], we introduce the three-body model to describe $^{18}$F and $^{19}_{\ \Lambda}$F. In Sec. \[sec:result\], we present the results and discuss the relation between the shrinkage effect and the energy spectrum. In Sec. \[sec:summary\], we summarize the paper. \[sec:intro\]
[THE MODEL]{}
[Hamiltonian]{} We employ a three-body model to describe the $^{18}$F and $^{19}_{~\Lambda}$F nuclei. We first describe the model Hamiltonian for the $^{18}$F nucleus, assuming the $^{16}$O + $p + n$ structure. After removing the center-of-mass motion, it is given by $$\begin{aligned}
H&=&\frac{{\mbox{\boldmath $p$}}_p^2}{2m}+\frac{{\mbox{\boldmath $p$}}_n^2}{2m}+V_{pC}({\mbox{\boldmath $r$}}_p)+V_{nC}({\mbox{\boldmath $r$}}_n) \nonumber \\
&&+V_{pn}({\mbox{\boldmath $r$}}_p,{\mbox{\boldmath $r$}}_n)+\frac{({\mbox{\boldmath $p$}}_p+{\mbox{\boldmath $p$}}_n)^2}{2A_{C}m},
\label{eq:H}\end{aligned}$$ where $m$ is the nucleon mass and $A_C$ is the mass number of the core nucleus. $V_{pC}$ and $V_{nC}$ are the mean-field potentials for the proton and the neutron, respectively, generated by the core nucleus. These are given as $$\begin{aligned}
V_{nC}({\mbox{\boldmath $r$}}_n)=V^{(N)}(r_n),\ V_{pC}({\mbox{\boldmath $r$}}_p)=V^{(N)}(r_p)+V^{(C)}(r_p),
\label{eq:pot}\end{aligned}$$ where $V^{(N)}$ and $V^{(C)}$ are the nuclear and the Coulomb parts, respectively. In Eq. (\[eq:H\]), $V_{pn}$ is the interaction between the two valence nucleons. For simplicity, we neglect in this paper the last term in Eq. (\[eq:H\]) since the core $^{16}$O is much heavier than nucleons. Then the Hamiltonian reads $$\begin{aligned}
H=h(p)+h(n)+V_{pn},
\label{eq:Htot}\end{aligned}$$ where the single-particle Hamiltonians are given as $$\begin{aligned}
h(p)=\frac{{\mbox{\boldmath $p$}}_p^2}{2m}+V_{pC}({\mbox{\boldmath $r$}}_p),\
h(n)=\frac{{\mbox{\boldmath $p$}}_n^2}{2m}+V_{nC}({\mbox{\boldmath $r$}}_n).
\label{eq:hsp}\end{aligned}$$
In this paper, the nuclear part of the mean-field potential, $V^{(N)}$, is taken to be a spherical Woods-Saxon type $$\begin{aligned}
V^{(N)}(r)=\frac{v_0}{1+e^{(r-R)/a}}
+({\boldsymbol \ell}\cdot{\mbox{\boldmath $s$}})\frac{1}{r}\frac{d}{dr}
\frac{v_{\ell s}}{1+e^{(r-R)/a}},
\label{eq:WS}\end{aligned}$$ where the radius and the diffuseness parameters are set to be $R=1.27A_C^{1/3}$ fm and $a=0.67$ fm, respectively, and the strengths $v_0$ and $v_{\ell s}$ are determined to reproduce the neutron single-particle energies of $2s_{1/2}$ ($-3.27$ MeV) and $1d_{5/2}$ ($-4.14$ MeV) orbitals in $^{17}$O [@TWC93]. The resultant values are $v_0=-49.21$ MeV and $v_{\ell s}=21.6\ {\rm MeV}\cdot
{\rm fm}^2$. The Coulomb potential $V^{(C)}$ in the proton mean field potential is generated by a uniformly charged sphere of radius $R$ and charge $Z_Ce$, where $Z_C$ is the atomic number of the core nucleus. For the pairing interaction $V_{pn}$ we employ a density-dependent contact interaction, which is widely used in similar three-body calculations for nuclei far from the stability line[@BeEs91; @EsBeH97; @HSCS07; @OiHS10]. Since we have to consider both the iso-triplet and iso-singlet channels in our case of proton and neutron, we consider the pairing interaction $V_{pn}$ given by $$\begin{aligned}
&&V_{pn}({\mbox{\boldmath $r$}}_p,{\mbox{\boldmath $r$}}_n) \nonumber \\
&=&\hat{P}_sv_s\delta^{(3)}({\mbox{\boldmath $r$}}_p-{\mbox{\boldmath $r$}}_n)
\biggl[1+x_s\biggl(\frac{1}{1+e^{(r-R)/a}}\biggr)^{\alpha_s}\biggr]\nonumber \\
&+&\hat{P}_tv_t\delta^{(3)}({\mbox{\boldmath $r$}}_p-{\mbox{\boldmath $r$}}_n)
\biggl[1+x_t\biggl(\frac{1}{1+e^{(r-R)/a}}\biggr)^{\alpha_t}\biggr],
\label{eq:pairing}\end{aligned}$$ where $\hat{P}_s$ and $\hat{P}_t$ are the projectors onto the spin-singlet and spin-triplet channels, respectively: $$\begin{aligned}
\hat{P}_s=\frac{1}{4}-\frac{1}{4}{\boldsymbol \sigma}_p\cdot{\boldsymbol \sigma}_n,\
\hat{P}_t=\frac{3}{4}+\frac{1}{4}{\boldsymbol \sigma}_p\cdot{\boldsymbol \sigma}_n.
$$ In each channel in Eq. (\[eq:pairing\]), the first term corresponds to the interaction in vacuum while the second term takes into account the medium effect through the density dependence. Here, the core density is assumed to be a Fermi distribution of the same radius and diffuseness as in the mean field, Eq. (\[eq:WS\]). The strength parameters, $v_s$ and $v_t$ are determined from the proton-neutron scattering length as [@EsBeH97] $$\begin{aligned}
v_s&=&\frac{2\pi^2\hbar^2}{m}\frac{2a^{(s)}_{pn}}{\pi-2a^{(s)}_{pn}k_{\rm cut}},\\
v_t&=&\frac{2\pi^2\hbar^2}{m}\frac{2a^{(t)}_{pn}}{\pi-2a^{(t)}_{pn}k_{\rm cut}},
\label{eq:v0pair}\end{aligned}$$ where $a^{(s)}_{pn}=-23.749$ fm and $a^{(t)}_{pn}=5.424$ fm [@KoNi75] are the empirical p-n scattering length of the spin-singlet and spin-triplet channels, respectively, and $k_{\rm cut}$ is the momentum cut-off introduced in treating the delta function. The density dependent terms have two parameters, $x$ and $\alpha$, for each channel, which are to be determined so as to reproduce the ground and excited state energies of $^{18}$F (see Sec. III).
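As a rough numerical illustration (not part of the original calculation), the snippet below evaluates the Woods-Saxon potential of Eq. (\[eq:WS\]) with the quoted parameters and the bare strengths $v_s$ and $v_t$ from the empirical scattering lengths; the nucleon mass value used here is an assumption.

```python
# Rough numerical illustration (not from the paper) of the Woods-Saxon mean
# field, Eq. (eq:WS), and of the bare pairing strengths v_s, v_t obtained from
# the empirical p-n scattering lengths; the nucleon mass is an assumed value.
import numpy as np

hbarc = 197.327          # MeV fm
m_nucleon = 938.9        # MeV (assumption)

A_core = 16
R, a = 1.27 * A_core**(1.0/3.0), 0.67        # fm
v0, vls = -49.21, 21.6                       # MeV, MeV fm^2

def woods_saxon(r, l, j):
    """Central plus spin-orbit Woods-Saxon potential for a spin-1/2 nucleon."""
    f = 1.0 / (1.0 + np.exp((r - R) / a))
    dfdr = -np.exp((r - R) / a) / a * f**2
    ls = 0.5 * (j*(j + 1) - l*(l + 1) - 0.75)     # expectation value of l.s
    return v0 * f + ls * vls * dfdr / r

def bare_strength(a_pn, E_cut):
    """Contact-interaction strength (MeV fm^3) from a scattering length and a cutoff."""
    k_cut = np.sqrt(m_nucleon * E_cut) / hbarc    # from E_cut = (hbar k_cut)^2 / m
    return (2.0 * np.pi**2 * hbarc**2 / m_nucleon) * 2.0 * a_pn / (np.pi - 2.0 * a_pn * k_cut)

print(woods_saxon(3.0, 2, 2.5))        # 1d5/2 potential at r = 3 fm
print(bare_strength(-23.749, 10.0))    # v_s for E_cut = 10 MeV
print(bare_strength(5.424, 10.0))      # v_t for E_cut = 10 MeV
```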
[Model space]{} The Hamiltonian, Eq. (\[eq:Htot\]), is diagonalized in the valence two-particle subspace. The basis is given by a product of proton and neutron single-particle states: $$\begin{aligned}
\begin{aligned}
h(\tau)\bigl|\psi^{(\tau)}_{n\ell jm}\bigr\rangle
=\epsilon_{n\ell j}^{(\tau)}\bigl|\psi^{(\tau)}_{n\ell jm}\bigr\rangle,\
\tau=p\ {\rm or}\ n,
\end{aligned}
$$ where the single-particle continuum states can be discretized in a large box. Here, $n$ is the principal quantum number, $\ell$ is the orbital angular momentum, $j$ is the total angular momentum of the single-particle state, and $m=j_z$ is the projection of the total angular momentum $j$. $\epsilon_{n\ell j}^{(\tau)}$ is the single-particle energy. The wave function for states of the total angular momentum $I$ is expanded as $$\begin{aligned}
|\Psi_{IM_I}\rangle=\sum_{\alpha\beta}C_{\alpha\beta}^{I}|\alpha\beta,IM_I\rangle,
\label{eq:wf3body}\end{aligned}$$ where $C_{\alpha\beta}^{I}$ are the expansion coefficients. The basis state $|\alpha\beta,IM_I\rangle$ is given by the product $$\begin{aligned}
&&\langle{\mbox{\boldmath $r$}}_p{\mbox{\boldmath $r$}}_n|\alpha\beta,IM_I\rangle \nonumber \\
&=&\phi^{(p)}_{\alpha}(r_p)\phi^{(n)}_{\beta}(r_n)
[{\mathscr Y}_{\ell_{\alpha}j_{\alpha}}(\hat{{\mbox{\boldmath $r$}}}_p)
{\mathscr Y}_{\ell_{\beta}j_{\beta}}(\hat{{\mbox{\boldmath $r$}}}_n)]_{IM_I},
\label{eq:basis}\end{aligned}$$ where $\alpha$ is a shorthanded notation for single-particle level $\{n_{\alpha},\ell_{\alpha},j_{\alpha}\}$, and similarly for $\beta$. $\phi^{(\tau)}_{\alpha}(r_{\tau})$ is the radial part of the wave function $\psi_\alpha^{(\tau)}$ of level $\alpha$, and ${\mathscr Y}_{\ell jm}=
\sum_{m'm''}\langle \ell m' \frac{1}{2}m''|jm \rangle Y_{\ell m'}\chi_{\frac{1}{2}m''}$ is the spherical spinor, $\chi_{\frac{1}{2}m''}$ being the spin wave function of nucleon. $\ell_{\alpha}+\ell_{\beta}$ is even (odd) for positive (negative) parity state. Notice that we do not use the isospin formalism, with which the number of the basis states, Eq. (\[eq:basis\]), can be reduced by explicitly taking the anti-symmetrization. Instead, we use the proton-neutron formalism without the anti-symmetrization in order to take into account the breaking of the isospin symmetry due to the Coulomb term $V^{(C)}$ in Eq. (\[eq:pot\]).
As shown in Appendix A, the matrix elements of the spin-singlet channel in $V_{pn}$ identically vanish for the $1^+$ and $3^+$ states. Thus we keep only the spin-triplet channel interaction and determine $x_t$ and $\alpha_t$ from the binding energies of the two states from the three-body threshold. In constructing the basis we effectively take into account the Pauli principle, and exclude the single-particle $1s_{1/2}$, $1p_{3/2}$, and $1p_{1/2}$ states, which are already occupied by the core nucleons. The cut-off energy $E_{\rm cut}$ to truncate the model space is related to the momentum cut-off in Eq. (\[eq:v0pair\]) by $E_{\rm cut}=\hbar^2k_{\rm cut}^2/m$. We include only those states satisfying $\epsilon^{(p)}_{\alpha}+\epsilon^{(n)}_{\beta}\leq E_{\rm cut}$.
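Schematically, this truncation amounts to a simple filter over proton-neutron pairs; the level lists in the sketch below are placeholders, and the angular-momentum coupling to a given $I$ is omitted.

```python
# Schematic construction of the two-particle model space: exclude the
# Pauli-blocked core orbitals and keep pairs with eps_p + eps_n <= E_cut.
# Coupling of (j_alpha, j_beta) to the total angular momentum I is omitted.
pauli_blocked = {(1, 0, 0.5), (1, 1, 1.5), (1, 1, 0.5)}   # (n, l, j) of 1s1/2, 1p3/2, 1p1/2

def build_basis(proton_levels, neutron_levels, E_cut, parity):
    """Levels are tuples (n, l, j, eps); returns the allowed (alpha, beta) pairs."""
    basis = []
    for (na, la, ja, ea) in proton_levels:
        if (na, la, ja) in pauli_blocked:
            continue
        for (nb, lb, jb, eb) in neutron_levels:
            if (nb, lb, jb) in pauli_blocked:
                continue
            if ea + eb > E_cut or (-1) ** (la + lb) != parity:
                continue
            basis.append(((na, la, ja), (nb, lb, jb)))
    return basis
```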
[Addition of a $\Lambda$ particle]{}
Similarly to the $^{18}$F nucleus, we also treat $^{19}_{\ \Lambda}$F as a three-body system composed of $^{17}_{\ \Lambda}{\rm O}+p+n$. We assume that the $\Lambda$ particle occupies the $1s_{1/2}$ orbital in the core nucleus and provides an additional contribution to the core-nucleon potential, $$\begin{aligned}
V^{(N)}(r)\rightarrow V^{(N)}(r)+V_{\Lambda}(r).
$$ We construct the potential $V_{\Lambda}$ by folding the $N$-$\Lambda$ interaction $v_{N\Lambda}$ with the density $\rho_{\Lambda}$ of the $\Lambda$ particle: $$\begin{aligned}
V_{\Lambda}(r)=\int d^3r_{\Lambda}\ \rho_{\Lambda}({\mbox{\boldmath $r$}}_{\Lambda})
v_{N\Lambda}({\mbox{\boldmath $r$}}-{\mbox{\boldmath $r$}}_{\Lambda}).
$$ We use the central part of a $N$-$\Lambda$ interaction by Motoba [*et al.*]{} [@MBI83]: $$\begin{aligned}
v_{\Lambda N}(r)=v_{\Lambda}e^{-r^2/b_v^2},
\label{eq:vNL}\end{aligned}$$ where $b_v=1.083$ fm and we set $v_{\Lambda}=-20.9$ MeV, which is used in the calculation for $^6$Li in Ref. [@HK11]. The density $\rho_{\Lambda}(r)$ is given by that of a harmonic oscillator wave function $$\begin{aligned}
\rho_{\Lambda}(r)=(\pi b_{\Lambda}^2)^{-3/2}e^{-r^2/b_{\Lambda}^2},
$$ where we take $b_{\Lambda}=\sqrt{(A_C/4)^{1/3}(A_Cm+m_{\Lambda})/A_Cm_{\Lambda}}\cdot 1.358$ fm, following Refs. [@MBI83] and [@HK11].
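Because both $v_{N\Lambda}$ and $\rho_{\Lambda}$ are Gaussians, the folding integral has the closed form $V_{\Lambda}(r)=v_{\Lambda}\,[b_v^2/(b_v^2+b_{\Lambda}^2)]^{3/2}\,e^{-r^2/(b_v^2+b_{\Lambda}^2)}$. The snippet below, which uses assumed values for the nucleon and $\Lambda$ masses, checks this against a direct numerical folding at $r=0$.

```python
# Check of the folded Lambda potential: a Gaussian interaction folded with a
# Gaussian density gives another Gaussian.  Particle masses are assumed values.
import numpy as np
from scipy.integrate import quad

v_lam, b_v = -20.9, 1.083                 # MeV, fm
A_c, m_N, m_L = 16, 938.9, 1115.7         # mass number and masses in MeV (assumptions)
b_lam = np.sqrt((A_c / 4.0)**(1.0/3.0) * (A_c * m_N + m_L) / (A_c * m_L)) * 1.358

def V_closed(r):
    s2 = b_v**2 + b_lam**2
    return v_lam * (b_v**2 / s2)**1.5 * np.exp(-r**2 / s2)

# direct numerical folding at r = 0 (the angular integral is trivial there)
integrand = lambda rp: 4.0*np.pi*rp**2 * (np.pi*b_lam**2)**-1.5 \
                       * np.exp(-rp**2/b_lam**2) * v_lam * np.exp(-rp**2/b_v**2)
print(quad(integrand, 0.0, 20.0)[0], V_closed(0.0))    # the two numbers agree
```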
The total wave function for the $^{19}_{~\Lambda}$F nucleus is given by $$|\Psi^{\rm tot}_{JM}\rangle = \left[|\Phi_{I_c}\rangle|\Psi_I\rangle\right]^{(JM)},$$ where $J$ is the total angular momentum of the $^{19}_{~\Lambda}$F nucleus, $|\Phi_{I_c}\rangle$ is the wave function for the core nucleus, $^{17}_{~\Lambda}$O, in the ground state with the spin-parity of $I_c^\pi=1/2^+$, and $|\Psi_I\rangle$ is the wave function for the valence nucleons with the angular momentum $I$ given by Eq. (\[eq:wf3body\]). As we use the spin-independent $N$-$\Lambda$ interaction in Eq. (\[eq:vNL\]), the doublet states with $J=I\pm 1/2$ are degenerate in energy.
[$E2$ transition and the polarization charge]{} We calculate the reduced electric transition probability, $B(E2,3^+\to1^+)$, as a measure of nuclear size. In our three-body model, the $E2$ transition operator $Q_{2\mu}$ is given by $$\begin{aligned}
&&Q_{2\mu} \nonumber \\
&=&\frac{(Z_Ce+e^{(n)})m^2+e^{(p)}(M_C+m)^2}{(M_C+2m)^2}r_p^2Y_{2\mu}(\hat{{\mbox{\boldmath $r$}}}_p) \nonumber \\
&+&\frac{(Z_Ce+e^{(p)})m^2+e^{(n)}(M_C+m)^2}{(M_C+2m)^2}r_n^2Y_{2\mu}(\hat{{\mbox{\boldmath $r$}}}_n) \nonumber \\
&+&2\frac{Z_Cem^2-(e^{(p)}+e^{(n)})m(M_C+m)}{(M_C+2m)^2}\nonumber \\
&&\times\sqrt{\frac{10\pi}{3}}r_pr_n[Y_{1}(\hat{{\mbox{\boldmath $r$}}}_p)Y_1(\hat{{\mbox{\boldmath $r$}}}_n)]^{({2\mu})}.
\label{eq:Q}\end{aligned}$$ Here, $M_C$ is the mass of the core nucleus, that is, $A_Cm$ for $^{18}$F and $A_Cm+m_{\Lambda}$ for $^{19}_{\ \Lambda}$F, where $m_{\Lambda}$ is the mass of $\Lambda$ particle. In Eq. (\[eq:Q\]), the effective charges of proton and neutron are given as $$\begin{aligned}
e^{(p)}=e+e^{(p)}_{\rm pol},\ e^{(n)}=e^{(n)}_{\rm pol},
$$ respectively. Here we have introduced the polarization charge $e_{\rm pol}^{(\tau)}$ to protons and neutrons to take into account the core polarization effect (in principle one may also consider the polarization charge for the $\Lambda$ particle, but for simplicity we neglect it in this paper). Their values are determined so as to reproduce the measured $B(E2)$ values of $1/2^+\to5/2^+$ transitions in $^{17}$F ($64.9\pm1.3\ e^2{\rm fm}^4$) and $^{17}$O ($6.21\pm0.08\ e^2{\rm fm}^4$) [@Aj83]. In our model we calculate them as single-particle transitions in $^{17}$F ($^{16}$O+$p$) and in $^{17}$O ($^{16}$O+$n$). The resultant values are $e^{(p)}_{\rm pol}=0.098e$ and $e^{(n)}_{\rm pol}=0.40e$, which are close to the values given in Ref. [@SMA04].
The $B(E2)$ value from the 3$^+$ state to the 1$^+$ ground state is then computed as, $$B(E2, 3^+\to 1^+)=
\frac{1}{7}\,
\left|\langle \Psi_{J=1}\|Q_2\|\Psi_{J=3}\rangle\right|^2,$$ where $\langle \Psi_{J=1}\|Q_2\|\Psi_{J=3}\rangle$ is the reduced matrix element. We will compare this with the corresponding value for the $^{19}_{~\Lambda}$F nucleus, that is, $$\begin{aligned}
&&\frac{1}{7}\,
\left|\langle \Psi_{I=1}\|Q_2\|\Psi_{I=3}\rangle\right|^2 \nonumber \\
&=&\frac{1}{8}\,\left|\langle [\Phi_{I_c}\Psi_{I=1}]^{J=3/2}\|Q_2\|
[\Phi_{I_c}\Psi_{I=3}]^{J=7/2}\rangle\right|^2,
\label{eq:BE2}
\\
&=&
\frac{1}{6}\,
\left|\langle [\Phi_{I_c}\Psi_{I=1}]^{J=3/2}\|Q_2\|
[\Phi_{I_c}\Psi_{I=3}]^{J=5/2}\rangle\right|^2 \nonumber \\
&&+\frac{1}{6}\,\left|\langle [\Phi_{I_c}\Psi_{I=1}]^{J=1/2}\|Q_2\|
[\Phi_{I_c}\Psi_{I=3}]^{J=5/2}\rangle\right|^2,
\label{eq:BE2-2}\end{aligned}$$ which is valid in the weak coupling limit [@Ta01; @HiKMM99; @IsKDO11sc; @IsKDO12] (see Appendix B for the derivation).
\[sec:model\]
[RESULTS AND DISCUSSION]{}
We now numerically solve the three-body Hamiltonians. Because $^{18}$F is a well-bound nucleus, the cut-off does not have to be high, and we use $E_{\rm cut}=10$ MeV. We have confirmed that the result does not drastically change even if we use a larger value of the cut-off energy, $E_{\rm cut}=50$ MeV. In particular, the ratio ($\approx 0.96$) of the $B(E2)$ value for $^{19}_{\ \Lambda}$F to that for $^{18}$F is quite stable against the cut-off energy. We fit the parameters in the proton-neutron pairing interaction, $x_t$ and $\alpha_t$, to the energy of the ground state ($-9.75$ MeV) and that of the first excited state ($-8.81$ MeV) of $^{18}$F. Their values are $x_t=-1.239$ and $\alpha_t=0.6628$ for $E_{\rm cut}=10$ MeV. We use a box size of $R_{\rm box}=30$ fm.
The obtained level schemes of $^{18}$F and $^{19}_{\ \Lambda}$F as well as the $B(E2)$ values are shown in Fig. \[fig:lv-s\]. The $B(E2,3^+\to1^+)$ value is reduced, which indicates that the nucleus shrinks due to the attraction of the $\Lambda$. In fact, as shown in Table \[tb:rms\], the root mean square (rms) distance between the core and the center-of-mass of the two valence nucleons, $\langle r^2_{C-pn} \rangle^{1/2}$, and that between the proton and the neutron, $\langle r^2_{p-n} \rangle^{1/2}$, slightly decrease upon adding the $\Lambda$.
To make the shrinkage effect more visible, we next show the two-particle density. The two-particle density $\rho_2({\bf r}_p,{\bf r}_n)$ is defined by $$\begin{aligned}
&&\rho_2({\mbox{\boldmath $r$}}_p,{\mbox{\boldmath $r$}}_n) \nonumber\\
&&=\sum_{\sigma_p\sigma_n}
\langle{\mbox{\boldmath $r$}}_p\sigma_p,{\mbox{\boldmath $r$}}_n\sigma_n|\Psi_{IM}\rangle
\langle\Psi_{IM}|{\mbox{\boldmath $r$}}_p\sigma_p,{\mbox{\boldmath $r$}}_n\sigma_n\rangle,
\label{eq:rho2}\end{aligned}$$ where $\sigma_p$ and $\sigma_n$ are the spin coordinates of the proton and the neutron, respectively. Setting $\hat{{\mbox{\boldmath $r$}}}_p=0$, the density is normalized as $$\begin{aligned}
&&
\int_0^{\infty}4\pi r_p^2dr_p\int_0^{\infty}r_n^2dr_n \nonumber\\
&\times&\int_0^{\pi}2\pi\sin\theta_{pn}d\theta_{pn}\ \rho_2(r_p,r_n,\theta_{pn})=1,
$$ where $\theta_{pn}=\theta_n$ is the angle between the proton and the neutron. In Fig. \[fig:density1\] we show the ground-state density distributions of $^{18}$F (the upper panel) and $^{19}_{\ \Lambda}$F (the lower panel) as a function of $r=r_p=r_n$ and $\theta_{pn}$. They are weighted by a factor of $8\pi^2r^4\sin{\theta_{pn}}$. The distribution moves slightly inward after adding a $\Lambda$ particle. To see the change more clearly, we show in Fig. \[fig:density-diff\] the difference of the two-particle densities, $\rho_2(^{19}_{\ \Lambda}{\rm F})-\rho_2(^{18}{\rm F})$, for both the $1^+$ and the $3^+$ states. One can now clearly see that the density distribution is pulled toward the origin by the additional attraction caused by the $\Lambda$ particle both for the $1^+$ and the $3^+$ states.
[Fig. \[fig:lv-s\]: Level schemes of the $1^+$ and $3^+$ states of $^{18}$F and of the $1^+\otimes\Lambda_{s_{1/2}}$ and $3^+\otimes\Lambda_{s_{1/2}}$ states of $^{19}_{\ \Lambda}$F. The excitation energies are 0.94 MeV and 0.66 MeV, and the corresponding $B(E2,3^+\to1^+)$ values are $15.8\ e^2{\rm fm}^4$ and $15.2\ e^2{\rm fm}^4$, respectively.]
------------------------------- ---------------------- ----------------------------------- ---------------------------------- --------------- ----------
$\langle r^2_{C-pn}\rangle^{1/2}$ $\langle r^2_{p-n}\rangle^{1/2}$ $\theta_{pn}$ $P(S=1)$
(fm) (fm) (deg) (%)
$1^+$ $^{18}$F 2.72 5.38 89.6 58.6
$1^+\otimes\Lambda_{s_{1/2}}$ $^{19}_{\ \Lambda}$F 2.70 5.33 89.5 58.4
$3^+$ $^{18}$F 2.71 5.42 84.9 85.0
$3^+\otimes\Lambda_{s_{1/2}}$ $^{19}_{\ \Lambda}$F 2.68 5.35 84.7 85.9
------------------------------- ---------------------- ----------------------------------- ---------------------------------- --------------- ----------
: The core-pn and p-n rms distances, the opening angle between the valence nucleons, and the probability for the spin-triplet component for the ground and the first excited states of $^{18}$F and $^{19}_{\ \Lambda}$F.
\[tb:rms\]
Let us next discuss the change in excitation energy. As shown in Fig. 1, it is decreased by the addition of the $\Lambda$, similar to the case of $^{6}$Li and $^{7}_{\Lambda}$Li. In order to clarify the mechanism of this reduction we estimate the energy gain of each valence configuration, treating $V_{\Lambda}(r_p)+V_{\Lambda}(r_n)=\Delta V$ in first-order perturbation theory: $$\begin{aligned}
\Delta E_I&=&\langle\Psi^{(IM_I)}_{^{18}{\rm F}}|\Delta V|\Psi^{(IM_I)}_{^{18}{\rm F}}\rangle \nonumber\\
&=&\sum_{j_{\alpha}\ell_{\alpha}}\sum_{j_{\beta}\ell_{\beta}}
\sum_{n_{\alpha}n_{\alpha'}}\sum_{n_{\beta}n_{\beta'}}
C_{n_{\alpha'}\ell_{\alpha}j_{\alpha},n_{\beta'}\ell_{\beta}j_{\beta}}^{I*}
C_{n_{\alpha}\ell_{\alpha}j_{\alpha},n_{\beta}\ell_{\beta}j_{\beta}}^{I} \nonumber\\
&\times&\biggl[\biggr.
\delta_{n_{\beta'}n_{\beta}}\int_0^{\infty}r_p^2dr_p\
\phi^{(p)}_{n_{\alpha'}\ell_{\alpha}j_{\alpha}}(r_p)^*\,V_{\Lambda}(r_p)
\phi^{(p)}_{n_{\alpha}\ell_{\alpha}j_{\alpha}}(r_p) \nonumber\\
&\biggl. +&\delta_{n_{\alpha'}n_{\alpha}}\int_0^{\infty}r_n^2dr_n\
\phi^{(n)}_{n_{\beta'}\ell_{\beta}j_{\beta}}(r_n)^*\,V_{\Lambda}(r_n)
\phi^{(n)}_{n_{\beta}\ell_{\beta}j_{\beta}}(r_n)\biggr] \nonumber\\
&\equiv&\sum_{j_{\alpha}\ell_{\alpha}j_{\beta}\ell_{\beta}}
\Delta\epsilon^{(I)}_{j_{\alpha}\ell_{\alpha}j_{\beta}\ell_{\beta}},
$$ where $\Delta\epsilon^{(I)}_{j_{\alpha}\ell_{\alpha}j_{\beta}\ell_{\beta}}$ is the contribution of each configuration to the total approximate energy change $\Delta E_I$. We show in the first and second columns of Table \[tb:comp\] the dominant valence configurations and their occupation probabilities. The third and fourth columns of Table \[tb:comp\] give the energy gains of each configuration, $\Delta\epsilon$ and $\Delta\epsilon/P$, respectively, where $P$ is the occupation probability of the corresponding configuration in the $^{18}$F nucleus. Notice that $\Delta\epsilon/P$ is the largest for the $s\otimes s$ configuration, and the second largest for the $s\otimes d$ and $d\otimes s$, because $s$-wave states have more overlap with the $\Lambda$ occupying the $1s_{1/2}$ orbital. In the ground state of $^{18}$F both proton and neutron occupy $s$-wave states with a probability of 11.63 %, while in the excited $3^+$ state one of the two valence nucleons occupies an $s$-wave state with a probability of 55.71 %. Thus the $3^+$ state has a much larger $s$-wave component than the $1^+$ state. Therefore the valence nucleons in the $3^+$ state have more overlap with the $\Lambda$ particle and gain more binding compared to the ground state. In fact, as one can see in Table \[tb:comp\], the probabilities of the configurations containing an $s$-wave state increase when the $\Lambda$ is added.
We have repeated the same calculations by turning off the core-nucleon spin-orbit interaction in Eq. (\[eq:WS\]), which is the origin of the core-deuteron spin-orbit interaction. We have confirmed that the excitation energy still decreases without the spin-orbit interaction. We have also carried out calculations for the ground ($0^+$) and the first excited ($2^+$) levels in $^{18}$O and $^{19}_{\ \Lambda}$O with only the spin-singlet (iso-triplet) channel interaction. In this case the spin-orbit interaction between the core and the two neutrons is absent, and the excitation energy again decreases when a $\Lambda$ particle is added. Therefore we find that the mechanism of the reduction of the excitation energy in $^{18}$F is indeed different from that in the lithium case, where the $LS$ interaction between the core and the deuteron plays an important role.
$1^+$ state
-------------------------------------- ---------- ---------------------- -------------------- ---------
Configuration $\Delta\epsilon$ $\Delta\epsilon/P$
$^{18}$F $^{19}_{\ \Lambda}$F (MeV) (MeV)
$\pi_{d_{5/2}}\otimes \nu_{d_{5/2}}$ 53.69 % 54.29 % $-0.38$ $-0.71$
$\pi_{d_{3/2}}\otimes \nu_{d_{5/2}}$ 15.85 % 15.02 % $-0.10$ $-0.63$
$\pi_{d_{5/2}}\otimes \nu_{d_{3/2}}$ 15.41 % 14.57 % $-0.10$ $-0.65$
$\pi_{s_{1/2}}\otimes \nu_{s_{1/2}}$ 11.63 % 12.76 % $-0.13$ $-1.12$
$3^+$ state
$\pi_{d_{5/2}}\otimes \nu_{d_{5/2}}$ 38.30 % 35.98 % $-0.27$ $-0.70$
$\pi_{s_{1/2}}\otimes \nu_{d_{5/2}}$ 28.30 % 29.64 % $-0.26$ $-0.92$
$\pi_{d_{5/2}}\otimes \nu_{s_{1/2}}$ 27.41 % 28.68 % $-0.26$ $-0.95$
: Dominant configurations and their occupation probabilities for the $1^+$ and $3^+$ states of $^{18}$F and $^{19}_{~\Lambda}$F. The energy gains of each configuration estimated by the first order perturbation theory are also shown in the third and fourth columns, where $P$ is the occupation probability in the $^{18}$F nucleus.
\[tb:comp\]
\[sec:result\]
[SUMMARY]{} We have calculated the energies of the lowest two levels and $E2$ transitions of $^{18}$F and $^{19}_{\ \Lambda}$F using a simple three-body model. It is found that $B(E2,3^+\to1^+)$ is slightly reduced, as is expected from the shrinkage effect of $\Lambda$. We have indeed seen that the distance between the two valence nucleons and the $^{16}$O core decreases after adding a $\Lambda$ particle. We also found that the excitation energy of the $3^+$ state is decreased. We observed that the $3^+$ state has a much larger $s$-wave component than the ground state and thus gains more binding from the coupling with the $\Lambda$ occupying the $1s_{1/2}$ orbital. This leads to the conclusion that the excitation energy of the first core excited state $3^+\otimes \Lambda_{s_{1/2}}$ of $^{19}_{\ \Lambda}$F becomes smaller than the corresponding excitation in $^{18}$F. We have pointed out that the mechanism of this reduction is different from that of $^6$Li and $^7_{\Lambda}$Li, where the core-deuteron spin-orbit interaction plays an important role [@HK11]. This may suggest that information on the wave function of a core nucleus can be obtained using the spectroscopy of $\Lambda$ hypernuclei.
In this paper, we used a spin-independent $N$-$\Lambda$ interaction. To be more realistic and quantitative, it is an interesting future work to employ a spin dependent $N$-$\Lambda$ interaction and explicitly take into account the core excitation. It may also be important to explicitly take into account the tensor correlation between the valence proton and neutron.
\[sec:summary\]
We thank T. Koike, T. Oishi, and H. Tamura for useful discussions. The work of Y. T. was supported by the Japan Society for the Promotion of Science for Young Scientists. This work was also supported by Grant-in-Aid for JSPS Fellows under the program number 24$\cdot$3429 and the Japanese Ministry of Education, Culture, Sports, Science and Technology by Grant-in-Aid for Scientific Research under the program number (C) 22540262.
Matrix elements of $V_{pn}$
===========================
In this appendix we explicitly give an expression for the matrix elements of the proton-neutron pairing interaction $V_{pn}$ given by Eq. (\[eq:pairing\]). They read $$\begin{aligned}
&&\langle\alpha'\beta',IM|V_{pn}|\alpha\beta,IM\rangle \nonumber\\
&=&
\langle\alpha'\beta',IM|
\bigl[\hat{P}_s\delta^{(3)}({\mbox{\boldmath $r$}}_p-{\mbox{\boldmath $r$}}_n)g_s(r_p)+\bigr. \nonumber\\
&&\hspace{1.5cm}\bigl.\hat{P}_t\delta^{(3)}({\mbox{\boldmath $r$}}_p-{\mbox{\boldmath $r$}}_n)g_t(r_p)\bigr]
|\alpha\beta,IM\rangle,
$$ where we have defined $$\begin{aligned}
g_{s}(r)=v_{s}\biggl[1+x_s\biggl(\frac{1}{1+e^{(r-R)/a}}\biggr)^{\alpha_s}\biggr],
$$ and similarly for $g_t(r)$. By rewriting the basis into the $LS$-coupling scheme one obtains
$$\begin{aligned}
&&({\rm the~singlet\ term}) \nonumber\\
&=&
\frac{(-)^{\ell_{\alpha}+j_{\beta}-\ell_{\alpha'}-j_{\beta'}}}{8\pi}
\hat{j}_{\alpha}\hat{j}_{\alpha'}\hat{j}_{\beta}\hat{j}_{\beta'}
\hat{\ell}_{\alpha}\hat{\ell}_{\alpha'}\hat{\ell}_{\beta}\hat{\ell}_{\beta'}
\left\{\begin{array}{ccc}
j_{\alpha} & j_{\beta} & I\\
\ell_{\beta} & \ell_{\alpha} & \frac{1}{2}
\end{array}\right\}
\left\{\begin{array}{ccc}
j_{\alpha'} & j_{\beta'} & I\\
\ell_{\beta'} & \ell_{\alpha'} & \frac{1}{2}
\end{array}\right\}
\left(\begin{array}{ccc}
\ell_{\alpha} & \ell_{\beta} & I\\
0 & 0 & 0
\end{array}\right)
\left(\begin{array}{ccc}
\ell_{\alpha'} & \ell_{\beta'} & I\\
0 & 0 & 0
\end{array}\right)
\nonumber\\
&&\times
\int_0^{\infty}r^2dr\ \phi^{(p)}_{\alpha'}(r)^*\phi^{(n)}_{\beta'}(r)^*
\phi^{(p)}_{\alpha}(r)\phi^{(n)}_{\beta}(r)g_s(r),
$$
and $$\begin{aligned}
&&({\rm the~triplet\ term}) \nonumber\\
&=&
\frac{3}{4\pi}
\hat{j}_{\alpha}\hat{j}_{\alpha'}\hat{j}_{\beta}\hat{j}_{\beta'}
\hat{\ell}_{\alpha}\hat{\ell}_{\alpha'}\hat{\ell}_{\beta}\hat{\ell}_{\beta'}
\sum_{L}\hat{L}^2
\left\{\begin{array}{ccc}
\ell_{\alpha} & \ell_{\beta} & L\\
\frac{1}{2} & \frac{1}{2} & 1\\
j_{\alpha} & j_{\beta} & I
\end{array}\right\}
\left\{\begin{array}{ccc}
\ell_{\alpha'} & \ell_{\beta'} & L\\
\frac{1}{2} & \frac{1}{2} & 1\\
j_{\alpha'} & j_{\beta'} & I
\end{array}\right\}
\left(\begin{array}{ccc}
\ell_{\alpha} & \ell_{\beta} & L\\
0 & 0 & 0
\end{array}\right)
\left(\begin{array}{ccc}
\ell_{\alpha'} & \ell_{\beta'} & L\\
0 & 0 & 0
\end{array}\right) \nonumber\\
&&\times
\int_0^{\infty}r^2dr\ \phi^{(p)}_{\alpha'}(r)^*\phi^{(n)}_{\beta'}(r)^*
\phi^{(p)}_{\alpha}(r)\phi^{(n)}_{\beta}(r)g_t(r),
$$ where $\hat j=\sqrt{2j+1}$. From these equations, it is apparent that for odd (even) $I$ and even (odd) parity states, such as 1$^+$ and 3$^+$, the singlet term always vanishes because $$\left(\begin{array}{ccc}
\ell_{\alpha} & \ell_{\beta} & I\\
0 & 0 & 0
\end{array}\right)=0,$$ for $\ell_\alpha+\ell_\beta+I$ = odd.
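This selection rule can be checked symbolically, for instance with sympy; the quantum numbers below are arbitrary examples.

```python
# The 3j symbol with all projections zero vanishes when l_alpha + l_beta + I is odd.
from sympy.physics.wigner import wigner_3j

for la, lb, I in [(2, 2, 3), (2, 4, 3), (1, 2, 3), (2, 2, 2)]:
    print(la, lb, I, wigner_3j(la, lb, I, 0, 0, 0))
# the first two cases (odd sum) give 0; the last two (even sum) are nonzero
```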
Extraction of the core transition from $B(E2)$ values of a hypernucleus
=======================================================================
We consider a hypernucleus with a $\Lambda$ particle weakly coupled to a core nucleus. In the weak coupling approximation, the wave function for the hypernucleus with angular momentum $J$ and its $z$-component $M$ is given by $$\begin{aligned}
|JM\rangle &=& [|I\rangle\otimes|j_\Lambda\rangle]^{(JM)} \nonumber \\
&=&\sum_{M_I,m_\Lambda}
\langle I M_I j_\Lambda m_\Lambda|JM\rangle |IM_I\rangle |j_\Lambda m_\Lambda\rangle,\end{aligned}$$ where $|IM_I\rangle$ and $|j_\Lambda m_\Lambda\rangle$ are the wave functions for the core nucleus and the $\Lambda$ particle, respectively. Suppose that the operator $\hat{T}_{\lambda\mu}$ for electromagnetic transitions is independent of the $\Lambda$ particle. Then, the square of the reduced matrix element of $\hat{T}_{\lambda\mu}$ between two hypernuclear states reads (see Eq. (7.1.7) in Ref. [@Edmonds] as well as Eq. (6-86) in Ref. [@BM75]), $$\begin{aligned}
|\langle J_f\|T_\lambda\|J_i\rangle |^2
&=& (2J_i+1)(2J_f+1)
\left\{\begin{array}{ccc}
I_f & J_f & j_\Lambda \\
J_i & I_i & \lambda
\end{array}\right\}^2 \nonumber \\
&&\times\langle I_f\|T_\lambda\|I_i\rangle^2.\end{aligned}$$ Notice the relation (see Eq. (6.2.9) in Ref.[@Edmonds]), $$\sum_{J_f} (2J_f+1)
\left\{\begin{array}{ccc}
I_f & J_f & j_\Lambda \\
J_i & I_i & \lambda
\end{array}\right\}^2 =\frac{1}{2I_i+1}.$$ This yields $$\begin{aligned}
\sum_{J_f}B(E\lambda; J_i\to J_f) &=&
\sum_{J_f}\frac{1}{2J_i+1}
|\langle J_f\|T_\lambda\|J_i\rangle |^2 \nonumber\\
&=&
\frac{1}{2I_i+1}\langle I_f\|T_\lambda\|I_i\rangle^2,\end{aligned}$$ which is nothing but the $B(E\lambda)$ value of the core nucleus from the state $I_i$ to the state $I_f$. This proves Eqs. (\[eq:BE2\]) and (\[eq:BE2-2\]) in Sec. II for the specific case of $I_i=3$ and $I_f=1$.
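For the case relevant here ($I_i=3$, $I_f=1$, $j_\Lambda=1/2$, $\lambda=2$), the sum rule can also be verified numerically, e.g. with sympy's `wigner_6j`:

```python
# Numerical check of the 6j sum rule used above: summing (2 J_f + 1) times the
# squared 6j symbol over J_f reproduces 1/(2 I_i + 1) = 1/7 for both J_i.
from sympy import Rational
from sympy.physics.wigner import wigner_6j

def triangle(a, b, c):
    return abs(a - b) <= c <= a + b

I_i, I_f, j_lam, lam = 3, 1, Rational(1, 2), 2
for J_i in (Rational(5, 2), Rational(7, 2)):
    total = 0
    for J_f in (Rational(1, 2), Rational(3, 2)):
        # terms violating a triangle condition vanish; skip them explicitly
        if triangle(I_f, j_lam, J_f) and triangle(J_i, lam, J_f):
            total += (2*J_f + 1) * wigner_6j(I_f, J_f, j_lam, J_i, I_i, lam)**2
    print(J_i, total)      # both print 1/7
```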
[99]{} T. Motoba, H. Band$\bar{\rm o}$, and K. Ikeda, Prog. Theor. Phys. [**70**]{}, 189 (1983).
Myaing Thi Win, K. Hagino, and T. Koike, Phys. Rev. C [**83**]{}, 014301 (2011).
Bing-Nan Lu, En-Guang Zhao, and Shan-Gui Zhou, Phys. Rev. C [**84**]{}, 014328 (2011).
M. Isaka, M. Kimura, A. Doté, and A. Ohnishi, Phys. Rev. C [**83**]{}, 044323 (2011).
Masahiro Isaka, Hiroaki Homma, Masaaki Kimura, Akinobu Doté, and Akira Ohnishi, Phys. Rev. C [**85**]{}, 034303 (2012).
Masahiro Isaka, Masaaki Kimura, Akinobu Doté, and Akira Ohnishi, Phys. Rev. C [**83**]{}, 054304 (2011).
D. Vretenar, W. Poschl, G. A. Lalazissis, and P. Ring, Phys. Rev. C [**57**]{}, 1060(R) (1998).
X.-R. Zhou, A. Polls, H.-J. Schulze, and I. Vidana, Phys. Rev. C [**78**]{}, 054306 (2008).
F. Minato, S. Chiba, and K. Hagino, Nucl. Phys. [**A831**]{}, 150 (2009); F. Minato and S. Chiba, [*ibid*]{}. [**856**]{}, 55 (2011).
J.M. Yao, Z.P. Li, K. Hagino, M. Thi Win, Y. Zhang, and J. Meng, Nucl. Phys. [**A868**]{}, 12 (2011).
F. Minato and K. Hagino, Phys. Rev. C[**85**]{}, 024316 (2012).
O. Hashimoto and H. Tamura, Prog. Part. Nucl. Phys. [**57**]{}, 564 (2006), and the references therein.
K. Tanida [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 1982 (2001). E. Hiyama, M. Kamimura, K. Miyazaki, and T. Motoba, Phys. Rev. C [**59**]{}, 2351 (1999).
M. Ukai [*et al.*]{}, Phys. Rev. C 73, 012501(R) (2006).
H. Akikawa [*et al.*]{}, Phys. Rev. Lett. 88, 082501 (2002).
K. Hagino and T. Koike, Phys. Rev. C [**84**]{}, 064325 (2011).
H. Tamura et al., J-PARC 50 GeV PS proposal E13 (2006), \[http://j-parc/NuclPart/pac\_0606/pdf/p13-Tamura.pdf\].
T. Koike, in Proceedings of Sendai International Symposium on Strangeness in Nuclear and Hadronic System (SENDAI08), edited by K. Maeda et al. (World Scientific, Singapore, 2010), p. 213.
T. Sakuda, S. Nagata, and F. Nemoto, Prog. Theor. Phys. [**56**]{}, 1126 (1976).
M. LeMere, Y.C. Tang, and D.R. Thompson, Phys. Rev. C[**14**]{}, 1715 (1976).
T. Inoue, T. Seve, K.K. Huang, and A. Arima, Nucl. Phys. [**A99**]{}, 305 (1967).
B. Buck, A.C. Merchant, and N. Rowley, Nucl. Phys. [**A327**]{}, 29 (1979).
B. Buck, H. Friedrich, and A.A. Pilt, Nucl. Phys. [**A290**]{}, 205 (1977).
A.C. Merchant, J. of Phys. G[**9**]{}, 1169 (1983).
D. R. Tilley, H. R. Weller, C. M. Cheves, Nucl. Phys. [**A564**]{}, 1 (1993).
G. F. Bertsch and H. Esbensen, Ann. Phys. (NY) [**209**]{}, 327 (1991).
H. Esbensen, G. F. Bertsch, and K. Hencken, Phys. Rev. C [**56**]{}, 3054 (1997).
K. Hagino, H. Sagawa, J. Carbonell, and P. Schuck, Phys. Rev. Lett. [**99**]{} 022506(2007).
T. Oishi, K. Hagino, and H. Sagawa, Phys. Rev. C [**82**]{}, 024315 (2010).
L. Koester and W. Nistler, Z. Phys. A [**272**]{}, 189 (1975).
F. Ajzenberg-Selove, Nucl. Phys. [**A392**]{}, 1 (1983).
Y. Suzuki, H. Matsumura, and B. Abu-Ibrahim, Phys. Rev. C [**70**]{}, 051302(R) (2004).
A.R. Edmonds, [*Angular Momentum in Quantum Mechanics*]{}, (Princeton University Press, Princeton, New Jersey, 1960).
A. Bohr and B.R. Mottelson, [*Nuclear Structure*]{}, (W.A. Benjamin, Reading, MA, 1975), Vol. II.
---
abstract: 'We discuss a recent analysis of $t\bar{t}$ threshold effects and its implications for the determination of electroweak parameters. We show that the new formulation, when applied to the $\rho$ parameter for $m_t\approx158$ GeV, gives a result of similar magnitude to those previously obtained. In fact, it is quite close to our “resonance” calculation. We also present a simple estimate of the size of the threshold effects based on an elementary Bohr-atom model of $t\bar{t}$ resonances.'
author:
- |
Bernd A. Kniehl\
II. Institut für Theoretische Physik, Universität Hamburg\
Luruper Chaussee 149, 22761 Hamburg, Germany\
\
Alberto Sirlin\
Department of Physics, New York University\
4 Washington Place, New York, NY 10003, USA
title: |
DESY 93-194, ISSN 0418-9833
NYU-Th-93/12/01
hep-ph/9401243
December 1993
Observations Concerning the Magnitude of $t\bar{t}$ Threshold Effects on Electroweak Parameters
---
Introduction
============
The analysis of threshold effects involving heavy quarks and their contribution to the determination of electroweak parameters has been the subject of a number of studies in the past [@fad; @kwo; @str; @kue; @kni]. Very recently, F.J. Ynduráin has discussed, in the framework of the Coulombic approximation, $t\bar{t}$ threshold effects on the renormalized vacuum-polarization function $\Pi(s)-\Pi(0)$ associated with conserved vector currents [@ynd]. Here $\Pi(s)$ is the unrenormalized function defined according to $\Pi_{\mu\nu}^V(q)=(q^2g_{\mu\nu}-q_\mu q_\nu)\Pi(q^2)$, with $s=q^2$. For $m_t=\sqrt3m_Z\approx158$ GeV and $s=m_Z^2$, the author of Ref. [@ynd] finds that the threshold effects are significantly smaller than the perturbative ${{\cal O}}(\alpha_s)$ calculation. From this observation he concludes that threshold effects are generally small for $s\ll4m_t^2$ and that the “large” results reported in Ref. [@kni] concerning their contribution to electroweak parameters are not supported by his “detailed, rigorous calculation.” However, $t\bar t$ threshold effects influence electroweak parameters chiefly through the $\rho$ parameter. The function $\Pi(s)-\Pi(0)$, although very important in its own right, has very little to do with the effects discussed in Ref. [@kni]. In fact, it is well known that heavy particles of mass $m^2\gg s$ decouple in this amplitude. For this reason, it should be obvious that the leading effects discussed in Ref. [@kni] do not arise from this amplitude. Thus, conclusions drawn on the work of Ref. [@kni] from the discussion of $\Pi(s)-\Pi(0)$ are without foundation. In order to show in the simplest possible way what the correct conclusions ought to be, in Section 2 we apply the formulation of Ref. [@ynd] to the study of leading threshold contributions to the $\rho$ parameter for $m_t \approx 158$ GeV and other values of $m_t$. In principle, this requires only minor modifications of the relevant formulae of Ref. [@ynd], mainly in the prefactors. However, for reasons explained in that Section, we find it necessary to re-evaluate the threshold corrections of that paper. We then find that, contrary to the conclusions of Ref. [@ynd], this leads to results similar in magnitude to those reported in Ref. [@kni]. In fact, the approach of Ref. [@ynd] gives threshold corrections to the $\rho$ parameter quite close to our own “resonance” calculation. In order to make more transparent the size of these leading threshold corrections relative to the perturbative ${{\cal O}}(\alpha_s)$ calculations, in Section 3 we present a simple estimate based on an elementary Bohr-atom model of toponia. We also briefly comment on the shifts induced in electroweak parameters and the magnitude of ${{\cal O}}(\alpha_s^2)$ corrections.
Threshold corrections to the $\rho$ parameter
=============================================
For large $m_t$, the dominant threshold effects discussed in Ref. [@kni] can be related to corrections to the $\rho$ parameter. As the current $\bar\psi_t\gamma^\mu\psi_t$ is conserved and the $\rho$ parameter is defined at $s=0$, it is clear that $t\bar{t}$ threshold effects arise in this case from the contributions of the axial-vector current. As explained in Ref. [@kni], on account of the Ward identities such contributions involve ${\mathop{\Im m}\nolimits}\lambda^A(s,m_t,m_t)$, where $\lambda^A$ is a longitudinal part of the axial-vector polarization tensor $\Pi_{\mu\nu}^A$. Furthermore, for non-relativistic, spin-independent QCD potentials, $-{\mathop{\Im m}\nolimits}\lambda^A$ can be identified to good approximation with ${\mathop{\Im m}\nolimits}\Pi$ (in Ref. [@kni], $\Pi$ is called $\Pi^V(s)/s$). In particular, we note that both amplitudes receive contributions from $nS$ states. One then finds that, in the formulation Ref. [@kni], the leading threshold correction to the $\rho$ parameter is given by (cf. Eqs. (5.1, 5.2a) of Ref. [@kni]) $$\label{one}
\delta(\Delta\rho)_{thr}=-{G_F\over2\pi\sqrt2}\int ds^\prime\,
{\mathop{\Im m}\nolimits}\Pi_{thr}(s^\prime).$$ Here ${\mathop{\Im m}\nolimits}\Pi_{thr}(s')$ denotes contributions from the threshold region not taken into account in the usual perturbative ${{\cal O}}(\alpha_s)$ calculation.
Our aim is to employ the analysis of Ref. [@ynd] to calculate Eq. (\[one\]) and then to compare the answer with our results. In the formulation of that paper, based on the Coulombic approximation, there are two contributions to ${\mathop{\Im m}\nolimits}\Pi_{thr}(s^\prime)$. One of them arises from the toponium resonances and is called ${\mathop{\Im m}\nolimits}\Pi_{pole}(s^\prime)$. The other represents a summation of $(\alpha_s/v)^n\ (n=1,2,\ldots)$ terms, integrated over a small range above threshold, after subtracting corresponding ${{\cal O}}(\alpha_s)$ contributions. The first one can be obtained from the expression [@ynd] $$\label{two}
{\mathop{\Im m}\nolimits}\Pi_{pole}(s)=N_c\sum_n\delta(s-M_n^2)
{\left|\tilde R_{n0}^{(0)}(0)\right|^2\over M_n}
\left[1+{3\beta_0\alpha_s\over2\pi}\left(\ln{n\mu\over C_F\alpha_s m_t}
+\psi(n+1)-1\right)\right],$$ where $N_c=3$, $C_F=(N_c^2-1)/(2N_c)=4/3$, $n_f=5$, $\beta_0=11-2n_f/3=23/3$, $M_n$ is the mass of the $nS$ toponium resonance in a Coulombic potential, $\left|\tilde R_{n0}^{(0)}(0)\right|^2=C_F^3\tilde\alpha_s^3(\mu)m_t^3/(2n^3)$ is the square of its radial wave function at the origin, and $\tilde\alpha_s(\mu)=\alpha_s(\mu)(1+b\alpha_s(\mu)/\pi)$, with $b=\gamma_E(11N_c-2n_f)/6+(31N_c-10n_f)/36\approx3.407$ for toponia [@ren]. Inserting Eq. (\[two\]) into Eq. (\[one\]), we obtain $$\begin{aligned}
\label{three}
\delta(\Delta\rho)_{pole}&=&-x_t\pi\zeta(3)C_F^3\tilde\alpha_s^3(\mu)
\left[1+{3\beta_0\alpha_s\over2\pi}\left(\ln{\mu\over C_F\alpha_sm_t}
-\gamma_E\right)\right. \nonumber\\
& &+\left.{3\beta_0\alpha_s\over2\pi\zeta(3)}\sum_{n=2}^\infty{1\over n^3}
\left(\ln n+\sum_{k=2}^{n}{1\over k}\right)\right],\end{aligned}$$ where $x_t=(N_cG_Fm_t^2/8\pi^2\sqrt2\,)$ is the Veltman correction to the $\rho$ parameter [@vel]. In the numerical evaluation of the threshold corrections to $\Pi(s)-\Pi(0)$ reported in Ref. [@ynd], $\mu$ is chosen to be $m_Z$, $m_t=\sqrt3m_Z$ is assumed, and $\alpha_s(m_Z)=0.115\pm0.01$ is taken. Unless stated otherwise, we shall adopt these values in the following. Furthermore, $(15/16)\pi\zeta(3)C_F^3\tilde\alpha_s^3(\mu)$ is found to equal $1.81\times10^{-2}$, the summation $\sum_{n=2}^\infty$ is neglected, and $\{1+(3\beta_0\alpha_s/2\pi)[\ln(\mu/C_F\alpha_sm_t)-\gamma_E]\}$ is given as 1.08. It is apparent that the numerical value given in Ref. [@ynd] for the last factor is too low. The correct value is 1.315. Inclusion of the neglected sum raises the expression between square brackets in Eq. (\[three\]) to 1.437, instead of 1.08 [@pol]. This leads to $$\begin{aligned}
\delta(\Delta\rho)_{pole}&=&-{16\over15}\,1.816\times10^{-2}\cdot1.437\,x_t
\nonumber\\
\label{four}
&=&-0.0278\,x_t.\end{aligned}$$
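As a cross-check of these numbers (not part of either original analysis), the bracket in Eq. (\[three\]) and the coefficient of $x_t$ in Eq. (\[four\]) can be reproduced with a few lines of code; the value taken for $m_Z$ is an assumption, all other inputs are as quoted above.

```python
# Cross-check of Eqs. (3)-(4): bracket ~ 1.437 and coefficient of x_t ~ -0.0278
# for mu = m_Z, m_t = sqrt(3) m_Z, alpha_s(m_Z) = 0.115.
import math

m_Z = 91.19                                  # GeV (assumed value)
m_t = math.sqrt(3.0) * m_Z
alpha_s, C_F, beta0, b = 0.115, 4.0/3.0, 23.0/3.0, 3.407
gamma_E, zeta3 = 0.5772156649, 1.2020569032
alpha_tilde = alpha_s * (1.0 + b * alpha_s / math.pi)

prefactor = math.pi * zeta3 * C_F**3 * alpha_tilde**3          # = (16/15) * 1.816e-2
bracket = 1.0 + 3.0*beta0*alpha_s/(2.0*math.pi) * (math.log(m_Z/(C_F*alpha_s*m_t)) - gamma_E)

series, H = 0.0, 0.0
for n in range(2, 20001):
    H += 1.0 / n                                               # sum_{k=2}^{n} 1/k
    series += (math.log(n) + H) / n**3
bracket += 3.0*beta0*alpha_s/(2.0*math.pi*zeta3) * series

print(bracket)               # ~1.44  (1.437 in the text)
print(-prefactor * bracket)  # ~-0.0278, the coefficient of x_t in Eq. (4)
```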
The contribution from the small range above threshold can be gleaned from Ref. [@ynd]. Using $v=(1-4m_t^2/s)^{1/2}$ as integration variable ($v$ is the top-quark velocity in the center-of-mass frame) and approximating $(1-v^2)^{-2}\approx1$ in the integrand, the contribution to $\Delta\rho$ equals $-(16/15)\delta_{thr}x_t$, where $\delta_{thr}$ is a quantity studied in Ref. [@ynd]. For $s\ll4m_t^2$, $\delta_{thr}$ can be written as $$\begin{aligned}
\label{fivea}
\delta_{thr}&=&{15\over4}\int_0^{v_0}dv\,v
\left[{B(v)\over1-e^{-B(v)/v}}-v-{\pi\over2}C_F\alpha_s(m_t)\right],\\
\label{fiveb}
B(v)&=&\pi C_F\alpha_s(\mu)\left\{1+{\alpha_s\over\pi}\left[b
+{\beta_0\over2}\left(\ln{C_F\alpha_s\mu\over4m_tv^2}-1\right)\right]\right\},\end{aligned}$$ where an overall factor $(1-v^2/3)$ has been omitted under the integral. In Eq. (\[fivea\]), the term involving $B(v)$ represents a summation of $(\pi C_F\alpha_s/v)^n$ contributions valid for large values of this parameter, while the last two correspond to the subtraction of the perturbative calculation up to ${{\cal O}}(\alpha_s)$ in the small-$v$ limit. We have evaluated the latter at $m_t$, as this is demonstrably the proper scale to be employed in the perturbative calculation [@kni]. We note that the integrand of Eq. (\[fivea\]) is renormalization-group invariant through ${{\cal O}}(\alpha_s^2)$ [@sca]. In Ref. [@ynd], the value $v_0=\pi C_F\alpha_s(m_t)/\sqrt2\approx0.314$ is chosen, the exponential and terms of higher order in $\alpha_s$ are neglected, and the answer given as $$\label{six}
\delta_{thr}={15\over16}\pi^3C_F^3\alpha_s^2(m_t)
\left({1\over2}\alpha_s(\mu)-{\sqrt2\over3}\alpha_s(m_t)
+{b\over\pi}\alpha_s^2(\mu)
+{\beta_0\over2\pi}\alpha_s^2(\mu)\ln{\mu\over2\pi^2C_F\alpha_sm_t}\right).$$ Although the analytic summation of $(\pi C_F\alpha_s/v)^n$ terms is theoretically interesting, there are unfortunately a number of problems in the evaluation of $\delta_{thr}$ carried out in Ref. [@ynd]:
1. Evaluation of Eq. (\[six\]) as it stands gives a negative result, $\delta_{thr}=-3.82\times10^{-3}$ [@wro], which obviously contradicts the well-known fact that multi-gluon exchanges lead to an enhancement of the $t\bar t$ excitation curve.
2. Equation (\[six\]) is not renormalization-group invariant, as the coefficient of $\alpha_s(\mu)$ does not match correctly that of $\ln\mu$. Subject to the approximations explained before, the correct, renormalization-group invariant expression is obtained by including an additional term $[\alpha_s(\mu)-\alpha_s(m_t)]/2$ within the parentheses of Eq. (\[six\]). We note in passing that this additional term may be traced to a change of scale in the last term of Eq. (\[fivea\]), from the arbitrary value $\mu$ to the proper physical choice $m_t$. For $\mu=m_Z$, the value of the corrected expression is $\delta_{thr}=-3.83\times10^{-4}$, i.e., essentially zero, and very different from the value reported in Ref. [@ynd].
3. As we have a near cancellation of relatively large contributions and, moreover, the result should be positive, it is clear that, for the chosen value of $v_0$, the neglect of the exponential in Eq. (\[fivea\]) is not justified. In fact, evaluating numerically Eq. (\[fivea\]) with the exponential included, we find $\delta_{thr}=1.55\times10^{-2}$. This value is positive, as it should be, and much larger than the answer obtained without the exponential. Actually, it is 2.9 times larger than the result reported in Ref. [@ynd].
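For illustration, the numerical evaluation mentioned in the last item can be sketched as follows. The value $\alpha_s(m_t)\approx0.106$ is an assumed input, so the sketch reproduces the quoted $\delta_{thr}\approx1.55\times10^{-2}$ only approximately.

```python
# Sketch of the numerical evaluation of Eqs. (5a)-(5b) with the exponential
# kept, for mu = m_Z, alpha_s(m_Z) = 0.115 and an assumed alpha_s(m_t) = 0.106.
import math
from scipy.integrate import quad

m_Z, C_F, beta0, b = 91.19, 4.0/3.0, 23.0/3.0, 3.407
m_t = math.sqrt(3.0) * m_Z
a_mu, a_mt = 0.115, 0.106                 # alpha_s(m_Z), alpha_s(m_t) (assumed)

def B(v):
    log_term = math.log(C_F * a_mu * m_Z / (4.0 * m_t * v * v))
    return math.pi*C_F*a_mu * (1.0 + a_mu/math.pi * (b + 0.5*beta0*(log_term - 1.0)))

def integrand(v):
    resummed = B(v) / (1.0 - math.exp(-B(v) / v))
    return v * (resummed - v - 0.5*math.pi*C_F*a_mt)

v0 = math.pi * C_F * a_mt / math.sqrt(2.0)
delta_thr = 15.0/4.0 * quad(integrand, 1e-6, v0)[0]
print(v0, delta_thr)                      # v0 ~ 0.31, delta_thr ~ 1.5e-2
```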
We also note that the integrand of Eq. (\[fivea\]) vanishes at $1.002\,v_0$, where $v_0$ is defined above Eq. (\[six\]). Thus, $v_0$ is a reasonable value to use as the upper limit of integration because at that point the resummed series and the perturbative ${{\cal O}}(\alpha_s)$ contributions nearly coincide; evaluation of the integrals up to the point where the integrand actually vanishes leads to negligible changes. On the other hand, a possible weakness of the method is that $v_0$ is rather large and for such values of $v$ it is not clear that the resummed expression is valid. Nevertheless, for our present purpose, which is the evaluation of the contribution to $\Delta\rho$ according to the prescriptions of Ref. [@ynd], we use the above value of $v_0$. For the same reason, we use $\mu=m_Z$, although this is not a characteristic scale in connection with the $\rho$ parameter; using the appropriate value, $\mu=m_t$, would make the radiative correction and the overall result slightly smaller. However, we include the effect of an additional overall factor $(1-v^2/3)/(1-v^2)^2$, which should be appended to the integrand of Eq. (\[fivea\]) in the case of $\Delta\rho$; it increases the result by about 4.3%. We then find that the contribution to $\Delta\rho$ from the range $0\le v\le v_0$ above threshold is $-(16/15)1.55\times10^{-2}\cdot1.043\,x_t=-0.0172\,x_t$. Combining this result with Eq. (\[four\]), we find that, after correcting the errors discussed above, the formulation of Ref. [@ynd] leads to $\delta(\Delta\rho)=-0.0450\,x_t$. For comparative purposes, we rescale this result to the case $\alpha_s(m_Z)=0.118$, the value used in our calculations, and obtain, for $m_t=\sqrt3m_Z$, $$\label{seven}
\delta(\Delta\rho)_{thr}= -0.0486\,x_t
\qquad\mbox{(Ref.~\cite{ynd})}.$$
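The numbers above follow from elementary arithmetic on the quantities quoted in the text and footnotes; for orientation, a few lines of Python reproduce them (a minimal check, not a re-evaluation of the integrals themselves):

```python
# Values quoted above: delta_thr from Eq. (fivea) with the exponential kept, and the
# value 5.4e-3 reported in Ref. [ynd] (see the corresponding footnote below).
delta_thr = 1.55e-2
print(round(delta_thr / 5.4e-3, 1))      # 2.9: ratio to the value of Ref. [ynd]

# Contribution to Delta rho from 0 <= v <= v_0, with the factor (1-v^2/3)/(1-v^2)^2 (+4.3%)
above_threshold = -(16.0 / 15.0) * delta_thr * 1.043
print(round(above_threshold, 4))         # -0.0172 (in units of x_t)
```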
In our own work we have applied two different methods to evaluate the imaginary parts near threshold. The first one is the resonance approach of Ref. [@kue], which assumes the existence of narrow, discrete $t\bar t$ bound states characterized by $R_n(0)$ and $M_n$. Moreover, in Ref. [@kue] a specific interpolation procedure is developed to implement the matching of the higher resonances and the continuum evaluated perturbatively to ${{\cal O}}(\alpha_s)$. The second one is the Green-function (G.F.) approach of Ref. [@str], which takes into account the smearing of the resonances by the weak decay of their constituents and leads to a continuous excitation curve. Both approaches make use of realistic QCD potentials, the Richardson and the Igi-Ono potentials, which accurately reproduce charmonium and bottomonium spectroscopy and are expected to describe toponia well, too. These QCD potentials contain a term linear in the inter-quark distance, $r$, to account for the confinement of color. Detailed studies reveal that the shape of the Green function is not very sensitive to the long-distance behaviour of the potential [@exc]. This may be understood by observing that the top quarks decay before they are able to reach large distances. The rapid weak decay of the top quarks, which causes the screening of the long-distance effects, is properly taken into account in the Green-function approach of Ref. [@kni], while it is not implemented in the resonance approaches of Refs. [@kni] and [@ynd]. In the latter case, it is clearly more consistent theoretically and more realistic phenomenologically to keep the linear term of the potential, as is done in Ref. [@kni] but not in Ref. [@ynd]. On the other hand, both resonance and Green-function approaches of Ref. [@kni] effectively resum the contributions of soft multi-gluon exchanges in the ladder approximation [@str; @pri]. This automatically includes the final-state interactions emphasized in Ref. [@ynd]. However, we stress that all these methods are based on a non-relativistic approximation. There are additional contributions due to the exchange of hard gluons, which give rise to sizeable reduction factors [@bar], e.g., $(1-3C_F\alpha_s/\pi)$ in the case of $\Delta\rho$ [@kni].
Because of the more complicated potentials used in our two approaches, we have to rely on numerical computations. For $m_t=\sqrt3m_Z$ and $\alpha_s(m_Z)=0.118 $, we find $$\begin{aligned}
\label{eighta}
\delta(\Delta\rho)_{thr}&=&-(0.034\pm0.010)x_t
\qquad\mbox{(G.F.)},\\
\label{eightb}
\delta(\Delta\rho)_{thr}&=&-(0.042\pm0.013)x_t
\qquad\mbox{(res.)},\end{aligned}$$ where we have included the 30% error estimate given in Ref. [@kni]. The corresponding perturbative ${{\cal O}}(\alpha_s)$ contribution [@djo] is, for $m_t=\sqrt3m_Z$, $$\begin{aligned}
\delta(\Delta\rho)_{\alpha_s}&=&-{2\alpha_s(m_t)\over3\pi}
\left({\pi^2\over3}+1\right)x_t \nonumber\\
\label{nine}
&=&-0.0991\,x_t.\end{aligned}$$ Unlike $\delta(\Delta\rho)_{thr}$, $\delta(\Delta\rho)_{\alpha_s}$ obtains important contributions arising from the non-conserved vector and axial-vector currents associated with the $W$-boson vacuum-polarization function.
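For orientation, the numerical coefficient in Eq. (\[nine\]) can be checked in one line, using $\alpha_s(m_t)=0.109$, the value quoted later in the text for $m_t=\sqrt3m_Z$ and $\alpha_s(m_Z)=0.118$:

```python
from math import pi
alpha_s_mt = 0.109
print(round(-2 * alpha_s_mt / (3 * pi) * (pi**2 / 3 + 1), 4))   # -0.0992
```

The small difference from the quoted $-0.0991$ simply reflects the rounding of $\alpha_s(m_t)$.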
It is apparent that the result for $\delta(\Delta\rho)_{thr}$ obtained in the formulation of Ref. [@ynd] (Eq. (\[seven\])) is of the same magnitude as our two evaluations (Eqs. (\[eighta\]), (\[eightb\])). It amounts to 49% of the $\delta(\Delta\rho)_{\alpha_s}$ correction, while our results of Eqs. (\[eighta\]), (\[eightb\]) correspond to 34% and 42%, respectively. Thus, it is somewhat larger than our resonance calculation and significantly larger than our G.F. result. Part of the difference is due to the fact that we have included a hard-gluon correction [@bar], $(1-3C_F\alpha_s(M_n)/\pi)$ (cf. Eqs. (4.2b, 4.3d) of Ref. [@kni]), which has not been incorporated into Eqs. (\[three\]), (\[four\]). Note that Eq. (\[fivea\]) must not be multiplied by this factor, since only non-relativistic terms are subtracted in the integrand of that equation. If this correction is applied to Eqs. (\[three\]), (\[four\]), $\delta(\Delta\rho)_{pole}$ becomes $-0.0245\,x_t$ and, rescaled to $\alpha_s(m_Z)=0.118$, the overall result in the formulation of Ref. [@ynd] is $-0.0450\,x_t$, instead of Eq. (\[seven\]). This is quite close to our resonance calculation (Eq. \[eightb\]). For the reasons explained in Ref. [@kni] (see also the discussion), for values of $m_t\ge130$ GeV we have expressed a preference for the G.F. approach. On the other hand, the three calculations amount to only 3.4%–4.5% of the ${{\cal O}}(\alpha)$ contribution, $x_t$.
[TABLE I. $t\bar t$ threshold effects on $\Delta\rho$ relative to the Veltman correction, $-100\times\delta(\Delta\rho)_{thr}/x_t$, as a function of $m_t$. The calculation based on Ref. [@ynd] (total \[6\]) is compared with our previous resonance (res. \[5\]) and G.F. (G.F. \[5\]) results. For completeness, the contributions from below (pole \[6\]) and above (thr. \[6\]) threshold are also displayed separately in the first case. The hard-gluon correction is included in the three calculations and the input value $\alpha_s(m_Z)=0.118$ is used.]{}\
$m_t$ \[GeV\] pole \[6\] thr. \[6\] total \[6\] res. \[5\] G.F. \[5\]
--------------- ------------ ------------ ------------- ------------ ------------
120.0 2.84 1.99 4.83 4.81 3.58
140.0 2.73 1.92 4.65 4.43 3.43
157.9 2.64 1.86 4.50 4.17 3.36
180.0 2.55 1.79 4.35 3.91 3.35
200.0 2.47 1.75 4.22 3.71 3.41
220.0 2.41 1.70 4.11 3.53 3.51
The above results have been obtained for $m_t=\sqrt3m_Z$, the value employed in Ref. [@ynd]. In Table I, we compare the calculation of $\Delta\rho$ using the formulation of Ref. [@ynd] with our own resonance and G.F. evaluations, over the range 120 GeV${}\le m_t\le220$ GeV. We have checked that the numbers given in the third and fourth columns of Table I do not change when we identify $v_0$ in Eq. (\[fivea\]) with the zero of the integrand, i.e., the point where the resummation of $(\pi C_F\alpha_s/v)^n$ terms matches the perturbative expression. It is apparent from Table I that the general features described above hold over the large range 120 GeV${}\le m_t\le220$ GeV. In fact, the three calculations are similar in magnitude. Moreover, the formulation of Ref. [@ynd] gives results somewhat larger but quite close to our resonance calculation, the agreement being particularly good at low $m_t$ values, where the resonance picture is expected to work best.
[TABLE II. Perturbative ${{\cal O}}(\alpha_s)$ and $t\bar t$ threshold [@ynd] contributions to $\Delta\rho$ relative to the Veltman correction, $-100\times\delta(\Delta\rho)_{\alpha_s}/x_t$ and $-100\times\delta(\Delta\rho)_{thr}/x_t$, for $m_t=\sqrt3m_Z$ as a function of $\mu_{pert}$. The input value $\alpha_s(m_Z)=0.118$ is used.]{}\
$\mu_{pert}$ pert. ${{\cal O}}(\alpha_s)$ total \[6\] sum
-------------- ------------------------------ ------------- -------
$m_t/2$ 10.96 4.05 15.00
$m_t$ 9.91 4.50 14.42
$2m_t$ 8.87 5.18 14.05
In order to illustrate the stability of the results with respect to a change of the scale, $\mu_{pert}$, employed in the perturbative ${{\cal O}}(\alpha_s)$ calculations, in Table II we show the values of $\delta(\Delta\rho)_{\alpha_s}$ (cf. Eq. (\[nine\])), $\delta(\Delta\rho)_{thr}$, and their sum for $m_t=\sqrt3m_Z$ and $\mu_{pert}=m_t/2,m_t,2m_t$. Here $\delta(\Delta\rho)_{thr}$ is calculated on the basis of Eqs. (\[three\]) and (\[fivea\]), with $\alpha_s(m_t)$ replaced by $\alpha_s(\mu_{pert})$ in the last term of Eq. (\[fivea\]), and $v_0$ chosen as the zero of the integrand (the factor $(1-v^2/3)/(1-v^2)^2$ discussed before Eq. (\[seven\]) is also appended). We see that there are variations in $\delta(\Delta\rho)_{thr}$ of $-10\%$ to 15%, which are not particularly large. Interestingly, they partly compensate the corresponding variations in $\delta(\Delta\rho)_{\alpha_s}$. Indeed, the overall QCD correction, $\delta(\Delta\rho)_{\alpha_s}+\delta(\Delta\rho)_{thr}$, which is the physically relevant quantity, changes by only $-3\%$ to 4%, a remarkably small variation.
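The quoted percentages follow directly from the Table II entries; a minimal check (using the rounded table values):

```python
# -100 * delta(Delta rho) / x_t read off from Table II
pert = {'m_t/2': 10.96, 'm_t': 9.91, '2m_t': 8.87}
thr  = {'m_t/2':  4.05, 'm_t': 4.50, '2m_t': 5.18}
for mu in ('m_t/2', '2m_t'):
    d_thr = 100 * (thr[mu] / thr['m_t'] - 1)
    d_sum = 100 * ((pert[mu] + thr[mu]) / (pert['m_t'] + thr['m_t']) - 1)
    print(mu, round(d_thr, 1), round(d_sum, 1))
# m_t/2: -10.0% in thr, +4.2% in the sum;  2m_t: +15.1% in thr, -2.5% in the sum
```

The small differences from the quoted $-3\%$ to 4% reflect the rounding of the table entries.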
Bohr-atom estimate and other observations
=========================================
It is instructive to make a simple estimate of the threshold effects on $ \Delta\rho$ by using an elementary Bohr-atom model of toponium [@fad]. We have already employed this model to show that it leads to values of $\left|R_{10}(0)\right|^2$ within 20% of those obtained with the Richardson potential [@fan]. Now we want to apply it to illustrate the order of magnitude of $\delta(\Delta\rho)_{thr}$. Then, instead of Eq. (\[three\]), we have $$\label{ten}
\delta(\Delta\rho)_{thr}=-x_t\,\pi C_F^3\sum_n{\alpha_s^3(k_n)\over n^3},$$ where $k_n =C_F\alpha_s(k_n)m_t/(2n)$ is the momentum of the top quark in the $nS$ orbital of the Bohr-atom model. We note that in this elementary estimate we have evaluated $\alpha_s$ at scale $k_n$. This is a simple generalization of Ref. [@fad], where, for the ground state, $\alpha_s$ is evaluated at $k_1=C_F\alpha_s(k_1)m_t/2$. In particular, for $m_t=\sqrt{3} m_Z$ and $\alpha_s(m_Z)=0.118$, we have $ k_1=16.7$ GeV and $\alpha_s(k_1)=0.159$. In the rough estimate of Eq. (\[ten\]), we have also disregarded the continuum enhancement above threshold. Because $k_n$ scales as $1/n$, $\alpha_s(k_n)$ increases with $n$. We have iteratively evaluated $k_n$ and $\alpha_s(k_n)$ up to $n=50$ and found $\sum_{n=1}^{50}\alpha_s^3(k_n)/n^3=5.56\times10^{-3}$, which is 1.38 times the $1S$ contribution. The sum converges rapidly; the first 12 terms already yield a factor of 1.36. Taking this to be an approximate estimate of the enhancement factor due to the resonances with $n\ge 2$, and normalizing Eq. (\[ten\]) relative to Eq. (\[nine\]), we have $$\label{eleven}
{\delta(\Delta\rho)_{thr}\over\delta(\Delta\rho)_{\alpha_s}}
= 11.3 {\alpha_s^3 (k_1)\over\alpha_s(m_t)}.$$ For $m_t=\sqrt3m_Z$ and $\alpha_s(m_Z)=0.118$, we have $\alpha_s(m_t)=0.109$, $\alpha_s(k_1)=0.159$, and Eq. (\[eleven\]) gives 42%, which is rather close to the results obtained from the detailed resonance approaches, i.e., Eqs. (\[seven\]), (\[eightb\]) divided by Eq. (\[nine\]). Although Eq. (\[eleven\]) is a rough estimate, it allows us to understand why the threshold effects on the $\rho$ parameter, although nominally of ${{\cal O}}(\alpha_s^3)$, can be as large as $\approx40\%$ of the ${{\cal O}}(\alpha_s)$ contribution. Two factors are apparent: one is a large numerical coefficient, $\approx11$, and the other is that the natural scale in the threshold contribution is $k_1\ll m_t$, so that $\alpha_s(k_1)$ is considerably larger than $\alpha_s(m_t)$.
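The iteration described above is straightforward to reproduce. The sketch below assumes simple one-loop running of $\alpha_s$ with $n_f=5$ (the text does not specify the running actually used), so the numbers come out close to, but not exactly equal to, those quoted:

```python
from math import log, pi

def alpha_s(mu, alpha_mz=0.118, m_z=91.19, n_f=5):
    # one-loop running; this choice is an assumption made only for this illustration
    b0 = 11 - 2 * n_f / 3
    return alpha_mz / (1 + b0 * alpha_mz * log(mu / m_z) / (2 * pi))

C_F, m_t = 4.0 / 3.0, 157.9                     # m_t = sqrt(3) m_Z in GeV

def k_level(n):
    k = 20.0                                    # iterate k_n = C_F alpha_s(k_n) m_t / (2n)
    for _ in range(200):
        k = C_F * alpha_s(k) * m_t / (2 * n)
    return k

k1 = k_level(1)
print(round(k1, 1), round(alpha_s(k1), 3))      # ~16.5 GeV and ~0.157 (text: 16.7 GeV, 0.159)

s = sum(alpha_s(k_level(n))**3 / n**3 for n in range(1, 51))
print(round(1e3 * s, 2), round(s / alpha_s(k1)**3, 2))   # roughly 5.2 (x 1e-3) and ~1.36 (text: 5.56, 1.38)
print(round(11.3 * alpha_s(k1)**3 / alpha_s(m_t), 2))    # Eq. (eleven): ~0.40 (text: 42%)
```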
A relevant question is how these effects compare with unknown ${{\cal O}}(\alpha_s^2)$ contributions. The leading ${{\cal O}}(\alpha_s)$ corrections to the $\rho$ parameter are $\approx10\%$ (cf. Eq. (\[nine\])). If the same ratio holds between ${{\cal O}}(\alpha_s^2)$ and ${{\cal O}}(\alpha_s)$, the threshold corrections we have discussed would be roughly 3 to 4 times larger for $m_t\approx160$ GeV. It is known that the use of the running top-quark mass absorbs most of the ${{\cal O}}(\alpha_s)$ corrections proportional to $m_t^2$ in the $\rho$ parameter and $Z\to b\bar b$ amplitudes. If this were a general feature of the perturbative expansion, one could control the bulk of such contributions and the threshold effects would neatly stand out. However, it is impossible to ascertain these features without detailed ${{\cal O}}(\alpha_s^2)$ calculations, which are not available at present. The effect of these threshold corrections on the electroweak parameters is not particularly large for $m_t\le220$ GeV. For example, for $m_H=250$ GeV and $m_t=(130,160,200)$ GeV we found [@fan] shifts $\Delta m_W=-(42,55,77)$ MeV from the ${{\cal O}}(\alpha_s)$ contributions, $\Delta m_W=-(14,19,27)$ MeV from threshold effects evaluated with the resonance method, and $\Delta m_W=-(10,16,25)$ MeV from threshold effects evaluated in the G.F. approach. The ratios of threshold to ${{\cal O}}(\alpha_s)$ shifts in $m_W$ are similar but not identical to those we encountered in $\Delta\rho$. The reason is that the range of $m_t$ values of interest is not really in the asymptotic regime, and the sub-leading ${{\cal O}}(\alpha_s)$ and threshold contributions to $\Delta r$ [@sir] (the relevant correction to calculate $m_W$) have somewhat different $m_t$ dependences. For example, for $m_t=160$ GeV and $m_H=250$ GeV, the ratios for $m_W$ shifts are 0.35 in the resonance approach and 0.29 in the G.F. method [@fan]. For $m_t=m_Z$ (220 GeV), the corresponding ratios amount to 0.25 (0.34) in the resonance approach and to 0.19 (0.34) in the G.F. method, with the latter giving somewhat lower values below $m_t=220$ GeV.
Conclusions and discussion
==========================
1. We have pointed out that the conclusions of Ref. [@ynd] concerning the magnitude of the threshold effects discussed in Ref. [@kni] are without foundation. They are based on the consideration of a vector amplitude $\Pi(s)-\Pi(0)$, which, although very important in other applications, has a very small effect in the analysis of Ref. [@kni]. This should be obvious because heavy particles of mass $m$ with $m^2\gg s$ decouple in this amplitude.
2. We have applied the analysis of ${\mathop{\Im m}\nolimits}\Pi_{thr}(s^\prime)$ given in Ref. [@ynd] to study the threshold corrections to the $\rho$ parameter for $m_t=\sqrt3m_Z$ and, after re-evaluating the quantities involved, found a result that is similar in magnitude to those we have reported previously. Contrary to the conclusions of Ref. [@ynd], it is somewhat larger than our resonance calculation and significantly larger than our Green-function (G.F.) evaluation. In particular, when hard-gluon contributions are included, the result derived in the approach of Ref. [@ynd] is quite close to our own resonance calculation, well within the theoretical errors quoted previously [@kni]. As illustrated in Table I, similar conclusions apply over the large range 120 GeV${}\le m_t\le220$ GeV. We have also pointed out that the formulation of Ref. [@ynd] leads, for $m_t=\sqrt3m_Z$, to values of $\delta(\Delta\rho)_{\alpha_s}+\delta(\Delta\rho)_{thr}$ which are remarkably stable with respect to changes in the scale employed in the perturbative calculation (see Table II).
3. In our opinion, the fact that the formulation of Ref. [@ynd], when applied to $\Delta\rho$, gives results similar to ours (and, in fact, quite close to our own resonance calculations), supports the notion that these threshold effects can be reasonably estimated. By the same token, it does not support recent claims of large ambiguities in the threshold calculations [@hal].
4. It is worthwhile to point out that, when the corrected values of $\delta_{pole}$ and $\delta_{thr}$ obtained in the present paper are applied, the threshold effects in $\Pi(s)-\Pi(0)$ amount to $\approx24\%$ of the perturbative ${{\cal O}}(\alpha_s)$ corrections evaluated at scale $m_t$. Here $m_t=\sqrt3m_Z$ and $\alpha_s(m_Z)=0.115$ have been assumed and, for simplicity, the hard-gluon correction is not included. (The numerical evaluation reported in Ref. [@ynd] gives, instead, 15%). These numbers are somewhat smaller than the effects we encountered in the $\Delta\rho$ case: 34% in the G.F. approach and 42% in the resonance framework (cf. Eqs. (\[seven\]), (\[eightb\]) divided by Eq. (\[nine\])). However, we find it neither extraordinary nor unusual that radiative-correction effects may vary by factors of 1.42, 1.75 or, for that matter, 2.8, when applied to very different amplitudes. In particular, the logic behind the conclusion of Ref. [@ynd] seems rather strange to us. It is apparently based on the curious argument that a 15% correction in $\Pi(s)-\Pi(0)$ is considered to be very small and that, as a consequence, a 34% or 42% effect in $\Delta\rho$ is intolerably large.
5. It is well known that, for large $m_t$, the widths of the individual top quarks become larger than the $1S$–$2S$ mass difference, so that the bound-state resonances lose their separate identities and smear into a broad threshold enhancement [@str]. For this reason, we have expressed a preference to use the resonance formulation for $m_t\le130$ GeV and the G.F. approach for $m_t\ge130$ GeV [@kni]. However, we do not regard either method as “rigorous” and, in fact, in Ref. [@kni] we have assigned an estimated 30% uncertainty to their evaluation. In the analysis of electroweak parameters, we have found that the resonance approach of Ref. [@kue] and the G.F. formulation of Ref. [@str] lead to similar results over a wide $m_t$ range, with the latter giving somewhat smaller values for $m_t\le220$ GeV.
6. In order to make the relative size of the threshold effects more readily understandable, we have presented a simple estimate based on an elementary Bohr-atom model of toponium (cf. Eq. (\[eleven\])), and briefly commented on the possible magnitude of ${{\cal O}}(\alpha_s^2)$ corrections.
7. We have not discussed here the non-perturbative contributions connected with the existence of a gluon condensate, since they are known to be exceedingly small in the case of the $t\bar t$ threshold [@yak]; this has also been noticed in Ref. [@ynd].
**ACKNOWLEDGMENTS**
We would like to thank Paolo Gambino, Gustav Kramer, Hans Kühn, Alfred Mueller, and Peter Zerwas for useful discussions. This research was supported in part by the National Science Foundation under Grant No. PHY–9017585.
[99]{}
V.S. Fadin and V.A. Khoze, Pis’ma Zh. Eksp. Teor. Fiz., 417 (1987) \[JETP Lett. [**46**]{}, 525 (1987)\]; Yad. Fiz. [**48**]{}, 487 (1988) \[Sov. J. Nucl. Phys. [**48**]{}, 309 (1988)\].
W. Kwong, Phys. Rev. [**D 43**]{}, 1488 (1991).
M.J. Strassler and M.E. Peskin, Phys. Rev. [**D 43**]{}, 1500 (1991).
S. Güsken, J.H. Kühn, and P.M. Zerwas, Phys. Lett. [**155B**]{}, 185 (1985); Nucl. Phys. [**B262**]{}, 393 (1985); J.H. Kühn and P.M. Zerwas, Phys. Rep. [**167**]{}, 321 (1988); B.A. Kniehl, J.H. Kühn, and R.G. Stuart, Phys. Lett. [**B 214**]{}, 621 (1988).
B.A. Kniehl and A. Sirlin, Phys. Rev. [**D 47**]{}, 883 (1993) and references cited therein.
F.J. Ynduráin, “On the Evaluation of Threshold Effects in Processes Involving Heavy Quarks,” Madrid University Report No. FTUAM 38/93 (November 1993).
In Ref. [@ynd] there is a footnote claiming that an expression in Ref. [@kni] involving $|R_n(0)|^2$ is not renormalization-group invariant. This is not the case, as in our discussion $|R_n(0)|^2$ and the other terms in that expression are evaluated at their physical scales (i.e., they do not depend on an arbitrary scale $\mu$). The same would occur in Eq. (\[two\]) if $\mu$ was identified with the relevant physical scales, which are roughly the top-quark momenta in the Bohr-atom orbitals. In Eq. (\[two\]), the explicit $\mu$ dependence compensates for the fact that $|\tilde R_{n0}^{(0)}|^2$ is evaluated at scale $\mu$.
M. Veltman, Nucl. Phys. [**B 123**]{}, 89 (1977).
This correction shifts the value of the quantity called $\delta_{pole}$ in Ref. [@ynd] from $1.96\times10^{-2}$ [@ynd] to $2.61\times10^{-2}$.
Had we used an arbitrary scale $\mu$ in the calculation of the last term in Eq. (\[fivea\]), this requirement would no longer be satisfied.
In Ref. [@ynd], the numerical value $\delta_{thr}=5.4\times10^{-3}$ is reported, which is neither consistent with Eq. (\[fivea\]) nor with Eq. (\[six\]).
See, e.g., Fig. 33 in W. Bernreuther et al., [*Proceedings of the Workshop on $e^+e^-$ Collisions at 500 GeV: The Physics Potential*]{}, Munich, Annecy, Hamburg, 1991, ed. by P.M. Zerwas, DESY Report No. 92–123A (August 1992) p. 255.
J.H. Kühn and P.M. Zerwas, private communications.
The origin of such effects is explained, for example, in R. Barbieri, R. Gatto, R. Kögerler, and Z. Kunszt, Phys. Lett., 455 (1975); Nucl. Phys. [**B 105**]{}, 125 (1976). See also S. Titard and F.J. Ynduráin, University of Michigan Report No. UM–TH 93–25 (September 1993).
A. Djouadi and C. Verzegnassi, Phys. Lett. [**B 195**]{}, 265 (1987); A. Djouadi, Nuovo Cim. [**100 A**]{}, 357 (1988); B.A. Kniehl, Nucl. Phys. [**B 347**]{}, 86 (1990).
S. Fanchiotti, B. Kniehl, and A. Sirlin, Phys. Rev., 307 (1993).
A. Sirlin, Phys. Rev. [**D 22**]{}, 971 (1980).
M.C. Gonzalez-Garcia, F. Halzen, and R.A. Vázquez, “On the Precision of the Computation of the QCD Corrections to Electroweak Vacuum Polarizations,” UW-Madison Report No. MAD/PH/804 (December 1993).
V.S. Fadin and O.I. Yakovlev, Yad. Fiz. [**53**]{}, 1111 (1991) \[Sov. J. Nucl. Phys. [**53**]{}, 688 (1991)\].
---
abstract: 'Causal Dynamical Triangulations (CDT) is a non-perturbative quantisation of general relativity. Hořava-Lifshitz gravity on the other hand modifies general relativity to allow for perturbative quantisation. Past work has given rise to the speculation that Hořava-Lifshitz gravity might correspond to the continuum limit of CDT. In this paper we add another piece to this puzzle by applying the CDT quantisation prescription directly to Hořava-Lifshitz gravity in 2 dimensions. We derive the continuum Hamiltonian, and we show that it matches exactly the Hamiltonian derived from canonically quantising the Hořava-Lifshitz action. Unlike the standard CDT case, here the introduction of a foliated lattice does not impose further restriction on the configuration space and, as a result, lattice quantisation does not leave any imprint on continuum physics as expected.'
author:
- Lisa Glaser
- 'Thomas P. Sotiriou'
- Silke Weinfurtner
bibliography:
- 'ref.bib'
title: 'Extrinsic curvature in $2$-dimensional Causal Dynamical Triangulation'
---
Introduction
============
Causal Dynamical Triangulations (CDT) is a non-perturbative approach to quantum gravity that discretises spacetime into a foliated simplicial manifold. It is an attempt to extend to gravity the lattice methods that have proven very powerful for quantum chromodynamics. CDT has made it possible to numerically explore the path integral over geometries in both $3$ and $4$ dimensions [@ambjorn_nonperturbative_2001; @ambjorn_spectral_2005; @benedetti_spectral_2009; @ambjorn_second-order_2011; @ambjorn_effective_2014; @ambjorn_recent_2015; @benedetti_spacetime_2015]. In $2$ dimensions the model can be solved analytically [@ambjorn_non-perturbative_1998] and gives rise to a continuum Hamiltonian.
Extending lattice methods to gravity is not straightforward. Instead of calculating field configurations on a fixed lattice, the lattice itself becomes the object of the dynamics. The presence of a time foliation is crucial. The precursor to CDT is the theory of Dynamical Triangulations (DT), where the discretisation is implemented by approximating spacetime through simplicial complexes [@ambjorn_dynamical_1995], with each $d$ dimensional simplicial complex consisting of $d$-simplices of flat space glued together along their $d-1$ dimensional faces. In these configurations curvature is concentrated at the $d-2$ dimensional faces of the simplices. The action on the space of simplicial complexes is the Regge action for discretised spacetimes [@regge_general_1961]. Simulations of this theory uncovered the existence of two phases, neither of which resembles a continuum spacetime in a suitable limit. The first is known as the crumpled phase. Simplices are all glued together as closely as possible and in the limit of infinite size the Hausdorff dimension is infinite as well. The other phase is the branched polymer phase, where the simplices form long chains and the Hausdorff dimension of the resulting space is $2$ [@ambjorn_dynamical_1995].
The solution Ambjørn and Loll proposed for this problem was to force the simplicial complex to have a foliated structure [@ambjorn_non-perturbative_1998]. This gives rise to a unique timelike direction. The ratio of the length of a timelike edge to the length of a spacelike edge is a free parameter, $a_t$. The path integral over these foliated simplicial complexes shows that the resulting geometries are much better behaved. The $2$-dimensional model can be solved analytically in different ways [@durhuus_spectral_2009; @ambjorn_non-perturbative_1998], which lead to the same result. These approaches have been extended to include matter [@ambjorn_new_2012] or local topology changes [@ambjorn_topology_2008].
In $3$ and $4$ dimensions analytic methods are no longer fruitful and CDT has been explored through computer simulations. These have shown that there exists a region in CDT parameter space in which the average Hausdorff dimension of geometries agrees with the dimension of the building blocks, and in which the evolution of spacelike slices follows a mini-superspace action [@ambjorn_semiclassical_2005; @benedetti_spacetime_2015]. This phase has also given rise to the first predictions of a varying spectral dimension [@ambjorn_spectral_2005], which has been found independently in many other approaches [@lauscher_fractal_2005; @horava_spectral_2009; @sotiriou_dispersion_2011](see also Ref. [@carlip_dimensional_2015] for a review and comparison).
Using foliated simplicial complexes might have led to a phase with desirable properties, but introducing a foliation is a thorny issue. Even though the path integral in CDT sums over different foliations it only sums over geometries that actually admit a global foliation. It is thus unclear if one should expect to recover general relativity in the continuum limit or a theory in which all geometries admit a global foliation.
Hořava-Lifshitz (HL) gravity [@horava_quantum_2009] is a typical example of a theory with this characteristic. It is a continuum theory with a preferred foliation whose defining symmetry are foliation-preserving diffeomorphisms. Due to the existence of this foliation, one can add higher-order spatial derivatives without increasing the number of time derivatives. This leads to a modification of the propagators at high momenta that renders the theory power-counting renormalizable. In fact, a certain version of HL gravity called [*projectable*]{} [@horava_quantum_2009; @sotiriou_phenomenologically_2009] has recently been shown to be renormalizable beyond power counting in 4-dimensions [@Barvinsky:2015kil]. In this version the lapse function of the preferred foliation is assumed to be space-independent, which drastically reduces the number of terms in the action and makes the theory tractable. On the other hand, there are serious infrared viability issues concerning projectable 4-dimensional HL gravity [@horava_quantum_2009; @Sotiriou:2009bx; @Koyama:2009hc; @Weinfurtner:2010hz; @Mukohyama:2010xz; @Sotiriou:2010wn] and this suggest that the full non-projectable version [@Blas:2009qj] might be phenomenologically preferable.[^1]
It has been shown in Ref. [@horava_spectral_2009] that the spectral dimension in HL gravity exhibits qualitatively the same behaviour as in CDT in 4 dimensions, i.e. it changes from 4 to 2 in the ultraviolet. Ref. [@Sotiriou:2011mu] has focused on the simpler case of 3 dimensions, but it has shown that the complete flow of the spectral dimension of (non-projectable) HL gravity from 3 to 2 can reproduce precisely the flow of the spectral dimension in 3-dimensional CDT. Interestingly, a certain resemblance can also be found when comparing the Lifshitz phase diagram to the phase diagram of CDT. The measured volume profile of spacelike slices in CDT can be fit with a mini-superspace action derived from either HL gravity or general relativity [@ambjorn_cdt_2010; @benedetti_spacetime_2015]. These are indications for a connection between HL gravity and CDT in the continuum limit.
A strong piece of evidence that CDT and HL gravity might be related comes from comparing the Hamiltonians of the $2$d theories. This comparison has been done with projectable HL gravity. In CDT a continuum Hamiltonian can be derived from the analytic solution of the $2$d theory, while in projectable HL gravity a Hamiltonian can be derived through canonical quantisation. These two Hamiltonians have been compared and found to agree, up to a specific rescaling [@ambjorn_2d_2013].
The CDT action in 2 dimensions is the discretized version of the Einstein–Hilbert action $$\begin{aligned}
\label{saction}
S_{2d CDT}= \frac{1}{2 \kappa} \int {\mathrm{d}}x^2 \sqrt{-g} (R- 2 \Lambda) \to \lambda \,{\mathcal{N}}\;,\end{aligned}$$ where $\kappa$ is a dimensionless parameter, $g$ is the determinant of the two dimensional metric $g_{\mu\nu}$, $R$ is the corresponding Ricci scalar and $\Lambda$ the cosmological constant. ${\mathcal{N}}$ is the total number of simplices and $\lambda$ is the discrete analogue of the cosmological constant. The action of projectable HL gravity in 2 dimensions is [@Sotiriou:2011dr]
$$\begin{aligned}
\label{hlaction}
S_{2d HL} = \frac{1}{2 \kappa} \int {\mathrm{d}}x {\mathrm{d}}t N \sqrt{h} [(1-\lambda_{HL}) K^2 - 2 \Lambda] \;,\end{aligned}$$
where $K$ is the mean curvature of the slices of the preferred foliation, $h$ is the induced metric, and $\lambda_{HL}$ is an extra coupling with respect to GR. For $\lambda_{HL}=1$ the only term that survives is the cosmological constant. This is also the case for the Einstein–Hilbert action (modulo topological consideration) considering that the Ricci scalar is a total divergence in 2 dimensions. Though one can in principle absorb the coefficient of $K^2$ in the HL gravity action by suitably redefining the cosmological constant and multiplying the action by a suitable coefficient, this can only be done if no coupling to matter is present and strictly when $\lambda_{HL}\neq 1$.
The fact that the discretised version of the Einstein-Hilbert action and the canonical quantisation of action (\[hlaction\]) lead to the same Hamiltonian, up to a rescaling that can be interpreted as fixing $(1-\lambda_{HL})$, is quite intriguing. It implies that lattice regularisation of general relativity via CDT does not lead back to general relativity in the continuum limit, but instead to a theory with a preferred foliation. Since CDT restricts the configuration space to that of foliated triangulations, a possible interpretation would be that this restriction leaves its imprint in the continuum limit. In this perspective, there seems to be a mismatch between the configuration space and the symmetries of the action in CDT. It is thus very tempting to promote the configuration space restriction into an actual symmetry of the (continuum) action, [*i.e.*]{} start from a discretisation of an action that is invariant under only foliation-preserving diffeomorphisms, as is the case for HL gravity.
To this end, instead of applying the CDT prescription to a discretised version of action (\[saction\]) as in Ref. [@ambjorn_2d_2013], we apply it to a discretised version of action (\[hlaction\]). We derive the corresponding continuum Hamiltonian and we compare it with both the standard CDT continuum Hamiltonian and the Hamiltonian one obtains after canonically quantising HL gravity. We show that, for all boundary conditions, we can recover the Hamiltonian for HL gravity, including a free parameter corresponding to $\lambda_{HL}$. That is, the initial action and the continuum action one would infer by assigning an action to the continuum Hamiltonian match exactly and share the same continuum symmetries, unlike the case of standard CDT, studied in Ref. [@ambjorn_2d_2013].
The rest of the paper is organised as follows. In Section \[sec:ext\] we find a discrete realisation of the extrinsic curvature squared term for $2$d CDT, which we include in the action in Section \[sec:sum\] where we also solve the resulting model analytically. In Section \[sec:H\] we use this analytic solution to derive the Hamiltonian for $2$d CDT with extrinsic curvature terms included, and compare this to the Hamiltonian of projectable $2$d HL gravity.
\[sec:ext\]A discrete extrinsic curvature
=========================================
Our first task is to find an appropriate discretisation for the extrinsic curvature of constant time slices. To this end we will follow the lines of Ref. [@dittrich_counting_2006], where the extrinsic curvature was used to define trapped surfaces in a triangulation. It is convenient to actually consider the extrinsic curvature of half-integer time slices $t + \frac{1}{2}$. This avoids the curvature singularities at the d-2 simplices in the integer $t$ slices. The extrinsic curvature is concentrated at the joints, or d-1 simplices.
The extrinsic curvature of a spacelike surface $\Sigma$ in a manifold $M$ is given by $$\begin{aligned}
K_{ab}=- h^{c}_{a} \nabla_c n_b\end{aligned}$$ with $n^b$ a unit vector normal to the surface $\Sigma$ and $h_{ac}$ the induced metric on $\Sigma$. To calculate the extrinsic curvature of the half integer $t$-slices we need the unit vectors normal to the two pieces of the constant time surface $n^{a}_{(i)}$ and the spacelike unit tangent vectors along the constant time surface $s^{a}_{(i)}$, $$\begin{aligned}
n^{a}_{(i)}&= \text{cosh}\left( \rho_{(i)} \right) \mathbf{e^{a}_0} + \text{sinh}\left( \rho_{(i)} \right) \mathbf{e^{a}_1} \,,\\
s^{a}_{(i)}&= \text{sinh}\left( \rho_{(i)} \right) \mathbf{e^{a}_0} + \text{cosh}\left( \rho_{(i)} \right) \mathbf{e^{a}_1} \;.\end{aligned}$$ $\rho_{(i)}$ is the angle between the normal vector of the $t+\frac{1}{2}$ surface and the d-1 simplex at which the curvature is located. For the 2d case this is sketched in Figure \[fig:upupdowndown\].
The triangles used in CDT are isosceles with a spacelike edge of length $\ell$ at the base and two timelike edges of length $a_t \ell$. Hence, the angle $\rho$ depends on the base angle $\alpha$, $$\begin{aligned}
\alpha= \text{arccos} \left( \frac{1}{2 a_t} \right) \;.\end{aligned}$$ The relative length parameter $a_t$ lies in the interval $\frac{1}{2}<a_t < \infty$, with the limiting cases clearly being excluded as degenerate, since for $a_t=\frac{1}{2}$ the triangle becomes a spacelike line, and for $a_t= \infty$ it turns into two parallel timelike lines. This gives us a range for the angle $0 < \alpha < \frac{\pi}{2}$, as we would expect for the base angle of a triangle. Using this and Fig. \[fig:upupdowndown\] we can determine that for the down-down transition $\rho$ is given as $$\begin{aligned}
\label{eq:rho}
\rho(x_1) &= \alpha - \frac{\pi}{2} & \rho(x_2) &= -\alpha + \frac{\pi}{2}\,, \end{aligned}$$ whereas for the up-up transition $\rho$ has the opposite sign.
One can embed any two triangles into a local Minkowski system such that the kink between them is flat. The covariant derivative then simplifies to the normal coordinate derivative. It is straightforward to see from Fig. \[fig:upupdowndown\] that the derivative of the normal vector will diverge as one moves over the kink. In Ref. [@dittrich_counting_2006] this is resolved by introducing a class of smoothing functions $\delta_\epsilon$ that converge to the delta function as $\epsilon \to 0$. The angle can then be written as $$\begin{aligned}
\rho(x)= \rho_{(1)} + \Delta \rho \int_{-\epsilon}^{x} \delta_\epsilon(x') {\mathrm{d}}x'\;,\end{aligned}$$ with $\Delta \rho=\rho_{(2)}-\rho_{(1)}$.
The induced metric can be written as $h_{a}^c=\eta^{c}_{a} +n_{a}n^{c}$, and one can then calculate the extrinsic curvature as $$\begin{aligned}
\label{Kdef}
K^{ab}(x)= - \delta_\epsilon(x) \Delta \rho \;\text{cosh}(\rho(x)) s^a(x) s^b(x)\;.\end{aligned}$$ From this one can calculate the integrated extrinsic curvature scalar as $$\begin{aligned}
K&=\int \lim_{\epsilon \to 0} K(x){\mathrm{d}}x= \int \lim_{\epsilon \to 0} K^{ab}(x)\eta_{ab} {\mathrm{d}}x\\
&=\int \delta_\epsilon(x) \Delta \rho \;\text{cosh}(\rho(x)){\mathrm{d}}x \;.\label{eq:Kint}\end{aligned}$$ Plugging the $\rho$ values from equation (\[eq:rho\]) into equation (\[eq:Kint\]) above, we find the integrated extrinsic curvature. The integrated curvatures when passing over down-down, up-up, and up-down transitions are, respectively, $$\begin{aligned}
K_{{\downarrow \downarrow}}&= (2 \alpha-\pi) \text{cosh}(\alpha - \pi/2) \; ;\\
K_{{\uparrow \uparrow}}&= -(2 \alpha-\pi) \text{cosh}(-\alpha + \pi/2) \; ;\\
K_{\uparrow \downarrow} &=0 \;.\end{aligned}$$
Here we are not actually interested in the integrated extrinsic curvature itself, but instead in the integral over the extrinsic curvature squared. Defining $K^2$ by taking the square of eq. (\[Kdef\]) is problematic due to the presence of the smoothing function $\delta_\epsilon$. This issue can be easily avoided. The smoothing function has been introduced in eq. (\[Kdef\]) in order to regularise the curvature on the kink. One can do the same for $K^2$ by defining $$\begin{aligned}
\label{K^2def}
K^2(x)= - \delta_\epsilon(x) (\Delta \rho)^2 \;\left[\text{cosh}(\rho(x))\right]^2\;.\end{aligned}$$ That can be understood as “peeling off” the smoothing function from the definition of $K^{ab}(x)$ in eq. (\[Kdef\]) before taking the square and then regularising the result. One can then simply define the integrated squared extrinsic curvature as $$\begin{aligned}
K^2=\int \lim_{\epsilon \to 0} K^2 (x){\mathrm{d}}x \;.\end{aligned}$$ Using this prescription the contribution to the extrinsic curvature squared at each d-1 simplex is $$\begin{aligned}
K_{{\downarrow \downarrow}}^2 &= (2 \alpha-\pi)^2 \text{cosh}^2(\alpha - \pi/2)\\
K_{{\uparrow \uparrow}}^2 &= (2 \alpha-\pi)^2 \text{cosh}^2(-\alpha + \pi/2) \;.\end{aligned}$$ Due to the symmetry properties of the hyperbolic cosine these are the same, hence we shall call this term $K^2$. Since $0 < \alpha < \pi/2$ one has that $\pi^2 \text{cosh}\left(\pi/2\right)^2 >K^2>0$. We can tune the contribution from each edge by changing the relative edge-length between space and time, but we cannot make the contribution vanish or exceed a certain value.
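For orientation, the size of this per-kink contribution as a function of $a_t$, together with the bound just quoted, is easily evaluated numerically (a small illustration only):

```python
from math import acos, cosh, pi

def K2(a_t):
    # squared extrinsic curvature collected at a single up-up or down-down kink
    alpha = acos(1 / (2 * a_t))
    return (2 * alpha - pi)**2 * cosh(alpha - pi / 2)**2

for a_t in (0.6, 1.0, 2.0, 10.0):
    print(a_t, round(K2(a_t), 3))          # ~9.0, ~1.43, ~0.27, ~0.01: decreases with a_t but never vanishes
print(round(pi**2 * cosh(pi / 2)**2, 1))   # 62.1: the upper bound pi^2 cosh^2(pi/2)
```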
\[sec:sum\]Summing over the simplicial configurations
=====================================================
We can now include the extrinsic curvature squared term in the simplicial action for a triangulation $T$ $$\begin{aligned}
S(T)= \lambda {\mathcal{N}}+ \mu \sum_{\text{transitions}} K^2 \;,\end{aligned}$$ where $\mu$ is the discrete coupling equivalent to $(1-\lambda_{HL})/(2 \kappa)$ and transitions refers to all ${{\uparrow \uparrow}}, {{\downarrow \downarrow}}$ transitions, since for $\uparrow \downarrow$ transitions the extrinsic curvature vanishes. Using this discrete action we can calculate the sum over configurations following the method set out in [@ambjorn_non-perturbative_1998].
The first step is to calculate the transition function $T^{(s)}_{i j}(g,a,1)$ for a transition from $i$ initial edges to $j$ final edges in one time-step. In ordinary CDT each configuration from $i$ to $j$ edges has the same weight, since it has the same overall number of triangles. However, in our case the curvature squared term adds different weights to different configurations. After calculating $T^{(s)}_{i j}(g,a,1)$ the next step is to calculate the generating function $\theta^{(s)}(x,y | g,a,1)$. Switching from the transition function to the generating function is similar to switching from a microcanonical ensemble to a grand canonical ensemble in thermodynamics. Using the generating function makes many calculations easier, especially taking the continuum limit, in which necessarily $i,j \to \infty$. It is possible to calculate the generating function for $t$ time-steps by gluing together several generating functions, but for us this step is unnecessary. Instead we will take the continuum limit and expand the generating function to obtain the Hamiltonian of the theory.
Ref. [@di_francesco_integrable_2000] has modified the CDT action by adding a term that contributes at ${{\uparrow \uparrow}}, {{\downarrow \downarrow}}$ transitions. The key motivation for adding this term was to capture the influence of higher curvature corrections. In simplicial triangulations the curvature at a given vertex is proportional to $v-6$, where $v$ is the number of triangles adjacent to the vertex. Hence, in order to construct a term that influences the local curvature, they propose to add to the action the terms $|v_1 -3|$ and $|v_2-3|$, where $v_1$ is the number of triangles adjacent to a vertex in the slice above it and $v_2$ is the number of triangles adjacent in the slice below it. Attaching a weight of $a^{|v_1-3|/2+|v_2-3|/2}$ to each vertex is equivalent to attaching a weight of $a$ to each ${{\uparrow \uparrow}}$ or ${{\downarrow \downarrow}}$ transition. The generating function for such a modification has been calculated in Ref. [@di_francesco_integrable_2000]. The extra term in our action leads to the same contribution to the discrete path integral as that considered in Ref. [@di_francesco_integrable_2000]. Hence, even though the physical motivation we used to justify this modification of the action is distinct from that used in Ref. [@di_francesco_integrable_2000], we can nonetheless use the results obtained there.
As we will discuss in more detail later, the Hamiltonian depends on the boundary conditions and there is more than one option. Ref. [@ambjorn_non-perturbative_1998] applied the closed loop conditions, whereas the authors of Ref. [@di_francesco_integrable_2000] solve their model using so-called staircase boundary conditions. The latter require that the strip of spacetime has a triangle pointing up on its leftmost edge and a triangle pointing down on its rightmost edge. This is called a staircase because it resembles one in the dual graph description. Each of these up/down pointing final triangles has a weight of $\sqrt{g}$ attached. This is necessary to match to the original result for periodic boundary conditions, as will be explained later.
![\[fig:Tij\]A triangulation going from $i$ initial to $j$ final edges is split into $k$ bunches of $n_r,m_r$ upwards / downwards pointing triangles.](sliceTij.pdf){width="\columnwidth"}
In order to write the sum over all triangulations we define $g=\exp(-\lambda)$ and $a=\exp(- \mu K^2)$. For $\mu = 0$ the extrinsic curvature contribution vanishes and we recover the standard CDT results. The one time-step transfer matrix connecting $i$ initial to $j$ final edges is given by $$\begin{aligned}
T^{(s)}_{i j}(g,a,1)&= \sum^{\text{min}(i,j)}_{ k=1} {\hspace{-15pt}}\sum_{\substack{n_r,m_r \\ r=1,2,\dots,k \\ \sum n_r =i \sum m_r=j}} {\hspace{-15pt}}g^{i+j-1} a^{\sum (n_r-1) + \sum (m_r -1)} \\
&= \sum^{\text{min}(i,j)}_{ k=1} {\hspace{-15pt}}\sum_{\substack{n_r,m_r \\ r=1,2,\dots,k \\ \sum n_r =i \sum m_r=j}} {\hspace{-15pt}}g^{i+j-1} a^{i-k + j-k} \;.\end{aligned}$$ The $i$ initial and $j$ final edges can be divided into $k$ bunches of adjacent upwards-pointing triangles and $k$ bunches of adjacent downwards-pointing triangles. We denote the number of triangles in the $r$-th bunch of upwards-pointing triangles as $n_r$ and of the $r$-th bunch of downwards-pointing triangles as $m_r$. This is illustrated in Fig. \[fig:Tij\].
![image](staircase+anti.pdf){width="90.00000%"}
Each composition of $i$ into $k$ terms and $j$ into $k$ terms gives the same weight for a fixed $k$. The sum over the compositions $n_r$ and $m_r$ is then just the number of different compositions, leading to $$\begin{aligned}
T^{(s)}_{ij}(g,a,1)&= g^{i+j-1} a^{i+j} \sum^{\text{min}(i,j)}_{ k=1} a^{-2k} \binom{i-1}{k-1} \binom{j-1}{k-1}\;.\end{aligned}$$ While each composition into $k$ terms has the same weight, the factor $a$ changes the weight for different $k$, hence leading to a different weighting of the individual geometries than that found in standard CDT. The next step is to introduce the generating function, for a single time step $$\begin{aligned}
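The combinatorial factor $\binom{i-1}{k-1}\binom{j-1}{k-1}$ can be checked against a brute-force enumeration of the bunches $(n_r,m_r)$; a small sketch, for illustration only:

```python
from math import comb

def parts(n, k):
    # all ordered splittings of n into k positive integers
    if k == 1:
        return [[n]]
    return [[p] + rest for p in range(1, n - k + 2) for rest in parts(n - p, k - 1)]

def T_enum(i, j, g, a):
    # brute force: weight g^(i+j-1) a^(sum(n_r-1)+sum(m_r-1)) summed over all bunch configurations
    total = 0.0
    for k in range(1, min(i, j) + 1):
        for nr in parts(i, k):
            for mr in parts(j, k):
                total += g**(i + j - 1) * a**(sum(n - 1 for n in nr) + sum(m - 1 for m in mr))
    return total

def T_binom(i, j, g, a):
    # closed combinatorial form given above
    return sum(g**(i + j - 1) * a**(i + j - 2 * k) * comb(i - 1, k - 1) * comb(j - 1, k - 1)
               for k in range(1, min(i, j) + 1))

print(all(abs(T_enum(i, j, 0.3, 0.7) - T_binom(i, j, 0.3, 0.7)) < 1e-12
          for i in range(1, 7) for j in range(1, 7)))   # True
```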
& \theta^{(s)}(x,y | g,a,1) = \sum_{i,j} x^i y^j T^{(s)}_{i,j}(g,a) \label{eq:xysum}\\
&= \frac{1}{g} \sum_{ k\geq 1} a^{-2k} \sum_{i\geq k} \binom{i-1}{k-1} ( a g x)^i \sum_{j\geq k} \binom{j-1}{k-1} ( a g y)^j \label{eq:sums}\\
&= \frac{1}{g} \sum_{ k\geq 1} a^{-2k} \frac{ (agx)^k}{(1-agx)^k} \frac{(agy)^k}{(1-agy)^k} \label{eq:eries}\\
&= \frac{ g x y}{1- ag(x+y) - g^2(1-a^2)xy} \;. \label{eq:onestep}\end{aligned}$$ Diagonalising this single-step generating function and taking its $t$-th power yields a generating function for multiple time steps, $t$. Finally, Di Francesco [*et al.*]{} take the continuum limit of this function.
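As a simple sanity check, one can verify numerically, for sub-critical couplings, that the closed form of eq. (\[eq:onestep\]) indeed resums the double series of eq. (\[eq:xysum\]):

```python
from math import comb

def T(i, j, g, a):
    # transfer matrix T^(s)_{ij} in its closed combinatorial form
    return sum(g**(i + j - 1) * a**(i + j - 2 * k) * comb(i - 1, k - 1) * comb(j - 1, k - 1)
               for k in range(1, min(i, j) + 1))

g, a, x, y = 0.2, 0.7, 0.5, 0.4          # safely inside the radius of convergence
series = sum(x**i * y**j * T(i, j, g, a) for i in range(1, 40) for j in range(1, 40))
closed = g * x * y / (1 - a * g * (x + y) - g**2 * (1 - a**2) * x * y)
print(abs(series - closed) < 1e-12)      # True; the truncation error is negligible here
```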
We will not repeat this calculation here and instead directly derive a continuum Hamiltonian using the one-step generating functions and the composition rule for $t$-step generating functions. For this we need to understand the radius of convergence of the sums in eq. (\[eq:sums\]). In order to take a continuum limit the coupling constants $g,x,y$ need to be tuned towards their critical values $x_c,y_c,g_c$, which are reached at the radius of convergence. At these critical values all terms in these sums make contributions of the same order of magnitude.
This becomes intuitive when looking at eq. (\[eq:xysum\]) to determine the values of $x_c,y_c$ at the critical point. The series converges for $x,y<1$, but only in the limit $x,y \to 1$ do loops of all lengths contribute equally. Since the continuum limit consists of taking the length of the edges to zero, while taking the number of edges to infinity, we see that only the limit $x,y \to 1$ will lead to loops of non-zero macroscopic length. With $x_c,y_c =1$ fixed we can then determine the radius of convergence of eq. (\[eq:onestep\]). We find two possible solutions, $g_c=1/(\pm 1 +a)$. Since our solution should smoothly connect to the standard solution, for which $a=1, g_c= 1/2$, we conclude that $$\begin{aligned}
\label{eq:xcycgc}
x_c&=1 &y_c&=1& g_c&=\frac{1}{1+a} \;.\end{aligned}$$
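The value of $g_c$ follows from the vanishing of the denominator of eq. (\[eq:onestep\]) at $x=y=1$; a one-line symbolic check:

```python
import sympy as sp

g, a = sp.symbols('g a', positive=True)
roots = sp.solve(1 - 2*a*g - g**2*(1 - a**2), g)
print([sp.simplify(r) for r in roots])
# roots 1/(a+1) and 1/(a-1) = -1/(1-a); only g_c = 1/(a+1) connects to g_c = 1/2 at a = 1
```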
In addition to the different couplings, the number of geometries included in the sum is also dependent on the boundary conditions imposed. As already mentioned, Di Francesco [*et al.*]{} impose staircase boundary conditions, as these allow one to easily count the possible compositions. Ordinarily CDT is solved with periodic boundary conditions with or without a marked point. For our discussion it will be useful to calculate everything for all three of these possible boundary conditions, since we will find that they all have an interpretation in the continuum.
In order to compare the result for staircase boundary conditions with the known results for periodic boundary conditions with one marked point on the in-going boundary, Di Francesco [*et al.*]{} [@di_francesco_integrable_2000] glue the staircase together with an anti-staircase (Fig. \[fig:stairs\]). An anti-staircase is defined such that the outermost triangles can be glued onto those of the staircase in a way that reproduces the periodic results. See Fig. \[fig:stairs\].
This gluing leads to a two loop correlator with periodic boundary conditions and marked points on both the in-going and outgoing loop. Attempting to glue the staircase into a single loop would have resulted in a seam with an enforced pattern with down-up down-up (or up-down up-down) pointing triangles. The number of configurations with the anti-staircase boundary condition is the same as that of staircase configurations, hence the one step generating functions are identical. Gluing the configurations together corresponds to simply multiplying the generating functions, and dividing by $xy$ to remove doubled boundary links. One then has that $$\begin{aligned}
\theta^{(2)}(x,y | g,a,1)&= \frac{\theta^{(s)}(x,y | g,a)^2}{xy} \nonumber \\
&= \frac{g^2 x y }{(1- a g(x+y) - g^2(1-a^2)xy)^2} \;.\end{aligned}$$ This is the one step generating function for a propagator with a point marked on both the ingoing and outgoing loops.[^2] To convert it to the generating function for the propagator with a marked point only on the incoming loop we unmark the outgoing loop by dividing the amplitude $T^{(2)}_{ij}(g,a,1)$ by a factor of $i$. In the generating function this corresponds to calculating $$\begin{aligned}
&\theta^{(1)}(x,y|g,a,1)= \int_{0}^{y} \frac{{\mathrm{d}}\tilde y}{\tilde y} \theta^{(2)}(x, \tilde y| g,a,1) \;.\end{aligned}$$ We then find $$\begin{aligned}
\label{eq:onestepmark}
&\theta^{(1)}(x,y|g,a,1)= \nonumber \\
&\frac{g^2 x y }{(1-a g x)(1- a g (x+y) - g^2(1-a^2)xy)} \end{aligned}$$ which in the limit $a \to 1$ agrees with the result in [@ambjorn_non-perturbative_1998]. To complete the possible cases we can also calculate the unmarked propagator with periodic boundary conditions, by removing the mark from the incoming loop through $\int_{0}^{x} {\mathrm{d}}\tilde x/ \tilde x$, and find $$\begin{aligned}
\label{eq:onestepnomark}
\theta^{(0)}(x,y|g,a,1)&= \log\left( \frac{(1-a g y)(1-a g x)}{1- a g(x+y) -g^2 (1-a^2) x y } \right) \;.\end{aligned}$$
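The integral representations used above can be verified symbolically by differentiating back, since $\partial_y\theta^{(1)}=\theta^{(2)}/y$ with $\theta^{(1)}(x,0)=0$, and $\partial_x\theta^{(0)}=\theta^{(1)}/x$ with $\theta^{(0)}(0,y)=0$; a short check:

```python
import sympy as sp

x, y, g, a = sp.symbols('x y g a', positive=True)
D = 1 - a*g*(x + y) - g**2*(1 - a**2)*x*y
theta2 = g**2*x*y / D**2                       # both loops marked
theta1 = g**2*x*y / ((1 - a*g*x) * D)          # mark on the incoming loop only, eq. (eq:onestepmark)
theta0 = sp.log((1 - a*g*y)*(1 - a*g*x) / D)   # no marked points, eq. (eq:onestepnomark)

print(sp.simplify(sp.diff(theta1, y) - theta2/y), theta1.subs(y, 0))                # 0 0
print(sp.simplify(sp.diff(theta0, x) - theta1/x), sp.simplify(theta0.subs(x, 0)))   # 0 0
```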
\[sec:H\]Deriving a Hamiltonian
===============================
We can derive a Hamiltonian for the development of the loop-loop correlator by combining the one-step generating functions with the composition rule for the generating functions. For the generating functions with one marked point or staircase boundary conditions, one has $$\begin{aligned}
&\theta(x,y|g,a,t_1+t_2) =\nonumber \\
& \oint \frac{{\mathrm{d}}z}{2 \pi i z} \theta(x,z^{-1}|g,a,t_1) \theta(z,y|g,a,t_2) \;,\end{aligned}$$ where the contour is chosen such that the singularities of $\theta(x,z^{-1}|g,a,t_1)$ are included but those of $\theta(z,y|g,a,t_2)$ are not. This gluing rule is the same for $\theta^{(s)}(x,y|g,a,t)$ and $\theta^{(1)}(x,y|g,a,t)$, since in both cases there is only one consistent way to glue two geometries together along the final / initial boundary. For $\theta^{(0)}(x,y|g,a,t)$ the composition rule is slightly more complicated [@zohren_causal_2009] $$\begin{aligned}
&\theta^{(0)}(x,y|g,a,t_1+t_2) \nonumber \\
&=\oint \frac{{\mathrm{d}}z'}{2 \pi i z'^2} \partial_z \theta^{(0)}(x,z|g,a,t_1)\bigg|_{z=\frac{1}{z'}} {\hspace{-15pt}}\theta^{(0)}(z',y|g,a,t_2)\;,\end{aligned}$$ taking into account that the final/initial loops of length $l$ can be consistently glued together in $l$ different ways. Inserting eq. (\[eq:onestep\]), (\[eq:onestepmark\]) or (\[eq:onestepnomark\]) into the suitable one of the two expressions above corresponds to calculating them for $t_1=1$, with $t_2=t-1$. This yields $$\begin{aligned}
\theta^{(s)}(x,y|g,a,t) =& \frac{ g x \; \theta^{(s)}\left(\frac{g a + g^2 x(1-a^2)}{1- a g x},y|g,a,t-1\right)
}{ g a + g^2 x (1-a^2)} \\
\theta^{(1)}(x,y|g,a,t) =& \frac{ g x \;\theta^{(1)}\left(\frac{g a + g^2 x(1-a^2)}{1- a g x},y|g,a,t-1\right)
}{ (1-a g x)( a + g x (1-a^2))}\\
\theta^{(0)}(x,y|g,a,t)=& \theta^{(0)}\left(\frac{g a + g^2 x(1-a^2)}{1- a g x},y|g,a,t-1\right) \nonumber\\
& -\theta^{(0)}\left(g a,y|g,a,t-1\right)\;.\end{aligned}$$
One can calculate the continuum Hamiltonian via an expansion in the lattice spacing. In the continuum limit the lattice length $\ell$ is taken to zero in such a way that the coupling constants $x,y,g$ are tuned towards their critical points $x_c,y_c,g_c$, which we determined in eq. (\[eq:xcycgc\]). We assume the following scaling around these values $$\begin{aligned}
\label{eq:lattice1}
x&= e^{- \ell X}= 1- \ell X +\frac{1}{2} \ell^2 X^2 +O(\ell^3) \\
y&= e^{- \ell Y} = 1- \ell Y +\frac{1}{2} \ell^2 Y^2+O(\ell^3)\\
g&= \frac{1}{a+1} e^{- \ell^2 \Lambda} = \frac{1}{a+1} (1- \ell^2 \Lambda) +O(\ell^4)\end{aligned}$$ with $a$, and hence $a_t$, kept constant. Since the length of each time step also scales to zero we introduce $t = \tau/\ell$. The scaling we chose is consistent with that in Ref. [@ambjorn_non-perturbative_1998], albeit with a slight modification to match the condition $g \to \frac{1}{a+1}$ in the $\ell \to 0$ limit. It also matches the scaling chosen in Ref. [@di_francesco_integrable_2000] up to a redefinition of the cosmological constant, $\Lambda \to a \Lambda/2$. $\Lambda$ is a numerical constant and such a redefinition is legitimate. However it will become clear that the scaling we chose is preferable when we compare our Hamiltonian to the literature.
We denote the continuum propagators as $$\begin{aligned}
\Theta(X,Y|\Lambda ,a, \tau)&= \lim_{\ell \to 0} \;\ell\; \theta(x,y|g,a,t) \end{aligned}$$ where $x,y,g,t$ are understood as the functions of $\ell$ defined in and $\theta$ without a superscript denotes any of the 3 generating functions $\theta^{(s)}$, $\theta^{(1)}$, and $\theta^{(0)}$. We can then expand $\theta(x,y|g,a,t)$ to first order in $\ell$. This leads to a heat kernel equation $$\begin{aligned}
\label{eq:HPDE}
\partial_\tau \Theta(X,Y| \tau , \sqrt{\Lambda},a) &=- H_X \Theta(X,Y| \tau , \sqrt{\Lambda},a) \;, \end{aligned}$$ with the Hamiltonians $$\begin{aligned}
H^{(s)}_X&= a X + (a X^2 -2 \Lambda )\partial_X \\
H^{(1)}_X&= 2 a X + (a X^2 -2 \Lambda )\partial_X \\
H^{(0)}_X&= (a X^2 -2 \Lambda )\partial_X \;.\end{aligned}$$ From these we can calculate the Hamiltonian acting on $G(L_1,L_2|\tau, \sqrt{\Lambda},a)$ with an inverse Laplace transform, $$\begin{aligned}
H^{(s)}_L&= -a L \partial_L^2 -a \partial_L + 2 \Lambda L \\
H^{(1)}_L&= -a L \partial_L^2 + 2 \Lambda L \\
H^{(0)}_L&= -a L \partial_L^2 -2 a \partial_L + 2 \Lambda L \;.\end{aligned}$$ It is worth pointing out that the Hamiltonian for the staircase boundary condition is the same as one could derive for an amplitude with two marked points, assuming again that the correct gluing rule is used.
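The expansion leading to $H^{(s)}_X$ can also be reproduced symbolically: inserting the scaling of eq. (\[eq:lattice1\]) into the one-step recursion for $\theta^{(s)}$ and expanding to first order in $\ell$ gives the shift of $X$ and the prefactor from which the Hamiltonian is read off (the marked and unmarked cases differ only in the prefactor and, for the unmarked case, the additional subtracted term). A minimal sympy sketch:

```python
import sympy as sp

ell, X, Lam, a = sp.symbols('ell X Lambda a', positive=True)

x = sp.exp(-ell*X)
g = sp.exp(-ell**2*Lam) / (1 + a)

x_shift = (g*a + g**2*x*(1 - a**2)) / (1 - a*g*x)   # argument of theta^(s) after one time step
prefac  = g*x / (g*a + g**2*x*(1 - a**2))           # prefactor in the theta^(s) recursion

logx = sp.series(sp.log(x_shift), ell, 0, 3).removeO()
X_shift = sp.expand(-logx / ell)
P = sp.series(prefac, ell, 0, 2).removeO()

print(sp.simplify(X_shift - (X - ell*(a*X**2 - 2*Lam))))   # 0
print(sp.simplify(P - (1 - ell*a*X)))                      # 0
# Hence Theta(X|tau) = (1 - ell a X) Theta(X - ell (a X^2 - 2 Lambda) | tau - ell) + O(ell^2),
# i.e. d_tau Theta = -[a X + (a X^2 - 2 Lambda) d_X] Theta, which is H_X^(s) above.
```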
We can now compare these Hamiltonians with the Hamiltonian derived for HL gravity in Ref. [@ambjorn_2d_2013], where we have reinstated a constant $\zeta=1/(4(1-\lambda_{HL}))$ that is absorbed into the loop length in that paper. The Hamiltonian from HL gravity actually has three possible forms, depending on the ordering of the operators. The ordering choice corresponds to the different possible boundary conditions that can be imposed in CDT. The three possible Hamiltonians are[^3] $$\begin{aligned}
H_{-1}&= - \zeta L \partial_L^2 + 2 \Lambda L \,,\\
H_{0}&=- \zeta L \partial_L^2 - \zeta \partial_L +2 \Lambda L \,,\\
H_{1}&=- \zeta L \partial_L^2 -2\zeta \partial_L +2 \Lambda L \,.\end{aligned}$$ Identifying $\zeta$ with $a$, there is a complete matching, with $H_{-1}$ matching the Hamiltonian for the single marked loop $H^{(1)}_L$, $H_{0}$ matching the one for the staircase boundary conditions $H^{(s)}_L$, and $H_{1}$ matching the Hamiltonian for an unmarked loop $H^{(0)}_L$.
Conclusions
===========
In this paper we have applied the CDT prescription for quantisation to a discretisation of the action of projectable HL gravity instead of the Einstein-Hilbert action. We have calculated the corresponding continuum Hamiltonians for different boundary conditions and we have shown that they match exactly the Hamiltonians one obtains from the canonical quantisation of HL gravity for different orderings of the operators.
This result is far from surprising and it seems to support the idea that the introduction of a lattice in the quantisation scheme leaves continuum physics unaffected even when the lattice is dynamical. However, this issue is more subtle and this can be better appreciated when our results are interpreted in conjunction with the result of Ref. [@ambjorn_2d_2013]. It was shown there that the continuum Hamiltonian for standard 2d CDT agrees with the Hamiltonian for projectable 2d HL gravity up to a rescaling of the loop length $L$ and the cosmological constant $\Lambda$ in HL gravity by a factor $\zeta=1/[4(1-\lambda_{HL})]$. In other words, the starting action did not have a preferred foliation but the final Hamiltonian did, presumably due to the fact that the configuration space in CDT is restricted to foliated triangulations. Hence, in that case lattice quantisation does seem to leave an imprint on continuum physics.
Combining these two results suggests strongly that if the lattice quantisation scheme is compatible with the symmetries of the original action then it does not affect continuum physics, whereas if the introduction of the lattice introduces further restrictions to the configuration space, then it actually modifies the continuum theory. In standard CDT the requirement that the triangulation be foliated is incompatible with the symmetries of the Einstein-Hilbert action (full diffeomorphisms), and this seems to lead to the generation of the extrinsic curvature terms in the continuum Hamiltonian.
Considering the process of taking the continuum limit as a form of renormalisation, one can compare this situation with work on the renormalisation group flow in HL gravity. The large number of couplings of HL gravity in more than 2 dimensions make a complete study challenging, but first studies of part of the parameter space have been done [@contillo_renormalization_2013; @rechenberger_functional_2013; @dodorico_asymptotic_2014]. Of particular interest is that they show that the isotropic plane $\lambda_{HL}=1$, which contains GR, is not a fixed plane of the flow [@contillo_renormalization_2013]. Hence, one expects to leave this plane through the generation of symmetry breaking terms.
As already mentioned, the continuum Hamiltonian(s) we derived here are in full agreement with the Hamiltonian(s) of HL gravity, whereas they only agree with the continuum Hamiltonian(s) of standard CDT derived in Ref. [@ambjorn_2d_2013] up to a rescaling of parameters. In the continuum theory this rescaling would correspond to a redefinition of the coupling constant and the cosmological constant, and it could also be seen as a reparametrization of time or the spatial coordinate. Hence, as already discussed in the introduction, it is only allowed without loss of generality if there is no coupling to matter. More generically, it would correspond to a fixing of the HL coupling $\lambda_{HL}$ \[to a value different than that corresponding to general relativity\]. This is a salient point that certainly deserves further investigation.
Some notes of caution are in order. Firstly, gravity in $2$ dimensions is significantly different from that in higher dimensions, and hence special care needs to be taken in trying to generalize results in $2$d to higher dimensions. For example, 2-dimensional general relativity and HL gravity are topological theories and hence quantisation is trivial. In fact, HL gravity is renormalizable in 2d without any anisotropy between space and time. Secondly, the discretisation of the extrinsic curvature squared term is not unique. Our choice was guided by a balance between physical motivation and solvability. An appropriate discretisation should lead to a good continuum limit and the one we chose manifestly does. However, alternative discretisation schemes do exist [@hamber_higher_1984; @ambjorn_quantum_1993].
Clearly, it would be very interesting to generalise our results to higher dimensions. While this might not be possible analytically, it can be done numerically. Some results for simulations of CDT plus higher curvature terms in $2+1$d already exist [@anderson_quantizing_2012]. It would also be particularly interesting to reexamine the discrete RG flow for CDT in $3$ or $4$ dimensions [@ambjorn_renormalization_2014], taking into account extrinsic curvature and higher derivative terms.
The authors would like to thank Jan Ambjørn and Renate Loll for helpful discussions. The work leading to these results has received funding from the European Research Council under the European Union Seventh Framework Programme (FP7/2007-2013) / ERC Grant Agreement n. 306425 “Challenging General Relativity”. S.W. acknowledges financial support provided under the Royal Society University Research Fellowship (UF120112), the Nottingham Advanced Research Fellowship (A2RHS2) and the Royal Society Project (RG130377) grants.
[^1]: Other restricted versions of HL gravity exist as well [@horava_quantum_2009; @Horava:2010zj; @Sotiriou:2010wn; @Vernieri:2011aa], but we will not discuss them here.
[^2]: The superscript $^{(2)}$ indicates the two marked points, similarly $^{(s)}$ indicates the staircase boundaries, $^{(1)}$ indicates a single marked point and $^{(0)}$ indicates no marked points.
[^3]: The subscripts here are identical to those in Ref. [@ambjorn_2d_2013], which were chosen to reflect the measure on which the Hamiltonian is hermitian.
---
abstract: 'We present initial results from a redshift survey carried out with the Low Resolution Imaging Spectrograph on the 10 m W. M. Keck Telescope in the Hubble Deep Field. In the redshift distribution of the 140 extragalactic objects in this sample we find 6 strong peaks, with velocity dispersions of ${\sim}400$[[km s$^{-1}$]{}]{}. The areal density of objects within a particular peak, while it may be non-uniform, does not show evidence for strong central concentration. These peaks have characteristics (velocity dispersions, density enhancements, spacing, and spatial extent) similar to those seen in a comparable redshift survey in a different high galactic latitude field (Cohen et al 1996), confirming that the structures are generic. They are probably the high redshift counterparts of huge galaxy structures (“walls”) observed locally.'
author:
- 'Judith G. Cohen, Lennox L. Cowie, David W. Hogg, Antoinette Songaila, Roger Blandford, Esther M. Hu and Patrick Shopbell'
title: REDSHIFT CLUSTERING IN THE HUBBLE DEEP FIELD
---
INTRODUCTION
============
The Hubble Deep Field (HDF hereafter; Williams et al 1996) has been surveyed to extraordinary depths, with point source detection limits around 29 mag in the $V$ and $I$ bands, in an intensive campaign by the Hubble Space Telescope in 1995 December. The images represent the deepest images ever taken in the optical and have already provided the basis for studies of deep visual counts (Williams et al 1996), faint object morphology (Abraham et al 1996), gravitational lensing (Hogg et al 1996), and high-redshift objects (Steidel et al 1996; Clements & Couch 1996). These studies represent only the beginning of a large number of scientific projects possible with the HDF data.
In this paper we present the first results of a ground based spectroscopic survey of galaxies in the HDF with the Keck Telescope. These observations were taken in order to provide a database of object redshifts for the use of the astronomical community and in order to expand the faint object redshift surveys of Cowie et al (1996) and Cohen et al (1996) to an additional field.
We assume an Einstein - de Sitter universe ($q_0 = 0.5$) with a Hubble constant 100$h$ [km s$^{-1}$]{}Mpc$^{-1}$.
REDSHIFT SAMPLE
===============
The HDF was selected on the basis of high galactic latitude, low extinction, and various positional constraints described by Williams et al (1996). Redshifts were acquired with the Low Resolution Imaging Spectrograph (Oke et al 1995) on the 10 m W. M. Keck Telescope over two rectangular strips 2 x 7.3 arcmin$^2$ centered on the HST field in 1996 January, March and April. One strip was aligned east-west while the second was aligned at a position angle of 30$^{\circ}$ to maximize the slit length that fell within the HDF itself, where the two strips overlap.
The sample selection is different in each of the two strips. The photometry and the definition of the sample for spectroscopic work are described in Paper II of this series, Cowie et al (1997). Plans exist to complete the sample in a number of photometric bandpasses, but in view of the great interest in the HDF and the many follow up studies in progress, we present this data before the complete sample is available.
Table 1 presents the redshifts of 140 extragalactic objects, about half of which are in the HDF itself and the remainder in the flanking fields. The median redshift $z$ of the extragalactic objects in the present sample is $z=0.53$. Only three are quasars or broad-lined AGNs. 12 Galactic stars were found as well. The radial velocity precision of our redshifts is unusually high for a deep redshift survey. We estimate that the uncertainty in $z$ for those objects with redshifts considered secure and accurate is $\approx 300$ [km s$^{-1}$]{}. Coordinates, crude ground based $R$ magnitudes in a 3 arc-sec diameter intended for object identification only, and redshifts are given in Table 1.
A more detailed account of the photometric and spectroscopic properties of the entire sample including photometry from $U$ through $K$ as well as a discussion of incompleteness in the sample selection and redshift identification is in preparation. These incompletenesses ought not to affect the present work.
REDSHIFT DISTRIBUTION
=====================
Velocity Peaks
--------------
The redshift histogram over the region 0.2 $< z < 0.9$ is shown in Figure 1. It shows clear evidence of clustering. Velocity peaks were identified by choosing bins of variable width and centers so as to maximize their significance relative to occurring by chance in a smoothed velocity distribution (smoothing width 20,000 [km s$^{-1}$]{}) derived from the present sample (c.f. Cohen et al 1996). Using this procedure we isolate 6 peaks significant at better than 99.5 percent confidence (see Table 2). The fourth column in Table 2 gives a statistical significance parameter $X_{max}$. The fifth and sixth columns give the comoving transverse size corresponding to 1 arc-min and the comoving radial distance corresponding to ${\Delta}z = 0.001$. The density in velocity space within these peaks exceeds the average density by a factor that ranges from 4 to as high as 30 for the peak at $z_p = 0.321$. 40 percent of the total sample lies within these peaks. Larger peaks including outliers are also highly significant. The local velocity dispersions for these peaks are strikingly small, ranging from 170 [[km s$^{-1}$]{}]{} to 600 [km s$^{-1}$]{}. These are upper bounds because they are comparable with our measurement errors. They are also similar to the results obtained in a high latitude field for which we carried out a deep redshift survey with LRIS earlier (Cohen et al 1996).
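The bin search can be sketched schematically as follows. This fragment is only an illustrative reconstruction, not the code used for the survey: it uses fixed rather than variable-width bins, and the bin width, smoothing scale and significance threshold are stand-ins for the quantities quoted above.

```python
import numpy as np
from scipy.stats import gaussian_kde, poisson

def find_redshift_peaks(z, z_lo=0.2, z_hi=0.9, dz_bin=0.004, p_thresh=0.005):
    """Flag redshift bins whose occupancy is improbable under a heavily
    smoothed estimate of the overall N(z) (a stand-in for the 20,000 km/s
    smoothed distribution used in the text)."""
    zs = np.asarray(z)
    zs = zs[(zs > z_lo) & (zs < z_hi)]
    background = gaussian_kde(zs, bw_method=0.5)   # heavily smoothed N(z)

    peaks = []
    edges = np.arange(z_lo, z_hi + dz_bin, dz_bin)
    for lo, hi in zip(edges[:-1], edges[1:]):
        n_obs = np.sum((zs >= lo) & (zs < hi))
        n_exp = len(zs) * background.integrate_box_1d(lo, hi)
        p_chance = poisson.sf(n_obs - 1, n_exp)    # P(>= n_obs | smooth N(z))
        if n_obs > 2 and p_chance < p_thresh:
            peaks.append((0.5 * (lo + hi), int(n_obs), p_chance))
    return peaks
```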
By itself, this sample is too small to measure the two point correlation function in velocity space. However, there is a $5\sigma$ excess correlation in the 500–1000[km s$^{-1}$]{} interval with a correlation scale $V_0\sim600\pm200$[km s$^{-1}$]{}([c.f. ]{}Carlberg [et al. ]{}(1997), Le Fèvre [et al. ]{}1996) which can be converted into comoving distance along the line of sight using the data in the sixth column of Table 2. There is no evidence for correlation with velocity differences in excess of 1000 [km s$^{-1}$]{}. No distinction between low and high redshift is discernible. There is no evidence for periodicity in the peak redshifts ([c.f. ]{}Broadhurst et al 1990).
Morphology Correlation
----------------------
If we make a simple morphological separation of the galaxies in the redshift survey into spirals, ellipticals and Peculiar/Mergers and use the HST images of the HDF and of the flanking fields to classify these galaxies (c.f. van den Bergh et al 1996), we find there is no indication of any difference in population between the background field galaxies and those in the redshift peaks. In particular, the redshift peaks do not contain a detectable excess of elliptical galaxies.
ANGULAR DISTRIBUTION
====================
The angular distribution of the entire sample and of the galaxies in the two most populous velocity peaks is shown in Figure 2. The peculiar shape is caused by the use of two LRIS strips with different position angles. The outline of the area covered is indicated by the solid lines, while the outline of the area of the WFCII observations in the HDF is indicated by the dashed lines. The galaxies associated with the 6 velocity peaks mostly exhibit a non-uniform distribution, though none show the strong central concentration characteristic of clusters. The redshift sample must be completed before it is possible to make quantitative statements.
Areal Density
-------------
The areal density of galaxies brighter than $0.1L^*$ (as defined at $K$) is computed for redshift peaks in the 0 hour field (Cohen et al 1996) and for the two largest peaks in the HDF, where the $K$ photometry is not fully assembled yet. Corrections have been applied for galaxies below the magnitude cutoff of the survey assuming a flat luminosity function at the faint end. To investigate a local analog to these structures, this is repeated for the Local Group, for the Virgo cluster (within a radius of 6$^{\circ}$ from its center) using the survey of Kraan-Korteweg (1981) and within the core of the Coma Cluster using data from Thompson & Gregory (1980). In these local structures, the luminosity is determined at $B$ rather than at $K$. The results are given in Table 3, and suggest that the best local analog is the region of the Virgo cluster within 6$^{\circ}$ of its center; although the areal density is a reasonable match, the velocity dispersion in the high redshift peaks is lower, often significantly lower, than one sees in the central region of the Virgo cluster.
DISCUSSION
==========
Effects of sample definition decisions
--------------------------------------
The conclusion of Cohen et al (1996), i.e., that a large fraction of the galaxy population at redshifts up to unity lie in low velocity-dispersion structures, was based on a single field, but the confirmation of strong redshift-space clustering in the HDF suggests that the results are generic. The clustering seen here is stronger than that seen in other local and high-redshift surveys (Landy et al 1996, LeFèvre 1996, etc.). The difference is attributed most importantly to the high sampling density in a small field.
Structure Morphology
--------------------
At one level, these peaks may be no more than a manifestation of the fact that galaxies are correlated in both configuration and velocity space. The connection between spatial and velocity correlation functions is quite model-dependent ([e.g. ]{}Brainerd [et al. ]{}1996). Conversely, if we can gain an empirical understanding of this relationship, it can discriminate among cosmogonic models. We briefly comment upon some possibilities.
One explanation is that the velocity peaks represent structures in velocity space and are not prominent in real space. Such effects are sometimes seen in numerical simulations, [e.g. ]{}Park & Gott 1991, Bagla & Padmanabhan 1994. For example, they might be a “backside infall” into a large structure where the Hubble expansion opposes the infall so as to give more or less uniform recession velocity over a large interval of radial distance. The generic kinematic difficulty with this explanation is that in order for features like this not to have many more descendants in which the velocities have long ago crossed, the characteristic lifetimes must be a significant fraction of the age of the universe which, in turn, limits the mass density contrast to small values. Given that half of the galaxies lie in these structures, a large bias parameter must be invoked.
Alternatively, we may be observing structures that are spatially compact and have the shapes of spheres, filaments, or walls. We can argue against these features being clusters on the following grounds: (i) They do not exhibit central concentrations ([c.f. ]{}Sec. 4). (ii) The velocity dispersions are too small, $200-600$[[km s$^{-1}$]{}]{} as opposed to $600-1200$[km s$^{-1}$]{}. (iii) The space density of rich clusters is too low; the Palomar Deep Cluster Survey (Postman et al 1996) finds only 7 clusters per square degree out to $z\sim0.6$ with richness class $\ge1$. (iv) The redshift peaks do not show the excess of ellipticals characteristic of rich clusters (Dressler 1980).
Small quasi-spherical groups are a possibility. The mean free path is $\sim100 h^{-1}$ comoving Mpc. The observed structures extend laterally over at least $\sim6$ arc-min or $\sim2 h^{-1}$ Mpc, implying a space density $\sim 3 \times 10^{-3} h^{3}$ Mpc$^{-3}$, $\sim$1/3 the density of $L^*$ galaxies. Alternatively, we can associate the tentative velocity correlation scale of $V_0\sim600$[km s$^{-1}$]{} with a radial extent of $\sim4 h^{-1}$ Mpc and a lateral angular scale of $\sim12$ arc-minutes at $z \sim0.5$.
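One way to reproduce the quoted space density is to treat each structure as a circular target whose radius is half the quoted lateral extent and to require one such target per mean free path; the geometric assumption here is not taken from the text and serves only to illustrate the arithmetic.

```python
import numpy as np

r_target = 1.0    # assumed target radius, h^-1 Mpc (half the ~2 Mpc lateral extent)
mfp = 100.0       # mean free path, h^-1 comoving Mpc, as quoted above

n = 1.0 / (np.pi * r_target**2 * mfp)   # from n * (pi r^2) * mfp = 1
print(n)                                # ~3e-3 h^3 Mpc^-3
```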
Filaments and walls have both been described in the theoretical literature ([e.g. ]{}Bond et al 1996, Shandarin et al 1995). Walls dominate if there is excess power on large scales and are observed locally ([e.g. ]{}in the Local Supercluster, deVaucouleurs 1975, and in local redshift surveys, de Lapparant et al 1986, Landy et al 1996). On this basis we speculate that the structures we are observing are actually walls.
There are two obvious follow up investigations which can address this hypothesis. The first is to perform similar redshift surveys in neighboring deep fields. If we assume that the wall normal is inclined at an angle $\theta$ to the line of sight and that the constituent galaxies move with the Hubble flow in two dimensions, then the variation of mean redshift with angular separation of the second survey $\Delta{\phi}$ and polar angle on the sky $\psi$ is $$\Delta z=2[(1+z)^{3/2}-(1+z)]\Delta{\phi}\tan\theta\sin\psi$$ For $z=0.5$, this is $\Delta z\sim2\times10^{-4}$ per arcminute and in order to see redshift displacements in excess of the velocity dispersion, the additional surveys must be displaced by $\sim20'$. With several lines of sight, it might be possible to test the above relation.
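As a quick numerical check of the scaling quoted above, the relation can be evaluated directly; the factor $\tan\theta\sin\psi$ is set to unity here purely for illustration.

```python
import numpy as np

def delta_z_per_arcmin(z, tan_theta_sin_psi=1.0):
    """Delta z per arcminute of angular offset, following the relation in the text."""
    arcmin = np.deg2rad(1.0 / 60.0)   # one arcminute in radians
    return 2.0 * ((1.0 + z)**1.5 - (1.0 + z)) * arcmin * tan_theta_sin_psi

print(delta_z_per_arcmin(0.5))   # ~2e-4, matching the estimate quoted above
```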
Secondly, wide field, multiband photometric surveys to the depth of the redshift survey are clearly important to see if there are indeed morphological and luminosity function differences between the galaxies within and outside the velocity peaks. Both investigations are underway.
We thank the Hubble Deep Field team, led by Bob Williams, for planning, taking, reducing, and making public the HDF images. We are grateful to George Djorgovski, Keith Matthews, Gerry Neugebauer, Paddy Padmanabhan, Mike Pahre, Tom Soifer and Jim Westphal for helpful conversations. The entire Keck user community owes a huge debt to Bev Oke, Jerry Nelson, Gerry Smith, and many other people who have worked to make the Keck Telescope a reality. We are grateful to the W. M. Keck Foundation, and particularly its president, Howard Keck, for the vision to fund the construction of the W. M. Keck Observatory. Support by NASA and the NSF is greatly appreciated.
[llll|llll|llll]{} 36 21.4 & 12 27.1 & — & 0.398 & 36 22.0 & 12 37.7 & 21.7 & 0.630 & 36 22.2 & 12 41.9 & 20.8 & 0.498\
36 22.7 & 13 00.2 & 20.0 & 0.472 & 36 22.9 & 13 46.9 & 20.4 & 0.485 & 36 24.9 & 13 01.0 & 20.3 & 0.518\
36 26.5 & 12 52.6 & 20.6 & 0.557 & 36 27.7 & 12 41.3 & 20.8 & 0.518 & 36 28.1 & 12 38.0 & 21.1 & 0.5185\
36 29.8 & 14 03.8 & 21.4 & 0.793 & 36 29.9 & 12 25.0 & 22.6 & 0.410 & 36 30.2 & 12 08.8 & 20.6 & 0.456\
36 31.0 & 12 36.9 & 21.3 & 0.456 & 36 31.7 & 12 41.1 & 21.3 & 0.528 & 36 32.6 & 12 44.1 & 21.3 & 0.562\
36 33.4 & 13 20.3 & 21.1 & 0.843 & 36 33.04 & 11 35.0 & 19.4 & 0.080 & 36 33.6 & 11 56.8 & 21.8 & 0.458\
36 34.4 & 12 41.5 & 22.3 & 1.219 & 36 34.8 & 12 24.5 & 19.5 & 0.562 & 36 36.1 & 13 20.3 & 22.1 & 0.680\
36 36.3 & 13 41.2 & 21.4 & 0.556 & 36 36.78 & 11 36.1 & 19.4 & 0.078 & 36 37.2 & 12 53.1 & 20.8 & 0.485\
36 37.4 & 12 41.0 & 20.5 & 0.458 & 36 37.6 & 11 49.5 & 22.1 & 0.838 & 36 38.89 & 12 20.7 & 22.9 & 0.609\
36 39.8 & 12 07.5 & 21.8 & 1.015 & 36 40.80 & 12 04.4 & 23.7 & 1.010 & 36 41.56 & 11 33.1 & 20.5 & 0.089\
36 41.85 & 12 06.3 & 21.9 & 0.432 & 36 42.85 & 12 17.6 & 21.3 & 0.454 & 36 43.07 & 12 43.2 & 23.0 & 0.847\
36 43.55 & 12 19.4 & 23.4 & 0.752 & 36 43.69 & 13 57.7 & 21.6 & 0.201 & 36 43.71 & 11 44.0 & 22.3 & 0.765\
36 43.88 & 12 51.2 & 21.8 & 0.557 & 36 44.09 & 12 48.9 & 22.0 & 0.555 & 36 44.11 & 12 41.3 & 24.2 & 0.873\
36 44.28 & 11 34.3 & 23.2 & 1.013 & 36 44.59 & 12 28.8 & 24.2 & 2.268 & 36 45.32 & 12 14.5 & 21.4 & 0\
36 45.86 & 12 02.4 & 24.6 & 0.679 & 36 46.10 & 11 42.9 & 22.6 & 1.016 & 36 46.25 & 14 05.6 & 22.6 & 0.960\
36 46.44 & 11 52.3 & 22.9 & 0.5035 & 36 46.45 & 14 08.6 & 23.1 & 0.130 & 36 46.68 & 12 38.1 & 23.0 & 0.320\
36 46.78 & 11 45.9 & 23.1 & 1.059 & 36 47.21 & 12 31.8 & 23.4 & 0.421 & 36 47.99 & 13 10.1 & 21.5 & 0.475\
36 48.5 & 13 29.2 & 23.9 & 0.958 & 36 48.51 & 11 42.3 & 23.2 & 0.962 & 36 49.29 & 13 12.3 & 22.7 & 0.478\
36 49.34 & 13 47.9 & 19.0 & 0.089 & 36 49.42 & 14 07.8 & 22.8 & 0.752 & 36 49.55 & 12 58.8 & 22.6 & 0.475\
36 49.64 & 13 14.2 & 22.4 & 0.475 & 36 50.15 & 12 40.8 & 21.4 & 0.474 & 36 50.18 & 12 46.9 & 22.8 & 0.680\
36 50.63 & 10 59.9 & 21.9 & 0.474 & 36 50.73 & 12 56.9 & 23.1 & 0.320 & 36 51.0 & 13 21.6 & 20.8 & 0.199\
36 51.02 & 10 32.2 & 21.2 & 0.410 & 36 51.35 & 13 01.6 & 22.2 & 0.089 & 36 51.61 & 12 21.3 & 22.3 & 0.299\
36 51.69 & 13 54.8 & 22.0 & 0.557 & 36 52.03 & 14 58.3 & 22.4 & 0.358 & 36 52.39 & 10 36.9 & 22.2 & 0.321\
36 52.59 & 12 21.0 & 24.0 & 0.401 & 36 52.68 & 13 55.7 & 22.7 & 1.355 & 36 52.71 & 14 32.9 & 21.2 & 0\
36 52.83 & 14 54.7 & 22.7 & 0.463 & 36 52.85 & 14 45.1 & 20.1 & 0.322 & 36 53.33 & 12 35.2 & 23.4 & 0.560\
36 53.54 & 15 26.0 & 18.7 & 0 & 36 53.57 & 13 09.4 & 22.1 & 0 & 36 53.77 & 12 55.0 & 22.0 & 0.642\
36 54.28 & 14 35.1 & 22.8 & 0.577 & 36 54.65 & 13 29.1 & 20.0 & 0 & 36 55.44 & 13 54.5 & 22.4 & 1.148\
36 55.45 & 12 46.4 & 23.1 & 0.790 & 36 55.50 & 14 00.9 & 23.9 & 0.559 & 36 56.26 & 12 42.4 & 19.9 & 0\
36 56.33 & 12 10.4 & 23.7 & 0.321 & 36 56.56 & 12 46.8 & 21.7 & 0.5185 & 36 57.14 & 12 27.1 & 23.4 & 0.561\
36 57.22 & 13 00.8 & 22.3 & 0.474 & 36 57.64 & 13 16.5 & 23.8 & 0.952 & 36 57.98 & 13 01.6 & 23.0 & 0.320\
36 58.22 & 12 15.2 & 22.9 & 1.020 & 36 58.29 & 15 49.4 & 21.7 & 0.457 & 36 58.56 & 12 23.0 & 24.2 & 0.682\
36 58.64 & 14 39.1 & 23.3 & 0.512 & 36 58.66 & 12 53.2 & 22.2 & 0.321 & 36 58.74 & 14 35.6 & 21.9 & 0.678\
36 58.76 & 16 38.9 & 20.0 & 0.299 & 36 59.43 & 12 22.7 & 24.5 & 0.472 & 36 59.79 & 14 50.6 & 22.5 & 0.761\
37 00.41 & 14 06.7 & 21.5 & 0.423 & 37 00.47 & 12 35.9 & 24.5 & 0.562 & 37 01.8 & 13 23.8 & 20.7 & 0.408\
37 01.81 & 15 10.9 & 22.9 & 0.938 & 37 02.3 & 13 43.0 & 21.3 & 0.559 & 37 02.5 & 13 48.3 & 22.7 & 0.513\
37 02.5 & 14 02.7 & 22.1 & 1.243 & 37 02.70 & 15 44.8 & 20.8 & 0.514 & 37 02.81 & 14 24.4 & 21.5 & 0.512\
37 03.21 & 16 46.9 & 23.0 & 0.744 & 37 03.6 & 13 54.3 & 21.7 & 0.745 & 37 03.82 & 14 42.0 & 22.3 & 0.475\
37 03.91 & 15 23.8 & 22.6 & 0.377 & 37 04.17 & 16 25.3 & 22.8 & 0.474 & 37 04.52 & 16 52.2 & 21.1 & 0.377\
37 04.56 & 14 30.0 & 22.0 & 0.561 & 37 04.73 & 14 55.8 & 21.2 & 0 & 37 04.91 & 15 47.4 & 23.4 & 0.533\
37 05.0 & 12 11.2 & 22.5 & 0.386 & 37 05.66 & 15 25.7 & 22.7 & 0.503 & 37 06.0 & 13 33.9 & 21.6 & 0.753\
37 06.81 & 14 30.3 & 21.2 & 0 & 37 07.0 & 12 14.7 & 21.4 & 0.655 & 37 07.0 & 11 58.5 & 22.4 & 0.593\
37 07.73 & 16 06.1 & 22.8 & 0.936 & 37 08.01 & 16 31.7 & 22.7 & 0 & 37 08.04 & 16 59.6 & 21.5 & 0.458\
37 08.1 & 12 53.2 & 21.9 & 0.838 & 37 08.1 & 13 21.6 & 22.7 & 0.785 & 37 08.20 & 14 54.8 & 22.8 & 0.565\
37 08.25 & 15 15.3 & 22.5 & 0.839 & 37 08.53 & 15 02.2 & 22.7 & 0.570 & 37 08.60 & 16 12.4 & 21.3 & 0\
37 08.8 & 12 02.8 & 22.6 & 0.855 & 37 09.46 & 14 24.3 & 22.0 & 0.476 & 37 09.79 & 15 25.0 & 20.0 & 0.597\
37 10.1 & 13 20.5 & 21.7 & 0.320 & 37 11.85 & 16 59.7 & 23.5 & 1.142 & 37 12.4 & 13 58.2 & 22.6 & 0.848\
37 12.58 & 15 43.4 & 22.3 & 0.533 & 37 13.0 & 13 57.2 & 22.0 & 1.016 & 37 13.59 & 15 12.0 & 22.1 & 0.524\
37 14.8 & 13 35.4 & 22.5 & 0.897 & 37 16.1 & 13 54.2 & 21.5 & 0.476 & 37 16.32 & 16 30.4 & 23.4 & 0\
37 16.4 & 13 11.2 & 21.9 & 0.898 & 37 17.0 & 13 57.4 & 20.7 & 0.336 & 37 16.52 & 16 44.7 & 22.7 & 0.557\
37 18.28 & 15 54.1 & 21.6 & 0.476 & 37 18.3 & 13 48.6 & 22.1 & 0.480 & 37 18.4 & 13 22.5 & 20.6 & 0.4755\
37 18.60 & 16 05.0 & 22.5 & 0.558 & 37 22.25 & 16 13.1 & 22.6 & 0
[crcrrr]{} 0.321 & 8 & 170 & 22 & 0.22 & 2.0\
0.457 & 7 & 310 & 10 & 0.31 & 1.7\
0.475 & 15 & 315 & 21 & 0.31 & 1.6\
0.516 & 8 & 595 & 8 & 0.33 & 1.6\
0.559 & 14 & 420 & 21 & 0.34 & 1.5\
0.680 & 5 & 265 & 8 & 0.40 & 1.4
[crrrc]{} 0.392 & 3 & 1.03 & 3 & 465\
0.429 & 14 & 1.19 & 13 & 615\
0.581 & 23 & 1.86 & 19 & 410\
0.675 & 8 & 2.30 & 7 & 405\
0.766 & 7 & 2.72 & 7 & 670\
0.475 (HDF) & 7 & 0.51 & 18 & 315\
0.559 (HDF) & 7 & 0.64 & 17 & 420\
Local Structures\
Local Group & 4 & 1.3 & 3 & $<100$\
Virgo & 122 & 8.5 & 14 & 670\
Coma & 248 & 2.0 & 125 & 1080
Abraham, R.G., Tanvir, N.R., Santiago, B.X., Ellis, R.S., Glazebrook, K. & van den Bergh, S., 1996, MNRAS, 279, L47

Bagla, J.S. & Padmanabhan, T., 1994, MNRAS, 266, 227

Bellanger, C. & de Lapparent, V., 1995, ApJ, 455, L103

Bingelli, B., Sandage, A. & Tammann, G.A., 1985, AJ, 90, 1681

Bond, J.R., Kofman, L. & Pogosyan, D., 1996, Nature, 380, 603

Brainerd, T.G., Bromley, B.C., Warren, M.S. & Zurek, W.H., 1996, ApJL, 464, L103

Broadhurst, T., Ellis, R., Koo, D. & Szalay, A., 1990, Nature, 343, 726

Carlberg, R.G., Cowie, L.L., Songaila, A. & Hu, E.M., 1997, in press

Clements, D.L. & Couch, W.J., 1996, MNRAS, in press

Cohen, J.G., Hogg, D.W., Pahre, M.A. & Blandford, R., 1996, ApJ, 462, L9

Colless, M. & Dunn, A.M., 1996, ApJ, 458, 435

Cowie, L.L. et al, 1997, in preparation

Cowie, L.L., Songaila, A., Hu, E.M. & Cohen, J.G., 1996, AJ, in press

de Lapparent, V., Geller, M. & Huchra, J.P., 1986, ApJ, 302, L1

de Vaucouleurs, G.H. 1975, in [*Galaxies and the Universe*]{}, ed. A.Sandage, M.Sandage, J. Kristian & G.Tammann, pg. 557

Dressler, A. 1980, ApJ, 236, 351

Hogg, D.W., Blandford, R.D., Kundić, T., Fassnacht, C.D. & Malhotra, S., 1996, ApJ, in press

Kraan-Korteweg, R., 1981, AA, 104, 280

Landy, S.D., Shectman, S.A., Lin, H., Kirshner, R.P., Oemler, A.A. & Tucker, D., 1996, ApJ, 456, L1

LeFèvre, O., Hudon, D., Lilly, S.J., Crampton, D., Hammer, F. & Tresse, L., 1996, ApJ, 461, 534

Oke, J.B., et al, 1995, PASP 107, 3750

Park C.B. & Gott, J.R. 1991, MNRAS 249, 288

Postman M.A., Lubin L.M., Gunn J.E., Oke J.B., Hoessel J.G., Schneider D.P., Christensen J.A., 1996, AJ, 111, 615

Shandarin, S.F., Melott, A.L., Mcdavitt, K., Pauls, J.L. & Tinker, J., 1995, PhysRevL, 75, 7

Steidel, C.C., Giavalisco, M., Dickinson, M. & Adelberger, K.L., 1996, AJ, in press

Thompson, L.A. & Gregory, S.A., 1980, ApJ, 242, 1

van den Bergh, S., Abraham, R.G., Ellis, R.S., Tanvir., N.R. & Santiago, B.X., 1996, preprint

Williams, R.E., Blacker, B.S., Dickenson, M., Ferguson, H.C., Fruchter, A.S., Giavalisco, M., Gilliland, R.L., Lucas, R.A., McElroy, D.B., Petro, L.D. & Postman, M., 1996, in [*Science with the Space Telescope - II*]{}, ed. P.Benevenuti, F.D.Machetto & E.J.Schreier
---
abstract: 'This paper investigates the stochastic permanence of malaria and the existence of a stationary distribution for the stochastic process describing the disease dynamics over sufficiently longtime. The malaria system is highly random with fluctuations from the disease transmission and natural deathrates, which are expressed as independent white noise processes in a family of stochastic differential equation epidemic models. Other sources of variability in the malaria dynamics are the random incubation and naturally acquired immunity periods of malaria. Improved analytical techniques and local martingale characterizations are applied to describe the character of the sample paths of the solution process of the system in the neighborhood of an endemic equilibrium. Emphasis of this study is laid on examination of the impacts of (1) the sources of variability- disease transmission and natural death rates, and (2) the intensities of the white noise processes in the system on the stochastic permanence of malaria, and also on the existence of the stationary distribution for the solution process over sufficiently long time. Numerical simulation examples are presented to illuminate the persistence and stochastic permanence of malaria, and also to numerically approximate the stationary distribution of the states of the solution process.'
address: 'Department of Mathematical Sciences, Georgia Southern University, 65 Georgia Ave, Room 3042, Statesboro, Georgia, 30460, U.S.A. E-mail:dwanduku@georgiasouthern.edu;wandukudivine@yahoo.com[^1] '
author:
- Divine Wanduku
title: 'The stochastic permanence of malaria, and the existence of a stationary distribution for a class of malaria models'
---
Potential endemic steady state, permanence in the mean, basic reproduction number, Lyapunov functional technique, intensity of white noise process
Introduction\[ch1.sec0\]
========================
According to the WHO estimates released in December $2016$, about 212 million cases of malaria occurred in $2015$, resulting in about 429 thousand deaths. In addition, the highest mortality rates were recorded for the sub-Saharan African countries, where about $90\%$ of the global malaria cases occurred, which resulted in about $75\%$ of the global malaria related deaths. Moreover, more than two thirds of these global malaria related deaths were children five years old or younger. Furthermore, in spite of the fact that malaria is a curable and preventable disease, and despite all technological advances to control and contain the disease, malaria poses a serious menace to human health and to the welfare of many economies in the world. In fact, WHO reported in $2015$ that nearly half of the world’s population was at risk of malaria, and the disease was actively and continuously transmitted in about $91$ countries in the world. Moreover, the most severely affected economies are the sub-Saharan countries, and the most vulnerable human subpopulations include children younger than five years old, pregnant women, people suffering from HIV/AIDS, and travellers from regions with low malaria transmission to malaria endemic zones[@WHO; @CDC]. These facts serve as motivation to foster research about malaria and to understand all aspects of the disease that lead to its containment and to the amelioration of its burdens.
Mathematical modeling is one special way of understanding malaria, and malaria models go as far back as 1911 with Ross[@ross] who studied mosquito control. Several other authors such as [@wanduku-biomath; @macdonald; @ngwa-shu; @hyun; @may; @kazeem; @gungala; @anita; @tabo] have also made strides in the understanding of malaria mathematically. Malaria is a vector-borne disease caused by protozoa (a micro-parasitic organism) of the genus *Plasmodium*. There are several different species of the parasite that cause disease in humans namely: *P. falciparum, P. viviax, P. ovale* and *P. malariae*. However, the species that causes the most severe and fatal disease is the *P. falciparum*. Malaria is transmitted between humans by the infectious bite of a female mosquito of the genus *Anopheles*. The complete life cycle of the malaria plasmodium entails two-hosts: (1) the female anopheles mosquito vector, and (2) the susceptible or infectious human being[@malaria; @WHO; @CDC].
The stages of maturation of the plasmodium within the human host is called the *exo-erythrocytic cycle*. Moreover, the total duration of the *exo-erythrocytic cycle* is estimated between 7-30 days depending on the species of plasmodium, with the exceptions of the plasmodia- *P. vivax* and *P. ovale* that may be delayed for as long as 1 to 2 years. See for example [@malaria; @WHO; @CDC]. Also, the stages of development of the plasmodium within the mosquito host is called the *sporogonic cycle*. It is estimated that the duration of the *sporogonic cycle* is over 2 to 3 weeks[@malaria; @WHO; @CDC]. The delay between infection of the mosquito and maturation of the parasite inside the mosquito suggests that the mosquito must survive a minimum of the 2 to 3 weeks to be able to transmit malaria[@malaria]. These facts are important in deriving a mathematical model to represent the dynamics of malaria. More details about the mosquito biting habit, the life cycle of malaria and the key issues related to the mathematical model for malaria in this study are located in Wanduku[@wanduku-biomath], and also in [@malaria; @WHO; @CDC].
It is also important to note that malaria confers natural immunity[@CDC; @lars; @denise] after recovery from the disease. The strength and effectiveness of the natural immunity against the disease depends primarily on the frequency of exposure to the parasites and other biological factors such as age, pregnancy, and genetic structure of red blood cells of people with malaria. The natural immunity against malaria has been studied mathematically by several different authors, for example, [@wanduku-biomath; @hyun]. The duration of the naturally acquired immunity period is random with a range of possible values from zero for individuals with almost no history of the disease (for instance, young babies etc.), to sufficiently long time for people with genetic resistance against the disease( for instance, people with sickle cell anemia, and duffy negative blood type conditions etc.). All of these facts related to the naturally acquired immunity against malaria, and development of the acquired immunity into a mathematical expression are discussed in Wanduku[@wanduku-biomath], and [@CDC; @lars; @denise; @hyun].
As with every other infectious disease dynamics, there is an inevitable presence of noise in the dynamics of malaria. Mathematically, the noise in an infectious system over continuous time can be expressed in one way as a Wiener or Brownian motion process obtained as an approximation of a random walk process over an infinitesimally small time interval. Moreover, the central limit theorem is applied to obtain this approximation.
There are several different ways to introduce white noise into the infectious system, for example, by considering the variation of the driving parameters of the infectious system, or considering the random perturbation of the density of the system etc. Regardless of the method of introducing the noise into the system, the mathematical systems obtained from the approximation process above are called stochastic differential equation systems.
Stochastic systems offer a better representation of reality, and a better fit for most dynamic processes that occur in real life. This is because of the inevitable occurrence of random fluctuations in dynamic real life systems. Whilst several deterministic systems for malaria dynamics have been studied [@wanduku-biomath; @macdonald; @ngwa-shu; @hyun; @may; @kazeem; @gungala; @anita; @tabo], to the best of the author’s knowledge, few or no mathematical studies authored by other experts exist about malaria in the framework of Ito-Doob type stochastic differential equations. The study by Wanduku[@wanduku-comparative] appears to be the first attempt to understand the impacts of white noise on various aspects of malaria dynamics. The mode of adding white noise into the malaria dynamics in this study is similar to that of the earlier studies [@Wanduku-2017; @wanduku-fundamental; @wanduku-delay].
An important investigation in the study of infectious population dynamic systems influenced by white noise is the permanence of the disease, and the existence of a stationary distribution for the infectious system. Several papers in the literature[@aadil; @yanli; @yongli; @yongli-2; @yzhang; @mao-2] have addressed these topics. Investigations about the permanence of the disease in the population seek to find conditions that favor the survival of the endemic population classes (such as the exposed, infectious and removal classes) in the far future. Moreover, in a white noise influenced infectious system, the permanence of the disease requires the existence of a nonzero average population size for the infectious classes over sufficiently long time.
The existence of a stationary distribution for a stochastic infectious system implies that in the far future the statistical properties of the different states of the system can be determined accurately by knowing the distribution of a single random variable, which is the limit of convergence in distribution of the random process describing the dynamics of the disease. Since most realistic stochastic models formulated in terms of stochastic differential equations are nonlinear, and explicit solutions are nontrivial, numerical methods can be used to approximate the stationary distribution for the random process. See for example [@aadil; @yongli-2; @mao-2]
Along with the stationary behavior of the stochastic infectious system over sufficiently long time, another topic of investigation concerns the ergodic character of the sample paths of the disease system. The ergodicity of the stochastic disease system ensures that the statistical properties of the disease in the system in the far future time can be understood, and estimated by the sample realizations of disease over sufficiently long time. That is, while insights about the ensemble nature of the disease are difficult to obtain directly from the explicit solutions of the stochastic system because of the nonlinear structure of the system, the stationary and ergodic properties of the stochastic system ensure that sufficient information about the disease is obtained from the sample paths of the disease over sufficiently long time. See for example [@aadil; @yongli-2; @mao-2]
Several different studies suggest that the strength or the intensity of the white noise in the infectious system plays a major role on the permanence of the disease, and also on the existence of a stationary distribution for the stochastic system[@aadil; @yanli; @yongli; @yongli-2; @yzhang]. In most of these studies, one can deduce that low intensity of the white noise is associated with the permanence of the disease in the far future time, and consequently lead to the existence of a stationary distribution for the stochastic infectious system.
The primary objective of this study is to characterize the role of the intensities of the white noises from different sources in a malaria dynamics on the overall behavior of the disease, and in particular on the permanence of the disease. Furthermore, another objective is to also understand the existence of an endemic stationary distribution, which numerically can be approximated for a given set of parameters corresponding to a malaria scenario.
Recently, Wanduku[@wanduku-biomath] presented a class of deterministic models for malaria, where the class type is determined by a generalized nonlinear incidence rate of the disease. The class of epidemic dynamic models incorporates the three delays in the dynamics of malaria from the incubation of the disease inside the mosquito (*sporogonic cycle*), the incubation of the plasmodium inside the human being (*exo-erythrocytic cycle*), and also the period of effective naturally acquired immunity against malaria. Moreover, the delay periods are all random and arbitrarily distributed.
Some special cases of the generalized nonlinear incidence rate include (1) a malaria scenario where the response rate of the disease transmission from infectives to susceptible individuals increases initially for small number of infectious individuals, and then saturates with a horizontal asymptote for large and larger number of infectious individuals, (2) a malaria scenario where the response rate of the disease transmission from infectives to susceptible individuals initially decreases, and saturates at a lower horizontal asymptote as the infected population increases, and (3) a malaria scenario where the response rate of the disease transmission from infectives to susceptible individuals initially increases, attains a maximum level and decreases as the number of infected individuals increases etc.
Some extensions of Wanduku[@wanduku-biomath] will appear in the context of a general class of vector-borne disease models such as dengue fever and malaria in Wanduku[@wanduku-theorBio], where the role of the different sources of variability on vector-borne diseases are investigated, and their intensities are classified. The focus of [@wanduku-theorBio] is on disease eradication in the steady state population. In the extension Wanduku[@wanduku-comparative], a general class of malaria stochastic models is investigated, and the emphasis is to examine the extend to which the different sources of noise in the system deviate the stochastic dynamics of malaria from its ideal dynamics in the absence of noise. Note that Wanduku[@wanduku-comparative] is a comparative study. The current study extends Wanduku[@wanduku-biomath] by introducing the sources of variability in Wanduku[@wanduku-comparative] with a more detailed formulation of the white noise processes in the malaria dynamics, and provides detailed analytical techniques, biological interpretation and numerical simulation results to comprehend (1) the behavior of the stochastic system in the neighborhood of a potential endemic equilibrium, (2) the stochastic permanence of the disease, and the fundamental role of the intensities of the noises in the system in determining the persistence of the disease, and (3) the existence of a stationary endemic distribution to fully characterize the statistical properties of the states in the system in the long-term. This work is presented as follows:- in Section \[ch1.sec0\], the fundamentals in the derivation of the class of deterministic models for malaria in Wanduku [@wanduku-biomath] are discussed, and essential information to this study is presented. In Section \[ch1.sec0.sec1\], the new class of stochastic models is extensively formulated. In Section \[ch1.sec1\], the model validation results are presented for the stochastic system. In Section \[ch1.sec3\], the persistence of the disease in the human population is exhibited. The permanence of malaria in the mean in the human population is also exhibited in Section \[ch1.sec5\]. Furthermore, the existence of a stationary distribution for the class of stochastic models is presented in Section \[ch1.sec3.sec1\]. Moreover, the ergodicity of the stochastic system is also exhibited in this section. Finally, numerical results are given to test the permanence of malaria, and approximate the stationary distribution of malaria in Section \[ch1.sec4\].
The derivation of the model and some preliminary results {#ch1.sec0}
========================================================
In the recent study by Wanduku [@wanduku-biomath], a class of SEIRS epidemic dynamic models for malaria with three random delays is presented. The delays represent the incubation periods of the infectious agent (plasmodium) inside the vector(mosquito) denoted $T_{1}$, and inside the human host denoted $T_{2}$. The third delay represents the naturally acquired immunity period of the disease $T_{3}$, where the delays are random variables with density functions $f_{T_{1}}, t_{0}\leq T_{1}\leq h_{1}, h_{1}>0$, and $f_{T_{2}}, t_{0}\leq T_{2}\leq h_{2}, h_{2}>0$ and $f_{T_{3}}, t_{0}\leq T_{3}<\infty$. Furthermore, the joint density of $T_{1}$ and $T_{2}$ given by $f_{T_{1},T_{2}}, t_{0}\leq T_{1}\leq h_{1} , t_{0}\leq T_{2}\leq h_{2}$, is also expressed as $f_{T_{1},T_{2}}=f_{T_{1}}.f_{T_{2}}, t_{0}\leq T_{1}\leq h_{1} , t_{0}\leq T_{2}\leq h_{2}$, since it is assumed that the random variables $T_{1}$ and $T_{2}$ are independent (see [@wanduku-biomath]). The independence between $T_{1}$ and $T_{2}$ is justified from the understanding that the incubation of the infectious agent for the vector-borne disease depends on the suitable biological environmental requirements for incubation inside the vector and the human body which are unrelated. Furthermore, the independence between $T_{1}$ and $T_{3}$ follows from the lack of any real biological evidence to justify the connection between the incubation of the infectious agent inside the vector and the acquired natural immunity conferred to the human being. But $T_{2}$ and $T_{3}$ may be dependent as biological evidence suggests that the naturally acquired immunity is induced by exposure to the infectious agent.
By employing similar reasoning in [@cooke; @qun; @capasso; @huo], the expected incidence rate of the disease or force of infection of the disease at time $t$ due to the disease transmission process between the infectious vectors and susceptible humans, $S(t)$, is given by the expression $\beta \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}S(t)G(I(t-s))ds$, where $\mu$ is the natural death rate of individuals in the population, and it is assumed for simplicity that the natural death rate for the vectors and human beings are the same. Assuming exponential lifetimes for the people and vectors in the population, the term $0<e^{-\mu s}\leq 1, s\in [t_{0}, h_{1}], h_{1}>0$ represents the survival probability rate of exposed vectors over the incubation period, $T_{1}$, of the infectious agent inside the vectors with the length of the period given as $T_{1}=s, \forall s \in [t_{0}, h_{1}]$, where the vectors acquired infection at the earlier time $t-s$ from an infectious human via a successful infected blood meal, and become infectious at time $t$. Furthermore, it is assumed that the survival of the vectors over the incubation period of length $s\in [t_{0}, h_{1}]$ is independent of the age of the vectors. In addition, $I(t-s)$, is the infectious human population at earlier time $t-s$, $G$ is a nonlinear incidence function of the disease dynamics, and $\beta$ is the average number of effective contacts per infectious individual per unit time. Indeed, the force of infection, $\beta \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}S(t)G(I(t-s))ds$ signifies the expected rate of new infections at time $t$ between the infectious vectors and the susceptible human population $S(t)$ at time $t$, where the infectious agent is transmitted per infectious vector per unit time at the rate $\beta$. Furthermore, it is assumed that the number of infectious vectors at time $t$ is proportional to the infectious human population at earlier time $t-s$. Moreover, it is further assumed that the interaction between the infectious vectors and susceptible humans exhibits nonlinear behavior, for instance, psychological and overcrowding effects, which is characterized by the nonlinear incidence function $G$. Therefore, the force of infection given by $$\label{ch1.sec0.eqn0}
\beta \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}S(t)G(I(t-s))ds,$$ represents the expected rate at which infected individuals leave the susceptible state and become exposed at time $t$.
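To make the force of infection concrete, the integral above can be evaluated numerically for one illustrative choice of ingredients; the truncated exponential density for $T_{1}$, the saturating incidence $G(I)=I/(1+aI)$ and all parameter values below are hypothetical and are not taken from the model.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical ingredients, for illustration only.
beta, mu, a = 0.3, 0.02, 0.05
t0, h1 = 0.0, 3.0                  # support [t0, h1] of the incubation delay T1

def f_T1(s):
    """Truncated exponential density for T1 on [t0, h1]."""
    lam = 1.0
    norm = np.exp(-lam * t0) - np.exp(-lam * h1)
    return lam * np.exp(-lam * s) / norm

def G(i):
    """Saturating incidence response G(I) = I / (1 + a I)."""
    return i / (1.0 + a * i)

def force_of_infection(S_t, I_past):
    """beta * S(t) * integral_{t0}^{h1} f_T1(s) exp(-mu s) G(I(t-s)) ds,
    with the infectious history I(t-s) supplied as a function of the lag s."""
    integrand = lambda s: f_T1(s) * np.exp(-mu * s) * G(I_past(s))
    value, _ = quad(integrand, t0, h1)
    return beta * S_t * value

# Example: constant infectious history I(t-s) = 50 and S(t) = 1000.
print(force_of_infection(S_t=1000.0, I_past=lambda s: 50.0))
```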
The susceptible individuals who have acquired infection from infectious vectors but are non infectious form the exposed class $E$. The population of exposed individuals at time $t$ is denoted $E(t)$. After the incubation period, $T_{2}=u\in [t_{0}, h_{2}]$, of the infectious agent in the exposed human host, the individual becomes infectious, $I(t)$, at time $t$. Applying similar reasoning in [@cooke-driessche], the exposed population, $E(t)$, at time $t$ can be written as follows $$\label{ch1.sec0.eqn1a}
E(t)=E(t_{0})e^{-\mu (t-t_{0})}p_{1}(t-t_{0})+\int^{t}_{t_{0}}\beta S(\xi)e^{-\mu T_{1}}G(I(\xi-T_{1}))e^{-\mu(t-\xi)}p_{1}(t-\xi)d \xi,$$ where $$\label{ch1.seco.eqn1b}
p_{1}(t-\xi)=\left\{\begin{array}{l}0,t-\xi\geq T_{2},\\
1, t-\xi< T_{2} \end{array}\right.$$ represents the probability that an individual remains exposed over the time interval $[\xi, t]$. It is easy to see from (\[ch1.sec0.eqn1a\]) that under the assumption that the disease has been in the population for at least a time $t>\max_{t_{0}\leq T_{1}\leq h_{1}, t_{0}\leq T_{2}\leq h_{2}} {( T_{1}+ T_{2})}$, in fact, $t>h_{1}+h_{2}$, so that all initial perturbations have died out, the expected number of exposed individuals at time $t$ is given by $$\label{ch1.sec0.eqn1}
E(t)=\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)\int_{t-u}^{t}\beta \int^{h_{1}}_{t_{0}} f_{T_{1}}(s) e^{-\mu s}S(v)G(I(v-s))e^{-\mu(t-u)}dsdvdu.$$ Similarly, for the removal population, $R(t)$, at time $t$, individuals recover from the infectious state $I(t)$ at the per capita rate $\alpha$ and acquire natural immunity. The natural immunity wanes after the varying immunity period $T_{3}=r\in [ t_{0},\infty]$, and removed individuals become susceptible again to the disease. Therefore, at time $t$, individuals leave the infectious state at the rate $\alpha I(t)$ and become part of the removal population $R(t)$. Thus, at time $t$ the removed population is given by the following equation $$\label{ch1.sec0.eqn2a}
R(t)=R(t_{0})e^{-\mu (t-t_{0})}p_{2}(t-t_{0})+\int^{t}_{t_{0}}\alpha I(\xi)e^{-\mu(t-\xi)}p_{2}(t-\xi)d \xi,$$ where $$\label{ch1.sec0.eqn2b}
p_{2}(t-\xi)=\left\{\begin{array}{l}0,t-\xi\geq T_{3},\\
1, t-\xi< T_{3} \end{array}\right.$$ represents the probability that an individual remains naturally immune to the disease over the time interval $[\xi, t]$. But it follows from (\[ch1.sec0.eqn2a\]) that under the assumption that the disease has been in the population for at least a time $t> \max_{t_{0}\leq T_{1}\leq h_{1}, t_{0}\leq T_{2}\leq h_{2}, T_{3}\geq t_{0}}{(T_{1}+ T_{2}, T_{3})}\geq \max_{t_{0}\leq T_{3}}{(T_{3})}$, in fact, the disease has been in the population for sufficiently large amount of time so that all initial perturbations have died out, then the expected number of removal individuals at time $t$ can be written as $$\label{ch1.sec0.eqn2}
R(t)=\int_{t_{0}}^{\infty}f_{T_{3}}(r)\int_{t-r}^{t}\alpha I(v)e^{-\mu (t-v)}dvdr.$$ There is also a constant birth rate $B$ of susceptible individuals in the population. Furthermore, individuals die additionally due to disease related causes at the rate $d$. Moreover, $B>0$, $d>0$, and all the other parameters are nonnegative. A compartmental framework illustrating the transition rates between the different states in the system and also showing the delays in the disease dynamics is given in Figure \[ch1.sec4.figure 1\].
![The compartmental framework illustrates the transition rates between the states $S,E,I,R$ of the system. It also shows the incubation delay $T_{2}$ and the naturally acquired immunity $T_{3}$ periods. \[ch1.sec4.figure 1\]](SEIRS-compartmental-framework-edited-april-5-2017.eps){height="8cm"}
It follows from (\[ch1.sec0.eqn0\]), (\[ch1.sec0.eqn1\]), (\[ch1.sec0.eqn2\]) and the transition rates illustrated in the compartmental framework in Figure \[ch1.sec4.figure 1\] above that the family of SEIRS epidemic dynamic models for a malaria and vector-borne diseases in general in the absence of any random environmental fluctuations in the disease dynamics can be written as follows: $$\begin{aligned}
dS(t)&=&\left[ B-\beta S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu S(t)+ \alpha \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr \right]dt,\nonumber\\
&&\label{ch1.sec0.eq3}\\
dE(t)&=& \left[ \beta S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu E(t)\right.\nonumber\\
&&\left.-\beta \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu \right]dt,\label{ch1.sec0.eq4}\\
&&\nonumber\\
dI(t)&=& \left[\beta \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu- (\mu +d+ \alpha) I(t) \right]dt,\nonumber\\
&&\label{ch1.sec0.eq5}\\
dR(t)&=&\left[ \alpha I(t) - \mu R(t)- \alpha \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr \right]dt,\label{ch1.sec0.eq6}\end{aligned}$$ where the initial conditions are given in the following: Let $h= h_{1}+ h_{2}$ and define $$\begin{aligned}
&&\left(S(t),E(t), I(t), R(t)\right)
=\left(\varphi_{1}(t),\varphi_{2}(t), \varphi_{3}(t),\varphi_{4}(t)\right), t\in (-\infty,t_{0}],\nonumber\\
&&\varphi_{k}\in \mathcal{C}((-\infty,t_{0}],\mathbb{R}_{+}),\forall k=1,2,3,4, \nonumber\\
&&\varphi_{k}(t_{0})>0,\forall k=1,2,3,4,\nonumber\\
\label{ch1.sec0.eq06a}\end{aligned}$$ where $\mathcal{C}((-\infty,t_{0}],\mathbb{R}_{+})$ is the space of continuous functions with the supremum norm $$\label{ch1.sec0.eq06b}
||\varphi||_{\infty}=\sup_{ t\leq t_{0}}{|\varphi(t)|}.$$ The following general properties of the incidence function $G$ studied in [@wanduku-biomath] are given as follows:
\[ch1.sec0.assum1\]
1. $G(0)=0$.
2. $G(I)$ is strictly monotonic on $[0,\infty)$.
3. $G''(I)<0$ $\Leftrightarrow$ $G(I)$ is differentiable concave on $[0,\infty)$.
4. $\lim_{I\rightarrow \infty}G(I)=C, 0\leq C<\infty$ $\Leftrightarrow$ $G(I)$ has a horizontal asymptote $0\leq C<\infty$.
5. $G(I)\leq I, \forall I>0$ $\Leftrightarrow$ $G(I)$ is at most as large as the identity function $f:I\mapsto I$ for all $I\in (0,\infty)$.
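The properties A1–A5 can be probed numerically for a concrete incidence function, for instance the saturating form $G(I)=I/(1+aI)$ (one of the examples listed below, with the saturation constant written here as $a$); the sketch and the parameter value are purely illustrative.

```python
import numpy as np

a = 0.1
G = lambda I: I / (1.0 + a * I)          # example incidence function

I = np.linspace(0.0, 1.0e4, 200001)
g = G(I)

assert np.isclose(g[0], 0.0)                       # A1: G(0) = 0
assert np.all(np.diff(g) > 0)                      # A2: strictly increasing
assert np.all(np.diff(g, 2) < 1e-12)               # A3: concave (2nd differences <= 0)
assert abs(g[-1] - 1.0 / a) < 0.01 * (1.0 / a)     # A4: horizontal asymptote C = 1/a
assert np.all(g[1:] <= I[1:])                      # A5: G(I) <= I
print("A1-A5 hold numerically for G(I) = I/(1 + a I), a =", a)
```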
Note that an incidence function $G$ that satisfies Assumption \[ch1.sec0.assum1\] $A1$-$A5$ can be used to describe the disease transmission rate of a vector-borne disease scenario, where the disease dynamics is represented by the system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]), and the disease transmission rate between the vectors and the human beings initially increases or decreases for small values of the infectious population size, and is bounded or steady for sufficiently large size of the infectious individuals in the population. It is noted that Assumption \[ch1.sec0.assum1\] is a generalization of some subcases of the assumptions $A1$-$A5$ investigated in [@gumel; @zhica; @kyrychko; @qun]. Some examples of frequently used incidence functions in the literature that satisfy Assumption \[ch1.sec0.assum1\]$A1$-$A5$ include: $G(I(t))=\frac{I(t)}{1+\alpha I(t)}, \alpha>0$, $G(I(t))=\frac{I(t)}{1+\alpha I^{2}(t)}, \alpha>0$, $G(I(t))=I^{p}(t),0<p<1$ and $G(I)=1-e^{-aI}, a>0$. In the analysis of the deterministic malaria model (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) with initial conditions in (\[ch1.sec0.eq06a\])-(\[ch1.sec0.eq06b\]) in Wanduku[@wanduku-biomath], the threshold values for disease eradication such as the basic reproduction number for the disease when the system is in steady state are obtained in both cases where the delays in the system $T_{1}, T_{2}$ and $T_{3}$ are constant and also arbitrarily distributed. It should be noted that the assumption of constant delay times representing the incubation period of the disease in the vector, $T_{1}$, incubation period of the disease in the host, $T_{2}$, and immunity period of the disease in the human population, $T_{3}$ in (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) is equivalent to the special case of letting the probability density functions $f_{T_{i}}, i=1,2,3$ of the random variables $T_{1}, T_{2}$ and $T_{3}$ be the dirac-delta function. That is, $$\label{ch1.sec2.eq4}
f_{T_{i}}(s)=\delta(s-T_{i})=\left\{\begin{array}{l}+\infty, s=T_{i},\\
0, otherwise,
\end{array}\right.
, i=1, 2, 3.$$ Moreover, under the assumption that $T_{1}\geq 0, T_{2}\geq 0$ and $T_{3}\geq 0$ are constant, the following expectations can be written as $E(e^{-2\mu (T_{1}+T_{2})})=e^{-2\mu (T_{1}+T_{2})} $, $E(e^{-2\mu T_{1}})=e^{-2\mu T_{1}} $ and $E(e^{-2\mu T_{3}})=e^{-2\mu T_{3}} $. When the delays in the system are all constant, the basic reproduction number of the disease is given by $$\label{ch1.sec2.lemma2a.corrolary1.eq4}
\hat{R}^{*}_{0}=\frac{\beta S^{*}_{0} }{(\mu+d+\alpha)}.$$ This threshold value $\hat{R}^{*}_{0}=\frac{\beta S^{*}_{0} }{(\mu+d+\alpha)}$ from (\[ch1.sec2.lemma2a.corrolary1.eq4\]), represents the total number of infectious cases that result from one malaria infectious individual present in a completely disease free population with state given by $S^{*}_{0}=\frac{B}{\mu}$, over the average lifetime given by $\frac{1 }{(\mu+d+\alpha)}$ of a person who has survived from disease related death $d$, natural death $\mu$ and recovered at rate $\alpha$ from infection. Hence, $\hat{R}^{*}_{0}$ is also the noise-free basic reproduction number of the disease, whenever the incubation periods of the malaria parasite inside the human and mosquito hosts given by $T_{i}, i=1,2$, and also the period of effective naturally acquired immunity against malaria given by $T_{3}$, are all positive constants. Furthermore, the threshold condition $\hat{R}^{*}_{0}<1$ is required for the disease to be eradicated from the noise free human population, whenever the constant delays in the system also satisfy the following: $$\label{ch1.sec2.lemma2a.corrolary1.eq7}
T_{max}\geq \frac{1}{2\mu}\log{\frac{\hat{R}^{*}_{1}}{1-\hat{R}^{*}_{0}}},$$ where $$\label{ch1.sec2.lemma2a.corrolary1.eq8}
T_{max}=\max{(T_{1}+T_{2}, T_{3})},$$ and $$\label{ch1.sec2.lemma2a.corrolary1.eq3}
\hat{R}^{*}_{1}=\frac{\beta S^{*}_{0} \hat{K}^{*}_{0}+\alpha}{(\mu+d+\alpha)},$$ with some constant $\hat{K}^{*}_{0}>0$ (in fact, $\hat{K}^{*}_{0}=4 $).
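To see how these constant-delay thresholds fit together numerically, the displays above can be evaluated for a trial parameter set; all values below are hypothetical and are used only for illustration.

```python
import numpy as np

# Hypothetical parameter values (per unit time); not taken from the paper.
B, mu, beta, d, alpha = 10.0, 0.02, 1.5e-4, 0.01, 0.05

S0_star = B / mu                                    # disease-free state S*_0 = B/mu
K0_hat = 4.0                                        # the constant quoted above
R0_hat = beta * S0_star / (mu + d + alpha)          # noise-free basic reproduction number
R1_hat = (beta * S0_star * K0_hat + alpha) / (mu + d + alpha)

print("R0_hat =", R0_hat)
if R0_hat < 1.0:
    # Lower bound on T_max = max(T1 + T2, T3) in the eradication condition.
    T_max_bound = np.log(R1_hat / (1.0 - R0_hat)) / (2.0 * mu)
    print("eradication also requires T_max >=", T_max_bound)
```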
On the other hand, when the delays in the system $T_{i}, i=1,2$ are random, and arbitrarily distributed, the basic reproduction number is given by $$\label{ch1.sec2.theorem1.corollary1.eq3}
R_{0}=\frac{\beta S^{*}_{0} \hat{K}_{0}}{(\mu+d+\alpha)}+\frac{\alpha}{(\mu+d+\alpha)},$$ where $\hat{K}_{0}>0$ is a constant that depends only on $S^{*}_{0}$ (in fact, $\hat{K}_{0}=4+ S^{*}_{0} $). In addition, malaria is eradicated from the system in the steady state, whenever $R_{0}\leq 1$, $\hat{U}_{0}\leq 1$ and $\hat{V}_{0}\leq 1$, where $$\label{ch1.sec2.theorem1.corollary1.eq4}
\hat{U}_{0}=\frac{2\beta S^{*}_{0}+\beta +\alpha + 2\frac{\mu}{\tilde{K}(\mu)^{2}}}{2\mu},$$ and $$\label{ch1.sec2.theorem1.corollary1.eq5}
\hat{V}_{0}=\frac{(2\mu \tilde{K}(\mu)^{2} + \alpha + \beta (2S^{*}_{0}+1 ) )}{2\mu},$$ are other threshold values for the stability of the disease-free steady state $E_{0}=(S^{*}_{0},0,0),S^{*}_{0}=\frac{B}{\mu}$.
Note that the threshold value $R_{0}$ in (\[ch1.sec2.theorem1.corollary1.eq3\]) is a modification of the basic reproduction number $\hat{R}^{*}_{0}$ defined in (\[ch1.sec2.lemma2a.corrolary1.eq4\]), and it is therefore the corresponding noise-free basic reproduction number for the disease dynamics described by deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]), whenever the delays $T_{i}, i=1,2,3$ in the system are random variables. See Wanduku[@wanduku-biomath] for more conceptual and biological interpretation of the threshold values for disease eradication. The stochastic extension of the deterministic model (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) is derived and studied in the following section.
The stochastic model {#ch1.sec0.sec1}
====================
Stochastic models are more realistic because of the inevitable occurrence of random fluctuations in the dynamics of diseases, and in addition, stochastic models provide a better fit for these disease scenarios than their deterministic counterparts.
There are several different techniques to add gaussian noise processes into a dynamic system. One method involves adding the noise into the system as direct influence to the state of the system, where the random fluctuations in the system are, for instance, (1) proportional to the state of the system, or (2) proportional to the deviation of the state of the system from a nonzero steady state etc. Another approach to adding white noise into a dynamic system involves (3) incorporating the random fluctuations in the driving parameters of the system such as the birth, death, recovery and disease transmission rates etc. of an infectious system. See for example [@imf].
In this study, the third approach is utilized to model the random environmental fluctuations in the disease transmission rate $\beta$, and also in the natural death rates $\mu$ of the different states $S(t)$, $E(t)$, $I(t)$ and $R(t)$ of the human population. This approach entails constructing a random walk process for the rates $\beta$ and $\mu$ over an infinitesimally small interval $[t, t+dt]$ and applying the central limit theorem. See for example [@wanduku-bookchapter].
For $t\geq t_{0}$, let $(\Omega, \mathfrak{F}, P)$ be a complete probability space, and $\mathfrak{F}_{t}$ be a filtration (that is, a family of sub-$\sigma$-algebras $\mathfrak{F}_{t}$ satisfying the following: $t_{1}\leq t_{2} \Rightarrow \mathfrak{F}_{t_{1}}\subset \mathfrak{F}_{t_{2}}$; $E\in \mathfrak{F}_{t}$ and $P(E)=0 \Rightarrow E\in \mathfrak{F}_{0}$). The variability in the disease transmission and natural death rates is represented by independent white noise or Wiener processes with drifts, and the rates are expressed as follows: $$\label{ch1.sec0.eq7}
\mu \rightarrow \mu + \sigma_{i}\xi_{i}(t),\quad \xi_{i}(t) dt= dw_{i}(t),i=S,E,I,R, \quad \beta \rightarrow \beta + \sigma_{\beta}\xi_{\beta}(t),\quad \xi_{\beta}(t)dt=dw_{\beta}(t),$$ where $\xi_{i}(t)$ and $w_{i}(t)$ represent the standard white noise and normalized Wiener processes for the $i^{th}$ state at time $t$, with the following properties: $w(0)=0, E(w(t))=0, var(w(t))=t$. Furthermore, $\sigma_{i},i=S,E,I,R$, represents the intensity of the white noise process due to the random natural death rate of the $i^{th}$ state, and $\sigma_{\beta}$ is the intensity of the white noise process due to the random disease transmission rate. Moreover, the $w_{i}(t),i=S,E,I,R,\beta,\forall t\geq t_{0}$, are all independent. The detailed formulation of the expressions in (\[ch1.sec0.eq7\]) will appear in the book chapter by Wanduku [@wanduku-bookchapter]. The ideas behind the formulation of the expressions in (\[ch1.sec0.eq7\]) are the following. The constant parameters $\mu$ and $\beta$ represent the natural death and disease transmission rates per unit time, respectively. In reality, random environmental fluctuations impact these rates, turning them into random variables $\tilde{\mu}$ and $\tilde{\beta}$. Thus, the natural death and disease transmission rates over an infinitesimally small interval of time $[t, t+dt]$ with length $dt$ are given by the expressions $\tilde{\mu}(t)=\tilde{\mu}dt$ and $\tilde{\beta}(t)=\tilde{\beta}dt$, respectively. It is assumed that there are independent and identical random impacts acting upon these rates at the times $t_{j+1}$ over $n$ subintervals $[t_{j}, t_{j+1}]$ of length $\triangle t=\frac{dt}{n}$, where $t_{j}=t_{0}+j\triangle t, j=0,1,\cdots,n$, and $t_{0}=t$. Furthermore, it is assumed that $\tilde{\mu}(t_{0})=\tilde{\mu}(t)=\mu dt$ is constant or deterministic, and $\tilde{\beta}(t_{0})=\tilde{\beta}(t)=\beta dt$ is also a constant. Letting the independent, identically distributed random variables $Z_{j},j=1,\cdots,n$, represent the random effects acting on the natural death rate, the rate at time $t_{n}=t+dt$ is given by $$\label{ch1.sec0.eq7.eq1}
\tilde{\mu}(t+dt)=\tilde{\mu}(t)+\sum_{j=1}^{n}Z_{j},$$ where $E(Z_{j})=0$, and $Var(Z_{j})=\sigma^{2}_{i}\triangle t, i\in \{S, E, I, R\}$. Note that $\tilde{\beta}(t+dt)$ can similarly be expressed as (\[ch1.sec0.eq7.eq1\]). For a sufficiently large value of $n$, the summation in (\[ch1.sec0.eq7.eq1\]) converges in distribution, by the central limit theorem, to a random variable distributed as the Wiener increment $\sigma_{i}(w_{i}(t+dt)-w_{i}(t))=\sigma_{i}dw_{i}(t)$, with mean $0$ and variance $\sigma^{2}_{i}dt, i\in \{S, E, I, R\}$. It follows easily from (\[ch1.sec0.eq7.eq1\]) that $$\label{ch1.sec0.eq7.eq2}
\tilde{\mu}dt =\mu dt+ \sigma_{i}dw_{i}(t), i\in \{S, E, I, R\}.$$ Similarly, it can easily be seen that $$\label{ch1.sec0.eq7.eq2a}
\tilde{\beta}dt =\beta dt+ \sigma_{\beta}dw_{\beta}(t).$$ Note that the intensities $\sigma^{2}_{i},i=S,E,I,R, \beta$, of the independent white noise processes in the expressions $\tilde{\mu}(t)=\mu dt + \sigma_{i}\xi_{i}(t)$ and $\tilde{\beta} (t)=\beta dt + \sigma_{\beta}\xi_{\beta}(t)$ that represent the natural death rate, $\tilde{\mu}(t)$, and disease transmission rate, $\tilde{\beta} (t)$, at time $t$, measure the average deviation of the random disease transmission rate, $\tilde{\beta}$, and natural death rate, $\tilde{\mu}$, about their constant mean values $\beta$ and $\mu$, respectively, over the infinitesimally small time interval $[t, t+dt]$. These measures reflect the force of the random fluctuations that occur during the disease outbreak at any time, which lead to oscillations in the natural death and disease transmission rates over time, and consequently to oscillations of the susceptible, exposed, infectious and removal states of the system over time during the disease outbreak. Thus, in this study the words “strength” and “intensity” of the white noise are used synonymously. Also, the terms “strong noise” and “weak noise” are used to refer to white noise with high and low intensities, respectively.
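The random-walk construction above can be checked numerically. In the minimal sketch below (assumed values only), each sub-interval impact $Z_{j}$ is taken to be a symmetric two-point random variable with variance $\sigma^{2}\triangle t$; summing $n$ of them reproduces, for large $n$, a Gaussian increment with mean $0$ and variance $\sigma^{2}dt$, as asserted by the central limit theorem argument leading to (\[ch1.sec0.eq7.eq2\]).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.02, 0.005      # illustrative natural death rate and noise intensity
dt, n = 0.1, 200             # interval [t, t+dt] split into n sub-intervals
samples = 20000              # number of independent replications

# Symmetric two-point impacts Z_j with E(Z_j)=0 and Var(Z_j)=sigma^2 * (dt/n)
Z = rng.choice([-1.0, 1.0], size=(samples, n)) * sigma * np.sqrt(dt / n)
increment = Z.sum(axis=1)              # approximates sigma*(w(t+dt)-w(t))
mu_tilde_dt = mu * dt + increment      # random death rate accumulated over [t, t+dt]

print("sample mean:", mu_tilde_dt.mean(), " target:", mu * dt)
print("sample var :", mu_tilde_dt.var(),  " target:", sigma**2 * dt)
\end{verbatim}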
It is easy to see from (\[ch1.sec0.eq7\]) that the random natural death rate per unit time, denoted $\tilde{\mu}$, is given by $\tilde{\mu}=\mu + \sigma_{i}\xi_{i}(t)$, where $\xi_{i}(t) dt= dw_{i}(t),i=S,E,I,R$. Under the assumption of independent deaths in the human population, the number of natural deaths $N(t)$ over an interval $[t_{0}, t_{0}+t]$ of length $t$ follows a Poisson process $\{N(t),t\geq t_{0}\}$ with intensity $E(\tilde{\mu})=\mu$ and mean $E(N(t))=E(\tilde{\mu}t)=\mu t$, so that the time until death is exponentially distributed with mean $\frac{1}{\mu}$. Moreover, the survival function is given by $$\label{ch1.sec0.eq7.eq3}
\mathfrak{S}(t)=e^{-\mu t},t>0.$$ Substituting (\[ch1.sec0.eq7\])-(\[ch1.sec0.eq7.eq3\]) into the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) leads to the following generalized system of Ito-Doob stochastic differential equations describing the dynamics of vector-borne diseases in the human population. $$\begin{aligned}
dS(t)&=&\left[ B-\beta S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu S(t)+ \alpha \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr \right]dt\nonumber\\
&&-\sigma_{S}S(t)dw_{S}(t)-\sigma_{\beta} S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))dsdw_{\beta}(t) \label{ch1.sec0.eq8}\\
dE(t)&=& \left[ \beta S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu E(t)\right.\nonumber\\
&&\left.-\beta \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu \right]dt\nonumber\\
&&-\sigma_{E}E(t)dw_{E}(t)+\sigma_{\beta} S(t)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))dsdw_{\beta}(t)\nonumber\\
&&-\sigma_{\beta} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdudw_{\beta}(t)\label{ch1.sec0.eq9}\\
dI(t)&=& \left[\beta \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu- (\mu +d+ \alpha) I(t) \right]dt\nonumber\\
&&-\sigma_{I}I(t)dw_{I}(t)+\sigma_{\beta} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdudw_{\beta}(t)\nonumber\\
&&\label{ch1.sec0.eq10}\\
dR(t)&=&\left[ \alpha I(t) - \mu R(t)- \alpha \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr \right]dt-\sigma_{R}R(t)dw_{R}(t),\label{ch1.sec0.eq11}\end{aligned}$$ where the initial conditions are given in the following: Let $h= h_{1}+ h_{2}$ and define $$\begin{aligned}
&&\left(S(t),E(t), I(t), R(t)\right)
=\left(\varphi_{1}(t),\varphi_{2}(t), \varphi_{3}(t),\varphi_{4}(t)\right), t\in (-\infty,t_{0}],\nonumber\\% t\in [t_{0}-h,t_{0}],\quad and\quad=
&&\varphi_{k}\in \mathcal{C}((-\infty,t_{0}],\mathbb{R}_{+}),\forall k=1,2,3,4, \nonumber\\
&&\varphi_{k}(t_{0})>0,\forall k=1,2,3,4,\nonumber\\
\label{ch1.sec0.eq12}\end{aligned}$$ where $\mathcal{C}((-\infty,t_{0}],\mathbb{R}_{+})$ is the space of continuous functions with the supremum norm $$\label{ch1.sec0.eq13}
||\varphi||_{\infty}=\sup_{ t\leq t_{0}}{|\varphi(t)|}.$$ Furthermore, the random continuous functions $\varphi_{k},k=1,2,3,4$ are $\mathfrak{F}_{0}$-measurable, that is, independent of $w(t)$ for all $t\geq t_{0}$. In a structure similar to the study [@cooke-driessche], one major advantage of the family of vector-borne disease models (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is that its simple structure suffices to provide insights into the vector-borne disease in the human population even with limited characterization or knowledge of the life cycle of the disease vector. This model provides a suitable platform to study control strategies against the disease with primary focus on the human population whenever there is limited information about the vector life cycle, for instance, in an emergency situation where a sudden deadly vector-borne disease outbreak leaves limited time to investigate the biting habits and life cycles of the vectors. Observe that (\[ch1.sec0.eq9\]) and (\[ch1.sec0.eq11\]) decouple from the other two equations in the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]). Nevertheless, for mathematical convenience the results in this paper will be presented for the vector $X(t)=(S(t), E(t), I(t))^{T}$. The following notations are utilized: $$\label{ch1.sec0.eq13b}
\left\{
\begin{array}{lll}
Y(t)&=&(S(t), E(t), I(t), R(t))^{T} \\
X(t)&=&(S(t), E(t), I(t))^{T} \\
N(t)&=&S(t)+ E(t)+ I(t)+ R(t).
\end{array}
\right.$$
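Before turning to the analysis, it may help to see how sample paths of (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) can be generated numerically. The sketch below is a minimal Euler-Maruyama discretization for the special case of constant delays (so that the distributed-delay integrals collapse to point evaluations, as in the Dirac-delta case discussed later), with the hypothetical saturating incidence $G(I)=I/(1+I)$ and purely illustrative parameter values; it is not a calibrated malaria model. With the natural-death noise switched off ($\sigma_{S}=\sigma_{E}=\sigma_{I}=\sigma_{R}=0$) the simulated total population $N(t)$ should remain below $\frac{B}{\mu}$, in line with the feasibility result established in the next section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only (not calibrated to data)
B, mu, d, alpha, beta = 10.0, 0.02, 0.01, 0.05, 0.003
T1, T2, T3 = 2.0, 1.0, 5.0              # constant delays => Dirac-delta densities
sig_S = sig_E = sig_I = sig_R = 0.0     # natural-death noise switched off
sig_beta = 0.05                         # transmission-rate noise intensity
G = lambda x: x / (1.0 + x)             # hypothetical saturating incidence, G(0)=0

dt, steps = 0.01, 50000
l1, l2, l3 = int(T1/dt), int(T2/dt), int(T3/dt)
hist = max(l1 + l2, l3)                 # history length required by the delays

S = np.full(hist + steps, 0.9 * B / mu); E = np.zeros(hist + steps)
I = np.ones(hist + steps);               R = np.zeros(hist + steps)

for k in range(hist, hist + steps - 1):
    dwS, dwE, dwI, dwR, dwB = rng.normal(0.0, np.sqrt(dt), 5)
    int1 = S[k] * np.exp(-mu*T1) * G(I[k - l1])                  # S(t) e^{-mu T1} G(I(t-T1))
    int2 = S[k - l2] * np.exp(-mu*(T1+T2)) * G(I[k - l1 - l2])   # S(t-T2) e^{-mu(T1+T2)} G(I(t-T1-T2))
    rec  = alpha * np.exp(-mu*T3) * I[k - l3]                    # recovered at t-T3, surviving, losing immunity
    S[k+1] = S[k] + (B - beta*int1 - mu*S[k] + rec)*dt - sig_S*S[k]*dwS - sig_beta*int1*dwB
    E[k+1] = E[k] + (beta*int1 - mu*E[k] - beta*int2)*dt - sig_E*E[k]*dwE + sig_beta*(int1 - int2)*dwB
    I[k+1] = I[k] + (beta*int2 - (mu + d + alpha)*I[k])*dt - sig_I*I[k]*dwI + sig_beta*int2*dwB
    R[k+1] = R[k] + (alpha*I[k] - mu*R[k] - rec)*dt - sig_R*R[k]*dwR

N = S + E + I + R
print("max N(t) over the run:", N[hist:].max(), "  B/mu =", B/mu)
\end{verbatim}
Re-running the sketch with, say, $\sigma_{S}>0$ illustrates the contrast discussed below: the sample paths remain positive but are no longer confined to the ball of radius $\frac{B}{\mu}$.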
Model validation \[ch1.sec1\]
=============================
The existence and uniqueness of the solution of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is exhibited in the following theorem. Moreover, the feasibility region of the solution process $\{X(t), t\geq t_{0}\}$ of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is defined. The standard methods utilized in the earlier studies [@wanduku-determ; @Wanduku-2017; @wanduku-delay; @divine5] are applied to establish the results. It should be noted that the existence and qualitative behavior of the positive solution process of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) depend on the sources (natural death or disease transmission rates) of variability in the system. As shown below, certain sources of variability lead to very complex, uncontrolled behavior of the sample paths of the system.
The following lemma describes the behavior of the positive local solution process for the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]). This result will be useful in establishing the existence and uniqueness results for the global solutions of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]).
\[ch1.sec1.lemma1\] Suppose for some $\tau_{e}>t_{0}\geq 0$ the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) with initial condition in (\[ch1.sec0.eq12\]) has a unique positive solution process $Y(t)\in \mathbb{R}^{4}_{+}$, for all $t\in (-\infty, \tau_{e}]$, then it follows that
(a.) if $N(t_{0})\leq \frac{B}{\mu}$, and the intensities of the independent white noise processes in the system satisfy $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, then $N(t)\leq \frac{B}{\mu}$, and in addition, the set denoted by $$\label{ch1.sec1.lemma1.eq1}
D(\tau_{e})=\left\{Y(t)\in \mathbb{R}^{4}_{+}: N(t)=S(t)+ E(t)+ I(t)+ R(t)\leq \frac{B}{\mu}, \forall t\in (-\infty, \tau_{e}] \right\}=\bar{B}^{(-\infty, \tau_{e}]}_{\mathbb{R}^{4}_{+},}\left(0,\frac{B}{\mu}\right),$$ is locally self-invariant with respect to the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), where $\bar{B}^{(-\infty, \tau_{e}]}_{\mathbb{R}^{4}_{+},}\left(0,\frac{B}{\mu}\right)$ is the closed ball in $\mathbb{R}^{4}_{+}$ centered at the origin with radius $\frac{B}{\mu}$ containing the local positive solutions defined over $(-\infty, \tau_{e}]$.
(b.) If the intensities of the independent white noise processes in the system satisfy $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, then $Y(t)\in \mathbb{R}^{4}_{+}$ and $N(t)\geq 0$, for all $t\in (-\infty, \tau_{e}]$.
Proof:\
It follows directly from (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) that when $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, then $$\label{ch1.sec1.lemma1.eq2}
dN(t)=[B-\mu N(t)-dI(t)]dt$$ The result in (a.) follows easily by observing that for $Y(t)\in \mathbb{R}^{4}_{+}$, the equation (\[ch1.sec1.lemma1.eq2\]) leads to $N(t)\leq \frac{B}{\mu}-\frac{B}{\mu}e^{-\mu(t-t_{0})}+N(t_{0})e^{-\mu(t-t_{0})}$. And under the assumption that $N(t_{0})\leq \frac{B}{\mu}$, the result follows immediately. The result in (b.) follows directly from Theorem \[ch1.sec1.thm1\].\
The following theorem presents the existence and uniqueness results for the global solutions of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]).
\[ch1.sec1.thm1\] Given the initial conditions (\[ch1.sec0.eq12\]) and (\[ch1.sec0.eq13\]), there exists a unique solution process $X(t,w)=(S(t,w),E(t,w), I(t,w))^{T}$ satisfying (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), for all $t\geq t_{0}$. Moreover,
(a.) the solution process is positive for all $t\geq t_{0}$ a.s. and lies in $D(\infty)$, whenever the intensities of the independent white noise processes in the system satisfy $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$. That is, $S(t,w)>0,E(t,w)>0, I(t,w)>0, \forall t\geq t_{0}$ a.s. and $X(t,w)\in D(\infty)=\bar{B}^{(-\infty, \infty)}_{\mathbb{R}^{4}_{+},}\left(0,\frac{B}{\mu}\right)$, where $D(\infty)$ is defined in Lemma \[ch1.sec1.lemma1\], (\[ch1.sec1.lemma1.eq1\]).
(b.) Also, the solution process is positive for all $t\geq t_{0}$ a.s. and lies in $\mathbb{R}^{4}_{+}$, whenever the intensities of the independent white noise processes in the system satisfy $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$. That is, $S(t,w)>0,E(t,w)>0, I(t,w)>0, \forall t\geq t_{0}$ a.s. and $X(t,w)\in \mathbb{R}^{4}_{+}$.
Proof:\
A similar proof of this result appears in a more general study of vector-borne diseases in Wanduku [@wanduku-theorBio]; nevertheless, it is included here for completeness. It is easy to see that the coefficients of (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) satisfy the local Lipschitz condition for the given initial data (\[ch1.sec0.eq12\]). Therefore there exists a unique maximal local solution $X(t,w)=(S(t,w), E(t,w), I(t,w))$ on $t\in (-\infty,\tau_{e}(w)]$, where $\tau_{e}(w)$ is the first hitting time or the explosion time of the process [@mao]. The following shows that $X(t,w)\in D(\tau_{e})$ almost surely, whenever $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, where $D(\tau_{e}(w))$ is defined in Lemma \[ch1.sec1.lemma1\] (\[ch1.sec1.lemma1.eq1\]), and also that $X(t,w)\in \mathbb{R}^{4}_{+}$, whenever $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$. Define the following stopping time $$\label{ch1.sec1.thm1.eq1}
\left\{
\begin{array}{lll}
\tau_{+}&=&sup\{t\in (t_{0},\tau_{e}(w)): S|_{[t_{0},t]}>0,\quad E|_{[t_{0},t]}>0,\quad and\quad I|_{[t_{0},t]}>0 \},\\
\tau_{+}(t)&=&\min(t,\tau_{+}),\quad for\quad t\geq t_{0}.\\
\end{array}
\right.$$ and let us show that $\tau_{+}(t)=\tau_{e}(w)$ a.s. Suppose on the contrary that $P(\tau_{+}(t)<\tau_{e}(w))>0$. Let $w\in \{\tau_{+}(t)<\tau_{e}(w)\}$, and $t\in [t_{0},\tau_{+}(t))$. Define $$\label{ch1.sec1.thm1.eq2}
\left\{
\begin{array}{ll}
V(X(t))=V_{1}(X(t))+V_{2}(X(t))+V_{3}(X(t)),\\
V_{1}(X(t))=\ln(S(t)),\quad V_{2}(X(t))=\ln(E(t)),\quad V_{3}(X(t))=\ln(I(t)),\forall t\leq\tau_{+}(t).
\end{array}
\right.$$ It follows from (\[ch1.sec1.thm1.eq2\]) that $$\label{ch1.sec1.thm1.eq3}
dV(X(t))=dV_{1}(X(t))+dV_{2}(X(t))+dV_{3}(X(t)),$$ where $$\begin{aligned}
%% \nonumber % Remove numbering (before each equation)
dV_{1}(X(t)) &=& \frac{1}{S(t)}dS(t)-\frac{1}{2}\frac{1}{S^{2}(t)}(dS(t))^{2}\nonumber \\
&=&\left[ \frac{B}{S(t)}-\beta \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu + \frac{\alpha}{S(t)} \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr \right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{S}-\frac{1}{2}\sigma^{2}_{\beta}\left(\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds\right)^{2}\right]dt\nonumber\\
&&-\sigma_{S}dw_{S}(t)-\sigma_{\beta} \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))dsdw_{\beta}(t), \label{ch1.sec1.thm1.eq4}\end{aligned}$$ $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
dV_{2}(X(t)) &=& \frac{1}{E(t)}dE(t)-\frac{1}{2}\frac{1}{E^{2}(t)}(dE(t))^{2} \nonumber\\
&=& \left[ \beta \frac{S(t)}{E(t)}\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds - \mu \right.\nonumber\\
&&\left.-\beta\frac{1}{E(t)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu \right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{E}-\frac{1}{2}\sigma^{2}_{\beta}\frac{S^{2}(t)}{E^{2}(t)}\left(\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))ds\right)^{2}\right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{\beta}\frac{1}{E^{2}(t)}\left(\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu \right)^{2}\right]dt\nonumber\\
&&-\sigma_{E}dw_{E}(t)+\sigma_{\beta} \frac{S(t)}{E(t)}\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(t-s))dsdw_{\beta}(t)\nonumber\\
&&-\sigma_{\beta}\frac{1}{E(t)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdudw_{\beta}(t),\nonumber\\
\label{ch1.sec1.thm1.eq5}\end{aligned}$$ and $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
dV_{3}(X(t)) &=& \frac{1}{I(t)}dI(t)-\frac{1}{2}\frac{1}{I^{2}(t)}(dI(t))^{2}\nonumber \\
&=& \left[\beta \frac{1}{I(t)}\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu- (\mu +d+ \alpha)\right. \nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{I}-\frac{1}{2}\sigma^{2}_{\beta}\left(\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdu \right)^{2}\right]dt\nonumber\\
&&-\sigma_{I}dw_{I}(t)+\sigma_{\beta}\frac{1}{I(t)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(t-s-u))dsdudw_{\beta}(t)\nonumber\\
&&\label{ch1.sec1.thm1.eq6}\end{aligned}$$ It follows from (\[ch1.sec1.thm1.eq3\])-(\[ch1.sec1.thm1.eq6\]) that for $t<\tau_{+}(t)$, $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
V(X(t))-V(X(t_{0})) &\geq& \int^{t}_{t_{0}}\left[-\beta \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(\xi-s))ds-\frac{1}{2}\sigma^{2}_{S}\right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{\beta}\left(\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(\xi-s))ds\right)^{2}\right]d\xi\nonumber\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%V_2
%%%%%%%%%%%%%%%%%%%%%
&&+ \int^{t}_{t_{0}}\left[-\beta\frac{1}{E(\xi)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(\xi-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(\xi-s-u))dsdu
\right.\nonumber\\
%&&\left. \right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{E}-\frac{1}{2}\sigma^{2}_{\beta}\frac{S^{2}(\xi)}{E^{2}(\xi)}\left(\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(\xi-s))ds\right)^{2}\right.\nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{\beta}\frac{1}{E^{2}(\xi)}\left(\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(\xi-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(\xi-s-u))dsdu \right)^{2}\right]d\xi\nonumber\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% V_3
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&&+ \int^{t}_{t_{0}}\left[- (3\mu +d+ \alpha)-\frac{1}{2}\sigma^{2}_{I}\right. \nonumber\\
&&\left.-\frac{1}{2}\sigma^{2}_{\beta}\left(\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(\xi-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(\xi-s-u))dsdu \right)^{2}\right]d\xi\nonumber\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%Diffussion v_1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&&+\int^{t}_{t_{0}}\left[-\sigma_{S}dw_{S}(\xi)-\sigma_{\beta} \int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(\xi-s))dsdw_{\beta}(\xi)\right]\nonumber \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%V_2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&&+\int^{t}_{t_{0}}\left[-\sigma_{E}dw_{E}(\xi)+\sigma_{\beta} \frac{S(\xi)}{E(\xi)}\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s}G(I(\xi-s))dsdw_{\beta}(\xi)\right]\nonumber\\
&&-\int^{t}_{t_{0}}\left[\sigma_{\beta}\frac{1}{E(\xi)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(\xi-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(\xi-s-u))dsdudw_{\beta}(\xi)\right]\nonumber\\
%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%V_3
&&+\int_{t_{0}}^{t}\left[-\sigma_{I}dw_{I}(\xi)\right.\nonumber\\
&&\left.+\sigma_{\beta}\frac{1}{I(\xi)} \int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(\xi-u)\int^{h_{1}}_{t_{0}}f_{T_{1}}(s) e^{-\mu s-\mu u}G(I(\xi-s-u))dsdudw_{\beta}(\xi)\right].\nonumber\\
&&\label{ch1.sec1.thm1.eq7}
%%%%%%%%%%%%%%%%%%%%%
\end{aligned}$$ Taking the limit in (\[ch1.sec1.thm1.eq7\]) as $t\rightarrow \tau_{+}(t)$, it follows from (\[ch1.sec1.thm1.eq1\])-(\[ch1.sec1.thm1.eq2\]) that the left-hand side $V(X(t))-V(X(t_{0}))$ diverges to $-\infty$. This contradicts the finiteness of the right-hand side of the inequality (\[ch1.sec1.thm1.eq7\]). Hence $\tau_{+}(t)=\tau_{e}(w)$ a.s., that is, $X(t,w)\in D(\tau_{e})$, whenever $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, and $X(t,w)\in \mathbb{R}^{4}_{+}$, whenever $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$.
The following shows that $\tau_{e}(w)=\infty$. Let $k>0$ be a positive integer such that $||\vec{\varphi}||_{1}\leq k$, where $\vec{\varphi}=\left(\varphi_{1}(t),\varphi_{2}(t), \varphi_{3}(t)\right), t\in (-\infty,t_{0}]$, is defined in (\[ch1.sec0.eq12\]), and $||.||_{1}$ is the $p$-sum norm on $\mathbb{R}^{3}$ with $p=1$. Define the stopping time $$\label{ch1.sec1.thm1.eq8}
\left\{
\begin{array}{ll}
\tau_{k}=sup\{t\in [t_{0},\tau_{e}): ||X(s)||_{1}=S(s)+E(s)+I(s)\leq k, s\in[t_{0},t] \}\\
\tau_{k}(t)=\min(t,\tau_{k}).
\end{array}
\right.$$ It is easy to see that as $k\rightarrow \infty$, $\tau_{k}$ increases. Set $\lim_{k\rightarrow \infty}\tau_{k}(t)=\tau_{\infty}$. Then it follows that $\tau_{\infty}\leq \tau_{e}$ a.s. We show in the following that: (1.) $\tau_{e}=\tau_{\infty}\quad a.s.\Leftrightarrow P(\tau_{e}\neq \tau_{\infty})=0$, (2.) $\tau_{\infty}=\infty\quad a.s.\Leftrightarrow P(\tau_{\infty}=\infty)=1$.
Suppose on the contrary that $P(\tau_{\infty}<\tau_{e})>0$. Let $w\in \{\tau_{\infty}<\tau_{e}\}$ and $t\leq \tau_{\infty}$. Define $$\label{ch1.sec1.thm1.eq9}
\left\{
\begin{array}{ll}
\hat{V}_{1}(X(t))=e^{\mu t}(S(t)+E(t)+I(t)),\\
\forall t\leq\tau_{k}(t).
\end{array}
\right.$$ The Ito-Doob differential $d\hat{V}_{1}$ of (\[ch1.sec1.thm1.eq9\]) with respect to the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is given as follows: $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
d\hat{V}_{1} &=& \mu e^{\mu t}(S(t)+E(t)+I(t)) dt + e^{\mu t}(dS(t)+dE(t)+dI(t)) \\
&=& e^{\mu t}\left[B+\alpha \int_{t_{0}}^{\infty}f_{T_{3}}(r)I(t-r)e^{-\mu r}dr-(\alpha + d)I(t)\right]dt\nonumber\\
&&-\sigma_{S}e^{\mu t}S(t)dw_{S}(t)-\sigma_{E}e^{\mu t}E(t)dw_{E}(t)-\sigma_{I}e^{\mu t}I(t)dw_{I}(t)\label{ch1.sec1.thm1.eq10}\end{aligned}$$ Integrating (\[ch1.sec1.thm1.eq10\]) over the interval $[t_{0}, \tau]$, and applying some algebraic manipulations and simplifications, it follows that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
V_{1}(X(\tau)) &=& V_{1}(X(t_{0}))+\frac{B}{\mu}\left(e^{\mu \tau}-e^{\mu t_{0}}\right)\nonumber\\
&&+\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}\left(\int_{t_{0}-r}^{t_{0}}\alpha I(\xi)d\xi-\int_{\tau-r}^{\tau}\alpha I(\xi)d\xi\right)dr-\int_{t_{0}}^{\tau}d I(\xi)d\xi \nonumber\\
&&+\int^{\tau}_{t_{0}}\left[-\sigma_{S}e^{\mu \xi}S(\xi)dw_{S}(\xi)-\sigma_{E}e^{\mu \xi}E(\xi)dw_{E}(\xi)-\sigma_{I}e^{\mu \xi}I(\xi)dw_{I}(\xi)\right]\label{ch1.sec1.thm1.eq11}\end{aligned}$$ Removing negative terms from (\[ch1.sec1.thm1.eq11\]), it implies from (\[ch1.sec0.eq12\]) that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
V_{1}(X(\tau)) &\leq& V_{1}(X(t_{0}))+\frac{B}{\mu}e^{\mu \tau}\nonumber\\
&&+\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}\left(\int_{t_{0}-r}^{t_{0}}\alpha \varphi_{3}(\xi)d\xi\right)dr \nonumber\\
&&+\int^{\tau}_{t_{0}}\left[-\sigma_{S}e^{\mu \xi}S(\xi)dw_{S}(\xi)-\sigma_{E}e^{\mu \xi}E(\xi)dw_{E}(\xi)-\sigma_{I}e^{\mu \xi}I(\xi)dw_{I}(\xi)\right]\label{ch1.sec1.thm1.eq12}\end{aligned}$$ But from (\[ch1.sec1.thm1.eq9\]) it is easy to see that for $\forall t\leq\tau_{k}(t)$, $$\label{ch1.sec1.thm1.eq12a}
||X(t)||_{1}=S(t)+E(t)+I(t)\leq V(X(t)).$$ Thus setting $\tau=\tau_{k}(t)$, then it follows from (\[ch1.sec1.thm1.eq8\]), (\[ch1.sec1.thm1.eq12\]) and (\[ch1.sec1.thm1.eq12a\]) that $$\label{ch1.sec1.thm1.eq13}
k=||X(\tau_{k}(t))||_{1}\leq V_{1}(X(\tau_{k}(t)))$$ Taking the limit in (\[ch1.sec1.thm1.eq13\]) as $k\rightarrow \infty$ leads to a contradiction, because the left-hand side of the inequality (\[ch1.sec1.thm1.eq13\]) becomes infinite, while by (\[ch1.sec1.thm1.eq12\]) the right-hand side remains finite. Hence $\tau_{e}=\tau_{\infty}$ a.s. The following shows that $\tau_{e}=\tau_{\infty}=\infty$ a.s. Let $w\in \{\tau_{e}<\infty\}$. It follows from (\[ch1.sec1.thm1.eq11\])-(\[ch1.sec1.thm1.eq12\]) that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
I_{\{\tau_{e}<\infty\}}V_{1}(X(\tau)) &\leq& I_{\{\tau_{e}<\infty\}}V_{1}(X(t_{0}))+I_{\{\tau_{e}<\infty\}}\frac{B}{\mu}e^{\mu \tau}\nonumber\\
&&+I_{\{\tau_{e}<\infty\}}\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}\left(\int_{t_{0}-r}^{t_{0}}\alpha \varphi_{3}(\xi)d\xi\right)dr\nonumber\\
&&+I_{\{\tau_{e}<\infty\}}\int^{\tau}_{t_{0}}\left[-\sigma_{S}e^{\mu \xi}S(\xi)dw_{S}(\xi)-\sigma_{E}e^{\mu \xi}E(\xi)dw_{E}(\xi)-\sigma_{I}e^{\mu \xi}I(\xi)dw_{I}(\xi)\right].
\nonumber\\
\label{ch1.sec1.thm1.eq14}\end{aligned}$$ Suppose $\tau=\tau_{k}(t)\wedge T$, where $T>0$ is arbitrary; then taking the expected value of (\[ch1.sec1.thm1.eq14\]), it follows that $$\label{ch1.sec1.thm1.eq14a}
E(I_{\{\tau_{e}<\infty\}}V_{1}(X(\tau_{k}(t)\wedge T))) \leq V_{1}(X(t_{0}))+\frac{B}{\mu}e^{\mu T}$$ But from (\[ch1.sec1.thm1.eq12a\]) it is easy to see that $$\label{ch1.sec1.thm1.eq15}
I_{\{\tau_{e}<\infty,\tau_{k}(t)\leq T\}}||X(\tau_{k}(t))||_{1}\leq I_{\{\tau_{e}<\infty\}}V_{1}(X(\tau_{k}(t)\wedge T))$$ It follows from (\[ch1.sec1.thm1.eq14\])-(\[ch1.sec1.thm1.eq15\]) and (\[ch1.sec1.thm1.eq8\]) that $$\begin{aligned}
P(\{\tau_{e}<\infty,\tau_{k}(t)\leq T\})k&=&E\left[I_{\{\tau_{e}<\infty,\tau_{k}(t)\leq T\}}||X(\tau_{k}(t))||_{1}\right]\nonumber\\
&\leq& E\left[I_{\{\tau_{e}<\infty\}}V_{1}(X(\tau_{k}(t)\wedge T))\right]\nonumber\\
&\leq& V_{1}(X(t_{0}))+\frac{B}{\mu}e^{\mu T}.
%&&+\sum_{r=1}^{M}\sum_{i=1}^{n_{r}}\sum_{q=1}^{M}\sum_{l=1}^{n_{q}}\int_{0}^{\infty}f^{rq}_{il}(t)\left[\varrho^{q}_{l}\int^{t_{0}}_{-t}\varphi^{rq}_{il2}(s)
%e^{\delta^{q}_{l}s}ds\right]dt\nonumber\\
\label{ch1.sec1.thm1.eq16}
\end{aligned}$$ It follows immediately from (\[ch1.sec1.thm1.eq16\]) that $P(\{\tau_{e}<\infty,\tau_{k}(t)\leq T\})\rightarrow 0$ as $k\rightarrow \infty$, and hence $P(\{\tau_{e}<\infty,\tau_{\infty}\leq T\})= 0$. Furthermore, since $T<\infty$ is arbitrary, we conclude that $P(\{\tau_{e}<\infty,\tau_{\infty}< \infty\})= 0$. Finally, by the total probability principle, $$\begin{aligned}
P(\{\tau_{e}<\infty\})&=&P(\{\tau_{e}<\infty,\tau_{\infty}=\infty\})+P(\{\tau_{e}<\infty,\tau_{\infty}<\infty\})\nonumber\\
&\leq&P(\{\tau_{e}\neq\tau_{\infty}\})+P(\{\tau_{e}<\infty,\tau_{\infty}<\infty\})\nonumber\\
&=&0.\label{ch1.sec1.thm1.eq17}
\end{aligned}$$ Thus from (\[ch1.sec1.thm1.eq17\]), $\tau_{e}=\tau_{\infty}=\infty$ a.s.. In addition, $X(t)\in D(\infty)$, whenever $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, and $X(t,w)\in \mathbb{R}^{4}_{+}$, whenever $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$.
\[ch1.sec0.remark1\]Theorem \[ch1.sec1.thm1\] and Lemma \[ch1.sec1.lemma1\] signify that the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) has a unique positive solution process $Y(t)\in \mathbb{R}^{4}_{+}$ globally for all $t\in (-\infty, \infty)$. Furthermore, it follows that every positive sample path for the stochastic system that starts in the closed ball centered at the origin with a radius of $\frac{B}{\mu}$, $D(\infty)=\bar{B}^{(-\infty, \infty)}_{\mathbb{R}^{4}_{+},}\left(0,\frac{B}{\mu}\right)$, will continue to oscillate and remain bounded in the closed ball for all time $t\geq t_{0}$, whenever the intensities of the independent white noise processes in the system satisfy $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$. Hence, the set $D(\infty)=\bar{B}^{(-\infty, \infty)}_{\mathbb{R}^{4}_{+},}\left(0,\frac{B}{\mu}\right)$ is a positive self-invariant set, or the feasibility region for the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), whenever $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$. In the case where the intensities of the independent white noise processes in the system satisfy $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, the sample path solutions are positive and unique, and continue to oscillate in the unbounded space of positive real numbers $\mathbb{R}^{4}_{+}$. In other words, all positive sample path solutions of the system that start in the bounded region $D(\infty)$, remain bounded for all time, whenever $\sigma_{i}=0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$, and the positive sample paths may become unbounded, whenever $\sigma_{i}>0$, $i\in \{S, E, I\}$ and $\sigma_{\beta}\geq 0$.
The implication of this result for the disease dynamics represented by (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is that noise arising exclusively from the disease transmission rate allows a controlled situation for the disease dynamics, since the positive solutions exist within a well-defined positive self-invariant space. The additional source of variability from the natural death rate of any of the different disease classes (susceptible, exposed, infectious or removed) may lead to more complex and uncontrolled situations for the disease dynamics, since the intensities of the white noise processes from the natural death rates of the different states in the system may drive the positive sample path solutions of the system unbounded. Some examples of uncontrolled disease situations that can occur when the positive solutions are unbounded include: (1) extinction of the population, (2) failure of existence of an infection-free steady population state into which the disease could be controlled, and (3) a sudden, significant random rise or drop of a given state, such as the infectious state, from a low to a high value, or vice versa, over a short time period.
It is shown in the subsequent sections that stronger noise in the system tends to enhance the persistence of the disease, and may even lead to the eventual extinction of the human population.
Stochasticity of the endemic equilibrium and persistence of disease\[ch1.sec3\]
===============================================================================
From a probabilistic perspective, the stochastic asymptotic stability (in the sense of Lyapunov) of an endemic equilibrium $E_{1}$, whenever it exists, ensures that every sample path for the stochastic system that starts in the neighborhood of the steady state $E_{1}$, has a high probability of oscillating in the neighborhood of the state, and eventually converges to that steady state, almost surely.
The biological significance of the stochastic stability of the endemic equilibrium $E_{1}$, whenever it exists, is that there exists a steady state for all disease-related states in the population (exposed, infectious and removed), denoted $E_{1}$, such that all sample paths for the disease-related states that start in the neighborhood of the state $E_{1}$ have a high probability of oscillating in the neighborhood of $E_{1}$, and eventually converge to that steady state almost surely. In other words, in the long term, there is certainty of an endemic population in which the disease persists. Epidemiologists require the basic reproduction numbers $R^{*}_{0}$ or $R_{0}$ defined in (\[ch1.sec2.lemma2a.corrolary1.eq4\]) and (\[ch1.sec2.theorem1.corollary1.eq3\]), respectively, to satisfy the conditions $R^{*}_{0}>1$ or $R_{0}\geq 1$ for the disease to persist. These facts are discussed in this section, and examples are provided to substantiate the results.
It is easy to see that the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) does not have a nontrivial or endemic steady state, whenever at least one of the intensities of the independent white noise processes in the system $\sigma_{i}>0, i=S, E, I, R, \beta$. Nevertheless, when the intensities of the noises of the system are infinitesimally small, that is, $\sigma_{i}=0, i=S, E, I, R, \beta$, the resulting system behaves approximately in the same manner as the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]), which has an endemic equilibrium $E_{1}$ studied in [@wanduku-biomath]. Thus, in this section, the asymptotic behavior of the sample paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) in the neighborhood of the potential endemic steady state, denoted $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$, is exhibited. The following results are quoted from [@wanduku-biomath] about sufficient conditions for the existence of the endemic equilibrium of the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]).
\[ch1.sec3.thm1\] Suppose the threshold condition $R_{0}>1$ is satisfied, where $R_{0}$ is defined in (\[ch1.sec2.theorem1.corollary1.eq3\]). It follows that when the delays in the system namely $T_{i}, i=1, 2, 3$ are random, and arbitrarily distributed, then the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) has a unique positive equilibrium state denoted by $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$, whenever $$\label{ch1.sec3.thm1.eq1}
E(e^{-\mu(T_{1}+T_{2})})\geq \frac{\hat{K}_{0}+\frac{\alpha}{\beta \frac{B}{\mu}}}{G'(0)},$$ where $\hat{K}_{0}$ is also defined in (\[ch1.sec2.theorem1.corollary1.eq3\]).
Proof:\
See[@wanduku-biomath].
\[ch1.sec3.thm1.corrolary1\] Suppose the incubation periods of the malaria plasmodium inside the mosquito and human hosts $T_{1}$ and $T_{2}$, and also the period of effective natural immunity against malaria inside the human being $T_{3}$ are constant. Let the threshold condition $R^{*}_{0}>1$ be satisfied, where $R^{*}_{0}$ is defined in (\[ch1.sec2.lemma2a.corrolary1.eq4\]). It follows that the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) has a unique positive equilibrium state denoted by $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$, whenever $$\label{ch1.sec3.thm1.corrolary1.eq1}
T_{1}+T_{2}\leq\frac{1}{\mu}\log{(G^{'}(0))}.%E(e^{-\mu(T_{1}+T_{2})})\geq \frac{\hat{K}_{0}+\frac{\alpha}{\beta \frac{B}{\mu}}}{G'(0)},$$
Proof:\
See[@wanduku-biomath]. It should be noted from Assumption \[ch1.sec0.assum1\] that the nonlinear function $G$ is bounded. Therefore, suppose $$\label{ch1.sec3.rem1.eqn1}
G^{*}=\sup_{x>0}{G(x)},$$ then it is easy to see that $0\leq G(x)\leq G^{*},\forall x>0$. It follows further from Assumption \[ch1.sec0.assum1\] that given $\lim _{I\rightarrow \infty}{G(I)}=C$, if $G$ is strictly monotonic increasing then $G^{*}\leq C$. Also, if $G$ is strictly monotonic decreasing then $G^{*}\geq C$. It is easy to see that when the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) is perturbed by noise from at least one of the different sources (natural death or disease transmission rates), that is, whenever at least one of $\sigma_{i}> 0,i=S,E,I,R,\beta$, then the nontrivial steady state $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$ ceases to exist for the resulting perturbed system obtained from (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]). It is therefore important to understand the extent to which the sample paths deviate from the endemic steady state $E_{1}$ under the influence of the noises in the system.
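As a concrete illustration (this particular choice is offered only as an example and is not prescribed by Assumption \[ch1.sec0.assum1\]), the saturating incidence function $$G(I)=\frac{aI}{1+bI},\qquad a, b>0,$$ satisfies $G(0)=0$, $G'(I)=\frac{a}{(1+bI)^{2}}>0$, $G''(I)=-\frac{2ab}{(1+bI)^{3}}<0$, and $\lim_{I\rightarrow\infty}G(I)=C=\frac{a}{b}$, so that $G^{*}=\sup_{x>0}G(x)=\frac{a}{b}\leq C$ and $G'(0)=a$, which is the quantity appearing in the threshold condition (\[ch1.sec3.thm1.corrolary1.eq1\]).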
The following lemma will be utilized to prove the results that characterize the asymptotic behavior of the sample paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) in the neighborhood of the nontrivial steady state $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$, whenever at least one of $\sigma_{i} \neq 0,i=S,E,I,R,\beta$.
\[ch1.sec3.lemma1\]Let the hypothesis of Theorem \[ch1.sec3.thm1\] be satisfied and define the $\mathcal{C}^{2,1}-$ function $V:\mathbb{R}^{3}_{+}\times \mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ where $$\label{ch1.sec3.lemma1.eq1}
V(t)=V_{1}(t)+V_{2}(t)+V_{3}(t) +V_{4}(t),$$ where, $$\label{ch1.sec3.lemma1.eq2}
V_{1}(t)=\frac{1}{2}\left(S(t)-S^{*}_{1}+E(t)-E^{*}_{1}+I(t)-I^{*}_{1}\right)^{2},$$ $$\label{ch1.sec3.lemma1.eq3}
V_{2}(t)=\frac{1}{2}\left(S(t)-S^{*}_{1}\right)^{2}$$ $$\label{ch1.sec3.lemma1.eq4}
V_{3}(t)=\frac{1}{2}\left(S(t)-S^{*}_{1}+E(t)-E^{*}_{1}\right)^{2}.$$ and $$\begin{aligned}
V_{4}(t)&=&\frac{3}{2}\frac{\alpha}{\lambda(\mu)}\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-2\mu r}\int^{t}_{t-r}(I(\theta)-I^{*}_{1})^{2}d\theta dr\nonumber\\
&&+\frac{\beta S^{*}_{1}}{\lambda(\mu)}(G'(I^{*}_{1}))^{2} \int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}\int^{t}_{t-(s+u)}(I(\theta)-I^{*}_{1})^{2}d\theta dsdu\nonumber\\
%%%%%
%&&+\frac{\beta }{\lambda(\mu)}(G'(I^{*}_{1}))^{2} \int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}\int^{t}_{t-(s+u)}(I(\theta)-I^{*}_{1})^{2}d\theta %dsdu\nonumber\\
%%%%%
&&+\frac{\beta \lambda(\mu)}{2} \int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}\int^{t}_{t-(s+u)}G^{2}(I(\theta))(S(\theta)-S^{*}_{1})^{2}d\theta dsdu\nonumber\\
%%%%%
%%%%%
&&+\sigma^{2}_{\beta} \int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}\int^{t}_{t-(s+u)}G^{2}(I(\theta))(S(\theta)-S^{*}_{1})^{2}d\theta dsdu,\nonumber\\\label{ch1.sec3.lemma1.eq4b}\end{aligned}$$ where $\lambda(\mu)>0$ is a real valued function of $\mu$. Suppose $\tilde{\phi}_{1}$, $\tilde{\psi}_{1}$ and $\tilde{\varphi}_{1}$ are defined as follows $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
\tilde{\phi}_{1}&=&3\mu-\left[2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+3\sigma ^{2}_{S}+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)E(e^{-2\mu T_{1}})\right.\nonumber\\
&&\left.+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}+\sigma^{2}_{\beta}(G^{*})^{2}\right)E(e^{-2\mu (T_{1}+T_{2})})\right]\label{ch1.sec3.lemma1.eq5a}\\
\tilde{\psi}_{1} &=& 2\mu-\left[\frac{\beta }{2\lambda{(\mu)}}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+ \frac{2\mu}{\lambda{(\mu)}}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha \lambda{(\mu)}+ 2 \sigma^{2}_{E} \right]\label{ch1.sec3.lemma1.eq5b}\\
\tilde{\varphi}_{1}&=& (\mu + d+\alpha)-\left[(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} + \sigma^{2}_{I}+\frac{3\alpha}{2\lambda{(\mu)}}E(e^{-2\mu T_{3}})\right.\nonumber\\
&&\left. +\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)E(e^{-2\mu (T_{1}+T_{2})})\right].\label{ch1.sec3.lemma1.eq5c}
\end{aligned}$$ The Ito-Doob differential $dV$ of $V(t)$ with respect to the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) can be written as follows: $$\label{ch1.sec3.lemma1.eq5}
dV=LV(t)dt + \overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)},$$ where $\overrightarrow{w(t)}=(w_{S},w_{E}, w_{I}, w_{\beta})^{T}$, and the function $(S(t), E(t), I(t))\mapsto \overrightarrow{g}(S(t), E(t), I(t))$ is defined as follows: $$\begin{aligned}
&&\overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)}= -\sigma_{S}(3(S(t)-S^{*}_{1})+2(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})S(t)dw_{S}(t)\nonumber\\
&&-\sigma_{E}(2(S(t)-S^{*}_{1})+2(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})E(t)dw_{E}(t)\nonumber\\
&&-\sigma_{I}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})I(t)dw_{I}(t)\nonumber\\
&&-\sigma_{\beta}(S(t)-S^{*}_{1})S(t)\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))dsdw_{\beta}(t)\nonumber\\
&&-\sigma_{\beta}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}S(t-u)G(I(t-s-u))dsdudw_{\beta}(t),\nonumber\\\label{ch1.sec3.lemma1.eq6}
\end{aligned}$$ and $LV$ satisfies the following inequality $$\begin{aligned}
LV(t)&\leq& -\left\{\tilde{\phi}_{1}(S(t)-S^{*}_{1})^{2}+\tilde{\psi}_{1}(E(t)-E^{*}_{1})^{2}+\tilde{\varphi}_{1}(I(t)-I^{*}_{1})^{2}\right\}\nonumber\\
&&+3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu T_{1}}).\nonumber\\\label{ch1.sec3.lemma1.eq7}
\end{aligned}$$
Proof\
From (\[ch1.sec3.lemma1.eq2\])-(\[ch1.sec3.lemma1.eq4\]) the derivative of $V_{1}$, $V_{2}$ and $V_{3}$ with respect to the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) can be written in the form: $$\begin{aligned}
dV_{1}&=&LV_{1}dt -\sigma_{S}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})S(t)dw_{S}(t)\nonumber\\
&&-\sigma_{E}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})E(t)dw_{E}(t)\nonumber\\
&&-\sigma_{I}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1})+I(t)-I^{*}_{1})I(t)dw_{I}(t),\nonumber\\\label{ch1.sec3.lemma1.proof.eq1}\end{aligned}$$ $$\begin{aligned}
dV_{2}&=&LV_{2}dt -\sigma_{S}((S(t)-S^{*}_{1}))S(t)dw_{S}(t)\nonumber\\
&&-\sigma_{\beta}(S(t)-S^{*}_{1})S(t)\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))dsdw_{\beta}(t)\nonumber\\\label{ch1.sec3.lemma1.proof.eq2}\end{aligned}$$ and $$\begin{aligned}
dV_{3}&=&LV_{3}dt -\sigma_{S}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))S(t)dw_{S}(t)-\sigma_{S}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))E(t)dw_{E}(t)\nonumber\\
&&-\sigma_{\beta}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}S(t-u)G(I(t-s-u))dsdudw_{\beta}(t),\nonumber\\\label{ch1.sec3.lemma1.proof.eq3}\end{aligned}$$where utilizing (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]), $LV_{1}$, $LV_{2}$ and $LV_{3}$ can be written as follows: $$\begin{aligned}
LV_{1}(t)&=&-\mu (S(t)-S^{*}_{1})^{2}-\mu (E(t)-E^{*}_{1})^{2}-(\mu + d+ \alpha) (I(t)-I^{*}_{1})^{2}\nonumber\\
&&-2\mu (S(t)-S^{*}_{1})(E(t)-E^{*}_{1})-(2\mu + d+ \alpha) (S(t)-S^{*}_{1})(I(t)-I^{*}_{1})\nonumber\\
&&-(2\mu + d+ \alpha) (E(t)-E^{*}_{1})(I(t)-I^{*}_{1})\nonumber\\
&&+\alpha((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1})+(I(t)-I^{*}_{1}))\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}(I(t-r)-I^{*}_{1})dr\nonumber\\
&&+\frac{1}{2}\sigma^{2}_{S}S^{2}(t)+\frac{1}{2}\sigma^{2}_{E}E^{2}(t)+\frac{1}{2}\sigma^{2}_{I}I^{2}(t),\label{ch1.sec3.lemma1.proof.eq4}\end{aligned}$$ $$\begin{aligned}
LV_{2}(t)&=&-\mu (S(t)-S^{*}_{1})^{2}+\alpha(S(t)-S^{*}_{1})\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}(I(t-r)-I^{*}_{1})dr\nonumber\\
&&-\beta(S(t)-S^{*}_{1})^{2}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))ds\nonumber\\
&&-\beta S^{*}_{1}(S(t)-S^{*}_{1})\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}(G(I(t-s))-G(I^{*}_{1}))ds\nonumber\\
&&+\frac{1}{2}\sigma^{2}_{S}S^{2}(t)+\frac{1}{2}\sigma^{2}_{\beta}S^{2}(t)\left(\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))ds\right)^{2},\label{ch1.sec3.lemma1.proof.eq5}\end{aligned}$$ and $$\begin{aligned}
LV_{3}(t)&=&-\mu (S(t)-S^{*}_{1})^{2}-\mu (E(t)-E^{*}_{1})^{2}-2\mu (S(t)-S^{*}_{1})(E(t)-E^{*}_{1})\nonumber\\
&&+\alpha((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}(I(t-r)-I^{*}_{1})dr\nonumber\\
&&-\beta(S(t)-S^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(S(t-u)-S^{*}_{1})G(I(t-s-u))dsdu\nonumber\\
&&-\beta(E(t)-E^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(S(t-u)-S^{*}_{1})G(I(t-s-u))dsdu\nonumber\\
&&-\beta S^{*}_{1}(S(t)-S^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(G(I(t-s-u))-G(I^{*}_{1}))dsdu\nonumber\\
&&-\beta S^{*}_{1}(E(t)-E^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(G(I(t-s-u))-G(I^{*}_{1}))dsdu\nonumber\\
%%%%%%%%%&&
&&+\frac{1}{2}\sigma^{2}_{S}S^{2}(t)+\frac{1}{2}\sigma^{2}_{E}E^{2}(t)\nonumber\\
%%%%%%%%%&&
&&+\frac{1}{2}\sigma^{2}_{\beta}\left(\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}S(t-u)G(I(t-s-u))dsdu\right)^{2}.\label{ch1.sec3.lemma1.proof.eq6}\end{aligned}$$ From (\[ch1.sec3.lemma1.proof.eq4\])-(\[ch1.sec3.lemma1.proof.eq6\]), the set of inequalities that follow will be used to estimate the sum $LV_{1}(t)+LV_{2}(t)+LV_{3}(t)$. That is, applying $Cauchy-Swartz$ and $H\ddot{o}lder$ inequalities, and also applying the algebraic inequality $$\label{ch2.sec2.thm2.proof.eq2*}
2ab\leq \frac{a^{2}}{g(c)}+b^{2}g(c)$$ where $a,b,c\in \mathbb{R}$, and the function $g$ is such that $g(c)> 0$, the terms associated with the integral term (sign) $\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}(I(t-r)-I^{*}_{1})dr$ are estimated as follows: $$\label{ch1.sec3.lemma1.proof.eq7}
(a(t)-a^{*})\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-\mu r}(I(t-r)-I^{*}_{1})dr\leq \frac{\lambda(\mu)}{2}(a-a^{*})^{2} + \frac{1}{2\lambda(\mu)}\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-2\mu r}(I(t-r)-I^{*}_{1})^{2}dr,$$ where $a(t)\in \{S(t), E(t), I(t)\}$ and $a^{*}\in \{S^{*}_{1}, E^{*}_{1}, I^{*}_{1}\}$. Furthermore, the terms with the integral sign that depend on $G(I(t-s))$ and $G(I(t-s-u))$ are estimated as follows: $$\begin{aligned}
&&-\beta(S(t)-S^{*}_{1})^{2}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))ds \leq \frac{\beta\lambda(\mu)}{2}(S(t)-S^{*}_{1})^{2}\nonumber\\
&&+\frac{\beta}{2\lambda(\mu)}(S(t)-S^{*}_{1})^{2}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}G^{2}(I(t-s))ds.\nonumber\\
&& -\beta(E(t)-E^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(S(t-u)-S^{*}_{1})G(I(t-s-u))dsdu\leq \frac{\beta}{2\lambda(\mu)}(E(t)-E^{*}_{1})^{2} \nonumber\\
&&+\frac{\beta\lambda(\mu)}{2}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}(S(t-u)-S^{*}_{1})^{2}G^{2}(I(t-s-u))dsdu. \label{ch1.sec3.lemma1.proof.eq8}
\end{aligned}$$ The terms with the integral sign that depend on $G(I(t-s))-G(I^{*}_{1})$ and $G(I(t-s-u))-G(I^{*}_{1})$ are estimated as follows: $$\begin{aligned}
&&-\beta S^{*}_{1}(S(t)-S^{*}_{1})\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}(G(I(t-s))-G(I^{*}_{1}))ds\leq \frac{\beta S^{*}_{1}\lambda(\mu)}{2}(S(t)-S^{*}_{1})^{2}\nonumber\\
&& +\frac{\beta S^{*}_{1}}{2\lambda(\mu)}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}(I(t-s)-I^{*}_{1})^{2}\left(\frac{G(I(t-s))-G(I^{*}_{1})}{I(t-s)-I^{*}_{1}}\right)^{2}ds\nonumber\\
&&\leq \frac{\beta S^{*}_{1}\lambda(\mu)}{2}(S(t)-S^{*}_{1})^{2}\nonumber\\
&& +\frac{\beta S^{*}_{1}}{2\lambda(\mu)}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}(I(t-s)-I^{*}_{1})^{2}\left(G'(I^{*}_{1})\right)^{2}ds.\nonumber\\
&&-\beta S^{*}_{1}(E(t)-E^{*}_{1})\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}(G(I(t-s-u))-G(I^{*}_{1}))dsdu\leq \frac{\beta S^{*}_{1}\lambda(\mu)}{2}(E(t)-E^{*}_{1})^{2}\nonumber\\
&& +\frac{\beta S^{*}_{1}}{2\lambda(\mu)}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu s}(I(t-s-u)-I^{*}_{1})^{2}\left(\frac{G(I(t-s-u))-G(I^{*}_{1})}{I(t-s-u)-I^{*}_{1}}\right)^{2}ds\nonumber\\
&&\leq \frac{\beta S^{*}_{1}\lambda(\mu)}{2}(E(t)-E^{*}_{1})^{2}\nonumber\\
&& +\frac{\beta S^{*}_{1}}{2\lambda(\mu)}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu s}(I(t-s-u)-I^{*}_{1})^{2}\left(G'(I^{*}_{1})\right)^{2}ds,\nonumber\\\label{ch1.sec3.lemma1.proof.eq9}\end{aligned}$$ where the inequality in (\[ch1.sec3.lemma1.proof.eq9\]) follows from Assumption \[ch1.sec0.assum1\]. That is, $G$ is a differentiable monotonic function with $G''(I)<0$, and consequently, $0< \frac{G(I)-G(I^{*}_{1})}{(I-I^{*}_{1})}\leq G'(I^{*}_{1}), \forall I>0$. By employing the $Cauchy-Swartz$ and $H\ddot{o}lder$ inequalities, and also applying the following algebraic inequality $(a+b)^{2}\leq 2a^{2}+ 2b^{2}$, the last set of terms with integral signs on (\[ch1.sec3.lemma1.proof.eq5\])-(\[ch1.sec3.lemma1.proof.eq6\]) are estimated as follows: $$\begin{aligned}
&&\frac{1}{2}\sigma^{2}_{\beta}S^{2}(t)\left(\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))ds\right)^{2}\leq \sigma^{2}_{\beta}\left((S(t)-S^{*}_{1})^{2}+(S^{*}_{1})^{2}\right)\times\nonumber\\
&&\times\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}G^{2}(I(t-s))ds.\nonumber\\
&&\frac{1}{2}\sigma^{2}_{\beta}\left(\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-\mu (s+u)}S(t-u)G(I(t-s-u))dsdu\right)^{2}\leq \sigma^{2}_{\beta}\times\nonumber\\
&&\times\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}\left((S(t-u)-S^{*}_{1})^{2}+(S^{*}_{1})^{2}\right)G^{2}(I(t-s-u))dsdu.\nonumber\\\label{ch1.sec3.lemma1.proof.eq10}\end{aligned}$$ By further applying the algebraic inequality (\[ch2.sec2.thm2.proof.eq2\*\]) and the inequalities (\[ch1.sec3.lemma1.proof.eq7\])-(\[ch1.sec3.lemma1.proof.eq10\]) on the sum $LV_{1}(t)+LV_{2}(t)+LV_{3}(t)$, it is easy to see from (\[ch1.sec3.lemma1.proof.eq4\])-(\[ch1.sec3.lemma1.proof.eq6\]) that $$\begin{aligned}
&&LV_{1}(t)+LV_{2}(t)+LV_{3}(t)\leq (S(t)-S^{*}_{1})^{2}\left[-3\mu +2\mu \lambda(\mu) +(2\mu+ d+\alpha)\frac{\lambda(\mu)}{2}+\alpha\lambda(\mu)\right.\nonumber\\
&&\left.+ \frac{\beta\lambda(\mu)}{2}+\frac{\beta S^{*}_{1}\lambda(\mu)}{2}+ \frac{\beta}{2\lambda(\mu)}(G^{*})^{2}E(e^{-2\mu T_{1}})+ 3\sigma^{2}_{S}+\sigma^{2}_{\beta} (G^{*})^{2}E(e^{-2\mu T_{1}})\right]\nonumber\\
&&+(E(t)-E^{*}_{1})^{2}\left[-2\mu +\frac{2\mu }{\lambda(\mu)} +(2\mu+ d+\alpha)\frac{\lambda(\mu)}{2}+\alpha\lambda(\mu)+ \frac{\beta}{2\lambda(\mu)}+\frac{\beta S^{*}_{1}\lambda(\mu)}{2}+ 2\sigma^{2}_{E}\right]\nonumber\\
%%%%%\right.\nonumber\\&&\left.
&&+(I(t)-I^{*}_{1})^{2}\left[-(\mu+d+\alpha) +(2\mu+ d+\alpha)\frac{1}{\lambda(\mu)}+\frac{\alpha\lambda(\mu)}{2}+ \frac{\beta}{2\lambda(\mu)}+\frac{\beta S^{*}_{1}\lambda(\mu)}{2}+ \sigma^{2}_{I}\right]\nonumber\\
%\right.\nonumber\\
%&&\left.
&&+\frac{3\alpha}{2\lambda(\mu)}\int_{t_{0}}^{\infty}f_{T_{3}}(r)e^{-2\mu r}(I(t-r)-I^{*}_{1})^{2}dr\nonumber\\
&&+\frac{\beta S^{*}_{1}}{\lambda(\mu)}(G'(I^{*}_{1}))^{2}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}(I(t-s-u)-I^{*}_{1})^{2}dsdu\nonumber\\
&&+\frac{\beta \lambda(\mu)}{2}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}G^{2}(I(t-s-u))(S(t-s)-S^{*}_{1})^{2}dsdu\nonumber\\
&&+3\sigma^{2}_{S}(S^{*}_{1})^{2}+2\sigma^{2}_{E}(E^{*}_{1})^{2}+ \sigma^{2}_{I}(I^{*}_{1})^{2}\nonumber\\
&&+\sigma^{2}_{\beta}(S^{*}_{1})^{2}\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}G^{2}(I(t-s))ds\nonumber\\
&&+\sigma^{2}_{\beta}\int_{t_{0}}^{h_{2}}\int_{t_{0}}^{h_{1}}f_{T_{2}}(u)f_{T_{1}}(s)e^{-2\mu (s+u)}G^{2}(I(t-s-u))(S(t-s-u)-S^{*}_{1})^{2}dsdu\nonumber\\\label{ch1.sec3.lemma1.proof.eq11}\end{aligned}$$ But $V(t)=V_{1}(t)+V_{2}(t)+V_{3}(t)+V_{4}(t)$, therefore from (\[ch1.sec3.lemma1.proof.eq11\]), (\[ch1.sec3.lemma1.eq4b\]) and (\[ch1.sec3.lemma1.proof.eq4\])-(\[ch1.sec3.lemma1.proof.eq6\]), the results in (\[ch1.sec3.lemma1.eq5\])-(\[ch1.sec3.lemma1.eq7\]) follow directly. Theorems \[\[ch1.sec3.thm1\], \[ch1.sec3.thm1.corrolary1\]\] assert that the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) has an endemic equilibrium denoted $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$, whenever the basic reproduction numbers $R_{0}$ and $R^{*}_{0}$ for the disease in the absence of noise in the system satisfy $R_{0}>1$ and $R^{*}_{0}>1$, respectively. One common technique to obtain insight about the asymptotic behavior of the sample paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) near the potential endemic equilibrium $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$ for the stochastic system, is to characterize the long-term average distance of the paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) from the endemic equilibrium $E_{1}$.
Indeed, justification for this technique is the fact that for the second order stochastic solution process $\{X(t),t\geq t_{0}\}$ of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) defined in Theorem \[ch1.sec1.thm1\], the long-term average distance of the sample paths from the endemic equilibrium $E_{1}$, denoted $\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}||X(s)-E_{1}||ds}$ estimates the long-term ensemble mean denoted\
$\limsup_{t\rightarrow \infty}{E||X(t)-E_{1}||}$, almost surely. Moreover, if the solution process $\{X(t),t\geq t_{0}\}$ of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is stationary and ergodic, then the long-term ensemble mean $\limsup_{t\rightarrow \infty}{E||X(t)-E_{1}||}=E||X-E_{1}||$, where $X$ is the limit in distribution of the solution process $\{X(t),t\geq t_{0}\}$. That is, $\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}||X(s)-E_{1}||ds}=E||X-E_{1}||$, almost surely. These stationary and ergodic properties of the solution process $\{X(t),t\geq t_{0}\}$ are discussed in detail in Section \[ch1.sec3.sec1\]. For convenience, the following notations are introduced and used in the rest of the results in the subsequent sections. Let $a_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})$, $a_{1}(\mu, d, \alpha, \beta, B)$, $a_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})$, $a_{2}(\mu, d, \alpha, \beta, B)$, $a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})$, and $a_{3}(\mu, d, \alpha, \beta, B)$ represent the following set of parameters: $$\label{ch1.sec3.lemma1.proof.eq13a}
\left\{
\begin{array}{lll}
a_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})&=&2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+3\sigma ^{2}_{S}\\
&&+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}+\sigma^{2}_{\beta}(G^{*})^{2}\right)\left(\frac{1}{G'(0)}\right)^{2}\\%\label{ch1.sec3.lemma1.proof.eq13a}\\
a_{1}(\mu, d, \alpha, \beta, B)&=&2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}\\%\nonumber\\
&&+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}\right)\left(\frac{1}{G'(0)}\right)^{2}\\%\label{ch1.sec3.lemma1.proof.eq13a1}\\
%\hat{K}_{0}+\frac{\alpha}{\beta \frac{B}{\mu}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
a_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})&=&(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} + \sigma^{2}_{I}\\%\nonumber\\
&&+\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)\left(\frac{1}{G'(0)}\right)^{2}\\%\label{ch1.sec3.lemma1.proof.eq13b}\\
a_{2}(\mu, d, \alpha, \beta, B)&=&(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2}\\% \nonumber\\
&&+\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)\left(\frac{1}{G'(0)}\right)^{2}\\%\label{ch1.sec3.lemma1.proof.eq13b1}\\
%\hat{K}_{0}+\frac{\alpha}{\beta \frac{B}{\mu}}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})&=&\frac{\beta }{2\lambda{(\mu)}}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+ \frac{2\mu}{\lambda{(\mu)}}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha \lambda{(\mu)}+ 2 \sigma^{2}_{E}\\%\nonumber\\\label{ch1.sec3.lemma1.proof.eq13c}
a_{3}(\mu, d, \alpha, \beta, B)&=&\frac{\beta }{2\lambda{(\mu)}}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+ \frac{2\mu}{\lambda{(\mu)}}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha \lambda{(\mu)}%\nonumber\\\label{ch1.sec3.lemma1.proof.eq13c1}
\end{array}
\right.$$ Also let $\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})$, $\tilde{a}_{1}(\mu, d, \alpha, \beta, B)$, $\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})$, $\tilde{a}_{2}(\mu, d, \alpha, \beta, B)$ represent the following set of parameters $$\label{ch1.sec3.thm2.proof.eq4a}
\left\{
\begin{array}{lll}
\tilde{a}_{1}(\mu, d, \alpha, \beta, B,\sigma ^{2}_{S}, \sigma^{2}_{\beta})&=&2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+3\sigma ^{2}_{S}\\%\nonumber\\
&&+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}+\sigma^{2}_{\beta}(G^{*})^{2}\right)+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)\\%\label{ch1.sec3.thm2.proof.eq4a}\\
\tilde{a}_{1}(\mu, d, \alpha, \beta, B)&=&2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}\\%\nonumber\\
&&+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}\right)+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}\right)\\%\label{ch1.sec3.thm2.proof.eq4a1}\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma^{2}_{I})&=&(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} + \sigma^{2}_{I}\\%\nonumber\\
&&+\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)+\frac{3\alpha}{2\lambda(\mu)},\\%\label{ch1.sec3.thm2.proof.eq4b}\\
\tilde{a}_{2}(\mu, d, \alpha, \beta, B)&=&(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} +\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)+\frac{3\alpha}{2\lambda(\mu)}.
\end{array}
\right.$$ The result in Theorem \[ch1.sec3.thm2\] characterizes the behavior of the sample paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) in the neighborhood of the nontrivial steady state $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})$ defined in Theorem \[ch1.sec3.thm1.corrolary1\], whenever the incubation and natural immunity delay periods of the disease denoted by $T_{1}$, $T_{2}$, and $T_{3}$ are constant for all individuals in the population, and Theorem \[ch1.sec1.thm1\]\[a.\] holds. The following partial result from \[[@maobook], Theorem 3.4\] called the strong law of large numbers for local martingales will be used to establish the result.
\[ch1.sec3.thm2.lemma1.lemma1\] Let $M=\{M_{t}\}_{t\geq 0}$ be a real valued continuous local martingale vanishing at $t=0$. Then $$\lim_{t\rightarrow \infty}{<M(t),M(t)>}=\infty, \quad a.s.\quad\Rightarrow\quad \lim_{t\rightarrow\infty} \frac{M(t)}{<M(t), M(t)>}=0,\quad a.s.$$ and also $$\limsup_{t\rightarrow \infty}{\frac{<M(t),M(t)>}{t}}<\infty, \quad a.s.\quad\Rightarrow\quad \lim_{t\rightarrow\infty} \frac{M(t)}{t}=0,\quad a.s.$$
The notation $<M(t),M(t)>$ is used to denote the quadratic variation of the local martingale $M=\{M(t),\forall t\geq t_{0}\}$.
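As a simple illustration of the second implication, consider the local martingale $M(t)=\int_{0}^{t}\sigma dw(s)$ for a constant $\sigma>0$. Its quadratic variation is $<M(t),M(t)>=\sigma^{2}t$, so that $$\limsup_{t\rightarrow \infty}{\frac{<M(t),M(t)>}{t}}=\sigma^{2}<\infty \quad\Rightarrow\quad \lim_{t\rightarrow\infty} \frac{M(t)}{t}=0,\quad a.s.$$ This is exactly the form in which the lemma is applied to the stochastic integrals $N_{1}(t)$ and $N_{2}(t)$ in the proof below.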
Recall that the assumption that $T_{1}, T_{2}$ and $T_{3}$ are constant is equivalent to the special case in which the probability density functions of $T_{1}, T_{2}$ and $T_{3}$ are the Dirac delta function defined in (\[ch1.sec2.eq4\]). Moreover, under the assumption that $T_{1}\geq 0, T_{2}\geq 0$ and $T_{3}\geq 0$ are constant, it follows from (\[ch1.sec3.lemma1.eq5a\])-(\[ch1.sec3.lemma1.eq5c\]) that $E(e^{-2\mu (T_{1}+T_{2})})=e^{-2\mu (T_{1}+T_{2})} $, $E(e^{-2\mu T_{1}})=e^{-2\mu T_{1}} $ and $E(e^{-2\mu T_{3}})=e^{-2\mu T_{3}} $.
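For instance, writing the expectation as an integral against the delay density, and using the fact that in this special case the density of $T_{1}$ is the point mass concentrated at the constant value $T_{1}$, the reduction is immediate: $$E(e^{-2\mu T_{1}})=\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-2\mu s}ds=\int_{t_{0}}^{h_{1}}\delta(s-T_{1})e^{-2\mu s}ds=e^{-2\mu T_{1}},$$ and similarly for $E(e^{-2\mu (T_{1}+T_{2})})$ and $E(e^{-2\mu T_{3}})$.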
\[ch1.sec3.thm2\] Let the hypotheses of Theorem \[ch1.sec1.thm1\]\[a.\], Theorem \[ch1.sec3.thm1.corrolary1\] and Lemma \[ch1.sec3.lemma1\] be satisfied and let $$\begin{aligned}
\mu>\max{ \left(\frac{1}{3}a_{1}(\mu, d, \alpha, \beta, B, \sigma^{2}_{S}=0,\sigma^{2}_{\beta}),\frac{1}{2}a_{3}(\mu, d, \alpha, \beta, B)\right)},\quad and\nonumber\\
(\mu+d+\alpha)>a_{2}(\mu, d, \alpha, \beta, B ).\label{ch1.sec3.thm2.eq1}
%\quad and\quad 2\mu> a_{3}(\mu, d, \alpha, \beta, B).
\end{aligned}$$ Also let the delay times $T_{1}, T_{2}$ and $T_{3}$ be constant, that is, the probability density functions of $T_{1}, T_{2}$ and $T_{3}$ respectively denoted by $f_{T_{i}}, i=1, 2, 3$ are the dirac-delta functions defined in (\[ch1.sec2.eq4\]). Furthermore, let the constants $T_{1}, T_{2}$ and $T_{3}$ satisfy the following set of inequalities: $$\label{ch1.sec3.thm2.eq2}
T_{1}>\frac{1}{2\mu}\log{\left(\frac{\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)}{(3\mu-a_{1}(\mu, d, \alpha, \beta, B, \sigma^{2}_{S}=0,\sigma^{2}_{\beta}))}\right)},$$ $$\label{ch1.sec3.thm2.eq3}
T_{2}<\frac{1}{2\mu}\log{\left(\frac{(3\mu-a_{1}(\mu, d, \alpha, \beta, B, \sigma^{2}_{S}=0,\sigma^{2}_{\beta}))}{\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)\left(\frac{1}{G'(0)}\right)^{2}}\right)},$$and $$\label{ch1.sec3.thm2.eq3b}
T_{3}>\frac{1}{2\mu}\log{\left(\frac{\frac{3\alpha}{2\lambda(\mu)}}{(\mu+d+\alpha)-a_{2}(\mu, d, \alpha, \beta, B)}\right)}.$$ There exists a positive real number $\mathfrak{m}_{1}>0$, such that $$\begin{aligned}
&&\limsup_{t\rightarrow \infty}\frac{1}{t}\int^{t}_{0}\left[ ||X(v)-E_{1}||_{2}\right]^{2}dv\nonumber\\
&&\leq \frac{\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-2\mu (T_{1}+T_{2})}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-\mu T_{1}}}{\mathfrak{m}_{1}},\nonumber\\
\label{ch2.sec3.thm2.eq4}\end{aligned}$$almost surely, where $X(t)$ is defined in (\[ch1.sec0.eq13b\]), and $||.||_{2}$ is the natural Euclidean norm in $\mathbb{R}^{2}$.
Proof:\
From Lemma \[ch1.sec3.lemma1\], (\[ch1.sec3.lemma1.eq5a\])-(\[ch1.sec3.lemma1.eq5c\]), it is easy to see that, under the assumptions in (\[ch1.sec3.thm1.corrolary1.eq1\]) and (\[ch1.sec3.thm2.eq1\])-(\[ch1.sec3.thm2.eq3b\]), $\tilde{\phi}_{1}>0$, $\tilde{\psi}_{1}>0$ and $\tilde{\varphi}_{1}>0$. Therefore, from (\[ch1.sec3.lemma1.eq5\])-(\[ch1.sec3.lemma1.eq7\]), it is also easy to see that $$\begin{aligned}
dV&=&LV(t)dt + \overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)},\nonumber\\
&\leq&-\min\{\tilde{\phi}_{1}, \tilde{\psi}_{1}, \tilde{\varphi}_{1}\}\left[ (S(t)-S^{*}_{1})^{2}+ (E(t)-E^{*}_{1})^{2}+ (I(t)-I^{*}_{1})^{2}\right]\nonumber\\
&& +3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-2\mu (T_{1}+T_{2})}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-\mu T_{1}}\nonumber\\
&&+ \overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)},\label{ch1.sec3.thm2.proof.eq1}
\end{aligned}$$ Integrating both sides of (\[ch1.sec3.thm2.proof.eq1\]) from 0 to $t$, it follows that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
&&(V(t)-V(0))\leq -\mathfrak{m_{1}}\int^{t}_{0}\left[ (S(v)-S^{*}_{1})^{2}+ (E(v)-E^{*}_{1})^{2}+ (I(v)-I^{*}_{1})^{2}\right]dv\nonumber\\
&& +\left(3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-2\mu (T_{1}+T_{2})}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}e^{-\mu T_{1}}\right)t,\nonumber\\
&&+\int^{t}_{0}\overrightarrow{g}(S(v), E(v), I(v))d\overrightarrow{w(v)},\label{ch2.sec3.thm2.proof.eq2}\end{aligned}$$ where $V(0)\geq 0$ and $$\label{ch2.sec3.thm2.proof.eq3}
\mathfrak{m}_{1}=\min(\tilde{\phi}_{1},\tilde{\psi}_{1},\tilde{\varphi}_{1})>0$$ are constants, and $$\begin{aligned}
&&\overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)}=
-\sigma_{\beta}(S(t)-S^{*}_{1})S(t)e^{-\mu T_{1}}G(I(t-T_{1}))dw_{\beta}(t)\nonumber\\
&&-\sigma_{\beta}((S(t)-S^{*}_{1})+(E(t)-E^{*}_{1}))e^{-\mu (T_{1}+T_{2})}S(t-T_{2})G(I(t-T_{1}-T_{2}))dw_{\beta}(t),\nonumber\\\label{ch2.sec3.thm2.proof.eq4}
\end{aligned}$$ since Theorem \[ch1.sec1.thm1\]\[a.\] holds and $T_{1}$ and $T_{2}$ are constants. Now, define $$\begin{aligned}
N_{1}(t)=-\int^{t}_{0}\sigma_{\beta}(S(v)-S^{*}_{1})S(v)e^{-\mu T_{1}}G(I(v-T_{1}))dw_{\beta}(v),\quad and\quad \nonumber\\ N_{2}(t)=-\int^{t}_{0}\sigma_{\beta}((S(v)-S^{*}_{1})+(E(v)-E^{*}_{1}))e^{-\mu (T_{1}+T_{2})}S(v-T_{2})G(I(v-T_{1}-T_{2}))dw_{\beta}(v).\nonumber\\
\label{ch2.sec3.thm2.proof.eq5}
\end{aligned}$$ Also, the quadratic variations of $N_{1}(t)$ and $N_{2}(t)$ in (\[ch2.sec3.thm2.proof.eq5\]) are given by $$\begin{aligned}
<N_{1}(t), N_{1}(t)>&=&\int^{t}_{0}\sigma^{2}_{\beta}(S(v)-S^{*}_{1})^{2}S^{2}(v)e^{-2\mu T_{1}}G^{2}(I(v-T_{1}))dv,\nonumber\\
<N_{2}(t), N_{2}(t)>&=&\int^{t}_{0}\sigma^{2}_{\beta}((S(v)-S^{*}_{1})+(E(v)-E^{*}_{1}))^{2}e^{-2\mu (T_{1}+T_{2})}S^{2}(v-T_{2})G^{2}(I(v-T_{1}-T_{2}))dv.\nonumber\\\label{ch2.sec3.thm2.proof.eq6}
\end{aligned}$$ It follows that when Theorem \[ch1.sec1.thm1\]\[a.\] holds, then from Assumption \[ch1.sec0.assum1\] and (\[ch2.sec3.thm2.proof.eq6\]), $$\begin{aligned}
<N_{1}(t), N_{1}(t)>&\leq& \int^{t}_{0}\sigma^{2}_{\beta}\left(\frac{\beta}{\mu}+S^{*}_{1}\right)^{2}\left(\frac{\beta}{\mu}\right)^{2}e^{-2\mu T_{1}}\left(\frac{\beta}{\mu}\right)^{2}dv\nonumber\\
&=&\sigma^{2}_{\beta}\left(\frac{\beta}{\mu}+S^{*}_{1}\right)^{2}\left(\frac{\beta}{\mu}\right)^{4}e^{-2\mu T_{1}}t.\label{ch2.sec3.thm2.proof.eq7}\end{aligned}$$ From (\[ch2.sec3.thm2.proof.eq7\]), it is easy to see that $\limsup_{t\rightarrow \infty }{\frac{1}{t}<N_{1}(t), N_{1}(t)>}<\infty$, a.s. Therefore, by the strong law of large numbers for local martingales in Lemma \[ch1.sec3.thm2.lemma1.lemma1\], it follows that $\limsup_{t\rightarrow \infty }\frac{1}{t}N_{1}(t)=0$, a.s. By the same reasoning, it can be shown that $\limsup_{t\rightarrow \infty }\frac{1}{t}N_{2}(t)=0$, a.s. Moreover, from (\[ch2.sec3.thm2.proof.eq4\]), it can be seen that $\limsup_{t\rightarrow \infty }\frac{1}{t}\int^{t}_{0}\overrightarrow{g}(S(v), E(v), I(v))d\overrightarrow{w(v)}=0$, a.s. Hence, dividing both sides of (\[ch2.sec3.thm2.proof.eq2\]) by $t$ and $\mathfrak{m}_{1}$, and taking the $\limsup_{t\rightarrow \infty}$, (\[ch2.sec3.thm2.eq4\]) follows directly.
\[ch2.sec3.thm2.rem1\] When the basic reproduction number $R^{*}_{0}$ defined in (\[ch1.sec2.lemma2a.corrolary1.eq4\]) satisfies $R^{*}_{0}>1$ and the disease dynamics is perturbed by random fluctuations exclusively in the disease transmission rate, that is, when the intensity $\sigma_{\beta}>0$, it was noted earlier that the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) does not have an endemic equilibrium state. Nevertheless, the conditions in Theorem \[ch1.sec3.thm2\] provide estimates for the constant delay times $T_{1}, T_{2}$ and $T_{3}$ in (\[ch1.sec3.thm2.eq2\])-(\[ch1.sec3.thm2.eq3b\]), in addition to other parametric restrictions in (\[ch1.sec3.thm2.eq1\]), that are sufficient for the solution paths of the perturbed stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) to oscillate near the nontrivial steady state, $E_{1}$, of the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) found in Theorem \[ch1.sec3.thm1.corrolary1\]. The result in (\[ch2.sec3.thm2.eq4\]) estimates the average distance between the sample paths of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) and the nontrivial steady state $E_{1}$. Moreover, (\[ch2.sec3.thm2.eq4\]) depicts the size of the oscillations of the paths of the stochastic system relative to $E_{1}$: smaller values for the intensity $\sigma_{\beta}> 0$ lead to oscillations of the paths in closer proximity to the steady state $E_{1}$, and vice versa.
In a biological context, the result of this theorem signifies that when the basic reproduction number exceeds one and the other parametric restrictions in (\[ch1.sec3.thm2.eq1\])-(\[ch1.sec3.thm2.eq3b\]) are satisfied, the disease related classes ($E, I, R$), and consequently the disease as a whole, persist in a state near the endemic equilibrium state $E_{1}$. Moreover, stronger noise from the disease transmission rate tends to sustain the disease in a state further away from the endemic equilibrium state $E_{1}$, and vice versa. Nevertheless, in this case of variability exclusively from the disease transmission rate, the numerical simulation results in Example \[ch1.sec4.subsec1.1\] suggest that a continuous decrease in the size of some of the subpopulation classes (susceptible, exposed, infectious and removal) may occur as the intensity of the noise from the disease transmission rate increases, but there is no definite indication of extinction of the entire human population over time. Note that, compared to the simulation results in Example \[ch1.sec4.subsec1.2\], there is some evidence that the strength of the noise from the disease transmission rate sustains the disease, but not to the point of extinction of the entire population (or at least not at the same rate as in the case of the noise from the natural death rates).
These observations in Example \[ch1.sec4.subsec1.1\] also support the remark for Theorem \[ch1.sec1.thm1\]\[a\] that the sample paths of the stochastic system do not exhibit complex behaviors, such as extinction of the entire human population, when the disease dynamics is perturbed exclusively by noise from the disease transmission rate, in contrast to the complex behavior observed when the disease dynamics is perturbed by noise from the natural death rates. These facts suggest that malaria control policies, in the event where the disease is persistent, should focus on reducing the intensity of the fluctuations in the disease transmission rate, perhaps through vector control and better care of the people in the population to keep the transmission rate nearly constant, in order to reduce the number of malaria cases which sustain the persistence of the disease.
The subsequent result provides more general conditions, irrespective of the probability distribution of the random variables $T_{1}, T_{2}$ and $T_{3}$, that are sufficient for the trajectories of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) to oscillate near the nontrivial steady state $E_{1}$ of the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]), whenever the intensities of the Gaussian noises in the system are positive, that is, whenever $\sigma_{i}>0, i=S, E, I, R, \beta$. The following result from \[Lemma 2.2, [@maobook]\] and \[Lemma 15, [@Li2008]\] will be used to achieve this result.
\[ch1.sec3.thm3.lemma1\] Let $M(t), t\geq 0$, be a continuous local martingale with initial value $M(0)=0$. Let $<M(t),M(t)>$ be its quadratic variation. Let $\delta >1$ be a number, and let $\nu_{k}$ and $\tau_{k}$ be two sequences of positive numbers. Then, for almost all $w\in\Omega$, there exists a random integer $k_{0}=k_{0}(w)$ such that, for all $k\geq k_{0}$, $$M(t)\leq \frac{1}{2}\nu_{k}<M(t), M(t)>+\frac{\delta \ln{k}}{\nu_{k}}, \quad 0\leq t\leq \tau_{k}.$$
Proof:\
See [@maobook; @Li2008].
\[ch2.sec3.thm3\] Suppose the hypotheses of Theorem \[ch1.sec1.thm1\]\[b.\], Theorem \[ch1.sec3.thm1\] and Lemma \[ch1.sec3.lemma1\] are satisfied, and let $$\label{ch1.sec3.thm3.eq1}
\mu> \max\left\{\frac{1}{3}\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta}),\frac{1}{2}a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})\right\}\quad and\quad(\mu+d+\alpha)>\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I}).$$ It follows that for any arbitrary probability distribution of the delay times: $T_{1}, T_{2}$ and $T_{3}$, there exists a positive real number $\mathfrak{m}_{2}>0$ such that $$\begin{aligned}
&&\limsup_{t\rightarrow \infty}\frac{1}{t}\int^{t}_{0}\left[ (S(v)-S^{*}_{1})^{2}+ (E(v)-E^{*}_{1})^{2}+ (I(v)-I^{*}_{1})^{2}\right]dv\nonumber\\
&&\leq \frac{3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}})}{\mathfrak{m}_{2}},\nonumber\\
\label{ch2.sec3.thm3.eq2}\end{aligned}$$almost surely.
Proof:\
From Lemma \[ch1.sec3.lemma1\], (\[ch1.sec3.lemma1.eq5a\])-(\[ch1.sec3.lemma1.eq5c\]), $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
-\tilde{\phi}_{1}&=&-3\mu+\left[2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+3\sigma ^{2}_{S}+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)E(e^{-2\mu T_{1}})\right.\nonumber\\
&&\left.+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}+\sigma^{2}_{\beta}(G^{*})^{2}\right)E(e^{-2\mu (T_{1}+T_{2})})\right]\nonumber\\
&&\leq -3\mu+\left[2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+3\sigma ^{2}_{S}+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}+\sigma^{2}_{\beta}(G^{*})^{2}\right)\right.\nonumber\\
&&\left.+\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}+\sigma^{2}_{\beta}(G^{*})^{2}\right)\right]\nonumber\\
&&=-\left(3\mu-\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})\right)\label{ch1.sec3.thm3.proof.eq1}\\
%%%%%
%%%%%%
-\tilde{\psi}_{1} &=& -2\mu+\left[\frac{\beta }{2\lambda{(\mu)}}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+ \frac{2\mu}{\lambda{(\mu)}}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha \lambda{(\mu)}+ 2 \sigma^{2}_{E} \right]\nonumber\\
&&=-\left(2\mu-a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})\right)\label{ch1.sec3.thm3.proof.eq2}\\
-\tilde{\varphi}_{1}&=& -(\mu + d+\alpha)+\left[(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} + \sigma^{2}_{I}+\frac{3\alpha}{2\lambda{(\mu)}}E(e^{-2\mu T_{3}})\right.\nonumber\\
&&\left. +\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)E(e^{-2\mu (T_{1}+T_{2})})\right]\nonumber\\
&\leq& -(\mu + d+\alpha)+\left[(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2} + \sigma^{2}_{I}+\frac{3\alpha}{2\lambda{(\mu)}}\right.\nonumber\\
&&\left. +\left(\frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}
\right)\right]\nonumber\\
&&=-\left((\mu + d+\alpha)-\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})\right),\label{ch1.sec3.thm3.proof.eq3}
\end{aligned}$$since $0<E(e^{-2\mu (T_{i})})\leq 1, i=1, 2,3$. It follows from (\[ch1.sec3.lemma1.eq5\])-(\[ch1.sec3.lemma1.eq7\]) and (\[ch1.sec3.thm3.proof.eq1\])-(\[ch1.sec3.thm3.proof.eq3\]) that $$\begin{aligned}
dV&=&LV(t)dt + \overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)},\nonumber\\
&\leq&-\min\{\left(3\mu-\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})\right), \left(2\mu-a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})\right), \left((\mu + d+\alpha)-\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})\right)\}\times\nonumber\\
&&\times \left[ (S(t)-S^{*}_{1})^{2}+ (E(t)-E^{*}_{1})^{2}+ (I(t)-I^{*}_{1})^{2}\right]\nonumber\\
&& +3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}})\nonumber\\
&&+ \overrightarrow{g}(S(t), E(t), I(t))d\overrightarrow{w(t)},\label{ch1.sec3.thm3.proof.eq4}
\end{aligned}$$ where under the assumptions (\[ch1.sec3.thm3.eq1\]) in the hypothesis, $$\label{ch2.sec3.thm3.proof.eq5}
\mathfrak{m}_{2}=\min\{\left(3\mu-\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})\right), \left(2\mu-a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})\right), \left((\mu + d+\alpha)-\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})\right)\}>0.$$ Integrating both sides of (\[ch1.sec3.thm3.proof.eq4\]) from 0 to $t$, it follows that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
&&(V(t)-V(0))\leq -\mathfrak{m_{2}}\int^{t}_{0}\left[ (S(v)-S^{*}_{1})^{2}+ (E(v)-E^{*}_{1})^{2}+ (I(v)-I^{*}_{1})^{2}\right]dv\nonumber\\
&& +\left(3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}})\right)t\nonumber\\
&&+M(t), \label{ch2.sec3.thm3.proof.eq6}\end{aligned}$$ where $V(0)$ is a constant and, from (\[ch1.sec3.lemma1.eq6\]), the function $M(t)$ is defined as follows: $$\begin{aligned}
M(t)&=&\int^{t}_{0}\overrightarrow{g}(S(v), E(v), I(v))d\overrightarrow{w(v)}\nonumber\\
&=&\sum^{3}_{i=1}M^{S}_{i}(t)+\sum^{3}_{i=1}M^{E}_{i}(t)+\sum^{3}_{i=1}M^{I}_{i}(t)+\sum^{3}_{i=1}M^{\beta}_{i}(t),\nonumber\\
\label{ch2.sec3.thm3.proof.eq7}
\end{aligned}$$ and $$\begin{aligned}
M^{S}_{1}(t)=-\int^{t}_{0}3\sigma_{S}(S(v)-S^{*}_{1})S(v)dw_{S}(v),\nonumber\\
M^{S}_{2}(t)=-\int^{t}_{0}2\sigma_{S}(E(v)-E^{*}_{1})S(v)dw_{S}(v),\nonumber\\
M^{S}_{3}(t)=-\int^{t}_{0}\sigma_{S}(I(v)-I^{*}_{1})S(v)dw_{S}(v).\nonumber\\
\label{ch2.sec3.thm3.proof.eq8}
\end{aligned}$$ Also, $$\begin{aligned}
M^{E}_{1}(t)=-\int^{t}_{0}2\sigma_{E}(S(v)-S^{*}_{1})E(v)dw_{E}(v),\nonumber\\
M^{E}_{2}(t)=-\int^{t}_{0}2\sigma_{E}(E(v)-E^{*}_{1})E(v)dw_{E}(v),\nonumber\\
M^{E}_{3}(t)=-\int^{t}_{0}\sigma_{E}(I(v)-I^{*}_{1})E(v)dw_{E}(v),\nonumber\\
\label{ch2.sec3.thm3.proof.eq9}
\end{aligned}$$ and $$\begin{aligned}
M^{I}_{1}(t)=-\int^{t}_{0}2\sigma_{I}(S(v)-S^{*}_{1})I(v)dw_{I}(v),\nonumber\\
M^{I}_{2}(t)=-\int^{t}_{0}2\sigma_{I}(E(v)-E^{*}_{1})I(v)dw_{I}(v),\nonumber\\
M^{I}_{3}(t)=-\int^{t}_{0}\sigma_{I}(I(v)-I^{*}_{1})I(v)dw_{I}(v).\nonumber\\
\label{ch2.sec3.thm3.proof.eq10}
\end{aligned}$$ Furthermore, $$\begin{aligned}
M^{\beta}_{1}(t)=-\int^{t}_{0}\sigma_{\beta}(S(v)-S^{*}_{1})S(v)E(e^{-\mu T_{1}}G(I(v-T_{1})))dw_{\beta}(v),\nonumber\\
M^{\beta}_{2}(t)=-\int^{t}_{0}\sigma_{\beta}(S(v)-S^{*}_{1})E(S(v-T_{2})e^{-\mu (T_{1}+T_{2})}G(I(v-T_{1}-T_{2})))dw_{\beta}(v),\nonumber\\
M^{\beta}_{3}(t)=-\int^{t}_{0}\sigma_{\beta}(E(v)-E^{*}_{1})E(S(v-T_{2})e^{-\mu (T_{1}+T_{2})}G(I(v-T_{1}-T_{2})))dw_{\beta}(v).\nonumber\\
\label{ch2.sec3.thm3.proof.eq11}
\end{aligned}$$ Applying Lemma \[ch1.sec3.thm3.lemma1\] with $\delta=\frac{2}{12}$, $\nu_{k}=\nu$, and $\tau_{k}=k$, there exists a random number $k_{j}(w)>0$, $w\in \Omega$, for each $j=1,2,\ldots, 12$, such that $$\label{ch2.sec3.thm3.proof.eq12}
M^{S}_{i}(t)\leq \frac{1}{2}\nu <M^{S}_{i}(t), M^{S}_{i}(t)>+ \frac{\frac{2}{12}}{\nu}\ln{k}, \forall t\in [0,k], i=1,2,3,$$ $$\label{ch2.sec3.thm3.proof.eq13}
M^{E}_{i}(t)\leq \frac{1}{2}\nu <M^{E}_{i}(t), M^{E}_{i}(t)>+ \frac{\frac{2}{12}}{\nu}\ln{k}, \forall t\in [0,k], i=1,2,3,$$ $$\label{ch2.sec3.thm3.proof.eq14}
M^{I}_{i}(t)\leq \frac{1}{2}\nu <M^{I}_{i}(t), M^{I}_{i}(t)>+ \frac{\frac{2}{12}}{\nu}\ln{k}, \forall t\in [0,k], i=1,2,3,$$ $$\label{ch2.sec3.thm3.proof.eq15}
M^{\beta}_{i}(t)\leq \frac{1}{2}\nu <M^{\beta}_{i}(t), M^{\beta}_{i}(t)>+ \frac{\frac{2}{12}}{\nu}\ln{k}, \forall t\in [0,k], i=1,2,3,$$ where the quadratic variations $<M^{a}_{i}(t), M^{a}_{i}(t)>, \forall i=1,2,3$ and $a\in \{S, E, I, \beta\}$ are computed in the same manner as (\[ch2.sec3.thm2.proof.eq6\]).
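For instance, by the standard formula for the quadratic variation of a stochastic integral, the first of these is $$<M^{S}_{1}(t), M^{S}_{1}(t)>=\int^{t}_{0}9\sigma^{2}_{S}(S(v)-S^{*}_{1})^{2}S^{2}(v)dv,$$ and the remaining quadratic variations are obtained analogously from (\[ch2.sec3.thm3.proof.eq8\])-(\[ch2.sec3.thm3.proof.eq11\]).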
It follows from (\[ch2.sec3.thm3.proof.eq7\]) that for all $ k>\max_{j=1,2,\ldots,12}{k_{j}(w)}>0$, $$\begin{aligned}
M(t)&\leq& \frac{1}{2}\nu \sum^{3}_{i=1}<M^{S}_{i}(t), M^{S}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{E}_{i}(t), M^{E}_{i}(t)>\nonumber\\
&&+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{I}_{i}(t), M^{I}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{\beta}_{i}(t), M^{\beta}_{i}(t)> \nonumber\\
&&+\frac{2}{\nu}\ln{k}, \forall t\in [0,k].\label{ch2.sec3.thm3.proof.eq16}
\end{aligned}$$ Now, rearranging and dividing both sides of (\[ch2.sec3.thm3.proof.eq6\]) by $t$ and $\mathfrak{m}_{2}$, it follows that for all $t\in [k-1,k]$, $$\begin{aligned}
&&\frac{1}{t}\int^{t}_{0}\left[ (S(v)-S^{*}_{1})^{2}+ (E(v)-E^{*}_{1})^{2}+ (I(v)-I^{*}_{1})^{2}\right]dv\nonumber\\
&&\leq \frac{3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}})}{\mathfrak{m}_{2}}\nonumber\\
&&+\frac{1}{t}\left(\frac{1}{\mathfrak{m}_{2}}\right)\left(\frac{1}{2}\nu \sum^{3}_{i=1}<M^{S}_{i}(t), M^{S}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{E}_{i}(t), M^{E}_{i}(t)>\right)\nonumber\\
&&+\frac{1}{t}\left(\frac{1}{\mathfrak{m}_{2}}\right)\left(\frac{1}{2}\nu \sum^{3}_{i=1}<M^{I}_{i}(t), M^{I}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{\beta}_{i}(t), M^{\beta}_{i}(t)> \right)\nonumber\\
&&+\frac{2}{\nu (k-1)}\ln{k}\left(\frac{1}{\mathfrak{m}_{2}}\right).% ,\nonumber\\
\label{ch2.sec3.thm3.proof.eq17}
\end{aligned}$$ Thus, letting $k\rightarrow \infty$ (and hence $t\rightarrow \infty$), it follows from (\[ch2.sec3.thm3.proof.eq17\]) that $$\begin{aligned}
&&\limsup_{t\rightarrow \infty}\frac{1}{t}\int^{t}_{0}\left[ (S(v)-S^{*}_{1})^{2}+ (E(v)-E^{*}_{1})^{2}+ (I(v)-I^{*}_{1})^{2}\right]dv\nonumber\\
&&\leq \frac{3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}})}{\mathfrak{m}_{2}}\nonumber\\
&&+\limsup_{t\rightarrow \infty}\frac{1}{t}\left(\frac{1}{\mathfrak{m}_{2}}\right)\left(\frac{1}{2}\nu \sum^{3}_{i=1}<M^{S}_{i}(t), M^{S}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{E}_{i}(t), M^{E}_{i}(t)>\right)\nonumber\\
&&+\limsup_{t\rightarrow \infty}\frac{1}{t}\left(\frac{1}{\mathfrak{m}_{2}}\right)\left(\frac{1}{2}\nu \sum^{3}_{i=1}<M^{I}_{i}(t), M^{I}_{i}(t)>+\frac{1}{2}\nu \sum^{3}_{i=1}<M^{\beta}_{i}(t), M^{\beta}_{i}(t)> \right).\nonumber\\% ,\nonumber\\
\label{ch2.sec3.thm3.proof.eq18}
\end{aligned}$$ Finally, by sending $\nu\rightarrow 0$, the result in (\[ch2.sec3.thm3.eq2\]) follows immediately from (\[ch2.sec3.thm3.proof.eq18\]).
\[ch1.sec3.rem2\] When the disease dynamics is perturbed by random fluctuations in the disease transmission or natural death rates, that is, when at least one of the intensities $\sigma^{2}_{i}> 0, i= S, E, I, \beta$, it has been noted earlier that the nontrivial steady state, $E_{1}$, of the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) found in Theorem \[ch1.sec3.thm1\] no longer exists for the perturbed stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]). Nevertheless, the conditions in Theorem \[ch1.sec3.thm2\] provide restrictions for the constant delays $T_{1}$, $T_{2}$ and $T_{3}$ in (\[ch1.sec3.thm2.eq2\])-(\[ch1.sec3.thm2.eq3b\]) and parametric restrictions in (\[ch1.sec3.thm2.eq1\]) which are sufficient for the sample path solutions of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) to oscillate in the neighborhood of the potential endemic equilibrium $E_{1}$.
Also, the conditions in Theorem \[ch2.sec3.thm3\] provide general restrictions in (\[ch1.sec3.thm3.eq1\]) irrespective of the probability distribution of the random variable delay times $T_{1}, T_{2}$ and $T_{3}$ that are sufficient for the solutions of the perturbed stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) to oscillate near the nontrivial steady state, $E_{1}$, of the deterministic system (\[ch1.sec0.eq3\])-(\[ch1.sec0.eq6\]) found in Theorem \[ch1.sec3.thm1\].
The results in (\[ch2.sec3.thm2.eq4\]) and (\[ch2.sec3.thm3.eq2\]) characterize the average distance between the trajectories of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) and the nontrivial steady state $E_{1}$. Moreover, the results also signify that the size of the oscillations of the trajectories of the stochastic system relative to the state $E_{1}$ depends on the intensities of the noises in the system, that is, on the sizes of $\sigma^{2}_{i}> 0, i= S, E, I, \beta$. Indeed, it can easily be seen that the trajectories oscillate much closer to $E_{1}$ whenever the intensities ($\sigma^{2}_{i}> 0, i= S, E, I, \beta$) are small, and vice versa.
Permanence of malaria in the stochastic system {#ch1.sec5}
==========================================================
In this section, the permanence of malaria in the human population is investigated, whenever the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is subject to the influence of random environmental fluctuations from the disease transmission and natural death rates.
Permanence in the mean of the disease concerns whether there is always a significant positive average number of individuals in the disease related classes, namely the exposed $(E)$, infectious $(I)$, and removal $(R)$ subclasses, in the population over sufficiently long time. That is, it seeks to determine whether $\lim_{t\rightarrow\infty}E(||E(t)||)>0$, $\lim_{t\rightarrow\infty}E(||I(t)||)>0$ and $\lim_{t\rightarrow\infty}E(||R(t)||)>0$. In the absence of explicit solutions for the nonlinear stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), this information about the asymptotic average number of individuals in the disease related classes is obtained via Lyapunov techniques from the examination of the statistical properties of the paths of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), in particular, by estimating the size of the long-term sample path averages for the disease related subclasses. That is, it is determined whether the following hold: $\liminf_{t\rightarrow\infty} \frac{1}{t}\int^{t}_{0}||E(s)||ds>0$, $\liminf_{t\rightarrow\infty} \frac{1}{t}\int^{t}_{0}||I(s)||ds>0$ and $\liminf_{t\rightarrow\infty} \frac{1}{t}\int^{t}_{0}||R(s)||ds>0$.
The following definition is given for the stochastic version of strong permanence in the mean of a disease.
\[ch1.sec5.definition1\] The system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is said to be almost surely permanent in the mean[@chen-biodyn] (in the strong sense), if $$\begin{aligned}
&& \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}S(s)ds}>0, a.s., \quad \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}E(s)ds}>0, a.s., \nonumber\\
&& \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}I(s)ds}>0, a.s., \quad \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}R(s)ds}>0, a.s.\label{ch1.sec5.definition1.eq1}
\end{aligned}$$ where $(S(t), E(t), I(t), R(t))$ is any positive solution of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]).
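In practice, the almost sure permanence criterion in Definition \[ch1.sec5.definition1\] can be probed numerically by monitoring the running averages $\frac{1}{t}\int^{t}_{0}a(s)ds$, $a\in\{S,E,I,R\}$, along a simulated sample path and checking that they remain bounded away from zero. The following minimal sketch (in Python) assumes a uniformly sampled path array, for instance one produced by a scheme such as the Euler-Maruyama sketch in Section \[ch1.sec4\] below; the placeholder path generated here is purely illustrative and is not a solution of the model.

```python
import numpy as np

def running_time_average(path, dt):
    """Return (1/t) * integral_0^t a(s) ds at each sample time t (left Riemann sum)."""
    t = np.arange(1, len(path) + 1) * dt
    return np.cumsum(path) * dt / t

# Placeholder standing in for a simulated path of I(t); in practice this array
# would come from a numerical scheme for the stochastic system.
rng = np.random.default_rng(seed=0)
I_path = np.abs(1.0 + 0.1 * rng.standard_normal(50_000))

avg_I = running_time_average(I_path, dt=0.01)
print("running average of I at the end of the run:", avg_I[-1])
print("stays positive (permanence indicator):", bool(avg_I[-1] > 0.0))
```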
The method applied to show the permanence in the mean of the disease in the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is similar in the two cases where (1) all the delays $T_{1}, T_{2}$ and $T_{3}$ in the system are constant and finite, and (2) the delays $T_{1}, T_{2}$ and $T_{3}$ are random variables. Thus, without loss of generality, the results for the permanence in the mean of the disease in the stochastic system are presented only in the case of random delays in the system. Some ideas in [@yanli] can be used to prove this result.
\[ch1.sec5.thm1\] Assume that the conditions of Theorem \[ch1.sec3.thm1\] and Theorem \[ch2.sec3.thm3\] are satisfied. Define the following $$\begin{aligned}
\label{ch1.sec5.thm1.eq1}
\hat{h}\equiv\hat{h}(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})&=&3(S^{*}_{1})^{2}+ 2(E^{*}_{1})^{2}+(I^{*}_{1})^{2}+(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}}),\nonumber\\
\sigma^{2}_{max}&=& \max{(\sigma^{2}_{S}, \sigma^{2}_{E}, \sigma^{2}_{I}, \sigma^{2}_{\beta})}.
\end{aligned}$$ Assume further that the following relationship is satisfied $$\label{ch1.sec5.thm1.eq2}
\sigma^{2}_{max}<\frac{\mathfrak{m}_{2}}{\hat{h}}\min{((S^{*}_{1})^{2}, (E^{*}_{1})^{2}, (I^{*}_{1})^{2})}\equiv \hat{\tau},$$ where $\mathfrak{m}_{2}$ is defined in (\[ch2.sec3.thm3.proof.eq5\]). Then it follows that $$\begin{aligned}
&& \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}S(v)dv}>0,a.s., \quad \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}E(v)dv}>0,a.s. \nonumber\\
&& and\quad \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}I(v)dv}>0,a.s.\label{ch1.sec5.thm1.eq3}
\end{aligned}$$ In other words, the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is permanent in the mean.
Proof:\
It is easy to see that when Theorem \[ch1.sec3.thm1\] and Theorem \[ch2.sec3.thm3\] are satisfied, then it follows from (\[ch2.sec3.thm3.eq2\]) that $$\begin{aligned}
&&\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0} (S(v)-S^{*}_{1})^{2}dv}\leq \sigma^{2}_{max}\frac{\hat{h}}{\mathfrak{m}_{2}},\nonumber\\
&&\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0} (E(v)-E^{*}_{1})^{2}dv}\leq \sigma^{2}_{max}\frac{\hat{h}}{\mathfrak{m}_{2}},\nonumber\\
&&\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0} (I(v)-I^{*}_{1})^{2}dv}\leq \sigma^{2}_{max}\frac{\hat{h}}{\mathfrak{m}_{2}}.\quad a.s. \label{ch1.sec5.thm1.proof.eq1}\end{aligned}$$For each $a(t)\in \left\{S(t), E(t), I(t)\right\}$ it is easy to see that $$\label{ch1.sec5.thm1.proof.eq2}
2(a^{*}_{1})^{2}-2a^{*}_{1}a(t)\leq (a^{*}_{1})^{2}+ (a(t)-a^{*}_{1})^{2}.$$ It follows from (\[ch1.sec5.thm1.proof.eq2\]) that $$\begin{aligned}
\liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}a(v)dv}&\geq& \frac{a^{*}_{1}}{2}-\limsup_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0} \frac{(a(v)-a^{*}_{1})^{2}}{2a^{*}_{1}}dv}\nonumber\\
&\geq&\frac{a^{*}_{1}}{2}-\sigma^{2}_{max}\frac{\hat{h}}{\mathfrak{m_{2}}}\frac{1}{2a^{*}_{1}}\label{ch1.sec5.thm1.proof.eq3}\end{aligned}$$ For each $a(t)\in \left\{S(t), E(t), I(t)\right\}$, the inequalities in (\[ch1.sec5.thm1.eq3\]) follow immediately from (\[ch1.sec5.thm1.proof.eq3\]), whenever (\[ch1.sec5.thm1.eq2\]) is satisfied.\
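For completeness, the elementary inequality (\[ch1.sec5.thm1.proof.eq2\]) used above can be verified directly, since $$(a^{*}_{1})^{2}+ (a(t)-a^{*}_{1})^{2}-\left(2(a^{*}_{1})^{2}-2a^{*}_{1}a(t)\right)=a^{2}(t)\geq 0.$$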
The following result establishes the permanence in the mean of the disease in the case where the delays $T_{1}, T_{2}$ and $T_{3}$ are all constant.
\[ch1.sec5.thm2\] Assume that the conditions of Theorem \[ch1.sec3.thm2\] and Theorem \[ch1.sec3.thm1.corrolary1\] are satisfied. Define the following $$\begin{aligned}
\hat{h}\equiv\hat{h}(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})&=&(S^{*}_{1})^{2}(G^{*})^{2}e^{-2\mu (T_{1}+T_{2})}+(S^{*}_{1})^{2}(G^{*})^{2}e^{-\mu T_{1}},\label{ch1.sec5.thm2.eq1}
%\sigma^{2}_{\beta}&=& \max{(\sigma^{2}_{S}, \sigma^{2}_{E}, \sigma^{2}_{I}, \sigma^{2}_{\beta})}.
\end{aligned}$$ Assume further that the following relationship is satisfied $$\label{ch1.sec5.thm2.eq2}
\sigma^{2}_{\beta}<\frac{\mathfrak{m}_{1}}{\hat{h}}\min{((S^{*}_{1})^{2}, (E^{*}_{1})^{2}, (I^{*}_{1})^{2})}\equiv \hat{\tau},$$ where $\mathfrak{m}_{1}$ is defined in (\[ch2.sec3.thm2.proof.eq3\]). Then it follows that $$\begin{aligned}
&& \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}S(v)dv}>0,a.s. \quad \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}E(v)dv}>0,a.s. \nonumber\\
&& \liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}I(v)dv}>0,a.s.\label{ch1.sec5.thm2.eq3}
\end{aligned}$$ In other words, the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is permanent in the mean.
Proof:\
The proof is similar to the proof of Theorem \[ch1.sec5.thm1\] above.
\[ch1.sec5.rem1\] Theorems \[ch1.sec5.thm1\] and \[ch1.sec5.thm2\] provide sufficient conditions for the permanence in the mean of the vector-borne disease in the population, where the conditions depend on the intensities of the white noise processes in the system. Indeed, for Theorem \[ch1.sec5.thm1\], the condition in (\[ch1.sec5.thm1.eq2\]) suggests that there is a bound on the intensities of the white noise processes in the system that allows the disease to persist in the population permanently, provided that the noise-free basic reproduction number of the disease in (\[ch1.sec2.theorem1.corollary1.eq3\]) satisfies $R_{0}>1$. Moreover, if one defines the term $$\label{ch1.sec5.rem1.eq1}
l_{a}(\sigma^{2}_{max})=\frac{a^{*}_{1}}{2}-\sigma^{2}_{max}\frac{\hat{h}}{\mathfrak{m_{2}}}\frac{1}{2a^{*}_{1}}, \forall a(t)\in \left\{S(t), E(t), I(t)\right\},$$ then it is easy to see from (\[ch1.sec5.thm1.proof.eq3\]) that $ l_{a}(\sigma^{2}_{max})$ is the lower bound for all average sample path estimates for the ensemble mean of the different states $a(t)\in \left\{S(t), E(t), I(t)\right\}$ of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) over sufficiently large amount of time, where $a^{*}_{1}\in \left\{S^{*}_{1}, E^{*}_{1}, I^{*}_{1}\right\}$. It can be observed from (\[ch1.sec5.rem1.eq1\]) and (\[ch1.sec5.thm1.eq2\]) that $$\label{ch1.sec5.rem1.eq2}
\lim_{\sigma^{2}_{max}\rightarrow 0^{+}}{l_{a}(\sigma^{2}_{max})}=\frac{1}{2}a^{*}_{1},\quad and \quad \lim_{\sigma^{2}_{max}\rightarrow \hat{\tau}^{-}}{l_{a}(\sigma^{2}_{max})}=0.$$ That is, the presence of noise in the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) with continuously increasing intensity values (i.e. $\sigma^{2}_{max}\rightarrow \hat{\tau}^{-}$) allows for smaller values of the asymptotic average limiting value of all sample paths ($\liminf_{t\rightarrow \infty}{\frac{1}{t}\int^{t}_{0}a(v)dv}$) of the states $a(t)\in \left\{S(t), E(t), I(t)\right\}$ of the system, and vice versa.
This observation suggests that the occurrence of “stronger” noise (noise with higher intensity) in the system suppresses the permanence of the disease in the population by allowing smaller average values for the sample paths of the disease related states $E, I, R$ in the population over sufficiently large amount of time. However, it should be noted that the smaller average values for the disease related classes $E, I, R$ over sufficiently long time are also matched with smaller average values for the susceptible class $S$ as the intensities of the noises in the system rise. This observation suggests that there is a general decrease in the population size over sufficiently long time as the intensities of the noises in the system increase in magnitude.
Therefore, one can conclude that for small to moderate values for the intensities of the noises in the system, the disease persists permanently with higher average values for the disease related classes. Furthermore, as the magnitude of the intensities of the noises increase to higher values, the human population may become extinct over sufficiently long time. This observation is illustrated in the Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\] and Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\], and much more in Figures \[ch1.sec4.figure 1\]-\[ch1.sec4.figure 3\].
The statistical significance of this result is that if all the sample paths for a given state $a(t)\in \left\{S(t), E(t), I(t)\right\}$ are bounded from below on average by the same significantly large positive value $ l_{a}(\sigma^{2}_{max})$ asymptotically, then the ensemble means corresponding to the given states are also bounded from below by the same value asymptotically. This fact becomes more apparent once the stationary and ergodic properties of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) are established in the next section.
Stationary distribution and ergodic property {#ch1.sec3.sec1}
============================================
Knowledge of the distribution of the solutions of a system of stochastic differential equations holds the key to fully understanding the statistical and probabilistic properties of the solutions at any time, and overall to characterizing the uncertainties of the states of the stochastic system over time. Furthermore, the ergodicity of the solutions of a stochastic system ensures that one obtains insights about the long-term behavior of the system, that is, the statistical properties of the solutions of the system, via knowledge of the average behavior of the sample paths or sample realizations of the system over a finite or sufficiently large time interval.
In this section, the long term distribution and the ergodicity of the positive solutions of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) are characterized in the neighborhood of a potential endemic steady state $E_{1}$ for the system obtained in Theorem \[ch1.sec3.thm1\]. It is shown below that the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) has a stationary endemic distribution for the solutions of the system. First, the definition of a stationary distribution for a continuous-time and continuous-state Markov process or for the solution of a system of stochastic differential equations is presented in the following:
\[ch1.sec3.sec1.defn1\] (see [@yongli]) Denote by $\mathbb{P}_{\gamma}$ the probability distribution corresponding to an initial distribution $\gamma$, which describes the initial state of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) at time $t=0$. Suppose that the distribution of $Y(t)=(S(t), E(t), I(t), R(t))$ with initial distribution $\gamma$ converges in some sense to a distribution $\pi=\pi_{\gamma}$ (a priori $\pi$ may depend on the initial distribution $\gamma$), that is, $$\label{ch1.sec3.sec1.defn1.eq1}
\lim_{t\rightarrow \infty}\mathbb{P}_{\gamma}\{Y(t)\in F\}=\pi(F)$$ for all measurable sets $F$; then we say that the stochastic system of differential equations has a stationary distribution $\pi(.)$.
The standard method utilized in [@yongli; @yongli-2; @yzhang] is applied to establish this result. The following assumptions are made: let $Y(t)$ be a regular time-homogeneous Markov process in the $d$-dimensional space $E_{d}\subseteq \mathbb{R}^{d}_{+}$ described by the stochastic differential equation $$\label{ch1.sec3.sec1.eq1}
dY(t)=b(Y,t)dt+\sum_{r=1}^{k}g_{r}(Y, t)dB_{r}(t).$$ Then the diffusion matrix can be defined as follows $$\label{ch1.sec3.sec1.eq2}
A(Y)=(a_{ij}(y)), a_{ij}(y)=\sum_{r=1}^{k}g^{i}_{r}(Y, t)g^{j}_{r}(Y, t).$$ The following lemma describes the existence of a stationary solution for (\[ch1.sec3.sec1.eq1\]).
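Equivalently, if the noise coefficient vectors $g_{r}(Y,t)$, $r=1,\ldots,k$, are collected as the columns of a matrix $\mathsf{G}(Y,t)$, then $A(Y)=\mathsf{G}\mathsf{G}^{T}$, which is symmetric and positive semi-definite. A minimal numerical sketch (in Python, with arbitrary placeholder coefficients rather than the model's actual noise terms) illustrates the construction of the diffusion matrix and of the quadratic form appearing in hypothesis $\mathbf{H1}$ below:

```python
import numpy as np

# Placeholder noise-coefficient matrix G(Y, t): d = 3 states, k = 2 Brownian motions.
Gmat = np.array([[0.3, 0.0],
                 [0.1, 0.2],
                 [0.0, 0.4]])

A = Gmat @ Gmat.T            # a_ij = sum_r g_r^i g_r^j, the diffusion matrix
xi = np.array([1.0, -2.0, 0.5])
print("A =\n", A)
print("quadratic form xi^T A xi =", xi @ A @ xi)   # nonnegative by construction
```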
\[ch1.sec3.sec1.lemma1\] The Markov process $Y(t)$ has a unique ergodic stationary distribution $\pi(.)$ if a bounded domain $D\subset E_{d}$ with regular boundary $\Gamma$ exists such that the following two hypotheses hold:
$\mathbf{H1}$: there exists a constant $M>0$ satisfying $\sum^{d}_{ij}a_{ij}(x)\xi_{i}\xi_{j}\geq M|\xi|^{2}$, for all $x\in D$ and $\xi\in \mathbb{R}^{d}$;
$\mathbf{H2}$: there is a $\mathcal{C}^{1,2}$-function $V\geq 0$ such that $LV$ is negative for any $x\in E_{d}\backslash D$.
Then $$\label{ch1.sec3.sec1.lemma1.eq1}
\mathbb{P}\left\{ \lim_{T\rightarrow \infty}\frac{1}{T}\int_{0}^{T}f(Y(t))dt=\int_{E_{d}}f(y)\pi(dy)\right\}=1,$$ for all $y\in E_{d}$, where $f(.)$ is an integrable function with respect to the measure $\pi$.
The result that follows characterizes the existence of stationary distribution for the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) in the general case where the delay times $T_{1}, T_{2}$ and $T_{3}$ in the system are distributed.
\[ch1.sec3.sec1.thm1\] Let the assumptions in Theorem \[ch2.sec3.thm3\] be satisfied, and let $$\label{ch1.sec3.sec1.thm1.eq0}
R=3\min_{(S, E, I)\in \mathbb{R}^{3}_{+}}{\left(\frac{1}{\left(\frac{1}{3}\Phi_{1}- \sigma^{2}_{\beta}\left(\frac{2}{3}(G^{*})^{2}+2\theta^{2}_{1}\right)\right)},\frac{2}{\Phi_{2}}\left(1+\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2}\right), \frac{1}{\Phi_{3}}\left(1+2\sigma^{2}_{\beta}\theta^{2}_{2}\right) \right)}<\infty.$$ Also define the following parameters:- $$\label{ch1.sec3.sec1.thm1.eq1}
\Phi=3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}}),$$ and $$\label{ch1.sec3.sec1.thm1.eq1-1}
R_{min}=3\min{\left(\frac{1}{\left(\frac{1}{3}\Phi_{1}- \sigma^{2}_{\beta}\left(\frac{2}{3}(G^{*})^{2}+2(I^{*}_{1})^{2}\right)\right)},\frac{2}{\Phi_{2}}\left(1+4\sigma^{2}_{\beta}(S^{*}_{1}I^{*}_{1})^{2}\right), \frac{1}{\Phi_{3}}\left(1+2\sigma^{2}_{\beta}(S^{*}_{1}I^{*}_{1})^{2}\right) \right)}<\infty,$$ where, $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
\theta_{1} &=&\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu s}G(I(t-s))ds,\label{ch1.sec3.sec1.thm1.proof.eq1} \\
\theta_{2} &=&\int_{t_{0}}^{h_{2}}f_{T_{2}}(u)S(t-u)\int_{t_{0}}^{h_{1}}f_{T_{1}}(s)e^{-\mu( s+u)}G(I(t-s-u))dsdu,\label{ch1.sec3.sec1.thm1.proof.eq2}\\
\Phi_{1}&=&3\mu-\left[2\mu\lambda{(\mu)}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha\lambda{(\mu)}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}\right.\nonumber\\
&&+\left.\left(\frac{\beta \lambda{(\mu)}(G^{*})^{2}}{2}\right)+\left(\frac{\beta (G^{*})^{2}}{2\lambda{(\mu)}}\right)\right],\label{ch1.sec3.sec1.thm1.proof.eq2a}\\
\Phi_{2}&=&2\mu-\left[\frac{\beta }{2\lambda{(\mu)}}+\frac{\beta S^{*}_{1}\lambda{(\mu)}}{2}+ \frac{2\mu}{\lambda{(\mu)}}+(2\mu+d+\alpha)\frac{\lambda{(\mu)}}{2}+\alpha \lambda{(\mu)}\right],\label{ch1.sec3.sec1.thm1.proof.eq2b}\\
\Phi_{3}&=&(\mu+d+\alpha)-\left[(2\mu+d+\alpha)\frac{1}{\lambda{(\mu)}}+ \frac{\alpha\lambda{(\mu)}}{2}+\frac{3\alpha}{2\lambda(\mu)} \right.\nonumber\\
&&\left.+ \frac{\beta S^{*}_{1}(G'(I^{*}_{1}))^{2}}{\lambda{(\mu)}}\right].\label{ch1.sec3.sec1.thm1.proof.eq2c}
\end{aligned}$$ It follows that there is a unique stationary distribution $\pi(.)$ for the solutions of (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), whenever $$\label{ch1.sec3.sec1.thm1.eq2}
\Phi<\min{[\Phi_{1}(S^{*}_{1})^{2}, \Phi_{2}(E^{*}_{1})^{2}, \Phi_{3}(I^{*}_{1})^{2}]},$$ and $$\label{ch1.sec3.sec1.thm1.eq2-1}
R<R_{min},\quad and\quad R_{min}+\Phi<||E_{1}-0||^{2}.$$ Moreover, the solution of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) is ergodic.
Proof:\
The results will be shown for the vector $X=(S, E, I)$ corresponding to the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]). Furthermore, the hypothesis $\mathbf{H1}$ in Lemma \[ch1.sec3.sec1.lemma1\] is verified in the following. It is easy to see that when the conditions in (\[ch1.sec3.thm3.eq1\]) given in Theorem \[ch2.sec3.thm3\] are satisfied, it follows from (\[ch1.sec3.sec1.thm1.proof.eq2a\])-(\[ch1.sec3.sec1.thm1.proof.eq2c\]) that $$\label{ch1.sec3.sec1.thm1.proof.eq2d}
2(G^{*})^{2}\sigma^{2}_{\beta} +3\sigma^{2}_{S}<\Phi_{1},\quad 2\sigma^{2}_{E}<\Phi_{2}\quad and\quad \sigma^{2}_{I}<\Phi_{3}.$$ In addition, the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) can be rewritten in the form (\[ch1.sec3.sec1.eq1\]), where $d=3$, and the diffusion matrix $ A(X)=(a_{ij}(x))$ is defined as follows: $$\label{ch1.sec3.sec1.thm1.proof.eq3}
\left\{
\begin{array}{ccc}
a_{11} & = &\sigma^{2}_{S}S^{2}+\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}, \\
a_{12} & = &\sigma^{2}_{\beta}S\theta_{1}\theta_{2}-\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}, \\
a_{13} & = &-\sigma^{2}_{\beta}S\theta_{1}\theta_{2},
\end{array}
\right.$$ $$\label{ch1.sec3.sec1.thm1.proof.eq4}
\left\{
\begin{array}{lll}
a_{21} & = &a_{12},\\%\sigma^{2}_{S}S^{2}+\sigma^{2}_{\beta}S^{2}\theta^{2} \\
a_{22} & = &\sigma^{2}_{E}E^{2}+\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}-2\sigma^{2}_{\beta}S\theta_{1}\theta_{2}+\sigma^{2}_{\beta}\theta^{2}_{2}, \\
a_{23} & = &\sigma^{2}_{\beta}S\theta_{1}\theta_{2}-\sigma^{2}_{\beta}\theta^{2}_{2},
\end{array}
\right.$$ and $$\label{ch1.sec3.sec1.thm1.proof.eq5}
a_{31}=a_{13}, a_{32}=a_{23}, a_{33}=\sigma^{2}_{I}I^{2}+\sigma^{2}_{\beta}\theta^{2}_{2}.$$ Also, define the sets $U_{1}$ and $U_{2}$ as follows $$\label{ch1.sec3.sec1.thm1.proof.eq6}
U_{1}=\left\{(S, E, I)\in \mathbb{R}^{3}_{+}| \mathfrak{m}_{2}(S-S^{*}_{1})^{2}+\mathfrak{m}_{2}(E-E^{*}_{1})^{2}+\mathfrak{m}_{2}(I-I^{*}_{1})^{2}\leq \Phi\right\}$$ and $$\begin{aligned}
%<\frac{1}{6}\left[\frac{\Phi_{1}}{\sigma^{2}_{\beta}}-2(G^{*})^{2}\right]
U_{2}&=&\left\{(S, E, I)\in \mathbb{R}^{3}_{+}|S^{2}(\sigma^{2}_{S}-2\sigma^{2}_{\beta}\theta^{2}_{1})\geq 1,E^{2}\geq \frac{1}{\sigma^{2}_{E}}\left[1+\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2}\right], I^{2}\geq \frac{1}{\sigma^{2}_{I}}\left(1+2\sigma^{2}_{\beta}\theta^{2}_{2}\right) \right\}.\nonumber\\
\label{ch1.sec3.sec1.thm1.proof.eq7}
\end{aligned}$$ One can see from (\[ch1.sec3.sec1.thm1.proof.eq2d\]) that $$\begin{aligned}
U_{2} &\subset&\left\{(S, E, I)\in \mathbb{R}^{3}_{+}| S^{2}>\frac{1}{\left(\frac{1}{3}\Phi_{1}- \sigma^{2}_{\beta}\left(\frac{2}{3}(G^{*})^{2}+2\theta^{2}_{1}\right)\right)},\quad E^{2}>\frac{2}{\Phi_{2}}\left(1+\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2}\right),\quad\right.\nonumber\\
&&\left. I^{2}>\frac{1}{\Phi_{3}}\left(1+2\sigma^{2}_{\beta}\theta^{2}_{2}\right)
\right\},\nonumber\\
&\subset& \left(\bar{B}_{\mathbb{R}^{3}_{+}}(0; R)\right)^{c},\label{ch1.sec3.sec1.thm1.proof.eq7-1}
\end{aligned}$$ where the set $\left(\bar{B}_{\mathbb{R}^{3}_{+}}(0; R)\right)^{c}$ is the complement of the closed ball or sphere in $\mathbb{R}^{3}_{+}$ centered at the origin $(S, E, I)=(0,0,0)$ with radius given by $$\label{ch1.sec3.sec1.thm1.proof.eq7-2}
R=3\min_{(S, E, I)\in \mathbb{R}^{3}_{+}}{\left(\frac{1}{\left(\frac{1}{3}\Phi_{1}- \sigma^{2}_{\beta}\left(\frac{2}{3}(G^{*})^{2}+2\theta^{2}_{1}\right)\right)},\frac{2}{\Phi_{2}}\left(1+\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2}\right), \frac{1}{\Phi_{3}}\left(1+2\sigma^{2}_{\beta}\theta^{2}_{2}\right) \right)}<\infty.$$ In addition, the symbol “$\subset$” signifies the set operation of proper subset, and $\Phi$ and $\mathfrak{m}_{2}$ are defined in (\[ch1.sec3.sec1.thm1.eq1\]), Theorem \[ch2.sec3.thm3\] and (\[ch2.sec3.thm3.proof.eq5\]). Moreover, $ \mathfrak{m}_{2}$ is given as follows: $$\begin{aligned}
% %\nonumber % Remove numbering (before each equation)
% \Phi=3\sigma^{2}_{S}(S^{*}_{1})^{2}+ 2\sigma^{2}_{E}(E^{*}_{1})^{2}+\sigma^{2}_{I}(I^{*}_{1})^{2}+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-2\mu (T_{1}+T_{2})})+\sigma^{2}_{\beta}(S^{*}_{1})^{2}(G^{*})^{2}E(e^{-\mu T_{1}}) \nonumber\\
% \label{ch1.sec3.sec1.thm1.proof.eq7a}\\
\mathfrak{m}_{2}=\min\{\left(3\mu-\tilde{a}_{1}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{S}, \sigma^{2}_{\beta})\right), \left(2\mu-a_{3}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{E})\right), \left((\mu + d+\alpha)-\tilde{a}_{2}(\mu, d, \alpha, \beta, B, \sigma ^{2}_{I})\right)\}.\nonumber\\
\label{ch1.sec3.sec1.thm1.proof.eq7b}
\end{aligned}$$ It is easy to see that when the conditions in (\[ch1.sec3.thm3.eq1\]) presented in Theorem \[ch2.sec3.thm3\] are satisfied, then $\mathfrak{m}_{2}>0$. Also, observe that $U_{1}$ defined in (\[ch1.sec3.sec1.thm1.proof.eq6\]) represents the interior and boundary of a sphere (a closed ball) in $\mathbb{R}^{3}_{+}$ with radius $\sqrt{\Phi/\mathfrak{m}_{2}}$, centered at the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1}, I^{*}_{1})$. Furthermore, when (\[ch1.sec3.sec1.thm1.eq2\]) holds, it is easy to see that $$\label{ch1.sec3.sec1.thm1.proof.eq7c}
\Phi\ll \Phi_{1}(S^{*}_{1})^{2}+ \Phi_{2}(E^{*}_{1})^{2}+ \Phi_{3}(I^{*}_{1})^{2}<\max{(\Phi_{1}, \Phi_{2}, \Phi_{3})}||E_{1}-0||^{2}.$$ Thus, from (\[ch1.sec3.sec1.thm1.proof.eq7c\]), the set $U_{1}$ is non-empty and totally enclosed in $\mathbb{R}^{3}_{+}$. Furthermore, the endemic equilibrium $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1})\in U_{2}$. Indeed, it can be seen from Assumption \[ch1.sec0.assum1\] that $\theta_{1}\leq I^{*}_{1}$ and $\theta_{2}\leq S^{*}_{1}I^{*}_{1}$, whenever $X(t)=E_{1}$. Therefore, since $R<R_{min}$ from (\[ch1.sec3.sec1.thm1.eq2-1\]), it follows that $$\label{ch1.sec3.sec1.thm1.proof.eq7c-1}
E_{1}\in \left(\bar{B}_{\mathbb{R}^{3}_{+}}(0; R_{min})\right)^{c}\subset \left(\bar{B}_{\mathbb{R}^{3}_{+}}(0; R)\right)^{c}.$$ One also notes that the set $U_{1}$ does not overlap or intersect with the closed ball $\bar{B}_{\mathbb{R}^{3}_{+}}(0; R)$, since $R_{min}+\Phi<||E_{1}-0||^{2}$ from (\[ch1.sec3.sec1.thm1.eq2-1\]). That is, $U_{1}\subset \left(\bar{B}_{\mathbb{R}^{3}_{+}}(0; R)\right)^{c}$. Now, define the new set $$\label{ch1.sec3.sec1.thm1.proof.eq7d}
U=U_{1}\cap U_{2}.$$ Clearly, $U\neq \emptyset$ since $E_{1}\in U$. Also, from (\[ch1.sec3.sec1.thm1.proof.eq7d\]) the set $U\subset U_{1}\subset \mathbb{R}^{3}_{+}$ and $U\subset U_{2}\subset \mathbb{R}^{3}_{+}$, thus, the set $U$ is totally enclosed in $\mathbb{R}^{3}_{+}$, and hence, the set $U$ is well-defined. Now, let $(S, E, I)\in U=U_{1}\cap U_{2}$. It follows from (\[ch1.sec3.sec1.eq2\]) and (\[ch1.sec3.sec1.thm1.proof.eq3\])-(\[ch1.sec3.sec1.thm1.proof.eq5\]), that $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
\sum^{3}_{ij}a_{ij}(x)\xi_{i}\xi_{j}=&& \sigma^{2}_{S}S^{2}\xi^{2}_{1} +\sigma^{2}_{E}E^{2}\xi^{2}_{2} +\sigma^{2}_{I}I^{2}\xi^{2}_{3}\nonumber\\
&&+\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}(\xi_{1}-\xi_{2})^{2}+\sigma^{2}_{\beta}\theta^{2}_{2}(\xi_{2}-\xi_{3})^{2}-2\sigma^{2}_{\beta}S\theta_{1}\theta_{2}\xi^{2}_{2}\nonumber\\
&&+2\sigma^{2}_{\beta}S\theta_{1}\theta_{2}\xi_{1}\xi_{2}+2\sigma^{2}_{\beta}S\theta_{1}\theta_{2}\xi_{2}\xi_{3}-2\sigma^{2}_{\beta}S\theta_{1}\theta_{2}\xi_{1}\xi_{3}.\label{ch1.sec3.sec1.thm1.proof.eq8}
\end{aligned}$$ By applying the elementary inequalities $-2ab\geq-a^{2}-b^{2}$ and $2ab\geq-a^{2}-b^{2}$ to the cross terms of $\sum^{3}_{ij}a_{ij}(x)\xi_{i}\xi_{j}$, for all $\xi=(\xi_{1},\xi_{2},\xi_{3})$, it is easy to see that (\[ch1.sec3.sec1.thm1.proof.eq8\]) becomes the following: $$\begin{aligned}
% \nonumber % Remove numbering (before each equation)
\sum^{3}_{ij}a_{ij}(x)\xi_{i}\xi_{j}\geq&& (\sigma^{2}_{S}S^{2}-2\sigma^{2}_{\beta}S^{2}\theta^{2}_{1})\xi^{2}_{1} +(\sigma^{2}_{E}E^{2}-\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2})\xi^{2}_{2}\nonumber\\
&&+(\sigma^{2}_{I}I^{2}-2\sigma^{2}_{\beta}\theta^{2}_{2})\xi^{2}_{3}+\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}(\xi_{1}-\xi_{2})^{2}\nonumber\\
&&+\sigma^{2}_{\beta}\theta^{2}_{2}(\xi_{2}-\xi_{3})^{2}.\label{ch1.sec3.sec1.thm1.proof.eq9}
\end{aligned}$$ Define the constant $M$ as follows $$\label{ch1.sec3.sec1.thm1.proof.eq10}
M=\min{[(\sigma^{2}_{S}S^{2}-2\sigma^{2}_{\beta}S^{2}\theta^{2}_{1}),(\sigma^{2}_{E}E^{2}-\sigma^{2}_{\beta}(S\theta_{1}+\theta_{2})^{2}),(\sigma^{2}_{I}I^{2}-2\sigma^{2}_{\beta}\theta^{2}_{2}) ]}.$$ Clearly, from (\[ch1.sec3.sec1.thm1.proof.eq7\]), one can see that $M\geq 1>0, \forall (S,E,I)\in U$. It is also seen that, by taking $D$ to be a neighborhood of $U$, for all $(S, E, I)\in \bar{D}$, it follows from (\[ch1.sec3.sec1.thm1.proof.eq9\])-(\[ch1.sec3.sec1.thm1.proof.eq10\]) that $$\label{ch1.sec3.sec1.thm1.proof.eq10a}
\sum^{3}_{ij}a_{ij}(x)\xi_{i}\xi_{j}\geq M|\xi|^{2},$$ where $\bar{D}$ is the closure of the set $D$.
The hypothesis $\mathbf{H2}$ in Lemma \[ch1.sec3.sec1.lemma1\] follows from (\[ch1.sec3.lemma1.eq7\]), where it can be seen that $$\label{ch1.sec3.sec1.thm1.proof.eq10b}
LV(t)<-\mathfrak{m}_{2}(S-S^{*}_{1})^{2}-\mathfrak{m}_{2}(E-E^{*}_{1})^{2}-\mathfrak{m}_{2}(I-I^{*}_{1})^{2}+ \Phi.$$ Furthermore, from (\[ch1.sec3.sec1.thm1.proof.eq6\])-(\[ch1.sec3.sec1.thm1.proof.eq7d\]), it follows that $LV(t)<0$, for all $X=(S, E, I)\in \mathbb{R}^{3}_{+}\backslash D$. Since the hypotheses $\mathbf{H1}$ and $\mathbf{H2}$ hold, it follows from Lemma \[ch1.sec3.sec1.lemma1\] that the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) has a unique stationary distribution and the solutions are ergodic.
\[ch1.sec3.sec1.thm1.rem1\] Theorem \[ch1.sec3.sec1.thm1\] signifies that the positive solution process $Y(t)=(S(t), E(t), I(t), R(t))\in R_{+}^{4}\times [t_{0}, \infty)$ of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) converges in distribution to a unique random variable (for instance denoted $Y_{s}=(S_{s}, E_{s}, I_{s}, R_{s})$) that has the stationary distribution $\pi(.)$ in the neighborhood of the endemic equilibrium $E_{1}=(S^{*}_{1}, E^{*}_{1}, I^{*}_{1}, R^{*}_{1})$, whenever the conditions of Theorem \[ch1.sec3.thm1\], Theorem \[ch2.sec3.thm3\] and (\[ch1.sec3.sec1.thm1.eq2\])-(\[ch1.sec3.sec1.thm1.eq2-1\]) are satisfied.
The stationary feature of the solution process $\{Y(t),t\geq t_{0}\}$ ensures that all the statistical properties, such as the mean, variance, and higher moments, remain the same over time for every random vector $Y(t)$, whenever $t\geq t_{0}$ is sufficiently large and fixed. In other words, the ensemble mean, variance and moments of the solution process $\{Y(t),t\geq t_{0}\}$ exist for sufficiently large time, they are constant, and they correspond to the mean, variance and moments of the stationary distribution $\pi(.)$.
The ergodic feature of the solution process $\{Y(t),t\geq t_{0}\}$ also allows for insights about the statistical properties of the entire process via knowledge of the sample paths over a sufficiently large amount of time. For example, over sufficiently large time, the average value of the sample path given by $\hat{\mu}_{f}=\lim_{T\rightarrow \infty}\frac{1}{T}\int_{0}^{T}f(Y(t))dt$ accurately estimates the corresponding ensemble mean given by $\mu_{f}=E(f(Y(t)))=\int_{E_{d}}f(y)\pi(dy)$, where $f(.)$ is an integrable function with respect to the measure $\pi$. In particular, $\hat{\mu}_{S}=\lim_{T\rightarrow \infty}\frac{1}{T}\int_{0}^{T}S(t)dt$ estimates the ensemble mean $\mu_{S}=E(S(t))=\int_{E_{d}}S\pi(dS)$, and $\hat{\mu}_{I}=\lim_{T\rightarrow \infty}\frac{1}{T}\int_{0}^{T}I(t)dt$ estimates the ensemble mean $\mu_{I}=E(I(t))=\int_{E_{d}}I\pi(dI)$ etc. over sufficiently large values of $t\geq t_{0}$.
Note that in the absence of explicit sample path solutions of the nonlinear stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), the stationary distribution $\pi(.)$ is numerically approximated in Section \[ch1.sec4\] for specified sets of system parameters of the stochastic system.
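One simple way to carry out such a numerical approximation is to simulate many independent sample paths of the system up to a fixed, sufficiently large time, and to form a histogram of the simulated values of each state; by the convergence in distribution described above, this empirical distribution approximates the corresponding marginal of $\pi(.)$. The sketch below (in Python) is schematic: the function `simulate_terminal_I` is a hypothetical stand-in that would, in practice, wrap a numerical scheme for the full system such as the Euler-Maruyama sketch in Section \[ch1.sec4\]; the toy dynamics used here are not the model equations.

```python
import numpy as np

def simulate_terminal_I(seed, t_end=100.0, dt=0.01):
    """Hypothetical stand-in: return I(t_end) from one simulated sample path.
    The toy mean-reverting dynamics below are a placeholder, not the model."""
    rng = np.random.default_rng(seed)
    I = 6.0
    for _ in range(int(t_end / dt)):
        I += (1.0 - I) * 0.05 * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
    return I

terminal_I = np.array([simulate_terminal_I(seed) for seed in range(200)])
density, bin_edges = np.histogram(terminal_I, bins=20, density=True)
print("empirical density approximating the I-marginal of pi:", density)
```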
Example {#ch1.sec4}
=======
In this study, the examples exhibited in this section are used to facilitate understanding of the influence of the intensity or “strength” of the noise in the system on the persistence of the disease in the population, and also to illustrate the existence of a stationary distribution $\pi(.)$ for the states of the system. These objectives are achieved in a simple manner by examining the behavior of the sample paths of the different states ($S, E, I, R$) of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) in the neighborhood of the potential endemic equilibrium $E_{1}=(S^{*}_{1}, E^{*}_{1},I^{*}_{1}, R^{*}_{1})$ of the system, and also by generating graphical representations of the distributions of samples of observations of the states ($S, E, I, R$) of the system at a specified time $t\in [t_{0},\infty)$.
Recall that Theorem \[ch1.sec3.thm1.corrolary1\] asserts that the potential endemic equilibrium $E_{1}$ exists whenever the basic reproduction number $R^{*}_{0}>1$, where $R^{*}_{0}$ is defined in (\[ch1.sec2.lemma2a.corrolary1.eq4\]). It follows that when the conditions of Theorem \[ch1.sec3.thm1.corrolary1\] are satisfied, the endemic equilibrium $E_{1}=(S^{*}_{1}, E^{*}_{1},I^{*}_{1}, R^{*}_{1})$ satisfies the following system $$\label{ch1.sec4.eq1}
\left\{
\begin{array}{lll}
&&B-\beta Se^{-\mu T_{1}}G(I)-\mu S+\alpha I e^{-\mu T_{3}}=0\\
&&\beta Se^{-\mu T_{1}}G(I)-\mu E -\beta Se^{-\mu (T_{1}+T_{2})}G(I)=0\\
&&\beta Se^{-\mu (T_{1}+T_{2})}G(I)-(\mu+d+\alpha)I=0\\
&&\alpha I-\mu R-\alpha I e^{-\mu T_{3}}=0
\end{array}
\right.$$
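A root of this nonlinear algebraic system can be approximated with a standard root finder. The sketch below is only an illustration of how such a computation can be set up; it assumes the incidence function $G(I)=\frac{aI}{1+I}$ with $a=0.05$ and the parameter values of Table \[ch1.sec4.table2\], and it is not claimed to reproduce the coordinates of $E_{1}$ quoted in the next subsection, which depend on the exact parameterization used there.

```python
import numpy as np
from scipy.optimize import fsolve

# Parameter values as in Table (ch1.sec4.table2); G(I) = a*I/(1+I) with a = 0.05.
B, beta, alpha, d, mu = 22.39 / 1000, 0.6277, 0.05067, 0.01838, 0.002433696
T1, T2, T3, a = 2.0, 1.0, 4.0, 0.05
G = lambda I: a * I / (1.0 + I)

def equilibrium_equations(x):
    S, E, I, R = x
    return [
        B - beta * S * np.exp(-mu * T1) * G(I) - mu * S + alpha * I * np.exp(-mu * T3),
        beta * S * np.exp(-mu * T1) * G(I) - mu * E - beta * S * np.exp(-mu * (T1 + T2)) * G(I),
        beta * S * np.exp(-mu * (T1 + T2)) * G(I) - (mu + d + alpha) * I,
        alpha * I - mu * R - alpha * I * np.exp(-mu * T3),
    ]

# A positive root is a candidate endemic equilibrium E_1 = (S*, E*, I*, R*).
S_star, E_star, I_star, R_star = fsolve(equilibrium_equations, x0=[1.0, 0.1, 1.0, 0.1])
print(S_star, E_star, I_star, R_star)
```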
Example 1: Effect of the intensity of white noise on the persistence of the disease {#ch1.sec4.subsec1}
------------------------------------------------------------------------------------
The following convenient list of parameter values in Table \[ch1.sec4.table2\] is used to generate and examine the paths of the different states of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), whenever $R^{*}_{0}>1$, and the intensities of the white noise processes in the system continuously increase. It is easily seen that for this set of parameter values, $R^{*}_{0}=80.7854>1$. Furthermore, the endemic equilibrium for the system is given as follows: $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$.
Parameter                           Symbol     Value
----------------------------------- ---------- -----------------------
Disease transmission rate           $\beta$    0.6277
Constant birth rate                 $B$        $\frac{22.39}{1000}$
Recovery rate                       $\alpha$   0.05067
Disease death rate                  $d$        0.01838
Natural death rate                  $\mu$      $0.002433696$
Incubation delay time in vector     $T_{1}$    2 units
Incubation delay time in host       $T_{2}$    1 unit
Immunity delay time                 $T_{3}$    4 units

  : A list of specific values chosen for the system parameters for the examples in subsection \[ch1.sec4.subsec1\][]{data-label="ch1.sec4.table2"}
The Euler-Maruyama stochastic approximation scheme[^2] is used to generate trajectories for the different states $S(t), E(t), I(t), R(t)$, where the initial history is specified on the interval $[-T,0]$ with $T=\max(T_{1}+T_{2}, T_{3})=4$. The special nonlinear incidence function $G(I)=\frac{aI}{1+I}, a=0.05$, from [@gumel] is utilized to generate the numerical results. Furthermore, the following initial conditions are used $$\label{ch1.sec4.eq2}
\left\{
\begin{array}{l l}
S(t)= 10,\\
E(t)= 5,\\
I(t)= 6,\\
R(t)= 2,
\end{array}
\right.
\forall t\in [-T,0], T=\max(T_{1}+T_{2}, T_{3})=4.$$
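A schematic Python version of this scheme is sketched below. The drift terms are assembled from the equilibrium system (\[ch1.sec4.eq1\]) and the parameter values of Table \[ch1.sec4.table2\], while the diffusion terms, the noise intensities and the delayed arguments are simplified stand-ins; the exact coefficients of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) should be substituted for a faithful reproduction of the figures.

```python
import numpy as np

# Schematic Euler-Maruyama scheme for a delayed SEIR-type Ito SDE.
rng = np.random.default_rng(0)            # fixed seed for reproducibility

B, beta, alpha, d, mu = 22.39 / 1000, 0.6277, 0.05067, 0.01838, 0.002433696
T1, T2, T3, a = 2.0, 1.0, 4.0, 0.05
sig = dict(S=0.05, E=0.05, I=0.05, R=0.05, beta=0.05)   # placeholder noise intensities
G = lambda I: a * I / (1.0 + I)

dt, t_end = 0.01, 1000.0
lag1, lag3 = int(round(T1 / dt)), int(round(T3 / dt))
hist = max(lag1, lag3)                    # history length covering [-T, 0]
n = int(round(t_end / dt))

S = np.full(n + hist, 10.0); E = np.full(n + hist, 5.0)
I = np.full(n + hist, 6.0);  R = np.full(n + hist, 2.0)

for k in range(hist, n + hist - 1):
    dW = np.sqrt(dt) * rng.standard_normal(5)            # independent Brownian increments
    infect1 = beta * S[k] * np.exp(-mu * T1) * G(I[k - lag1])
    infect2 = beta * S[k] * np.exp(-mu * (T1 + T2)) * G(I[k - lag1])
    dS = (B - infect1 - mu * S[k] + alpha * I[k - lag3] * np.exp(-mu * T3)) * dt \
         + sig["S"] * S[k] * dW[0] + sig["beta"] * S[k] * G(I[k]) * dW[4]
    dE = (infect1 - mu * E[k] - infect2) * dt + sig["E"] * E[k] * dW[1]
    dI = (infect2 - (mu + d + alpha) * I[k]) * dt + sig["I"] * I[k] * dW[2]
    dR = (alpha * I[k] - mu * R[k] - alpha * I[k - lag3] * np.exp(-mu * T3)) * dt \
         + sig["R"] * R[k] * dW[3]
    S[k + 1], E[k + 1], I[k + 1], R[k + 1] = S[k] + dS, E[k] + dE, I[k] + dI, R[k] + dR
```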
### Effect of the intensity of white noise $\sigma_{\beta}$ on the persistence and permanence of the disease {#ch1.sec4.subsec1.1}
![(a-i), (b-i), (c-i) and (d-i) show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever the only source of the noise in the system is from the disease transmission rate, and the strength of the noise in the system is relatively small, that is, $\sigma_{i}=0.5, \forall i\in \{S, E, I, R, \beta\} $. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}=9.168214$, $\min{(E(t))}=4.946437$, $\min{(I(t))}=5.43688$ and $\min{(R(t))}=1.93282$. []{data-label="ch1.sec4.subsec1.1.figure 1"}](newpersistence-sigmas-beta-is-0_5-and-R_0-bigger-than-1.eps){height="6cm"}
![$(a-ii)$, $(b-ii)$, $(c-ii)$ and $(d-ii)$ show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever the only source of the noise in the system is from the disease transmission rate, and the strength of the noise in the system is relatively high, that is, $\sigma_{i}=5, \forall i\in \{S, E, I, R, \beta\} $. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}=3.614151$, $\min{(E(t))}=4.611832$, $\min{(I(t))}=5.43688$ and $\min{(R(t))}=1.93282$. []{data-label="ch1.sec4.subsec1.1.figure 2"}](newpersistence-sigmas-beta-is-5-and-R_0-bigger-than-1.eps){height="6cm"}
![$(a-iii)$, $(b-iii)$, $(c-iii)$ and $(d-iii)$ show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever the only source of the noise in the system is from the disease transmission rate, and the strength of the noise in the system is relatively higher, that is, $\sigma_{i}=10, \forall i\in \{S, E, I, R, \beta\} $. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}=1.217144$, $\min{(E(t))}=4.465998$, $\min{(I(t))}=5.43688$ and $\min{(R(t))}=1.93282$. []{data-label="ch1.sec4.subsec1.1.figure 3"}](newpersistence-sigmas-beta-is-10-and-R_0-bigger-than-1.eps){height="6cm"}
The Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\] and Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\] can be used to examine the persistence and permanence of the disease in the human population exhibited in Theorem \[ch1.sec3.thm2\] and Theorem \[ch1.sec5.thm2\], respectively. The following observations are made: (1) The occurrence of noise in the disease transmission rate, that is, $\sigma_{\beta}>0$, results in random fluctuations mainly in the susceptible and exposed states depicted in Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\](a-i)-(a-iii), (b-i)-(b-iii). No major oscillations are observed in the infectious and removed states $I(t)$ and $R(t)$. This observation is also reflected in the approximately uniform distributions observed for the $I(t)$ and $R(t)$ states at the time $t=900$ depicted in Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\](c-iv)-(c-vi), (d-iv)-(d-vi), based on 1000 sample points for $I(t)$ and $R(t)$ at the fixed time $t=900$ (that is, 1000 sample observations of $I(900)$ and $R(900)$).
\(2) Increasing the intensity of the noise from the disease transmission rate, that is, increasing $\sigma_{\beta}$ from $0.5$ to $10$, results in a rise in infection, with many more people becoming exposed to the disease. This fact is depicted in Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\](a-i)-(a-iii), (b-i)-(b-iii), where a new higher maximum value for the trajectories of the exposed state $E(t)$, and a new lower minimum value for the trajectory of the susceptible state $S(t)$, are attained over the time interval $[0,1000]$ across the figures as $\sigma_{\beta}$ increases from $0.5$ to $10$. Therefore, stronger noise in the disease dynamics from the disease transmission rate leads to more persistence of the disease. This observation about the persistence of the disease is also significant in the approximate distributions for the susceptible and exposed states, $S(t)$ and $E(t)$, respectively, at the time $t=900$ depicted in Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\](a-iv)-(a-vi), (b-iv)-(b-vi), based on 1000 sample points for $S(t)$ and $E(t)$ at the fixed time $t=900$.
Indeed, it can be seen from Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\](a-iv)-(a-vi), (b-iv)-(b-vi) that when the intensity of the noise in the system is $\sigma_{\beta}=0.5$, the distributions of $S(900)$ and $E(900)$ are approximately symmetric with one peak, with the center for $S(900)$ approximately between $(9.5, 10.5)$ and about $97.3\%$ of the values between $(9.0,11.0)$. The center for $E(900)$ is approximately between $(4.5, 5.5)$, with about $97.3\%$ of the values between $(4.0,6.0)$.
Now, when the intensity increases to $\sigma_{\beta}=5$, and then to $\sigma_{\beta}=10$, the distributions of $S(900)$ and $E(900)$ become increasingly skewed, with $S(900)$ skewed to the right and its center (utilizing the mode of $S(900)$) shifting to the left, while the majority of the possible values for $S(900)$ continuously decrease in magnitude from approximately the interval $(0, 30)$ to the interval $(0,20)$. These changes in the shape of the distribution, and the decrease of the range of values in the support of the distribution of $S(900)$ as the intensity rises from $\sigma_{\beta}=5$ to $\sigma_{\beta}=10$, indicate that more susceptible people tend to get infected at the time $t=900$ as the intensity of the noise rises.
Similarly, when the intensity increases from $\sigma_{\beta}=0.5$ to $\sigma_{\beta}=5$, and also from $\sigma_{\beta}=5$ to $\sigma_{\beta}=10$, the distribution of $E(900)$ is skewed to the left with center (utilizing the mode of $E(900)$) shifting to the right, while the majority of the possible values in the support for $E(900)$ continuously increase in magnitude from the interval $(0, 10)$ to the interval $(0,20)$. These changes in the shape of the distribution, and the increase in the magnitude of the range of values in the support for $E(900)$ as the intensity rises from $\sigma_{\beta}=5$ to $\sigma_{\beta}=10$, indicate that more susceptible people tend to get infected and become exposed at the time $t=900$.
\(3) Finally, the remark about the influence of the strength of the noise on the stochastic permanence in the mean of the disease in Remark \[ch1.sec5.rem1\] can be examined using Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\] and Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\]. Recall that (\[ch1.sec5.rem1.eq2\]) in Remark \[ch1.sec5.rem1\], corresponding to Theorem \[ch1.sec5.thm2\], asserts that when the intensity of the noise $\sigma_{\beta}$ is infinitesimally small, a larger asymptotic lower bound $l_{a}(\sigma^{2}_{max})=l_{a}(\sigma^{2}_{\beta}),a\in \{S, E, I\}$ is attained for the average in time of the sample path of each state heavily influenced by the random fluctuations in the system, which in this scenario are the $S(t)$ and $E(t)$ states. Across the figures, as the intensity rises from $\sigma_{\beta}=0.5$ to $\sigma_{\beta}=10$ in Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\](a-i)-(a-iii), (b-i)-(b-iii), and Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 6\](a-iv)-(a-vi), (b-iv)-(b-vi), smaller minimum values for the paths of $S(t)$ and $E(t)$ are observed in Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 3\](a-i)-(a-iii), (b-i)-(b-iii).
Also, the centers (utilizing the mean) for $S(900)$ and $E(900)$ continuously decrease in magnitude as the intensity rises from $\sigma_{\beta}=0.5$ to $\sigma_{\beta}=10$. Indeed, this is evident since for the figures that are symmetric with a single peak, that is, Figures \[ch1.sec4.subsec1.1.figure 6\](a-vi)-(b-vi) corresponding to $\sigma_{\beta}=0.5$, the measures of center (mean, mode, and median) are all approximately equal and higher in magnitude, while for the figures that are skewed (to the left or to the right), that is, Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 5\] ((b-iv) and (b-v) skewed to the left, and (a-iv), (a-v) skewed to the right), the position of the mean is pulled further in the direction of the skew and is relatively lower in magnitude compared to the observation in Figure \[ch1.sec4.subsec1.1.figure 6\]. Note that although the approximate distribution of $S(900)$ is skewed to the right in Figures \[ch1.sec4.subsec1.1.figure 4\]-\[ch1.sec4.subsec1.1.figure 5\](a-iv)-(a-v), the mean continuously becomes smaller in magnitude as the intensity rises from $\sigma_{\beta}=0.5$ to $\sigma_{\beta}=10$.
![(a-iv), (b-iv), (c-iv) and (d-iv) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=900$, whenever the only source of noise in the system is from the disease transmission rate, and the intensity of the noise in the system is relatively higher, that is, $\sigma_{\beta}=10$. []{data-label="ch1.sec4.subsec1.1.figure 4"}](newhistogram-persistence-sigmas-beta-is-10-and-R_0-bigger-than-1.eps){height="6cm"}
![(a-v), (b-v), (c-v) and (d-v) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=900$, whenever the only source of noise in the system is from the disease transmission rate, and the intensity of the noise in the system is relatively high, that is, $\sigma_{\beta}=5$. []{data-label="ch1.sec4.subsec1.1.figure 5"}](newhistogram-persistence-sigmas-beta-is-5-and-R_0-bigger-than-1.eps){height="6cm"}
![(a-vi), (b-vi), (c-vi) and (d-vi) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=900$, whenever the only source of noise in the system is from the disease transmission rate, and the intensity of the noise in the system is relatively low, that is, $\sigma_{\beta}=0.5$. []{data-label="ch1.sec4.subsec1.1.figure 6"}](newhistogram-persistence-sigmas-beta-is-0_5-and-R_0-bigger-than-1.eps){height="6cm"}
### Joint effect of the intensities of white noise $\sigma_{i},i=S, E, I, \beta$ on the persistence of the disease {#ch1.sec4.subsec1.2}
![(a1), (b1), (c1) and (d1) show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever various sources of the noise in the system (i.e. natural death and disease transmission rates) are assumed to have the same strength or intensity (i.e. $\sigma_{i}=\sigma_{\beta}, i\in \{S, E, I, R\}$), and the strength of the noises in the system are relatively small, that is, $\sigma_{i}=0.5, \forall i\in \{S, E, I, R, \beta\} $. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}= 6.941153$, $\min{(E(t))}=3.693744$, $\min{(I(t))}=4.004954$ and $\min{(R(t))}= 1.384626$. []{data-label="ch1.sec4.figure 1"}](persistence-sigmas-is-0_5-and-R_0-bigger-than-1.eps){height="6cm"}
![(a2), (b2), (c2) and (d2) show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever the various sources of the noise in the system (i.e. natural death and disease transmission rates) are assumed to have the same strength or intensity (i.e. $\sigma_{i}=\sigma_{\beta}, i\in \{S, E, I, R\}$), and the strength of the noises in the system are relatively moderate, that is, $\sigma_{i}=1.5, \forall i\in \{S, E, I, R, \beta\} $. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}= 0.6116444$, $\min{(E(t))}=0.3496159$, $\min{(I(t))}=0.4439692$ and $\min{(R(t))}= -1.499425$. []{data-label="ch1.sec4.figure 2"}](persistence-sigmas-is-1_5-and-R_0-bigger-than-1.eps){height="6cm"}
![(a3), (b3), (c3) and (d3) show the trajectories of the disease states $(S,E,I,R)$ respectively, whenever the various sources of the noise in the system (i.e. natural death and disease transmission rates) are assumed to have the same strength or intensity (i.e. $\sigma_{i}=\sigma_{\beta}, i\in \{S, E, I, R\}$), and the strength of the noises in the system are relatively high, that is, $\sigma_{i}=2.5, \forall i\in \{S, E, I, R, \beta\}$. The broken lines represent the endemic equilibrium $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(0.2286216,0.07157075,0.9929282)$. Furthermore, $\min{(S(t))}=0.03717095$, $\min{(E(t))}=-2.204265$, $\min{(I(t))}=0.03397191$ and $\min{(R(t))}= -2.645903$. []{data-label="ch1.sec4.figure 3"}](persistence-sigmas-is-2_5-and-R_0-bigger-than-1.eps){height="6cm"}
The Figures \[ch1.sec4.figure 1\]-\[ch1.sec4.figure 3\] can be used to examine the persistence of the disease in the human population as the intensities of the white noises in the system equally and continuously rise in value from $0.5$ to $2.5$, that is, for $\sigma_{i}=0.5, 1.5, 2.5, \forall i\in \{S, E, I, R, \beta\}$. It can be observed from Figure \[ch1.sec4.figure 1\] that when the intensity of the noise is relatively small, that is, $\sigma_{S}=\sigma_{E}=\sigma_{\beta}=\sigma_{I}=\sigma_{R}=0.5$, the disease persists in the population with significantly higher lower bounds for the disease classes $E, I$, and $R$. The lower bounds for these disease classes are seen to continuously decrease in value as the magnitude of the intensities rises from $\sigma_{i}= 0.5$ to $\sigma_{i}= 1.5, \forall i\in \{S, E, I, R, \beta\}$, and further to $\sigma_{i}= 2.5, \forall i\in \{S, E, I, R, \beta\}$. This observation confirms the results in Theorems \[\[ch1.sec5.thm1\]&\[ch1.sec5.thm2\]\] and Remark \[ch1.sec5.rem1\], which assert that an increase in the intensities of the noises in the system tends to lead to persistence of the disease with smaller lower bounds for the paths of the disease related classes, while a decrease in the strength of the noise in the system allows the disease to persist with a relatively higher lower margin for the paths of the disease related classes. It is important to note that the continuous decrease in the lower bounds for the paths of the disease related states $E, I, R$ is also matched by a continuous decrease in the lower margin for the susceptible class exhibited in Figures \[ch1.sec4.figure 1\]-\[ch1.sec4.figure 3\] \[(a1), (a2), (a3)\]. This observation suggests that the population becomes extinct over time with a continuous rise in the intensity of the noises in the system.
![(a4), (b4), (c4) and (d4) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=600$, whenever the intensities of the noises in the system are relatively small, that is, $\sigma_{E}=\sigma_{I}=\sigma_{R}=0.5$. []{data-label="ch1.sec4.figure 4"}](Histogram-persistence-sigmas-is-0_5-and-R_0-bigger-than-1.eps){height="6cm"}
![(a5), (b5), (c5) and (d5) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=600$, whenever the intensities of the noises in the system are relatively moderate, that is, $\sigma_{E}=\sigma_{I}=\sigma_{R}=1.5$. []{data-label="ch1.sec4.figure 5"}](Histogram-persistence-sigmas-is-1_5-and-R_0-bigger-than-1.eps){height="6cm"}
![(a6), (b6), (c6) and (d6) show the approximate distribution of the different disease states $(S,E,I,R)$ respectively, at the specified time $t=600$, whenever the intensities of the noises in the system are relatively high, that is, $\sigma_{E}=\sigma_{I}=\sigma_{R}=2.5$. []{data-label="ch1.sec4.figure 6"}](Histogram-persistence-sigmas-is-2_5-and-R_0-bigger-than-1.eps){height="6cm"}
Figure \[ch1.sec4.figure 4\]-Figure \[ch1.sec4.figure 6\] provide a clearer picture about the effect of the rise in the intensity of the noises in the system on the persistence of the disease at any given time, for example, at the time $t=600$. The statistical graphs in Figure \[ch1.sec4.figure 4\]-Figure \[ch1.sec4.figure 6\] are based on samples of 1000 simulation observations for the different states in the system $S, E, I$ and $R$ at the time $t=600$. For the susceptible population in Figure \[ch1.sec4.figure 4\]-Figure \[ch1.sec4.figure 6\]\[(a4)-(a6)\], it can be seen that the majority of possible values in the support for $S(600)$ occur in $(0,60)$. However, the frequency of these values dwindles with the rise in the intensity of the noises in the system from $\sigma_{i}= 0.5, \forall i\in \{S, E, I, R, \beta\}$ to $\sigma_{i}= 2.5, \forall i\in \{S, E, I, R, \beta\}$. Moreover, the much smaller values in the support for $S(600)$ tend to occur more frequently, as is depicted in Figure \[ch1.sec4.figure 5\] and Figure \[ch1.sec4.figure 6\]. This observation suggests that the rise in the intensity of the noise in the system increases the probability of occurrence of smaller values in the support of the susceptible population state $S$ at the time $t=600$, which further suggests that more susceptible people tend to be converted out of the susceptible state, either as a result of infection or natural death.
Similar observations can be made for the disease related classes, namely the exposed ($E$), infectious ($I$) and removal ($R$) populations in Figure \[ch1.sec4.figure 4\]-Figure \[ch1.sec4.figure 6\]\[(b4)-(b6)\], \[(c4)-(c6)\] and \[(d4)-(d6)\], respectively. It can be seen that the majority of possible values in the support satisfy $E(600)\leq 50$, $0\leq I(600)\leq 30$, and $R(600)\leq 20$. However, the frequency of these values also dwindles with the rise in the intensity of the noises in the system from $\sigma_{i}= 0.5, \forall i\in \{S, E, I, R, \beta\}$ to $\sigma_{i}= 2.5, \forall i\in \{S, E, I, R, \beta\}$. Moreover, much smaller values in the support for $I(600)$ tend to occur more frequently, as is seen in Figure \[ch1.sec4.figure 5\] and Figure \[ch1.sec4.figure 6\], while negative values for $E(600)$ and $R(600)$ tend to occur the most for these disease related classes. This observation suggests that the rise in the intensity of the noise in the system increases the probability of occurrence of smaller values in the support of the disease related states $E, I, R$ in the population at the time $t=600$.
This observation further suggests that more exposed people tend to be converted from the exposed state either as a result of more exposed people turning into full blown infectious individuals or natural death. For the infectious population, this observation suggests that more infectious people tend to be converted from the infectious state either because more infectious people tend to recover from the disease, die naturally, or die from disease related causes. Also, for the recovery class, the high probability of occurrence of smaller values in the support for $R(600)$ suggests that more removed individuals tend to be converted out of the state either because more naturally immune persons tend to lose their naturally acquired immunity and become susceptible again to the disease, or because they tend to die naturally. It is important to note that the occurrence of negative values in the support for $E(600)$ and $R(600)$ with high probability values signifies that on many occasions the exposed and removal populations become extinct.
The numerical simulation results in subsections \[\[ch1.sec4.subsec1.1\]-\[ch1.sec4.subsec1.2\]\] suggest and highlight the important fact that the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) exhibits higher sensitivity (such as high persistence of disease, tendency for the population to become extinct etc.) to changes in the magnitude of the intensities of the noises from the natural death rates (i.e. $\sigma_{i}=0.5, 1.5, 2.5, \forall i\in \{S, E, I, R\}$) observed in Figures \[ch1.sec4.figure 1\]-\[ch1.sec4.figure 6\], compared to changes in the magnitude of the intensity of the noise from the disease transmission rate (i.e. $\sigma_{\beta}=0.5, 5, 10$) exhibited in Figures \[ch1.sec4.subsec1.1.figure 1\]-\[ch1.sec4.subsec1.1.figure 6\]. This fact can easily be seen in the occurrence and size of the oscillations in the sample paths of the different states $S, E, I, R$ represented in the figures, as the intensities continuously change from small to high values. For example, the continuous changes in the intensity $\sigma_{\beta}=0.5, 5, 10$ result in a possibility of extinction of the human population only for very high intensity values, while small changes in the intensities $\sigma_{i}=0.5, 1.5, 2.5, \forall i\in \{S, E, I, R\}$ result in the possibility of extinction of the human population at disproportionately smaller magnitudes of the intensity values. This suggests that the growth rates of the intensities in the system influence the qualitative behavior of the trajectories of the system, and consequently impact the disease dynamics over time. This fact about the growth rates of the intensities of the white noise processes in the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), and various classifications of the growth rates and the qualitative behavior of the disease dynamics, is the central theme of the study in Wanduku [@wanduku-theorBio].
Evidence of stationary distribution {#ch1.sec4.subsec2}
-----------------------------------
The following new convenient set of parameter values in Table \[ch1.sec4.table3\] is used to examine the results of Theorem \[ch1.sec3.sec1.thm1\] about the existence of a stationary distribution for the different states of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), whenever $R^{*}_{0}>1$ and the delay times $T_{1}, T_{2}$ and $T_{3}$ are constant.
It is easily seen that for this set of parameter values, $R^{*}_{0}=10.59709>1$. Furthermore, the endemic equilibrium for the system is given as follows: $E_{1}=(S^{*}_{1},E^{*}_{1},I^{*}_{1})=(2.83334,0.2755608,2.521028)$. The parameter $\lambda(\mu)$ is taken to be $\lambda(\mu)=\mu e^{-\mu(T_{1}+T_{2})}$. Also, the intensities of the white noise processes in the system are $\sigma_{\beta}=2$ and $\sigma_{i}=0.2, \forall i\in \{S, E, I, R\}$. In addition, for the condition in (\[ch1.sec3.sec1.thm1.eq2\]), it is also easy to see that $\Phi=0.01223636$, and $$\min{[\Phi_{1}(S^{*}_{1})^{2}, \Phi_{2}(E^{*}_{1})^{2}, \Phi_{3}(I^{*}_{1})^{2}]}=\min{(2.697389, 0.03695982, 1.546866) }=0.03695982>\Phi.$$ Thus, (\[ch1.sec3.sec1.thm1.eq2\]) is satisfied. The approximate distributions of the different disease states $S, E, I, R$, based on samples of 5000 simulation realizations at the different times $t=600$, $t=700$ and $t=900$, are given in Figure \[ch1.sec4.figure 7\]-Figure \[ch1.sec4.figure 9\], respectively.
Parameter                           Symbol     Value
----------------------------------- ---------- --------------------------------------------
Disease transmission rate           $\beta$    $0.6277e^{-\mu(T_{1}+T_{2})}$
Constant birth rate                 $B$        $1$
Recovery rate                       $\alpha$   $5.067\times 10^{-8}e^{-\mu(T_{1}+T_{2})}$
Disease death rate                  $d$        $1.838\times 10^{-8}e^{-\mu(T_{1}+T_{2})}$
Natural death rate                  $\mu$      $0.2433696$
Incubation delay time in vector     $T_{1}$    2 units
Incubation delay time in host       $T_{2}$    1 unit
Immunity delay time                 $T_{3}$    4 units

  : A list of specific values chosen for the system parameters for the example in subsection \[ch1.sec4.subsec2\].[]{data-label="ch1.sec4.table3"}
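The approximate distributions discussed below are obtained by repeating the simulation with independent Brownian paths and recording each state at the chosen time. The following sketch illustrates only this sampling-and-histogram mechanism; the dynamics it integrates are a cheap, vectorized stand-in (independent mean-reverting diffusions), not the malaria model, so the numbers it produces are not those of Figures \[ch1.sec4.figure 7\]-\[ch1.sec4.figure 9\].

```python
import numpy as np
import matplotlib.pyplot as plt

# Empirical distribution of (S, E, I, R) at a fixed time from 5000 realizations.
rng = np.random.default_rng(0)
dt, t_fixed, n_real = 0.1, 600.0, 5000
n_steps = int(round(t_fixed / dt))

x = np.tile([10.0, 5.0, 6.0, 2.0], (n_real, 1))    # initial (S, E, I, R) per realization
target = np.array([7.0, 2.6, 3.2, 1.4])            # stand-in long-run levels
for _ in range(n_steps):
    # one Euler-Maruyama step for all realizations simultaneously
    x += 0.02 * (target - x) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(x.shape)

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, col, name in zip(axes.ravel(), x.T, ["S", "E", "I", "R"]):
    ax.hist(col, bins=40)                          # histogram of the state at t = 600
    ax.set_title(f"{name}(600)")
plt.tight_layout()
plt.show()
```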
![(a7), (b7), (c7) and (d7) depict an approximate distribution for the different disease states $(S,E,I,R)$ respectively, at the time when $t=600$. []{data-label="ch1.sec4.figure 7"}](statstionary-distribution-at-t-600.eps){height="6cm"}
![ (a8), (b8), (c8) and (d8) depict an approximate distribution for the different disease states $(S,E,I,R)$ respectively, at the time when $t=700$. []{data-label="ch1.sec4.figure 8"}](statstionary-distribution-at-t-700.eps){height="6cm"}
![ (a9), (b9), (c9) and (d9) depict an approximate distribution for the different disease states $(S,E,I,R)$ respectively, at the time when $t=900$. []{data-label="ch1.sec4.figure 9"}](statstionary-distribution-at-t-900.eps){height="6cm"}
It is easy to see from Figure \[ch1.sec4.figure 7\]-Figure \[ch1.sec4.figure 9\] that the approximate distributions for the different states $S, E, I$ and $R$ have approximately common location, scale and shape parameters. The same conclusions can be reached about the location, scale and shape parameters of the corresponding distributions for the different states $S, E, I$ and $R$ obtained for all times larger than $t=600$, that is, $\forall t\geq 600$. This provides strong numerical evidence for the existence of a stationary distribution for the states of the stochastic system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]), which is the limit of convergence (in distribution) of the sequence (collection) of distributions of the different states of the system, that is, $S(t), E(t), I(t), R(t)$, indexed by the time $t>0$, whenever the conditions in Theorem \[ch1.sec3.sec1.thm1\] are satisfied and all possible simulation realizations are generated for the different states in the system at each time $t$.
Furthermore, one can infer from the descriptive statistics of the approximated distributions presented in Figure \[ch1.sec4.figure 7\]-Figure \[ch1.sec4.figure 9\] that the location or centering parameter (e.g. ensemble mean) of the true stationary distribution for the susceptible population lies approximately in the interval \[6.5,8.0\]. Moreover, the shape of the distribution is approximately symmetric with a single peak. Similarly, for the disease related classes, namely the exposed, infectious and removal classes, one can infer that the shapes of the stationary distributions for each of these classes are approximately symmetric with a single peak, and the location or centering parameters lie approximately in the intervals $[2.45,2.85]$, $[2.8,3.6]$ and $[0.95,1.90]$, respectively.
It is also important to note that whilst the statistical parameters such as the measures of center, variation and moments of the stationary distribution for the states of the system (\[ch1.sec0.eq8\])-(\[ch1.sec0.eq11\]) may change for each given set of values selected for the parameters of the system in Table \[ch1.sec4.table3\], the stationary distribution obtained is unique for that set of system parameter values.
Conclusion
==========
The presented family of stochastic malaria models with nonlinear incidence rates, random delays and environmental perturbations characterizes the general dynamics of malaria in a highly random environment with variability originating from (1) the disease transmission rate between mosquitoes and humans, and also from (2) the natural death rates of the susceptible, exposed, infectious and removal individuals of the human population. The random environmental fluctuations are formulated as independent white noise processes, and the malaria dynamics is expressed as a family of Ito-Doob type stochastic differential equations. Moreover, the family type is determined by a general nonlinear incidence function $G$ with various mathematical properties. The nonlinear incidence function $G$ can be used to describe the nonlinear character of malaria transmission rates in various disease scenarios where the rate of malaria transmission may initially increase or decrease, and become steady or bounded as the number of malaria cases increases in the population.
This work furthers the investigation of the malaria project initiated in Wanduku [@wanduku-biomath; @wanduku-comparative] and focuses on (1) the stochastic permanence of malaria, and (2) the existence of a stationary distribution for the random process describing the states of the disease over time. The investigation of these two aspects centers on analyzing the behavior of the sample paths of the different states of the stochastic process in the neighborhood of the potential endemic equilibrium state of the dynamic system. Lyapunov functional techniques and other local martingale characterizations are applied to characterize the trajectories of the solution process of the stochastic dynamic system.
In addition, much emphasis is laid on analyzing the impacts of (a) the sources of the noises-disease transmission or natural death rates, and (b) the intensities of the noises on the permanence of malaria and the existence of a stationary distribution for the disease. Expansive and exhaustive discussions are presented to elucidate the qualitative character of the permanence of malaria and stationary distribution for the disease under the influence of the different sources, and different intensities of the noises in the system.
The results of this study are illuminated by detailed numerical simulation examples that examine the trajectories of the states of the stochastic process in the neighborhood of the endemic equilibrium, and also under the influence of the different sources, and different intensities of the noises in the system.
The numerical simulation results suggest that higher intensities of the white noise processes in the system drive the sample paths of the stochastic system further away from the potential endemic steady state. Moreover, the intensities of the noises from the natural death rates seem to have stronger consequences for the evolution of the disease in the system compared to the intensity of the noise from the disease transmission rate. Also, there is some evidence of a high chance of the population becoming extinct over time, whenever the intensities of the white noise processes become large.
Furthermore, in the absence of explicit solution process for the nonlinear stochastic system of differential equations, the stationary distribution is numerically approximated for a given set of parameter values for the stochastic dynamic system of equations.
References
==========
[300]{}
Y. Cai, Y. Kang et al., A stochastic epidemic model incorporating media coverage, Commun. Math. Sci., Vol. 14, No. 4, (2016) 893-910
A. Gray, D. Greenhalgh, L. Hu, X. Mao, and J. Pan, A stochastic differential equation SIS epidemic model, SIAM J. Appl. Math., 71(3), (2011) 876–902
A. Lahrouz, L. Omari, Extinction and stationary distribution of a stochastic SIRS epidemic model with non-linear incidence, Statistics & Probability Letters 83(4) (2013) 960–968
D. Wanduku, Threshold conditions for a family of epidemic dynamic models for malaria with distributed delays in a non-random environment, International Journal of Biomathematics Vol. 11, No. 6 (2018) 1850085 (46 pages), DOI: 10.1142/S1793524518500857
Y. Li, On the almost surely asymptotic bounds of a class of Ornstein-Uhlenbeck processes in finite dimensions, Journal of Systems Science and Complexity, 21 (2008), 416-426.
X. Mao, Stochastic Differential Equations and Applications, 2nd ed., WP, 2007
A. G. Ladde, G. S. Ladde, An Introduction to Differential Equations: Stochastic Modelling, Methods and Analysis, Vol. 2, World Scientific Publishing, 2013
D. Wanduku, Modeling highly random dynamical infectious systems (book chapter), in press, 2017
D. Wanduku, Analyzing the qualitative properties of white noise on a family of infectious disease models in a highly random environment, available at arXiv:1808.09842 \[q-bio.PE\]
D. Wanduku, A comparative stochastic and deterministic study of a class of epidemic dynamic models for malaria: exploring the impacts of noise on eradication and persistence of disease, in press, 2017
Y. Cai, Y. Kang, W. Wang, A stochastic SIRS epidemic model with nonlinear incidence, Applied Mathematics and Computation 305 (2017) 221-240
Y. Zhang, K. Fan, S. Gao, S. Chen, A remark on stationary distribution of a stochastic SIR epidemic model with double saturated rates, Applied Mathematics Letters 76 (2018) 46-52
K. L. Cooke, Stability analysis for a vector disease model, Rocky Mountain Journal of Mathematics 9 (1979), no. 1, 31-42
Y. Takeuchi, W. Ma and E. Beretta, Global asymptotic properties of a delay SIR epidemic model with finite incubation times, Nonlinear Anal. 42 (2000), 931-947.
E. Beretta, V. Kolmanovskii, L. Shaikhet, Stability of epidemic model with time delay influenced by stochastic perturbations, Mathematics and Computers in Simulation 45 (1998) 269-277
V. Capasso, Mathematical Structures of Epidemic Systems, Lecture Notes in Biomathematics, Volume 97, 1993
W. M. Liu, H. W. Hethcote, S. A. Levin, Dynamical behavior of epidemiological models with nonlinear incidence rates, J. Math. Biol. 25 (1987) 359
V. Capasso, G. Serio, A generalization of the Kermack-McKendrick deterministic epidemic model, Math. Biosci. 42 (1978) 43
D. Xiao, S. Ruan, Global analysis of an epidemic model with nonmonotone incidence rate, Math. Biosci. 208(2) (2007) 419-29
H.-F. Huo, Z.-P. Ma, Dynamics of a delayed epidemic model with non-monotonic incidence rate, Communications in Nonlinear Science and Numerical Simulation, Volume 15, Issue 2, (2010) p. 459-468.
S. M. Moghadas, A. B. Gumel, Global stability of a two-stage epidemic model with generalized nonlinear incidence, Mathematics and Computers in Simulation 60 (2002), 107-118
Y. N. Kyrychko, K. B. Blyuss, Global properties of a delayed SIR model with temporary immunity and nonlinear incidence rate, Nonlinear Analysis: Real World Applications Volume 6, Issue 3, July 2005, Pages 495-507
Q. Liu, D. Jiang, N. Shi, T. Hayat, A. Alsaedi, Asymptotic behaviors of a stochastic delayed SIR epidemic model with nonlinear incidence, Communications in Nonlinear Science and Numerical Simulation Volume 40, November 2016, Pages 89-99.
Q. Liu, Q. Chen, Analysis of the deterministic and stochastic SIRS epidemic models with nonlinear incidence, Physica A, 428 (2015), pp. 140–153
K. L. Cooke, P. van den Driessche, Analysis of an SEIRS epidemic model with two delays, J. Math. Biol. 1996 Dec; 35(2): 240-60.
N. H. Du, N. N. Nhu, Permanence and extinction of certain stochastic SIR models perturbed by a complex type of noises, Applied Mathematics Letters 64 (2017) 223-230
J. P. Mateus, C. M. Silva, Existence of periodic solutions of a periodic SEIRS model with general incidence, Nonlinear Analysis: Real World Applications Volume 34, April 2017, Pages 379-402
M. De la Sen, S. Alonso-Quesada, A. Ibeas, On the stability of an SEIR epidemic model with distributed time-delay and a general class of feedback vaccination rules, Applied Mathematics and Computation Volume 270, 1 November 2015, Pages 953-976
Z. Jiang, W. Ma, J. Wei, Global Hopf bifurcation and permanence of a delayed SEIRS epidemic model, Mathematics and Computers in Simulation Volume 122, April 2016, Pages 35–54
J. P. Mateus, C. M. Silva, A non-autonomous SEIRS model with general incidence rate, Applied Mathematics and Computation Volume 247, 15 November 2014, Pages 169-189
M. De la Sen, S. Alonso-Quesada, A. Ibeas, On the stability of an SEIR epidemic model with distributed time-delay and a general class of feedback vaccination rules, Applied Mathematics and Computation Volume 270, 1 November 2015, Pages 953-976
S. Gao, Z. Teng, D. Xie, The effects of pulse vaccination on SEIR model with two time delays, Applied Mathematics and Computation Volume 201, Issues 1-2, 15 July 2008, Pages 282-292
B. G. Sampath Aruna Pradeep, W. Ma, Global stability analysis for vector transmission disease dynamic model with non-linear incidence and two time delays, Journal of Interdisciplinary Mathematics, Volume 18, 2015, Issue 4
Z. Bai, Y. Zhou, Global dynamics of an SEIRS epidemic model with periodic vaccination and seasonal contact rate, Nonlinear Analysis: Real World Applications Volume 13, Issue 3, June 2012, Pages 1060-1068
Y. Zhou, W. Zhang, S. Yuan, H. Hu, Persistence and extinction in stochastic SIRS models with general nonlinear incidence rate, Electronic Journal of Differential Equations, Vol. 2014 (2014), No. 42, pp. 1-17.
E. Avila-Vales, B. Buonomo, Analysis of a mosquito-borne disease transmission model with vector stages and nonlinear forces of infection, Ricerche di Matematica November 2015, Volume 64, Issue 2, pp 377-390
S. Syafruddin, M. S. M. Noorani, Lyapunov function of SIR and SEIR model for transmission of dengue fever disease, International Journal of Simulation and Process Modelling (IJSPM), Vol. 8, No. 2-3, 2013
L. Pang, S. Ruan, S. Liu, Z. Zhao, X. Zhang, Transmission dynamics and optimal control of measles epidemics, Applied Mathematics and Computation 256 (2015) 131–147
L. Zhu, H. Hu, A stochastic SIR epidemic model with density dependent birth rate, Advances in Difference Equations December 2015, 2015:330
G. S. Ladde, V. Lakshmikantham, Random Differential Inequalities, Academic Press, New York, 1980
R. Ross, The Prevention of Malaria, John Murray, London, 1911.
G. Macdonald, The analysis of infection rates in diseases in which superinfection occurs, Trop. Dis. Bull. 47 (1950) 907-915
G. A. Ngwa, W. Shu, A mathematical model for endemic malaria with variable human and mosquito population, Math. Comput. Model. 32 (2000) 747-763
M. Y. Hyun, Malaria transmission model for different levels of acquired immunity and temperature dependent parameters (vector), Rev. Saude Publica 2000, 34 (3), 223–231
R. M. Anderson, R. M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, Oxford, 1991.
G. A. Ngwa, A. M. Niger, A. B. Gumel, Mathematical assessment of the role of non-linear birth and maturation delay in the population dynamics of the malaria vector, Appl. Math. Comput. 217 (2010) 3286.
C. N. Ngonghala, G. A. Ngwa, M. I. Teboh-Ewungkem, Periodic oscillations and backward bifurcation in a model for the dynamics of malaria transmission, Math. Biosci. 240(1) (2012) 45-62
N. Chitnis, J. M. Hyman, J. M. Cushing, Determining important parameters in the spread of malaria through the sensitivity analysis of a mathematical model, Bull. Math. Biol. 70 (2008) 1272.
M. I. Teboh-Ewungkem, T. Yuster, A within-vector mathematical model of Plasmodium falciparum and implications of incomplete fertilization on optimal gametocyte sex ratio, J. Theor. Biol. 264 (2010) 273.
D. Wanduku, Complete global analysis of a two-scale network SIRS epidemic dynamic model with distributed delay and random perturbation, Applied Mathematics and Computation Vol. 294 (2017) p. 49-76
D. Wanduku, G. S. Ladde, Global properties of a two-scale network stochastic delayed human epidemic dynamic model, Nonlinear Analysis: Real World Applications 13 (2012) 794-816
D. Wanduku, G. S. Ladde, The global analysis of a stochastic two-scale network human epidemic dynamic model with varying immunity period, Journal of Applied Mathematics and Physics, 2017, 5, 1150-1173
D. Wanduku and G. S. Ladde, Global stability of two-scale network human epidemic dynamic model, Neural, Parallel, and Scientific Computations 19 (2011) 65-90
D. Wanduku, G. S. Ladde, Fundamental properties of a two-scale network stochastic human epidemic dynamic model, Neural, Parallel, and Scientific Computations 19 (2011) 229-270
X. Mao, Stochastic Differential Equations and Applications, 2nd ed., Horwood Publishing Ltd., 2008
G. S. Ladde, Cellular systems-II. Stability of compartmental systems, Math. Biosci. 30 (1976), 1-21
J. M. Crutcher, S. L. Hoffman, Malaria, Chapter 83, Medical Microbiology, 4th edition, Galveston (TX): University of Texas Medical Branch at Galveston; 1996.
http://www.who.int/denguecontrol/human/en/
https://www.cdc.gov/malaria/about/disease.html
L. Hviid, Naturally acquired immunity to Plasmodium falciparum malaria, Acta Tropica 95(3), October 2005, 270-5
D. L. Doolan, C. Dobano, J. K. Baird, Acquired immunity to malaria, Clinical Microbiology Reviews, Vol. 22, No. 1, Jan. 2009, p. 13–36
Y. Xue and X. Duan, Dynamic analysis of an SIR epidemic model with nonlinear incidence rate and double delays, International Journal of Information and Systems Sciences, Volume 7, Number 1, (2011) Pages 92–102
Y. Muroya, Y. Enatsu, Y. Nakata, Global stability of a delayed SIRS epidemic model with a non-monotonic incidence rate, Journal of Mathematical Analysis and Applications Volume 377, Issue 1, 1 May 2011, Pages 1–14
W. M. Liu, H. W. Hethcote, S. A. Levin, Dynamical behavior of epidemiological models with nonlinear incidence rates, J. Math. Biol. 25 (4) (1987) 359–380
A. Korobeinikov, P. K. Maini, A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence, Math. Biosci. Eng. 1 (1) (2004) 57–60.
L. Chen, J. Chen, Nonlinear Biological Dynamical Systems, Beijing, Science Press, 1993.
Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, Academic Press, Boston, 1993
D. Wanduku, G. S. Ladde, Global stability of a two-scale network SIR delayed epidemic dynamic model, Proceedings of Dynamic Systems and Applications 6 (2012) 437–441
D. Wanduku, Two-scale network epidemic dynamic model for vector borne diseases, Proceedings of Dynamic Systems and Applications 6 (2016) 228–232
[^1]: Corresponding author. Tel: +14073009605.
[^2]: A seed is set on the random number generator to reproduce the same sequence of random numbers for the Brownian motion in order to generate reliable graphs for the trajectories of the system under different intensity values for the white noise processes, so that comparison can be made to identify differences that reflect the effect of intensity values.
---
author:
- 'Christian Vollmann[^1]'
- Volker Schulz
bibliography:
- 'literatur.bib'
title: '**Exploiting multilevel Toeplitz structures in high dimensional nonlocal diffusion**'
---
**Abstract.** We present a finite element implementation for the steady-state nonlocal Dirichlet problem with homogeneous volume constraints. Here, the nonlocal diffusion operator is defined as an integral operator characterized by a certain kernel function. We assume that the domain is an arbitrary $d$-dimensional hyperrectangle and the kernel is translation invariant. Under these assumptions, we carefully analyze the structure of the stiffness matrix resulting from a continuous Galerkin method with multilinear elements and exploit this structure in order to cope with the curse of dimensionality associated with nonlocal problems. For the purpose of illustration we choose a particular kernel, which is related to space-fractional diffusion, and present numerical results in 1d, 2d and, for the first time, also in 3d.\
\
**Keywords.** Nonlocal diffusion, finite element method, translation invariant kernel, multilevel Toeplitz, fractional diffusion.
Introduction
============
The field of nonlocal operators attracts increasing attention from the mathematical community. This is due to the steadily growing pool of applications where nonlocal models are in use, including, e.g., image processing [@imageproc1; @imageproc2], machine learning [@machinelearning], peridynamics [@peridym1; @peridym2], fractional diffusion [@fractlapl] or nonlocal Dirichlet forms [@nonlocaldirichletform] and jump processes [@jumpprocess]. In contrast to local diffusion problems, interactions can occur at a distance in the nonlocal case. This relies on the definition of the nonlocal diffusion operator $-{\mathcal{L}}$, which acts on a function ${u\colon {\mathbb{R}}^d \to {\mathbb{R}}}$ by $$-{\mathcal{L}}u(x) {\coloneqq}2\int_{{\mathbb{R}}^d} (u(x)-u(y)){\gamma}(x,y)dy,$$ where ${\gamma}\colon {\mathbb{R}}^d \times {\mathbb{R}}^d \to {\mathbb{R}}$ is a nonnegative and symmetric function characterizing the precise nonlocal diffusion. In this paper we are interested in the *steady-state nonlocal Dirichlet problem with volume constraints* given by $$\begin{aligned}
-{\mathcal{L}}u(x) = f(x) ~~~&(x\in \Omega) ,\nonumber\\
u(x) = g(x) ~~~&(x\in \Omega_I), \label{prob}\end{aligned}$$ where $\Omega \subset {\mathbb{R}}^d$ is a bounded domain. Here, the constraints are defined on a volume $\Omega_I$, the so called interaction domain, which is disjoint from $\Omega$.
The recently developed vector calculus by Gunzburger et al. [@nonlocalveccal] provides a theoretical foundation for the description of these nonlocal diffusion phenomena. In particular, this framework allows us to consider finite-dimensional approximations using Galerkin methods similar to the analysis of (local) partial differential equations. However, in contrast to local finite element problems, here we are faced with two basic difficulties. On the one hand, the assembling procedure may require sophisticated numerical integration tools in order to cope with possible singularities of the kernel function. On the other hand, discretizing nonlocal problems leads to densely populated systems. The latter tremendously affects the solving procedure, especially in higher dimensions. Thus, numerical implementations are challenging, and in order to lift the concept of nonlocal diffusion from a theoretical standpoint to an applicable approach in practice, the development of efficient algorithms, which go beyond preliminary cases in 1d and 2d, is essential.
In recent works, several approaches for discretizing problem (\[prob\]) have been presented. We want to mention for instance a 1d finite element code by D’Elia and Gunzburger [@fractlapl], where the fractional kernel ${\gamma}(x,y) =\frac{c_{d,s}}{2||y-x||^{d+2s}}$ has been used, but which can easily be extended to general (singular) kernels. Also for the fractional kernel, Acosta, Bersetche and Borthagaray [@acosta] developed a finite element implementation for the 2d case. In general, a lot of work has been done for the discretization of fractional diffusion problems and fractional derivatives in various definitions (mainly via finite difference schemes) not only for 1d and 2d [@wangToeplitz; @wang2d], but also for the 3d case [@wang3d]. However, to the best of the authors’ knowledge, for general kernel functions 3d finite element implementations for problem (\[prob\]) are not yet available.
In this paper we study a finite element approximation for problem (\[prob\]) on an arbitrary $d$-dimensional hyperrectangle (parallel to the axis) for translation invariant kernel functions. More precisely, we analyze from a computational point of view a continuous Galerkin discretization with multilinear elements of the following setting:
- We set $\Omega {\coloneqq}\prod_{i=0}^{d-1}[a_i,b_i]$, where $[a_i, b_i]$ are compact intervals on ${\mathbb{R}}$.
- We assume that the kernel ${\gamma}$ is *translation invariant*, such that $${\gamma}(b+Qx, b+Qy) = {\gamma}(x,y)$$ for all $b\in{\mathbb{R}}^d$ and $Q \in {\mathbb{R}}^{d \times d}$ orthonormal.
As a consequence, these structural assumptions on the underlying problem are reflected in the stiffness matrix; we obtain a $d$-level Toeplitz matrix, which has two crucial advantages. On the one hand, we only need to assemble (and store) the first row (or column) of the stiffness matrix. On the other hand, we can benefit from an efficient implementation of the matrix-vector product for solving the linear system. This result is presented in Theorem \[multilevel\_toeplitz\] and is crucial for this work, since it finally enables us to solve the discretized system in an affordable way. For illustrative purposes we choose the fractional kernel and exploit a third assumption on the interaction horizon for simplifying the implementation:
- We assume that interactions occur only up to a certain distance $R$, which we assume to be larger than or equal to the diameter of the domain $\Omega$, such that $\Omega \subset B_R(x)$ for all $x \in \Omega$.
The paper is organized as follows. In Section \[sec:theory\] we cite the basic results about existence and uniqueness of weak solutions and finite-dimensional approximations. In Section \[sec:femsetting\] we give details about the precise finite element setting and prove our main result, that the stiffness matrix is multilevel Toeplitz. In Sections \[sec:assembling\] and \[sec:solving\] we explain in detail the implementation of the assembling and solving procedure, respectively. In Section \[sec:numres\] we round off these considerations by presenting numerical results with application to space-fractional diffusion.
Nonlocal diffusion problems {#sec:theory}
===========================
We review the relevant aspects of nonlocal diffusion problems as they are introduced in [@wellposedness], which constitute the theoretical foundation of this work.
Let $\Omega \subset {\mathbb{R}}^d$ be a bounded domain with piecewise smooth boundary. Further let ${\gamma}\colon {\mathbb{R}}^d \times {\mathbb{R}}^d \to {\mathbb{R}}$ be a nonnegative and symmetric function (i.e., ${\gamma}(x,y) = {\gamma}(y,x) \geq 0$), which we refer to as *kernel*. Then we define the action of the *nonlocal diffusion operator* $-{\mathcal{L}}{\coloneqq}-{\mathcal{L}}_{\gamma}$ on a function $u \colon {\mathbb{R}}^d \to {\mathbb{R}}$ by $$-{\mathcal{L}}u(x) {\coloneqq}2\int_{{\mathbb{R}}^d} (u(x)-u(y)){\gamma}(x,y)dy$$ for $x \in \Omega$. In addition to that, we assume that there exists a constant ${\gamma}_0 > 0$ and a finite *interaction horizon* or *radius* $R > 0$ such that for all $x \in \Omega$ we have $$\begin{aligned}
{\gamma}(x,y) \geq 0 ~~~&\forall~ y \in B_R(x), \nonumber\\
{\gamma}(x,y) \geq {\gamma}_0 > 0 ~~~&\forall ~y \in B_{R/2}(x),\nonumber\\
{\gamma}(x,y) = 0 ~~~&\forall ~y \in (B_{R}(x))^c, \label{gen_assumption}\end{aligned}$$ where $B_R(x) {\coloneqq}\left\{ y \in {\mathbb{R}}^d \colon ||x-y||_2< R\right\}$. Then we define the *interaction domain* by $$\Omega_I {\coloneqq}\left\{ y \in \Omega^c\colon \exists x\in \Omega\colon{\gamma}(x,y) \neq 0 \right\}$$ and finally introduce the *steady-state nonlocal Dirichlet problem with volume constraints* as $$\begin{aligned}
-{\mathcal{L}}u(x) = f(x) ~~~&(x\in \Omega) ,\nonumber\\
u(x) = g(x) ~~~&(x\in \Omega_I), $$ where $f\colon \Omega \to {\mathbb{R}}$ is called the *source* and $g: \Omega_I \to {\mathbb{R}}$ specifies the Dirichlet volume constraints. In the remainder of this paper we assume $g\equiv 0$.
Weak formulation
----------------
For the purpose of constructing a finite element framework for nonlocal diffusion problems, we introduce the concept of weak solutions as it is presented in [@wellposedness].
We define the bilinear form $$\begin{aligned}
a(u,v) {\coloneqq}& \int_{{\Omega \cup \Omega_I}} v(-{\mathcal{L}}u ) dx,\end{aligned}$$ and the associated linear functional $$\begin{aligned}
\ell(v) {\coloneqq}\int_{\Omega} fvdx.\end{aligned}$$ By establishing a nonlocal vector calculus it is shown in [@wellposedness] that the following equality holds: $$\begin{aligned}
a(u,v)= &\int_{{\Omega \cup \Omega_I}} \int_{{\Omega \cup \Omega_I}\cap B_R(x)} (u(y)-u(x))(v(y)-v(x)){\gamma}(x,y) dydx.\end{aligned}$$ This implies that $a$ is symmetric and nonnegative, or equivalently, the linear nonlocal diffusion operator $-{\mathcal{L}}$ is self-adjoint with respect to the $L^2$-product and nonnegative. Furthermore, we define the *nonlocal energy space* $$V({\Omega \cup \Omega_I}) {\coloneqq}\left\{ u \in L^2({\Omega \cup \Omega_I}) \colon |||u||| <\infty \right\},$$ where $ |||u|||{\coloneqq}\sqrt{\tfrac{1}{2}a(u,u)}$ and the *nonlocal constrained energy space* $$V_c({\Omega \cup \Omega_I}) {\coloneqq}\left\{ u \in L^2({\Omega \cup \Omega_I}) \colon |||u||| <\infty ~\text{and}~ u_{|\Omega_I} \equiv 0 ~\text{a.e.}\right\}.$$ We note that $||| \cdot |||$ constitutes a semi-norm on $V({\Omega \cup \Omega_I})$ and due to the volume constraints a norm on $V_c({\Omega \cup \Omega_I})$. With these preparations at hand, a weak formulation of (\[prob\]) can be formulated as $$\begin{aligned}
\text{\textit{Find $u \in V_c({\Omega \cup \Omega_I}) $ such that }} a(u,\cdot) \equiv \ell(\cdot) ~ \text{\textit{on}}~ V_c({\Omega \cup \Omega_I}) \label{weakform}.\end{aligned}$$
In order to make statements about the existence and uniqueness of weak solutions we have to further specify the kernel. In [@wellposedness] the authors consider, among others, a certain class of kernel functions, on which we will focus in the remainder of this section and in our numerical experiments. More precisely, we require that there exists a fraction $s\in (0,1)$ and constants ${\gamma}_1,{\gamma}_2 > 0$, such that for all $x \in \Omega$ it holds that $$\begin{aligned}
{\gamma}(x,y)||y-x||^{d+2s} \in [{\gamma}_1,{\gamma}_2] ~~~&\forall ~y \in B_R(x) \label{case_1}.\end{aligned}$$ Then it is shown in [@wellposedness] that the nonlocal energy space $V({\Omega \cup \Omega_I})$ is equivalent to the fractional-order Sobolev space $$\begin{aligned}
H^s({\Omega \cup \Omega_I}) {\coloneqq}\left\{ u\colon ||u||_{H^s({\Omega \cup \Omega_I})} {\coloneqq}||u||_{L^2({\Omega \cup \Omega_I})} + |u|_{H^s({\Omega \cup \Omega_I})} < \infty \right\},\end{aligned}$$ where $$|u|_{H^s({\Omega \cup \Omega_I})}^2 {\coloneqq}\int_{{\Omega \cup \Omega_I}} \int_{{\Omega \cup \Omega_I}} \frac{(u(x)-u(y))^2}{||x-y||^{d+2s}}dydx.$$ Hence, there exist two positive constants $C_1$ and $C_2$ such that $$\begin{aligned}
C_1 ||u||_{H^s({\Omega \cup \Omega_I})} \leq |||u||| \leq C_2 ||u||_{H^s({\Omega \cup \Omega_I})} ~~~\forall~ u \in V_c({\Omega \cup \Omega_I}).\end{aligned}$$
This equivalence implies that $(V_c({\Omega \cup \Omega_I}),|||\cdot|||)$ is a Banach space for kernel functions of this class. Applying the Lax-Milgram theorem finally yields the well-posedness of problem (\[weakform\]).
Let $\Omega \subset {\mathbb{R}}^d$ be a bounded domain with piecewise smooth boundary and let the kernel function ${\gamma}$ satisfy the general requirement (\[gen\_assumption\]) and the specific assumption in (\[case\_1\]). Then for any linear functional $\ell$ there exists a unique $u \in V_c({\Omega \cup \Omega_I})$ such that $a(u,v) = \ell(v) $ for all $v$ in $V_c({\Omega \cup \Omega_I})$.
See [@wellposedness].
Finite-dimensional approximation
--------------------------------
With the concept of weak solutions we can proceed as in the local case to develop finite element approximations of (\[weakform\]).
Therefore, let $\left\{ V_c^N\right\}_N$ be a sequence of finite-dimensional subspaces of $V_c({\Omega \cup \Omega_I})$, where $N = \dim (V_c^N)$, and let $u_N$ denote the solution of $$\begin{aligned}
\emph{\text{Find $u_N \in V_c^N$ such that } $a(u_N, v_N) = \ell(v_N)$ \text{ for all $v_N$ in $V_c^N$} }. \label{finiteelementproblem}\end{aligned}$$ Then from [@wellposedness] we recall the following result.
\[fem\_theorem\] Let $m$ be a nonnegative integer and $s \in (0,1)$. Further, let $ \Omega$ and ${\Omega \cup \Omega_I}$ be polyhedral domains and let $V_c^N$ consist of piecewise polynomials of degree no more than $m$. Assume that the triangulation is shape-regular and quasi-uniform as $h \to 0$ and suppose that ${u^* \in V_c({\Omega \cup \Omega_I}) \cap H^{m+t}({\Omega \cup \Omega_I})}$, where $t \in [s,1]$. Then there exists a constant $C>0$ such that, for sufficiently small $h$, $$\begin{aligned}
||u^*-u_N|| _{H^s({\Omega \cup \Omega_I})} \leq C h^{m+t-s}||u^*||_{H^{m+t}({\Omega \cup \Omega_I})}. \label{fem_estimate_fract}\end{aligned}$$
See [@wellposedness Theorem 6.2].
The derivation of the discretized problem then relies on the construction of the stiffness matrix. Therefore let $\left\{ \varphi_0, \ldots, \varphi_{N-1} \right\}$ be a basis of $V_c^N$, such that the finite element solution $u^N \in V_c^N$ can be expressed as a linear combination $u^N = \sum_{k=0}^{N-1} u_k^N {{\varphi_k}}$. If the basis functions are chosen such that ${{\varphi_k}}(x_k) = 1$ at appropriate grid points $x_k$, then the coefficients satisfy $u_k^N = u^N(x_k)$. Testing against all basis functions, the finite element problem (\[finiteelementproblem\]) reads $$\begin{aligned}
\textit{Find $u^N\in {\mathbb{R}}^{N}$ such that }\sum_{k=0}^{N-1} u_k^N a({{\varphi_k}}, {{\varphi_j}}) = \ell({{\varphi_j}}) =: b_j \textit{ for } 0 \leq j < N.\end{aligned}$$ The stiffness matrix $A^N=(a_{kj})_{kj} \in {\mathbb{R}}^{N \times N}$ is given by $$\begin{aligned}
a_{kj} {\coloneqq}\int_{{\Omega \cup \Omega_I}} \int_{{\Omega \cup \Omega_I}} ({{\varphi_k}}(y)-{{\varphi_k}}(x))({{\varphi_j}}(y)-{{\varphi_j}}(x)){\gamma}(x,y)dydx\end{aligned}$$ and we finally want to solve the discretized *Galerkin system* $$\begin{aligned}
A^Nu^N = b^N, \label{discretizedproblem}\end{aligned}$$ where $u^N, b^N \in {\mathbb{R}}^{N}$. The properties of the bilinear form $a$ imply that $A^N$ is symmetric and positive definite, such that there exists a unique solution $u^N$ of the finite-dimensional problem (\[discretizedproblem\]).
Finite element setting {#sec:femsetting}
======================
In this section we study a continuous Galerkin discretization of the homogeneous nonlocal Dirichlet problem, given by $$\begin{aligned}
2\int_{{\Omega \cup \Omega_I}} (u(x)-u(y)){\gamma}(x,y)dy = f(x) ~~~&(x\in \Omega) ,\nonumber\\
u(x) = 0 ~~~~~~~&(x\in \Omega_I),\end{aligned}$$ under the following assumptions:
- (A1) We set $\Omega {\coloneqq}\prod_{i=0}^{d-1}[a_i,b_i]$, where $[a_i, b_i]$ are compact intervals in ${\mathbb{R}}$.
- (A2) We assume that the kernel ${\gamma}$ is *translation invariant* (more precisely, invariant under rigid motions), such that $${\gamma}(b+Qx, b+Qy) = {\gamma}(x,y)$$ for all $b\in{\mathbb{R}}^d$ and orthonormal $Q \in {\mathbb{R}}^{d \times d}$.
Assumption (A1) allows for a simple triangulation of $\Omega$, which we use to define a finite-dimensional energy space $V^{N}_c$. Together with (A2) we can show that this discretization yields the multilevel Toeplitz structure of the stiffness matrix, where the order of the matrix is determined by the number of grid points in each respective space dimension.
Definition of the finite-dimensional energy space
-------------------------------------------------
We decompose the domain $\Omega = \prod_{i=0}^{d-1}[a_i,b_i]$ into $d$-dimensional hypercubes with sides of length $ h>0 $ in each respective dimension. Note that we can omit a discretization of $\Omega_I$ since we assume homogeneous Dirichlet volume constraints. Let $\operatorname{\mathbf{N}}= (N_i)_{0 \leq i < d} {\coloneqq}(\frac{b_i-a_i}{ h })_{0 \leq i < d}$ and ${\operatorname{\mathbf{L}}{\coloneqq}(N_i-1)_{0 \leq i < d}}$, then for the interior of $\Omega$ this procedure results in $\operatorname{\mathbf{L}}^d {\coloneqq}\prod_{i=0}^{d-1} L_i$ degrees of freedom. Due to the simple structure of the domain we can choose a canonical numeration for the resulting grid $\prod_{i=0}^{d-1} \left( a_i + h \left\{ 0, \ldots, L_i -1\right\} \right) $ of inner points. More precisely, we will employ the map $$\begin{aligned}
E^{\operatorname{\mathbf{n}}}(z) {\coloneqq}\sum_{i=0}^{d-1} z_i p_i(\operatorname{\mathbf{n}}),\end{aligned}$$ where $p_i(\operatorname{\mathbf{n}}) {\coloneqq}\prod_{j>i}n_j$, for establishing an order on a structured grid $\prod_{i=0}^{d-1} \left\{ 0, \ldots, n_i-1 \right\}$, where $\operatorname{\mathbf{n}}=(n_0,\ldots,n_{d-1}) \in \mathbb{N}^d$. Its inverse is given by $$\begin{aligned}
(E^{-\operatorname{\mathbf{n}}}(k))_{0 \leq i < d} = \left(\lfloor \tfrac{k}{p_i(\operatorname{\mathbf{n}})}\rfloor - \lfloor \tfrac{k}{p_{i-1}(\operatorname{\mathbf{n}})}\rfloor n_i \right)_{0 \leq i < d}.\end{aligned}$$ Let $e {\coloneqq}(1,\ldots, 1)\in {\mathbb{R}}^{d}$ and $a {\coloneqq}(a_0,\ldots,a_{d-1})$, then we define the ordered array of inner grid points $
(x_k)_{0 \leq k < \operatorname{\mathbf{L}}^d} \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times d}
$ by $$x_k {\coloneqq}a + h ( E^{-\operatorname{\mathbf{L}}}(k)+e)$$ for $0 \leq k < \operatorname{\mathbf{L}}^d$. We further define finite elements $S_k {\coloneqq}b_k + h \square$, where $ (b_k^i)_{0 \leq i < d} {\coloneqq}a + h E^{-\operatorname{\mathbf{N}}}(k) $ and $\square {\coloneqq}[0,1]^d$, such that $\Omega = \bigcup_{k=0}^{\operatorname{\mathbf{N}}^d - 1} S_k.$ Next we aim to define appropriate element basis functions on the reference element $\square$. Therefore we denote by $$(v_k)_{0 \leq k < 2^d} \in {\mathbb{R}}^{2^d \times d}$$ the vertices of the unit cube $\square$ ordered according to $
v_k {\coloneqq}E^{-(2,\ldots, 2)}(k).
$ Then for each vertex $v_k$, $0 \leq k < 2^d$, we define an *element basis function* $\psi_k \colon \square \to [0,1]$ by $$\begin{aligned}
\psi_k(x) &= \left(\prod_{i=0, v_k^i = 0}^{d-1} (1-x_i) \right)\left(\prod_{i=0, v_k^i = 1}^{d-1} x_i \right) .\end{aligned}$$ For dimensions $d \in \left\{ 1,2,3 \right\}$ respectively, these are the usual linear, bilinear and trilinear element basis functions (see e.g. [@fem_book Chapter 1]). They are defined in a way such that $0 \leq \psi_k \leq 1$ and $\psi_k(v_k) = 1$. Moreover, we define the *reference basis function* $\varphi \colon {\mathbb{R}}^d \to [0,1]$ by $$\begin{aligned}
\varphi(x) &{\coloneqq}\begin{cases}
\psi_i(v_i + x)&: x \in (\square - v_i)\\
0&:else.
\end{cases} \label{reference_basisfunction}\end{aligned}$$ We note that $J{\coloneqq}[-1,1]^d = \dot\bigcup_{i=0}^{2^d-1} (\square -v_i ) $ (disjoint union), such that $\varphi$ is well defined and $\operatorname{supp}(\varphi) = J$. Now let the physical support be defined as $$I_k {\coloneqq}\bigcup \left\{ S_i \colon x_k \in S_i, 0 \leq i < \operatorname{\mathbf{N}}^d \right\} ,$$ which is a patch of the elements touching the node $x_k$. We associate to each node $x_k$ the transformation $T_k \colon J \to I_k$, $T_k(v) {\coloneqq}x_k + h v $. We note that $\det dT_k(x) \equiv h^d $. Then for each node $x_k$ we define a basis function $\varphi_k \colon {\Omega \cup \Omega_I}\to [0,1]$ by $$\begin{aligned}
\varphi_k(x) &{\coloneqq}\begin{cases}
\varphi(T_k^{-1}(x))&: x \in I_k\\
0&:else
\end{cases}\\
&=\begin{cases}
\psi_i(v_i + T_k^{-1}(x))&: T_k^{-1}(x)\in (\square - v_i)\\
0&:else,
\end{cases}\end{aligned}$$ which satisfies $0\leq \varphi_k \leq 1$ and $\varphi_k(x_k) = 1$. Figure \[fig:basisfunction\] illustrates the latter considerations for $d=2$, $a = (0,0)$ and $b=(1,1)$.
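To make the indexing and the reference basis function concrete, the following minimal Python sketch implements $E^{\operatorname{\mathbf{n}}}$, its inverse and $\varphi$ as defined above; the helper names are our own and are not taken from the actual implementation.

```python
import numpy as np

def p(i, n):
    """p_i(n) = prod_{j > i} n_j (empty product = 1)."""
    return int(np.prod(n[i + 1:], initial=1))

def E(z, n):
    """Index bijection E^n: multi-index z -> linear index."""
    return sum(int(z[i]) * p(i, n) for i in range(len(n)))

def E_inv(k, n):
    """Inverse map E^{-n}: linear index k -> multi-index."""
    return np.array([k // p(i, n) - (k // p(i - 1, n)) * n[i] for i in range(len(n))])

def phi(x):
    """Reference basis function phi(x) = prod_i (1 - |x_i|) on J = [-1,1]^d, zero outside."""
    return float(np.prod(np.maximum(1.0 - np.abs(np.asarray(x, dtype=float)), 0.0)))

# example: inner grid points x_k = a + h*(E^{-L}(k) + e) for Omega = [0,1]^2 and h = 1/4
L, a, h = [3, 3], np.array([0.0, 0.0]), 0.25
x = [a + h * (E_inv(k, L) + 1.0) for k in range(int(np.prod(L)))]
```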
Finally, we can define a constrained finite element space by $$\begin{aligned}
V_c^{ h } {\coloneqq}\operatorname{span}\left\{ \varphi_k \colon 0 \leq k < \operatorname{\mathbf{L}}^d \right\} \label{fem_space},\end{aligned}$$ such that each linear combination of these basis functions fulfills the homogeneous Dirichlet volume constraints. Notice that we parametrize these spaces by the grid size $ h $, indicating the dimension $\operatorname{\mathbf{L}}^d$, which is by definition a function of $ h $. Finally, we close this subsection with the following observations, which we exploit below.
Let $x \in {\mathbb{R}}^d$, then:
- $\varphi(x) = \varphi(|x|)$, where $|x| {\coloneqq}( |x_i|)_i$.
- Let $R_i \colon {\mathbb{R}}^d \to {\mathbb{R}}^d$, for $0 \leq i < d$, denote the reflection $
R_i(x) = (x_0, \ldots, -x_i, \ldots, x_{d-1}),
$ then $$\begin{aligned}
\varphi(x) = \varphi (|x| ) \Leftrightarrow \varphi(x) = \varphi (R_i(x)) ~~\forall~ 0 \leq i < d. \label{reflection_invariance}
\end{aligned}$$
- $\varphi(x) = \varphi((x_{\sigma(i)})_i)$ for all permutations $\sigma \colon \left\{ 0,\ldots, d-1 \right\} \to \left\{ 0,\ldots, d-1 \right\}$.
We first show i). Since $\varphi(x) = 0 = \varphi(|x|)$ for $x$ in $int({J})^c$, let $x \in int({J})$. Thus, there exists an index $0 \leq k < 2^d $ such that $x \in int(\square) - v_k,$ which implies that $x_i < 0$ if and only if $v_k^i = 1. $ Hence, we can conclude that $$\begin{aligned}
\varphi(x) &= \psi_k(x + v_k) \\
&= \left(\prod_{i=0, v_k^i = 0}^{d-1} (1-x_i) \right)\left(\prod_{i=0, v_k^i = 1}^{d-1} (1+x_i) \right) \\
&= \left(\prod_{i=0, v_k^i = 0}^{d-1} (1-|x_i|) \right)\left(\prod_{i=0, v_k^i = 1}^{d-1} (1-|x_i|) \right) \\
&= \prod_{i=0}^{d-1} (1-|x_i|) \\
&=\psi_0 (|x| + v_0) \\
&= \varphi(|x|).
\end{aligned}$$ Then, on the one hand, we have that $|x| = |R_i(x)|$ and therefore $\varphi(x) = \varphi (R_i(x))$ for all $0 \leq i < d$. On the other hand, we note that the operation $|\cdot|$ is a composition of reflections $R_i$, more precisely $$|x| = \left(\prod_{x_i < 0}R_i\right)(x) .$$ Thus, we obtain the equivalence stated in ii). Statement iii) follows from the representation $
\varphi(x) = \varphi(|x|)
=\prod_{i=0}^{d-1} (1-|x_i|)
$ due to the commutativity of the product.
Multilevel Toeplitz structure of the stiffness matrix
-----------------------------------------------------
Now we aim to show that the stiffness matrix $A$ has the structure of a $d$-level Toeplitz matrix. This is decisive for this work, since it finally enables us to solve the discretized system (\[discretizedproblem\]) in an affordable way.
From now on the assumption (A2) on the kernel function becomes relevant. At this point we note that the mapping $(x,y) \mapsto \mathcal{X}_{B_R(x)}(y) $ is translation invariant, since transformations of the above form $x \mapsto b + Qx$ are isometric in Euclidean space. This also holds if we define the ball $B_R$ with respect to a norm other than the $||\cdot||_2$-norm. Hence, we can regard the kernel as a function $${\gamma}(x,y) = \mathcal{X}_{B_R(x)}(y)\sigma(x,y),~~x,y \in {\mathbb{R}}^d ,$$ for some translation invariant function $\sigma$. In order to analyze the multilevel structure of ${A \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times \operatorname{\mathbf{L}}^d}}$ it is convenient to introduce an appropriate multi-index notation. To this end, we choose $E^{\operatorname{\mathbf{L}}}$ from above as index bijection and we identify $a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} = a_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{i}}),E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{j}})}. $ We call the matrix $A$ *$d$-level Toeplitz* if $$a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} = a(\operatorname{\mathbf{i}}-\operatorname{\mathbf{j}}).$$ If even $a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} = a(|\operatorname{\mathbf{i}}-\operatorname{\mathbf{j}}|)$, then each level is symmetric and we can reconstruct the whole matrix from the first row (or column). For a more general and detailed consideration of multilevel Toeplitz matrices see for example [@multileveltoeplitz]. However, with this notation at hand we can now formulate
\[multilevel\_toeplitz\] Let $\Omega = \prod_{i=0}^{d-1}[a_i,b_i]$ and assume that the kernel ${\gamma}$ is translation invariant and satisfies the general assumptions (\[gen\_assumption\]) as well as the specific assumption (\[case\_1\]). Further let the finite element space $V_c^{ h }$ be defined as in (\[fem\_space\]) for a grid size $ h >0$. Then the stiffness matrix $A$ associated to problem (\[finiteelementproblem\]) is $d$-level Toeplitz, where each level is symmetric.
The key point in the proof is the relation $$\begin{aligned}
a_{kj} = a(|{ h }^{-1}(x_k-x_j)|),\end{aligned}$$ which we show in two steps. First we show that $a_{kj} = a({ h }^{-1}(x_k-x_j)) $ and then we prove $a(z) = a(|z|).$ Therefore let us recall that the entry $a_{kj}$ of the stiffness matrix $A$ is given by $$a_{kj} = \int_{{\Omega \cup \Omega_I}} \int_{{\Omega \cup \Omega_I}} ({{\varphi_k}}(y)-{{\varphi_k}}(x))({{\varphi_j}}(y)-{{\varphi_j}}(x)){\gamma}(x,y)dydx.$$ Having a closer look at the support of the integrand, we find that $$\begin{aligned}
({{\varphi_k}}(y)-{{\varphi_k}}(x))({{\varphi_j}}(y)-{{\varphi_j}}(x)) = 0 \Leftrightarrow (x,y) \in (I_k^c \times I_k^c) \cup (I_j^c \times I_j^c) \cup \left\{ (x,x)\colon x \in {\Omega \cup \Omega_I}\right\}.\end{aligned}$$ Since $\left\{ (x,x)\colon x \in {\Omega \cup \Omega_I}\right\}$ has null $\lambda_{2d}$-Lebesgue measure we can neglect it in the integral and obtain $$\begin{aligned}
a_{kj} = \int_{\left( I_k^c \times I_k^c\right)^c \cap \left(I_j^c \times I_j^c \right)^c} ({{\varphi_k}}(y)-{{\varphi_k}}(x))({{\varphi_j}}(y)-{{\varphi_j}}(x)){\gamma}(x,y)dydx.\end{aligned}$$\
Aiming to show $a_{kj} = a({ h }^{-1}(x_k-x_j))$ we need to carry out some basic transformations of this integral. Since by definition $\varphi_j = \varphi \circ T_j^{-1}$ and also $\det T_j(x) \equiv h^d$ we find $$\begin{aligned}
a_{kj} &= \int_{\left( I_k^c \times I_k^c\right)^c \cap \left(I_j^c \times I_j^c \right)^c}
({{\varphi_k}}(y)-{{\varphi_k}}(x))({{\varphi_j}}(y)-{{\varphi_j}}(x)){\gamma}(x,y)dydx\\
&= h ^{2d}\int_{T_j^{-1}({\mathbb{R}}^d) \times T_j^{-1}({\mathbb{R}}^d) }
(1-\mathcal{X}_{I_j^c \times I_j^c }(T_j(v),T_j(w)))
(1-\mathcal{X}_{I_k^c \times I_k^c }(T_j(v),T_j(w)))\\
&~~~~~~~~~~~~ ((\varphi \circ T_k^{-1})(T_j(w))-(\varphi \circ T_k^{-1})(T_j(v)))(\varphi(w)-\varphi(v)){\gamma}(T_j(v),T_j(w))dwdv.\end{aligned}$$ Now we make a collection of observations. Due to assumption (A2) we have $${\gamma}(T_j(v),T_j(w)) = {\gamma}(x_j + h v,x_j + h w) = {\gamma}( h v, h w).$$ Furthermore, by definition of the transformations $T_j, T_k$ we find that $T_j^{-1}({\mathbb{R}}^d)={\mathbb{R}}^d$ as well as $(T_k^{-1}\circ T_j)(v) = h ^{-1}(x_j + h v - x_k) = h ^{-1}(x_j - x_k)+ v $. Since these transformations are bijective we also have that $\mathcal{X}_{M^c\times M^c }(T_j(x),T_j(y)) = \mathcal{X}_{(T_j^{-1}(M))^c\times (T_j^{-1}(M))^c }(x,y)$ for a set $M\subset {\mathbb{R}}^d$. Hence, defining $x_{jk} {\coloneqq}h ^{-1}(x_j - x_k) = - x_{kj} $ and recognizing $T_j^{-1}(I_k) = x_{kj} + J$ we finally obtain $$\begin{aligned}
a_{kj}
&= { h }^{2d}\int_{\left( J^c \times J^c\right)^c \cap \left((x_{kj}+J )^c \times (x_{kj}+J )^c\right)^c}
\\
&~~~~~~~~~~~~ (\varphi(w-x_{kj})-\varphi(v-x_{kj}))(\varphi(w)-\varphi(v)){\gamma}( h w, h v)dwdv\\
&=a(x_{kj}).\end{aligned}$$
Next, we prove that this functional relation fulfills $a(z) = a(|z|)$. Let us for this purpose define $F(x,y;z) {\coloneqq}(\varphi(y-z)-\varphi(x-z))(\varphi(y)-\varphi(x)){\gamma}( h y, h x)$ such that $$a(z) = { h }^{2d} \int_{\left( J^c \times J^c\right)^c \cap \left((z+J )^c \times (z+J )^c \right)^c}
F(x,y;z)dydx.$$ Let $z \in \left\{ x_{kj}: 0 \leq k,j<\operatorname{\mathbf{L}}^d \right\}$. Then there exists an orthonormal matrix $R= R(z) \in {\mathbb{R}}^{d \times d}$, which is a composition of reflections $R_i$ from (\[reflection\_invariance\]), such that $Rz = |z|$. Then from (\[reflection\_invariance\]) and the assumption (A2) on the kernel, we obtain for $x,y \in {\mathbb{R}}^d$ that $$\begin{aligned}
F(Rx,Ry;|z|) & = (\varphi(Ry-Rz)-\varphi(Rx-Rz))(\varphi(Ry)-\varphi(Rx)){\gamma}( h Ry, h Rx) \\
&= (\varphi(y-z)-\varphi(x-z))(\varphi(y)-\varphi(x)){\gamma}( h y, h x) \\
&=F(x,y;z).\end{aligned}$$ Since $R(J ) = J $ and therefore $$\begin{aligned}
&R\left(\left( J^c \times J^c\right)^c \cap \left((z+J )^c \times (z+J )^c\right)^c\right) \\
=&\left( J^c \times J^c\right)^c \cap \left((|z|+J )^c \times (|z|+J )^c\right)^c,\end{aligned}$$ we eventually obtain $$\begin{aligned}
a(|z|)&={ h }^{2d} \int_{\left( J^c \times J^c\right)^c \cap \left((|z|+J )^c \times (|z|+J )^c\right)^c}
F(x,y;|z|)dydx \\
&={ h }^{2d} \int_{\left( J^c \times J^c\right)^c \cap \left((z+J )^c \times (z+J )^c\right)^c}
F(Rx,Ry;|z|)dydx \\
&={ h }^{2d} \int_{\left( J^c \times J^c\right)^c \cap \left((z+J )^c \times (z+J )^c\right)^c}
F(x,y;z)dydx \\
&=a(z).\end{aligned}$$ Finally, we can show that $A$ carries the structure of a $d$-level Toeplitz matrix. By having a closer look at the definitions of $E^{\operatorname{\mathbf{L}}}$ and the grid points $x_k$ we can conclude that $$\begin{aligned}
a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} &= a_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{i}})E^L(\operatorname{\mathbf{j}})}
= a(|{ h }^{-1}(x_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{i}})}-x_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{j}})})|)
= a(|\operatorname{\mathbf{i}}- \operatorname{\mathbf{j}}|).\end{aligned}$$ Thus, the entry $a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}}$ only depends on the difference $\operatorname{\mathbf{i}}- \operatorname{\mathbf{j}}$.
As a consequence, in order to implement the matrix-vector product, it is sufficient to assemble solely the first row or column $$M {\coloneqq}(a_{\ell0})_\ell = (a({ h }^{-1}(x_\ell-x_0)))_\ell = (a(E^{-\operatorname{\mathbf{L}}}(\ell)))_\ell$$ of the stiffness matrix $A$, since for $\ell(k,j) {\coloneqq}E^{\operatorname{\mathbf{L}}}( h ^{-1}(|x_k-x_j|))$ we get $$a_{kj} = a( h ^{-1}(|x_k-x_j|)) = a(E^{-\operatorname{\mathbf{L}}}(\ell(k,j))) = M_{\ell(k,j)}.$$ Note that $\ell(k,j) {\coloneqq}E^{\operatorname{\mathbf{L}}}( h ^{-1}(|x_k-x_j|))$ is well defined, since $ h ^{-1}(|x_k-x_j|) $ lies in the domain of definition of $E^{\operatorname{\mathbf{L}}}$.
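For illustration, the reconstruction of an arbitrary entry from the first row can be sketched in a few lines of Python. Note that $E^{\operatorname{\mathbf{L}}}$ coincides with C-order (row-major) raveling of multi-indices, so NumPy's index helpers can be used; the function name is ours.

```python
import numpy as np

def toeplitz_entry(M, k, j, L):
    """Recover a_{kj} from the first row M of the symmetric d-level Toeplitz
    stiffness matrix, using a_{kj} = M[E^L(|E^{-L}(k) - E^{-L}(j)|)]."""
    zk = np.array(np.unravel_index(k, L))   # E^{-L}(k)
    zj = np.array(np.unravel_index(j, L))   # E^{-L}(j)
    return M[np.ravel_multi_index(tuple(np.abs(zk - zj)), L)]
```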
\[L1\_radial\] Exploiting that $\varphi$ is invariant under permutations, the same proof (by composing the reflection $R$ with a permutation matrix) shows that $a(z) = a((z_{\sigma(i)})_i)$ for all permutations $\sigma \colon \left\{ 0,\ldots, d-1 \right\} \to \left\{ 0,\ldots, d-1 \right\}$. We will use this observation to accelerate the assembling process.
Assembling procedure {#sec:assembling}
====================
In this section we aim to analyze the entries $a_{kj} = a(x_{kj})$ of the stiffness matrix $A$ more closely and derive a representation which can be efficiently implemented.
We first characterize the domain of integration occurring in the integral in $a(x_{kj})$. Let us define ${J_{kj} {\coloneqq}(x_{kj}+J)} $ then $$\begin{aligned}
&\left( J^c \times J^c \cup J_{kj}^c \times J_{kj}^c \right)^c =\left( J^c \times J^c\right)^c \cap \left(J_{kj}^c \times J_{kj}^c \right)^c\\
=&\left( (J \times J) \cup (J^c \times J) \cup (J \times J^c) \right) \cap \left((J_{kj} \times J_{kj}) \cup (J_{kj}^c \times J_{kj}) \cup (J_{kj} \times J_{kj}^c)\right) \\
=&(C \times C) \cup (D_k \times C) \cup (C \times D_k)
\cup (D_j \times C) \cup ((J^c \cap J_{kj}^c) \times C) \cup (D_j \times D_k)\\
&\cup (C \times D_j) \cup (D_k \times D_j) \cup (C \times (J^c \cap J_{kj}^c)),\end{aligned}$$ where we set $C {\coloneqq}J \cap J_{kj}$, $D_k {\coloneqq}J \cap J_{kj}^c $ and $D_j {\coloneqq}J^c \cap J_{kj} $. By exploiting the symmetry of the integrand we thus get $$\begin{aligned}
a_{kj}/ h ^{2d} = & \int_{J \cap J_{kj} } \int_{(J \cap J_{kj})} F(x,y;x_{kj})dydx\\
&+2\int_{J \cap J_{kj}} \int_{J_{kj}^c \cap J } F(x,y;x_{kj})dydx \\
&+2\int_{J \cap J_{kj}} \int_{J_{kj} \cap J^c } F(x,y;x_{kj})dydx \\
&+2\int_{J \cap J_{kj}} \int_{J^c \cap J_{kj}^c} F(x,y;x_{kj})dydx \\
&+2\int_{J \cap J_{kj}^c} \int_{J^c \cap J_{kj} } F(x,y;x_{kj})dydx ,\end{aligned}$$ where $F(x,y;z) = (\varphi(y-z)-\varphi(x-z))(\varphi(y)-\varphi(x)){\gamma}( h y, h x)$. Note that this representation holds for a general setting without assuming (A1) and (A2). For implementation purpose we additionally require from now on:
- (A3) We assume that $R \geq \operatorname{diam}(\Omega) = ||b - a ||_2$, such that $\Omega \subset B_R(x)$ for all $x \in \Omega$.
This third assumption simplifies the domain of integration in the occurring integrals in the sense that we can omit the intersection with the ball $B_{R}( h x)$ which is hidden behind the kernel function. This coincides with the application to space-fractional diffusion problems where we aim to model $R\to \infty$ (see Section \[sec:numres\]). Furthermore, since we can construct the whole stiffness matrix $A$ from the first row $M$, it is convenient to introduce the following $\operatorname{\mathbf{L}}^d$-dimensional vectors: $$\begin{aligned}
{\texttt{sing}}_k &{\coloneqq}{ h }^{2d} \int_{J \cap J_k } \int_{J \cap J_k} F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx \\
&+2{ h }^{2d}\int_{J \cap J_k } \int_{J_k^c \cap J } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx \\
&+2{ h }^{2d}\int_{J \cap J_k } \int_{J_k \cap J^c } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx, \\
{\texttt{rad}}_k &{\coloneqq}2{ h }^{2d}\int_{J \cap J_k } \int_{ (J \cup J_k )^c} F(x,y;E^{\operatorname{\mathbf{L}}}(k))\mathcal{X}_{B_R( h x)}( h y) dydx ,\\
{\texttt{dis}}_k &{\coloneqq}2{ h }^{2d}\int_{J \cap J_{k}^c} \int_{J^c \cap J_k } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx ,\end{aligned}$$ for $0 \leq k < \operatorname{\mathbf{L}}^d$, where $J_k {\coloneqq}J_{0k} = E^{\operatorname{\mathbf{L}}}(k)+J$. Since $x_{k0} = \tfrac{x_k-x_0}{ h } = E^{\operatorname{\mathbf{L}}}(k)$, we have ${M = {\texttt{sing}}+{\texttt{rad}}+ {\texttt{dis}}}$. As we will see in the subsequent program, each of these vectors requires a different numerical handling which justifies this separation. In these premises we point out, that on the one hand we may touch possible singularities of the kernel function along the integration in ${\texttt{sing}}_k$. On the other hand, the computation of ${\texttt{rad}}_k$ may require the integration over a “large” domain if $R \to \infty$ (e.g. fractional kernel). Both are numerically demanding tasks and complicate the assembling process. In contrast to that, the computation of ${\texttt{dis}}_k$ turns out to be numerically viable without requiring a special treatment. However, we fortunately find for $k$ with $J \cap J_k = \emptyset$ that ${\texttt{sing}}_k=0 ={\texttt{rad}}_k$. Hence, it is worth identifying those indices and treat them differently in the assembling loop. Therefore, from $J = \dot\bigcup_{i=0}^{2d} (\square -v_i )$ we can deduce that $J \cap J_k \neq \emptyset$ if and only if $k = E^{\operatorname{\mathbf{L}}}(v_i)$ for an index $0 \leq i < 2^d$. As a consequence, we only have to compute ${\texttt{sing}}_k$ and ${\texttt{rad}}_k$ for $k \in {\texttt{idx}}_0 {\coloneqq}{\left\{}
\newcommand{\rk}{\right\}}E^{\operatorname{\mathbf{L}}}(v_i) \colon 0 \leq i < 2^d \rk $. We can cluster these indices even more. For that reason let us define on ${\texttt{idx}}_0$ the equivalence relation $$k \sim j :\Leftrightarrow \exists~ \text{permutation matrix}~ P \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times \operatorname{\mathbf{L}}^d}\colon ~E^{-\operatorname{\mathbf{L}}}(k) =P E^{-\operatorname{\mathbf{L}}}(j).$$ Then due to Remark \[L1\_radial\] after Theorem \[multilevel\_toeplitz\] we have to compute the values ${\texttt{sing}}_k$ and ${\texttt{rad}}_k$ only for $k \in {\texttt{idx}}_s^k{\coloneqq}{\left\{}
\newcommand{\rk}{\right\}}[j]_\sim \colon j \in {\texttt{idx}}_0 \rk$. In order to make this more precise, we figure out that the quotient set can further be specified as $${\texttt{idx}}_s^k = {\left\{}
\newcommand{\rk}{\right\}}[E^{\operatorname{\mathbf{L}}}(z)]_\sim \colon z \in S \rk,$$ where $$S {\coloneqq}{\left\{}
\newcommand{\rk}{\right\}}(0, \ldots, 0), (1,0, \ldots, 0),(1,1,0, \ldots, 0), \ldots, (1,1, \ldots, 1)\rk \subset {\left\{}
\newcommand{\rk}{\right\}}v_i \colon 0 \leq i < 2^d\rk$$ with $|S| = d+1 \leq 2^d$. In other words, we group those $v_i$ which are permutations of one another. The associated indices $0 \leq i < 2^d$ are thus given by ${\left\{}
\newcommand{\rk}{\right\}}E^{(2,\ldots,2)}(z) \colon z \in S \rk =:{\texttt{idx}}_s^i$.
In addition to the preceding considerations, we also want to partition the integration domains $J \cap J_k$, $J \cap J_k^c$ and $J^c \cap J_k$, where $k\in{\texttt{idx}}_s^k$, into cubes $(\square - v_\nu)$, such that we can express the reference basis function $\varphi$ with the help of the element basis functions $\psi_i$. This is necessary in order to compute the integrals in a vectorized fashion and obtain an efficient implementation. Let us start with $J \cap J_k$, where $k = E^{\operatorname{\mathbf{L}}}(v_i)$ for $0 \leq i < 2^d $ such that $J_k = v_i + J$. Then we define the set $$\begin{aligned}
D_i &{\coloneqq}\left\{ 0 \leq \mu < 2^d \colon \square - v_\mu \subset J \cap J_k \right\}
= \left\{ 0 \leq \mu < 2^d \colon \exists \kappa \colon v_\mu + v_i = v_\kappa \right\}\\
&= \left\{ 0 \leq \mu < 2^d \colon (v_\mu + v_i)_j < 2 ~\forall~ 0 \leq j < d \right\} .\end{aligned}$$ Since $J = \dot\bigcup_{i=0}^{2^d-1} (\square -v_i )$, we find that $J \cap J_k = \bigcup_{\nu\in D_i}( \square - v_\nu)$. With this, we readily recognize that $J \cap J_k^c = \bigcup_{\nu\in\left\{ 0, \ldots, 2^d - 1 \right\} \backslash D_i} (\square - v_\nu)$. Similarly, one can derive a set $D_i^c$, such that ${J^c \cap J_k = v_i + \bigcup_{\nu\in D_i^c}( \square - v_\nu)}$. Figure \[fig:indexsets\] illustrates the latter considerations and in Table \[indexsets\] these sets are listed for dimensions $d\in \left\{ 1,2,3\right\}$, respectively.
\
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
$d$ $i \in {\texttt{idx}}_s^i$ $k = E^L(v_i)\in {\texttt{idx}}_s^k$ $D_i$ $D_i^c = {\left\{} $\kappa(D_i, i)$
\newcommand{\rk}{\right\}}0\leq j < i \rk$
----- ---------------------------- -------------------------------------- --------------------- -------------------------------------------- ----------------------
1 $0$ 0 $(0,1)$ $\emptyset$ $ (0,1)$
$1$ 1 $0$ $0$ $ 1$
2 $0$ $0$ $(0,1,2,3)$ $\emptyset$ $ (0,1,2,3)$
$2$ $p_0$ $(0,1)$ $(0,1)$ $ (2,3)$
$3$ $p_0 + p_1$ $0$ $(0,1,2)$ $ 3$
3 $0$ $0 $ $(0,1,2,3,4,5,6,7)$ $\emptyset$ $ (0,1,2,3,4,5,6,7)$
$4$ $p_0$ $(0,1,2,3)$ $(0,1,2,3)$ $ (4,5,6,7)$
$6$ $p_0 + p_1 $ $(0,1)$ $(0,1,2,3,4,5)$ $ (6,7)$
$7$ $p_0+p_1+p_2 $ $0$ $(0,1,2,3,4,5,6)$ $ 7$
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
: Index sets for the implementation.[]{data-label="indexsets"}
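These index sets can also be generated programmatically; the following small Python sketch (with helper names of our choosing) reproduces the columns $i$, $D_i$ and $\kappa(D_i,i)$ of Table \[indexsets\] for a given dimension $d$.

```python
import numpy as np

d = 2                                                          # space dimension
dims = (2,) * d
verts = [np.array(np.unravel_index(i, dims)) for i in range(2 ** d)]   # v_i = E^{-(2,...,2)}(i)

# representatives S = {(0,...,0), (1,0,...,0), ..., (1,...,1)}  ->  idx_s^i
idx_s_i = [int(np.ravel_multi_index(tuple([1] * r + [0] * (d - r)), dims)) for r in range(d + 1)]

def D(i):
    """D_i = { mu : (v_mu + v_i)_j < 2 for all j }, i.e. square - v_mu lies in the intersection of J and J_k."""
    return [mu for mu in range(2 ** d) if np.all(verts[mu] + verts[i] < 2)]

def kappa(mu, i):
    """kappa(mu, i) = E^{(2,...,2)}(v_mu + v_i), defined for mu in D_i."""
    return int(np.ravel_multi_index(tuple(verts[mu] + verts[i]), dims))

for i in idx_s_i:
    print(i, D(i), [kappa(mu, i) for mu in D(i)])   # compare with Table [indexsets]
```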
With this at hand, we can now have a closer look at the vectors ${\texttt{sing}}$, ${\texttt{rad}}$ and ${\texttt{dis}}$ and put them into a form which is suitable for the implementation.
### sing {#sing .unnumbered}
Let $i \in {\texttt{idx}}_s^i$, such that $E^{-\operatorname{\mathbf{L}}}(k) = v_i$. Since $J \cap J_k = \bigcup_{\nu\in D_i} \square - v_\nu$, we can transform the first integral in ${\texttt{sing}}$ as follows $$\begin{aligned}
\int_{J \cap J_k } \int_{J \cap J_k} F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx
&= \sum_{\nu \in D_i} \sum_{\mu \in D_i} \int_{\square - v_\nu} \int_{\square - v_\mu} F(x,y; v_i)dydx \\
&= \sum_{\nu \in D_i} \sum_{\mu \in D_i} \int_{\square } \int_{\square } F(x- v_\nu,y- v_\mu; v_i)dydx .\end{aligned}$$ By definition of $D_i$ we have for $\mu \in D_i$ that $v_\mu + v_i = v_{\kappa(\mu, i)}$ with $\kappa(\mu,i) = E^{(2,\ldots,2)}(v_i+v_\mu)$ and since $$F(x- v_\nu,y- v_\mu; v_i) = (\varphi(y-v_{\kappa(\mu, i)})-\varphi(x-v_{\kappa(\nu, i)}))(\varphi(y-v_\mu)-\varphi(x-v_\nu)){\gamma}( h (y-v_\mu), h (x-v_\nu))$$ we find due to (\[reference\_basisfunction\]) that $$\begin{aligned}
& \sum_{\nu \in D_i} \sum_{\mu \in D_i} \int_{\square } \int_{\square } F(x- v_\nu,y- v_\mu; v_i)dydx \\
=& \sum_{\nu \in D_i} \sum_{\mu \in D_i} \int_{\square } \int_{\square } ((\psi_{\kappa(\mu, i)}(y)-\psi_{\kappa(\nu, i)}(x))(\psi_{\mu}(y)-\psi_{\nu}(x))){\gamma}( h (y- v_\mu), h (x- v_\nu))dydx. \end{aligned}$$ We separate the case $\mu = \nu$ since the kernel may have singularities at $(x,y)$ with $x=y$ and therefore these integrals need a different numerical treatment. At this point we find that $$\begin{aligned}
&\sum_{\nu \in D_i}\int_{\square } \int_{\square } ((\psi_{\kappa(\nu, i)}(y)-\psi_{\kappa(\nu, i)}(x))(\psi_{\nu}(y)-\psi_{\nu}(x))){\gamma}( h y, h x)dydx\\
=&|D_i|\int_{\square } \int_{\square } ((\psi_{i}(y)-\psi_{i}(x))(\psi_{0}(y)-\psi_{0}(x))){\gamma}( h y, h x)dydx .\end{aligned}$$ This simplification follows from some straightforward transformations exploiting assumption (A2) and the fact that each element basis function $\psi_\nu$ can be expressed by $\psi_0$ through the relation $\psi_\nu \circ g_\nu = \psi_0 ,$ where $g_\nu(x) {\coloneqq}v_\nu + R_\nu x$ for an appropriate rotation matrix $R_\nu$. With this observation we also find that the other two integrals in ${\texttt{sing}}$ are equal, i.e., $$\begin{aligned}
\int_{J \cap J_k} \int_{J_k^c \cap J } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx
=\int_{J \cap J_k} \int_{J_k \cap J^c } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx.\end{aligned}$$ By exploiting again that $J \cap J_k = \bigcup_{\nu\in D_i} \square - v_\nu$ and thus $J \cap J_k^c = \bigcup_{\nu\in\left\{ 0, \ldots, 2^d - 1 \right\} \backslash D_i} \square - v_\nu$ we obtain $$\begin{aligned}
&\int_{J \cap J_k} \int_{J_k^c \cap J } F(x,y;E^{\operatorname{\mathbf{L}}}(k))dydx \\
=&-\sum_{ \nu \in D_i} \sum_{\mu \in \left\{ 0, \ldots, 2^d - 1 \right\} \backslash D_i}\int_{\square } \int_{\square } \psi_{\kappa(\nu, i)}(x) (\psi_{\mu}(y)-\psi_{\nu}(x)){\gamma}( h (v_\nu- v_\mu), h (x-y ))dydx.\end{aligned}$$ Note that by definition of $D_i$ we have that for $\mu \in \left\{ 0, \ldots, 2^d - 1 \right\} \backslash D_i$ there is no $\kappa \in \left\{ 0, \ldots, 2^d - 1 \right\}$ such that $v_\mu + v_i = v_\kappa$ and therefore $\varphi(y - (v_\mu + v_i)) = 0$ for all $y \in \square$. All in all we have
$$\begin{aligned}
&{\texttt{sing}}_k/ h ^{2d} =\\
& |D_i|\int_{\square } \int_{\square } ((\psi_{i}(y)-\psi_{i}(x))(\psi_{0}(y)-\psi_{0}(x))){\gamma}( h y, h x)dydx \\
&+ \sum_{\nu \in D_i}\Big[ \sum_{\mu \in D_i, \mu \neq \nu} \int_{\square } \int_{\square } ((\psi_{\kappa(\mu, i)}(y)-\psi_{\kappa(\nu, i)}(x))(\psi_{\mu}(y)-\psi_{\nu}(x))){\gamma}( h (v_\mu- v_\nu), h (y- x))dydx \\
&-4 \sum_{\mu \in \left\{ 0, \ldots, 2^d-1\right\} \backslash D_i}\int_{\square } \int_{\square } \psi_{\kappa(\nu, i)}(x) (\psi_{\mu}(y)-\psi_{\nu}(x)){\gamma}( h (v_\mu- v_\nu), h (y-x ))dydx \Big].\end{aligned}$$
### rad {#rad .unnumbered}
Let $i \in {\texttt{idx}}_s^i$, such that $E^{-\operatorname{\mathbf{L}}}(k) = v_i$. Proceeding as above we obtain $$\begin{aligned}
{\texttt{rad}}_k &= 2 h ^{2d}\int_{J \cap J_k} \int_{ (J \cup J_k)^c} F(x,y;v_i)dydx \\
&= 2 h ^{2d}\sum_{\nu \in D_i}\int_{\square} \psi_\nu(x)\psi_{\kappa(\nu,i)}(x) \int_{ (J \cup J_k)^c} {\gamma}( h (x-v_\nu), h y) \mathcal{X}_{B_{R}( h (x-v_\nu))}( h y)dydx.\end{aligned}$$ Let us define $$\begin{aligned}
P_\nu(x) {\coloneqq}& \int_{ (J \cup J_k)^c} {\gamma}( h (x-v_\nu), h y) \mathcal{X}_{B_{R}( h (x-v_\nu))}( h y)dy.\end{aligned}$$ Again, we can use $\psi_\nu \circ g_\nu = \psi_0$ in order to show by some straightforward transformations that $$\int_{\square} \psi_0(x)\psi_{i}(x) P_0(x)dx = \int_{\square}\psi_\nu(x)\psi_{\kappa(\nu,i)}(x) P_\nu(x)dx$$ for all $\nu \in D_i$. Hence, we get
$${\texttt{rad}}_k = 2 h ^{2d}|D_i|\int_{\square} \psi_0(x)\psi_{i}(x) P_0(x)dx .$$
### dis {#dis .unnumbered}
Now we distinguish between the case where ${\texttt{rad}}_k$ and ${\texttt{sing}}_k$ are zero and the complement case. First let $i \in {\texttt{idx}}_s^i$, such that $E^{-\operatorname{\mathbf{L}}}(k) = v_i$, then we find $$\begin{aligned}
{\texttt{dis}}_k &{\coloneqq}2{ h }^{2d}\int_{J_k^c \cap J} \int_{J_k \cap J^c } F(x,y;E^{-\operatorname{\mathbf{L}}}(k))dydx \\
& = -2{ h }^{2d} \sum_{\nu \in \left\{ 0,\ldots, 2^d-1\right\} \backslash D_i} \sum_{\mu \in D_i^c} \int_\square \int_\square \psi_\mu(y)\psi_\nu(x) {\gamma}( h (v_\mu - v_\nu - v_i), h (y-x)) dydx.\end{aligned}$$ Now let $k \neq E^{\operatorname{\mathbf{L}}}(v_i)$ for any $0 \leq i < 2^d$, then $J_k^c \cap J = J$ and $J_k \cap J^c =J_k$ such that $$\begin{aligned}
\frac{{\texttt{dis}}_k}{2{ h }^{2d}}= \int_{J} \int_{J_k} F(x,y;E^{-\operatorname{\mathbf{L}}}(k))dydx
= -\int_{J} \int_{J_k} \varphi(y+E^{-\operatorname{\mathbf{L}}}(k))\varphi(x){\gamma}( h (y+E^{-\operatorname{\mathbf{L}}}(k)), h x)dydx.\end{aligned}$$ Since $J_k = E^{-\operatorname{\mathbf{L}}}(k) + J $ by definition and $J = \dot\bigcup_{i=0}^{2^d} (\square -v_i )$ we obtain $$\begin{aligned}
{\texttt{dis}}_k
&=-2{ h }^{2d} \sum_{0 \leq \nu < 2^d} \sum_{0 \leq \mu < 2^d} \int_{\square} \int_{\square} \psi_\nu(x)\psi_\mu(y){\gamma}( h (v_\mu-E^{-\operatorname{\mathbf{L}}}(k)-v_\nu), h (y-x))dydx.\end{aligned}$$
All in all we conclude
$$\begin{aligned}
&{\texttt{dis}}_k/-2{ h }^{2d} \\
= &\begin{cases}
\sum_{\nu \in \left\{ 0,\ldots, 2^d-1\right\} \backslash D_i} \sum_{\mu \in D_i^c} \int_\square \int_\square \psi_\nu(x) \psi_\mu(y) {\gamma}( h (v_\mu - v_\nu - E^{-\operatorname{\mathbf{L}}}(k)), h (y-x)) dydx: &k = E^{\operatorname{\mathbf{L}}}(v_i)\\
\sum_{0 \leq \nu < 2^d} \sum_{0 \leq \mu < 2^d} \int_{\square} \int_{\square} \psi_\nu(x)\psi_\mu(y){\gamma}( h (v_\mu-v_\nu-E^{-\operatorname{\mathbf{L}}}(k)), h (y-x))dydx: &else.
\end{cases}\end{aligned}$$
### Source term {#source-term .unnumbered}
We compute $b^{ h } \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d} $ by
$$\begin{aligned}
b^{ h }_k = \int_{\Omega} f {{\varphi_k}}dx
=h^d \sum_{\nu = 0}^{2^d - 1} \int_{\square} f(x_k + h (v- v_\nu)) \psi_{\nu}(v)dv
=h^d 2^d \int_{\square} f(x_k + h v) \psi_{0}(v)dv,\end{aligned}$$
\
where the last equality follows again from considering $\psi_\nu \circ g_\nu = \psi_0$.
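A self-contained Python sketch of this right-hand side assembly, using an $n$-point tensorised Gauss-Legendre rule as described in Section \[sec:numres\], might look as follows; the function name and parameters are our own choices.

```python
import numpy as np
from itertools import product

def assemble_rhs(f, grid_points, h, d, n=7):
    """Assemble b_k = h^d 2^d int over the unit cube of f(x_k + h v) psi_0(v) dv,
    with psi_0(v) = prod_i (1 - v_i); a sketch, not the actual code."""
    xi, wi = np.polynomial.legendre.leggauss(n)                # nodes/weights on [-1, 1]
    xi, wi = 0.5 * (xi + 1.0), 0.5 * wi                        # map to [0, 1]
    V = np.array(list(product(xi, repeat=d)))                  # quadrature points, shape (n^d, d)
    W = np.array([np.prod(w) for w in product(wi, repeat=d)])  # tensorised weights
    psi0 = np.prod(1.0 - V, axis=1)
    return np.array([h ** d * 2 ** d * np.dot(W * psi0, [f(xk + h * v) for v in V])
                     for xk in grid_points])
```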
Solving procedure {#sec:solving}
=================
Now we discuss how to solve the discretized, fully populated multilevel Toeplitz system. The fundamental procedure uses an efficient implementation for the matrix-vector product of multilevel Toeplitz matrices, which is then passed to the conjugate gradient (CG) method.
Let us first illuminate the implementation of the matrix-vector product $Tx , $ where $T \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times \operatorname{\mathbf{L}}^d}$ is a symmetric $d$-level Toeplitz matrix of order $\operatorname{\mathbf{L}}= (L_0, \ldots, L_{d-1})$ and $x$ a vector in ${\mathbb{R}}^{\operatorname{\mathbf{L}}^d}$. The crucial idea is to embed the Toeplitz matrix into a circulant matrix for which matrix-vector products can be efficiently computed with the help of the discrete fourier transform (DFT) [@structuredmatrices]. Here, by a $d$-level circulant matrix, we mean a matrix $C\in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times \operatorname{\mathbf{L}}^d}$, which satisfies $$C_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} = C((\operatorname{\mathbf{i}}-\operatorname{\mathbf{j}})\mod \operatorname{\mathbf{L}}),$$ where $(\operatorname{\mathbf{i}}\mod \operatorname{\mathbf{L}}){\coloneqq}(i_k \mod L_k)_k$. In the real symmetric case, such that $a_{\operatorname{\mathbf{i}}\operatorname{\mathbf{j}}} = a(|\operatorname{\mathbf{i}}- \operatorname{\mathbf{j}}|)$ as it is present in our setting, $T$ can be reconstructed from its first row $R {\coloneqq}(T_{0i})_i \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d}$. Since circulant matrices are special Toeplitz matrices, the same holds for these matrices as well. Due to the multilevel structure it is convenient to represent $T$ by a tensor $t$ in ${\mathbb{R}}^{L_0 \times \cdots \times L_{d-1}}$, which is composed of the values contained in $R$. More precisely we define $$t(\operatorname{\mathbf{i}}) {\coloneqq}R_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{i}})}$$ for $ \operatorname{\mathbf{i}}\in \prod_{i=0}^{d-1} {\left\{}
\newcommand{\rk}{\right\}}0,\ldots, L_i-1\rk$ with $E^{\operatorname{\mathbf{L}}}$ from above. Now $t$ can be embedded into the tensor representation $c\in {\mathbb{R}}^{2L_0 \times \cdots \times 2L_{d-1}}$ of the associated $d$-level circulant matrix by $$c(\operatorname{\mathbf{i}}) {\coloneqq}t(\hat{i}_0, \ldots, \hat{i}_{d-1}),$$ where $$\hat{i}_k {\coloneqq}\begin{cases}
i_k: &i_k<L_k, \\
0: &i_k = L_k, \\
2L_k-i_k: &else.
\end{cases}$$ We note that $t = c([0:L_0-1], \ldots, [0:L_{d-1}-1]) $. Thus, we can use Algorithm \[product\] to compute the product $T \cdot x$, where the DFT is carried out by the fast fourier transform (FFT).
\
INPUT: $t \in {\mathbb{R}}^{L_0 \times \cdots \times L_{d-1}}$ representing $T \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d \times \operatorname{\mathbf{L}}^d}$, $x \in {\mathbb{R}}^{\operatorname{\mathbf{L}}^d} $\
OUTPUT: $y = Tx$\
\
1. Construct $c\in {\mathbb{R}}^{2L_0 \times \cdots \times 2L_{d-1}}$ by $c(\operatorname{\mathbf{i}}) {\coloneqq}t(\hat{i}_0, \ldots, \hat{i}_{d-1}) $\
2. Construct $x' \in {\mathbb{R}}^{2L_0 \times \cdots \times 2L_{d-1}}$ by $$x'(\operatorname{\mathbf{i}}){\coloneqq}\begin{cases}
x_{E^{\operatorname{\mathbf{L}}}(\operatorname{\mathbf{i}})}: & \operatorname{\mathbf{i}}\in \prod_{i=0}^{d-1} \left\{ 0,\ldots, L_i-1\right\} \\
0: &\text{else}
\end{cases}$$\
3. Compute $\Lambda = FFT_L(c) $\
4. Compute $z = FFT_L(x')$\
5. Compute $w = \Lambda z $ (pointwise)\
6. Compute $y' = FFT^{-1}_L(w)$\
7. Construct $y \in {\mathbb{R}}^{L_0 \times \cdots \times L_{d-1}}$ by $y(\operatorname{\mathbf{i}}) = y'(\operatorname{\mathbf{i}})$ for $ \operatorname{\mathbf{i}}\in \prod_{i=0}^{d-1} \left\{ 0,\ldots, L_i-1\right\} $\
8. Return $y.reshape(\operatorname{\mathbf{L}}^d)$
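A compact NumPy version of Algorithm \[product\] may look as follows; it uses `numpy.fft` instead of pyFFTW and is meant as a readable sketch rather than as the optimized implementation used for the experiments.

```python
import numpy as np

def multilevel_toeplitz_matvec(t, x):
    """Compute y = T x for a symmetric d-level Toeplitz matrix T represented by the
    tensor t of its first row (Algorithm [product]); x has length prod(t.shape)."""
    L = t.shape
    # step 1: embed t into the tensor c of the associated d-level circulant matrix
    c = np.zeros(tuple(2 * Lk for Lk in L))
    for idx in np.ndindex(*c.shape):
        hat = tuple(i if i < Lk else (0 if i == Lk else 2 * Lk - i) for i, Lk in zip(idx, L))
        c[idx] = t[hat]
    # step 2: zero-pad x into the enlarged tensor
    xp = np.zeros_like(c)
    xp[tuple(slice(0, Lk) for Lk in L)] = np.asarray(x).reshape(L)
    # steps 3-6: the FFT diagonalizes the circulant matrix, so the product reduces
    # to a pointwise multiplication in frequency space
    y = np.fft.ifftn(np.fft.fftn(c) * np.fft.fftn(xp))
    # steps 7-8: extract the leading block and flatten it again
    return np.real(y[tuple(slice(0, Lk) for Lk in L)]).reshape(-1)

# quick check against a dense (1-level) Toeplitz matrix
t = np.random.rand(4)
T = np.array([[t[abs(i - j)] for j in range(4)] for i in range(4)])
x = np.random.rand(4)
assert np.allclose(T @ x, multilevel_toeplitz_matvec(t, x))
```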
In the Python code we use the library pyFFTW (<https://hgomersall.github.io/pyFFTW/>), a pythonic wrapper around the C subroutine library FFTW (<http://www.fftw.org/>), to perform a parallelized multidimensional DFT. Furthermore, we want to note that an MPI implementation for solving multilevel Toeplitz systems in this fashion is presented in [@MPIproduct], which inspired us to apply the above procedure.\
Finally, with this algorithm at hand, we employ a CG method, as it can be found for example in [@nocedale:numopt], to obtain the solution of the discretized system (\[discretizedproblem\]).
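For completeness, a minimal CG loop driven by this matrix-vector product, with the relative-residual stopping criterion used below, could be sketched as follows.

```python
import numpy as np

def cg(matvec, b, tol=1e-12, maxiter=None):
    """Plain conjugate gradient method (cf. [nocedale:numopt]) driven by a matrix-vector
    product callback; stops once ||A x_k - b|| / ||b|| < tol."""
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)
    p = r.copy()
    rs = float(r @ r)
    nb = np.linalg.norm(b)
    for _ in range(maxiter or b.size):
        Ap = matvec(p)
        alpha = rs / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if np.sqrt(rs_new) <= tol * nb:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# e.g. u = cg(lambda v: multilevel_toeplitz_matvec(t, v), b)
```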
Numerical Experiments {#sec:numres}
=====================
In this last section we want to complete the previous considerations by presenting numerical results in 1d, 2d and for the first time also in 3d. We now specify the nonlocal diffusion operator $-{\mathcal{L}}$ by choosing the fractional kernel ${\gamma}(x,y) = \frac{c_{d,s}}{2||y-x||^{d+2s}}$ and briefly recall how the fractional Laplace operator $(-\Delta)^s$ arises as a special case of $-{\mathcal{L}}$. Furthermore we describe in detail the employed numerical integration and finally discuss the results of the implementation.
Relation to space-fractional diffusion problems
-----------------------------------------------
We follow [@fractlapl] and define the action of the *fractional Laplace operator $(-\Delta)^s$* on a function ${u:{\mathbb{R}}^d \to {\mathbb{R}}}$ by $$\begin{aligned}
(-\Delta)^s u(x) {\coloneqq}c_{d,s} \int_{{\mathbb{R}}^d} \frac{u(x)- u(y)}{||y-x||^{d+2s}}dy,\end{aligned}$$ where $c_{d,s} {\coloneqq}s\, 2^{2s} \frac{\Gamma(\tfrac{d+2s}{2})}{\pi^{d/2}\, \Gamma(1-s)} .$ The homogeneous steady-state *space-fractional diffusion problem* then reads as $$\begin{aligned}
(-\Delta)^s u(x) &= f(x), ~~~x \in \Omega, \nonumber\\
u(x) &= 0, ~~~~~~~ x \in \Omega^c \label{fractprob}.\end{aligned}$$ Thus, choosing the fractional kernel ${\gamma}(x,y) {\coloneqq}\frac{c_{d,s}}{2||y-x||^{d+2s}}\mathcal{X}_{B_R(x)}(y)$, which satisfies the conditions (\[case\_1\]) and (\[gen\_assumption\]), the nonlocal Dirichlet problem (\[prob\]) can be considered as a truncated version of problem (\[fractprob\]). In [@fractlapl] the authors show that the weak solution $u_R$ of the truncated problem (\[prob\]), for some interaction radius $R>0$, converges to the weak solution $u_\infty$ of (\[fractprob\]) as $R \to \infty$. We repeat the corresponding result stated in [@fractlapl].
Let $u_R \in V_c({\Omega \cup \Omega_I})$ and $u_\infty \in H_\Omega^s({\mathbb{R}}^d) {\coloneqq}\left\{ u \in H^s({\mathbb{R}}^d) \colon u_{|\Omega^c} \equiv 0 \right\}$ denote the weak solutions of (\[prob\]) and (\[fractprob\]) respectively. Then $$||u_\infty - u_R ||_{H^s({\Omega \cup \Omega_I})} \leq \frac{K_d}{C_1^2 s(R-I)^{2s}}||u_\infty||_{L^2(\Omega)},$$ where $I {\coloneqq}\min\left\{ L\in {\mathbb{R}}\colon \Omega \subset B_L(x)~~ \forall ~x \in \Omega \right\}$, $C_1$ is the equivalence constant from above and $K_d$ is a constant depending only on the space dimension $d$.
See [@fractlapl Theorem 3.1].
Combining this result with the finite element estimate (\[fem\_estimate\_fract\]) we arrive at the estimate $$\begin{aligned}
||u_\infty - u_{R}^N ||_{H^s({\Omega \cup \Omega_I})} \leq \frac{K_d}{C_1^2 s(R-I)^{2s}}||u_\infty||_{L^2(\Omega)} + C h^{m+t-s}||u_R||_{H^{m+t}({\Omega \cup \Omega_I})}.\end{aligned}$$ Thus, the accuracy of finite element solutions for the weak formulation of problem (\[fractprob\]) depends on both the discretization quality and the interaction horizon [@fractlapl].
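As a small illustration, the truncated fractional kernel entering the assembly can be written as a plain Python function; the helper names are ours and the normalization constant follows the definition above.

```python
import numpy as np
from math import gamma, pi

def c_ds(d, s):
    """Normalization constant c_{d,s} of the fractional Laplacian (cf. the definition above)."""
    return 2.0 ** (2 * s) * s * gamma((d + 2 * s) / 2.0) / (pi ** (d / 2.0) * gamma(1.0 - s))

def fractional_kernel(x, y, d, s, R=np.inf):
    """Truncated fractional kernel gamma(x,y) = c_{d,s}/(2 ||y-x||^{d+2s}) * X_{B_R(x)}(y).
    Singular at x = y; the assembly never evaluates it there (see the quadrature below)."""
    r = np.linalg.norm(np.asarray(y, dtype=float) - np.asarray(x, dtype=float))
    return c_ds(d, s) / (2.0 * r ** (d + 2 * s)) if r <= R else 0.0
```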
Numerical computation of the integrals
--------------------------------------
In this subsection we want to point out how we numerically handle the occurring integrals.
### Nonsingular integrals
We mainly have to compute integrals of the form $
\int_{\square} \int_{\square} g(x,y) dydx \label{referenceintegral},
$ where $\square = [0,1]^d$ and $g \colon {\mathbb{R}}^d \times {\mathbb{R}}^d \to {\mathbb{R}}$ is a (typically smooth) function, which we assume to have no singularities in the domain $\square \times \square$. We approximate the value of this integral by employing an $n$-point Gauss-Legendre quadrature rule in each dimension. More precisely, we build a $d$-dimensional tensor grid ${\texttt{X}}\in {\mathbb{R}}^{d \times n^{d}}$ with associated weights $\texttt{W}^{single} \in {\mathbb{R}}^{n^d}$, such that $\int_{\square} g(x,y) dy \approx \sum_{i=0}^{n^{d}-1} g(x, {\texttt{X}}_i) \texttt{W}^{single}_i$ for $x \in {\texttt{X}}$. Finally, we define the arrays $$\begin{aligned}
\texttt{V} &{\coloneqq}({\texttt{X}}, {\texttt{X}}, \ldots, {\texttt{X}}) \in {\mathbb{R}}^{d \times n^{2d}}, \\
\texttt{Q} &{\coloneqq}({\texttt{X}}_0, \ldots, {\texttt{X}}_0,{\texttt{X}}_1, \ldots, {\texttt{X}}_1, \ldots , {\texttt{X}}_{n^d - 1}, \ldots, {\texttt{X}}_{n^d - 1} ) \in {\mathbb{R}}^{d \times n^{2d}} \end{aligned}$$ with associated weights $\texttt{W}^{double}\in {\mathbb{R}}^{n^{2d}}$, such that we finally arrive at the following quadrature rule: $$\begin{aligned}
\int_{\square} \int_{\square} g(x,y) dydx \approx \sum_{i=0}^{n^{2d}-1} g(\texttt{V}_i, \texttt{Q}_i) \texttt{W}^{double}_i = (g(\texttt{V},\texttt{Q}) \cdot \texttt{W}^{double}).sum().\end{aligned}$$
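A minimal NumPy realization of this tensorised Gauss-Legendre rule might look as follows; here the point arrays are stored with shape $(n^d, d)$ rather than $(d, n^d)$ and the function names are our own.

```python
import numpy as np
from itertools import product

def gauss_legendre_cube(n, d):
    """n-point Gauss-Legendre rule per dimension, tensorised on the unit cube [0,1]^d."""
    xi, wi = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    xi, wi = 0.5 * (xi + 1.0), 0.5 * wi              # map to [0, 1]
    X = np.array(list(product(xi, repeat=d)))        # quadrature points, shape (n^d, d)
    W = np.array([np.prod(w) for w in product(wi, repeat=d)])
    return X, W

def double_integral(g, n, d):
    """Approximate the double integral of g(x, y) over the unit cube times itself."""
    X, W = gauss_legendre_cube(n, d)
    return sum(Wx * Wy * g(x, y) for x, Wx in zip(X, W) for y, Wy in zip(X, W))

# example: the double integral of (x + y) over [0,1] x [0,1] equals 1 for d = 1
print(double_integral(lambda x, y: x[0] + y[0], n=7, d=1))
```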
### Singular integrals
As we have seen for the space-fractional diffusion problem, the kernel function may come along with singularities at $(x,y)$ with $x=y$. Therefore we start with a general observation, which paves the way for numerically handling these singularities. Let $f \colon {\mathbb{R}}^d \times {\mathbb{R}}^d \to {\mathbb{R}}$ be a symmetric function, i.e., $f(x,y) = f(y,x)$, and let us further define the sets $M {\coloneqq}\left\{ (x,y) \in \square\times \square \colon y_{d-1} \in [0,x_{d-1}] \right\}$ and $M' {\coloneqq}\left\{ (x,y) \in \square\times \square \colon (y,x) \in M\right\}.$ Then it is straightforward to show that $M \cup M' = \square \times \square$ and $$\begin{aligned}
M \cap M' = &\left\{ (x,y)\in \square \times \square \colon x_{d-1} = y_{d-1} \right\} \\
= &\left\{ (v^{d-1},z,w^{d-1},z) \in {\mathbb{R}}^{2d} \colon (v^{d-1},w^{d-1},z) \in [0,1]^{2d-1} \right\},\end{aligned}$$ such that $\lambda_{2d}(M \cap M')= 0$. Hence, we find that $$\begin{aligned}
\int_{\square \times \square} f d\lambda_{2d} &= \int_{M} f d\lambda_{2d} + \int_{M'} f d\lambda_{2d} - \int_{M \cap M'} f d\lambda_{2d}\\
&=\int_{M} f d\lambda_{2d} + \int_{M'} f d\lambda_{2d} .\end{aligned}$$ From the symmetry of $f$ we additionally deduce that $\int_{M} f d\lambda_{2d} = \int_{M'} f d\lambda_{2d} $ and therefore we finally obtain $$\begin{aligned}
\int_{\square \times \square} f d\lambda_{2d} = 2 \int_{M} f d\lambda_{2d}.\end{aligned}$$ This observation can now be applied to the singular integrals occurring in the vector ${\texttt{sing}}$ such that $$\begin{aligned}
&\int_{\square}\int_{\square}((\psi_{i}(y)-\psi_{i}(x))(\psi_{0}(y)-\psi_{0}(x))){\gamma}( h y, h x)dydx \\
= &2\int_{[0,1]^d}\int_{[0,1]^{d-1}\times [0,x_{d-1}]} ((\psi_{i}(y)-\psi_{i}(x))(\psi_{0}(y)-\psi_{0}(x))){\gamma}( h y, h x)dydx .\end{aligned}$$ The essential advantage of this representation relies on the fact that the singularities are now located on the boundary of the integration domain. Thus, we do not evaluate the integrand on its singularities while using quadrature points which lie in the interior. For the outer integral we employ the $d$-dimensional Gauss quadrature as pointed out above. Then for the inner integral with a fixed $x \in {\texttt{X}}$ we extend the one-dimensional adaptive (G7,K15)-Gauss-Kronrod quadrature rule to $d$-dimensional integrals by again tensorising the one-dimensional quadrature points. Moreover, in order to take full advantage of the Gauss-Kronrod quadrature, we divide the set $[0,1]^{d-1}\times [0,x_{d-1}]$ into $2^{d-1}$ disjoint rectangular subsets such that the singularity $x$ is located at a vertex. The latter partitioning reinforces the adaptivity property of the Gauss-Kronrod quadrature rule.
### Integrals with large interaction horizon
Now we discuss the quadrature of $$\begin{aligned}
P_0(x) =~ &\int_{ (J \cup J_k)^c} {\gamma}( h x, h y)\mathcal{X}_{B_{R}( h x)}( h y) dy \\
=~&(1/h^d)\int_{ (I_0 \cup I_k )^c} {\gamma}( h x, y-( a + h ))\mathcal{X}_{B_{R}( h x)}(y-( a + h )) dy.\end{aligned}$$ We carry out two simplifications for the implementation, which are mainly motivated by the fact that ${\gamma}(x,y) \to 0$ as $y \to \infty$ for kernels such as the fractional one. First, since $h \to 0$, we set $B_{R}( h x) \equiv B_{R}(0)$. Especially when $R$ is large and $h$ small, this simplification does not significantly affect the value of the integral. Second, we employ the $||\cdot||_\infty$-norm for the ball $B_R(0)$ instead of the $||\cdot||_2$-norm. Hereby we also loose accuracy in the numerical integration but it simplifies the domain of integration in the sense that we can use our quadrature rules for rectangular elements. The latter simplification can additionally be justified by the fact that we want to model $R \to \infty$ for the fractional kernel and therefore have to truncate the $||\cdot||_2$-ball in any case. Consequently, we are concerned with the quadrature of the integral $$\begin{aligned}
\int_{ B_{R}^{||\cdot||_\infty}( a + h ) \backslash (I_0 \cup I_k )} {\gamma}( h x, y-(a+ h )) dy\end{aligned}$$ for $x \in {\texttt{X}}$. For this purpose, we define the box $\mathcal{B}{\coloneqq}\prod_{i=0}^{d-1} [a_i-\lambda,a_i+\lambda]$ for a constant $R\geq \lambda > 2h$ such that $(I_0 \cup I_k ) \subset \mathcal{B}$ and we partition $$B_{R}^{||\cdot||_\infty}( a + h ) = \mathcal{B} \cup B_{R}( a + h ) \backslash \mathcal{B} .$$ Thus, we obtain $$\begin{aligned}
P_0(x)h^d&\approx\int_{ \mathcal{B} \backslash (I_0 \cup I_k )} {\gamma}( h x, y-( a + h )) dy + \int_{ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}} {\gamma}( h x, y-( a + h )) dy.\end{aligned}$$ We discretize $ \mathcal{B}$ with the same elements which we used for $\Omega$. This is convenient for two reasons. On the one hand we capture the critical values of the kernel, which in case of singular kernels typically decrease as $y\to \infty$. On the other hand, we have to leave out the integration over $(I_0 \cup I_k )$, which then can easily be implemented since we use the same discretization. However, this results in $\operatorname{\mathbf{N}}_2^d$ hypercubes, where $N_2^i {\coloneqq}2\lambda / h $, with base points $ y_j {\coloneqq}( a -\lambda e) + h E^{-\operatorname{\mathbf{N}}_2}(j)$ such that $$\begin{aligned}
\int_{ \mathcal{B} \backslash (I_0 \cup I_k )} {\gamma}( h x, y) dy&= \sum_{j=0,j \notin R_i}^{\operatorname{\mathbf{N}}_2^d -1} \int_{ y_j + h \square} {\gamma}( h x, y-( a + h )) dy\\
&= h^d \sum_{j=0,j \notin R_i}^{\operatorname{\mathbf{N}}_2^d - 1} \int_{ \square} {\gamma}(\lambda e + h (e-E^{-\operatorname{\mathbf{N}}_2}(j)), h (y-x)) dy,\end{aligned}$$ where $R_i$ contains the indices for those elements, which are contained in $(I_0 \cup I_k )$. This set can be characterized as $R_i {\coloneqq}{\left\{}
\newcommand{\rk}{\right\}}0\leq j < \operatorname{\mathbf{N}}_2^d - 1 \colon y_j + h \square \subset (I_0 \cup I_k )\rk$. Note that by definition we have $I_0 = a + h (e + \bigcup_{0 \leq \nu < 2^d} \square - v_\nu)$ and since $E^{-\operatorname{\mathbf{L}}}(k) = v_i$ for $k \in {\texttt{idx}}_s^k$ such that $x_k = a + h (e +v_i)$ we know that $I_0^c \cap I_k = a + h (e +v_i + \bigcup_{\nu \in D_i^c} \square - v_\nu)$. Hence, $ y_j + h \square \subset (I_0 \cup I_k )$ if and only if $ y_j = a + h (e - v_\nu)$ for $0 \leq \nu < 2^d$ or $ y_j = a + h (e +v_i- v_\nu)$ for $\nu \in D_i^c$. Since $ y_j = a -\lambda e + h E^{-\operatorname{\mathbf{N}}_2}(j) $ we find by equating $ y_j$ with these requirements that $$\begin{aligned}
R_i ={\left\{}
\newcommand{\rk}{\right\}}E^{\operatorname{\mathbf{N}}_2}(e - v_\nu +\lambda h ^{-1} e) : 0 \leq \nu < 2^d \rk
\cup {\left\{}
\newcommand{\rk}{\right\}}E^{\operatorname{\mathbf{N}}_2}(e +v_i - v_\nu +\lambda h ^{-1} e): \nu \in D_i^c \rk.\end{aligned}$$ Now we discuss the quadrature of the second integral $\int_{ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}} {\gamma}( h x, y-( a + h )) dy$ for $x \in {\texttt{X}}$. Since we want to apply the algorithm to fractional diffusion, we have to get around the computational costs that occur when $R$ is large. In order to alleviate those costs we follow the idea in [@fractlapl] and apply a coarsening rule to discretize the domain $ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}$. Assume we have a procedure which outputs a triangulation $(z, \hat{ h })$ of $ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}$ consisting of $ N_3 $ hyperrectangles with base points $ z_j \in {\mathbb{R}}^d$ and sides of length $ \hat{ h }_j \in {\mathbb{R}}^d$. Then we obtain $$\begin{aligned}
\int_{ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}} {\gamma}( h x, y-( a + h )) dy &= \sum_{j=0}^{ N_3-1 } \int_{ z_j + \hat{ h }_j \square} {\gamma}( h x, y-( a + h ) ) dy\\
&= \sum_{j=0}^{ N_3 -1} \hat{ h }_j ^d\int_{ \square} {\gamma}(- z_j+ a + h , \hat{ h }_j y - h x ) dy.\end{aligned}$$ All in all we thus have
$$\begin{aligned}
{\texttt{rad}}_k &\approx 2|D_i| { h }^{2d}\sum_{j=0,j \notin R_i}^{\operatorname{\mathbf{N}}_2^d-1} \int_{\square}\int_{ \square}\psi_0(x)\psi_{i}(x){\gamma}(\lambda e + h (e -E^{-\operatorname{\mathbf{N}}_2}(j)), h (y-x)) dydx \\
&+2|D_i| h ^{d} \sum_{j=0}^{ N_3-1 } \hat{ h }_j ^d\int_{\square}\int_{ \square}\psi_0(x)\psi_{i}(x){\gamma}(- z_j+ a + h , \hat{ h }_j y- h x) dydx.
\end{aligned}$$
This leaves room for discussion concerning the choice of an optimal coarsening rule. We use the following preliminary approach in our code: We decompose $ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}$ into $(3^d-1)$ $d$-dimensional hyperrectangles surrounding the box $\mathcal{B}$. Then we build a tensor grid by employing in each dimension the coarsening strategy $ v + i^qh_{min} $, where $v$ is a vertex of $\mathcal{B}$, $q \geq 1$ the coarsening parameter and $h_{min}$ a minimum grid size. By concatenating all arrays we obtain a triangulation $(z, \hat{ h })$ of $ B_{R}^{||\cdot||_\infty}( a + h ) \backslash \mathcal{B}$.
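In one dimension, the graded breakpoints produced by this strategy can be sketched as follows; the function name and the clipping at the truncated horizon are our own choices.

```python
import numpy as np

def coarsened_breakpoints(v, T, q=1.5, h_min=1e-2):
    """Graded 1d breakpoints v + i^q * h_min, i = 0, 1, 2, ..., clipped at the truncated
    horizon v + T; a sketch of the coarsening strategy described above."""
    pts, i = [v], 1
    while pts[-1] < v + T:
        pts.append(min(v + i ** q * h_min, v + T))
        i += 1
    return np.array(pts)

# cell widths grow like i^q * h_min, keeping the number of cells moderate even for large T
print(coarsened_breakpoints(v=1.0, T=2 ** 10).size)
```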
Numerical results
-----------------
The implementation has been carried out in Python and the examples were run on a HP Workstation Z240 MT J9C17ET with Intel Core i7-6700 - 4 x 3.40GHz. Since we started from an arbitrary dimension throughout the whole analysis, the codes for each dimension $d \in \left\{ 1,2,3\right\}$ share the same structure. Depending on the dimension, one only has to adapt the index sets ${\texttt{idx}}_s^i$, ${\texttt{idx}}_s^k$, $D_i$ and $D_i^c$, the implementation of the quadrature rules discussed above, the $2^d$ element basis functions and the plotting procedure. The rest can be implemented in a generic way. In addition to that, we framed the relevant representations for implementing the assembly of the first row. The implementation of the solving procedure only consists of passing Algorithm \[product\] to the CG method. Moreover, the codes are parallelized over 8 threads on the four Intel cores. Within the assembling process of the first row, we first compute the more challenging $(d+1)$ entries $M_k = {\texttt{sing}}_k + {\texttt{rad}}_k + {\texttt{dis}}_k$ for $k \in {\texttt{idx}}_s^k$. Here we parallelize the computations of the integrals in ${\texttt{rad}}_k$ over the base points $y_j$ for the box and $z_j$ for the coarsening strategy. For the remaining $\operatorname{\mathbf{L}}^d - (d+1)$ indices we have that $M_k = {\texttt{dis}}_k$ and we can simply parallelize the loop over $\left\{ 0\leq k < \operatorname{\mathbf{L}}^d \right\} \backslash {\texttt{idx}}_s^k$. As mentioned above, the solving process is parallelized via the parallel Fourier transform library pyFFTW.
In all examples we use the fractional kernel $${\gamma}(x,y) = \frac{c_{d,s}}{2||y-x||_2^{d+2s}}\mathcal{X}_{B_R(x)}(y)$$ for $R=T+\lambda$, where $T=2^{10}$, with coarsening parameter $q=1.5$, minimum grid size $h_{min}=10^{-2}$ and a parameter $\lambda > 2h$ for the box $\mathcal{B}$. The CG method stops once a sufficient decrease of the relative residual is reached, i.e., $||Ax_k-b||/||b|| < 10^{-12}$. We present numerical examples for $d \in \left\{ 1,2,3\right\}$. For each grid size $ h $ we report on the number of grid points (“dofs”) and the number of CG iterations (“cg its”) as well as the CPU time (“CPU solving”) needed for solving the discretized system. Furthermore we compute the energy error $||u_R^h - u_\infty||_{H^s({\Omega \cup \Omega_I})}$, where $u_\infty$ is a numerical surrogate taken to be the finite element solution on the finest grid, and the rate of convergence.
1d {#d .unnumbered}
--
For the 1d example we choose the following set of parameters:
- Domain: $a=-1$, $b=1$, $\Omega = [-1,1]$
- Fraction: $s=0.7$
- Source term: $f(x) \equiv 1$
- Parameter for box: $\lambda = 5$
- Gauss points: $n=7$ for unit interval $[0,1]$
The results are presented in Figure \[fig:1d\] and Table \[tab:1d\].
| $h$       | dofs   | cg its | energy error | rate | CPU solving [s] | CPU solving direct [s] |
|-----------|--------|--------|--------------|------|-----------------|------------------------|
| $2^{-9}$  | 1,023  | 201    | 3.15e-04     | 0.71 | 0.04            | 0.00                   |
| $2^{-10}$ | 2,047  | 328    | 1.86e-04     | 0.71 | 0.11            | 0.01                   |
| $2^{-11}$ | 4,095  | 535    | 1.08e-04     | 0.72 | 0.20            | 0.03                   |
| $2^{-12}$ | 8,191  | 932    | 6.13e-05     | 0.72 | 1.22            | 0.12                   |
| $2^{-13}$ | 16,383 | 1,519  | 3.34e-05     | 0.75 | 2.96            | 0.50                   |
| $2^{-14}$ | 32,767 | 2,475  | 1.70e-05     | 0.79 | 13.99           | 2.03                   |
| $2^{-17}$ | 65,535 | 13,755 | -            | -    | 540.71          | 143.22                 |

: Results for the 1d example.[]{data-label="tab:1d"}
2d {#d-1 .unnumbered}
--
For the 2d example we choose the following set of parameters:
- Domain: $a=(0,0)$, $b=(1,0.2)$, $\Omega = [0,1] \times [0,0.2]$
- Fraction: $s=0.4$
- Source term: $f(x) = x_0$
- Parameter for box: $\lambda = 1$
- Gauss points in each dimension: $n=6$, i.e., $36$ quadrature points for the unit square $[0,1]^2$
The results are presented in Figure \[fig:2d\] and Table \[tab:2d\].
| $h$       | dofs       | cg its | energy error | rate | CPU solving [s] |
|-----------|------------|--------|--------------|------|-----------------|
| $2^{-8}$  | 12,750     | 40     | 4.10e-05     | 0.61 | 0.07            |
| $2^{-9}$  | 51,611     | 54     | 2.49e-05     | 0.60 | 0.55            |
| $2^{-10}$ | 207,669    | 72     | 1.42e-05     | 0.62 | 2.33            |
| $2^{-11}$ | 835,176    | 96     | 7.17e-06     | 0.70 | 15.13           |
| $2^{-12}$ | 3,349,710  | 128    | 2.17e-06     | 0.75 | 81.57           |
| $2^{-13}$ | 13,408,667 | 169    | -            | -    | 485.13          |

: Results for the 2d example.[]{data-label="tab:2d"}
3d {#d-2 .unnumbered}
--
For the 3d example we choose the following set of parameters:
- Domain: $a=(0,0,0)$, $b=(1,1,1)$, $\Omega = [0,1]^3$
- Fraction: $s= 0.4$
- Source term: $f(x) \equiv 1$
- Parameter for box: $\lambda = 0.5$
- Gauss points in each dimension: $n=4$, i.e., $64$ quadrature points for the unit cube $[0,1]^3$
The results are presented in Figure \[fig:3d\] and Table \[tab:3d\].
[ |c | r |c| c |c |c|]{} $h$ & dofs & cg its & energy error &rate &CPU solving \[min\]\
$2^{-5}$ & 29,791 &21 & 1.81e-04 &0.77 &0.01\
$2^{-6}$ & 250,047 & 23 & 3.08e-05 & 0.71 &0.03\
$2^{-7}$ & 2,048,383 & 30 & 1.55e-05 & 0.73 &0.53\
$2^{-8}$ & 16,581,375 & 41 & 5.90e-06 & 0.75 &4.73\
$2^{-9}$ & 133,432,831 & 55 & - &- &36.28\
Discussion
----------
Concerning the number of CG iterations we want to point out several observations. First, we generally observed for various parameters that the number of CG iterations increases as we emphasize the singularity, i.e., as $s \to 1$. This coincides with Theorem 6.3 in [@wellposedness], stating that for shape-regular and quasi-uniform meshes the following estimate for the condition number $cond(A)$ holds: $$cond(A)\leq ch^{-2s},$$ where $c>0$ is a generic constant. A second general observation throughout various experiments was that the ratio $\tfrac{k{(h/2)}}{k(h)}$, where $k(h)$ denotes the number of CG iterations needed for grid size $h$, depends only on $s$. More precisely, we observe that $\tfrac{k{(h/2)}}{k(h)} = c_s $, where $c_s$ is a constant independent of the dimension and the domain. For $s=0.4$, for instance, we observe that $k(h/2) \approx 1.33 k(h)$, whereas for $s=0.7$ we find that $k(h/2) \approx 1.63 k(h)$. This is also reflected in the results above. However, since we consider a very “structured” case, where the domains are hyperrectangles triangulated with regular grids, we want to leave it as an observation. Finally, a third and last observation in this regard is the surprisingly low number of CG iterations needed for the 3d case. Running the code for different dimensions, but for a fixed comparable parameter setting, where we set the domain to be the unit hypercube $[0,1]^d$ and the source term to be $f \equiv 1$, we find that for problems of the same size, i.e., with the same number of degrees of freedom, the number of CG iterations decreases as the dimension increases. Specifically, let $s=0.4$, $\lambda = 0.5$ and $T=2^{10}$. Then in Table \[tab:compare\_dofs\] we find the results where the size of the discretized system is fixed to $dofs = \operatorname{\mathbf{L}}^d \approx 250000$.
[ |c | c |c|]{} $d$ & h & cg its\
$1$ & $1./250,048 \approx 2^{-18} $ &615\
$2$ & $2^{-9}$ & 62\
$3$ & $2^{-6} $ &23\
This might be due to the additional Toeplitz levels affecting the condition number of the stiffness matrix. In contrast to that, for a fixed grid size the number of CG iterations seems to vary much less across the different dimensions. In Table \[tab:compare\] the reader finds the results for a fixed grid size $h= 2^{-6}$.
[ |c | c |c|]{} $d$ & dofs & cg its\
$1$ & 63 &17\
$2$ & 3,969 & 24\
$3$ & 250,047 &23\
However, this would have to be analyzed more rigorously, and a thorough investigation is not the intention at this point.
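We add one heuristic that is consistent with the observed constants $c_s$; we state it only as a plausible reading, not as an analysis. Combining the classical CG iteration bound with the condition number estimate above gives
$$k(h) \;\sim\; \sqrt{cond(A)} \;\lesssim\; c\, h^{-s} \qquad\Longrightarrow\qquad \frac{k(h/2)}{k(h)} \;\approx\; 2^{s},$$
and indeed $2^{0.4} \approx 1.32$ and $2^{0.7} \approx 1.62$, close to the observed factors $1.33$ and $1.63$.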
Besides this, we also want to mention how the assembling procedure is affected by the choice of the fraction $s$. For $s \to 1$ the treatment of the singularity becomes a more delicate issue and one has to increase the number of quadrature points $n$ to prevent instabilities. However, since the kernel decays more rapidly as $y\to \infty$, these costs can be balanced by choosing a smaller $\lambda$. In contrast to that, as $s \to 0$, a moderate number of quadrature points suffices, but $\lambda$ has to be chosen larger in order to prevent a degenerate solution.
Finally, we also note that the library pyFFTW needs a lot of memory for building the FFT object, so that we had to move the 3d computations for the finest grid to a machine with more RAM. However, one can circumvent this problem by using the sequential FFT implementation available in the NumPy library.
Concluding remarks
==================
We presented a finite element implementation for the steady-state nonlocal Dirichlet problem with homogeneous volume constraints on an arbitrary $d$-dimensional hyperrectangle and for translation invariant kernel functions. We use a continuous Galerkin method with multilinear element basis functions and theoretically back up our numerics with the framework for nonlocal diffusion developed by Gunzburger et al. [@wellposedness]. The key result showing the multilevel Toeplitz structure of the stiffness matrix is proven for arbitrary dimension and paves the way for the first 3d implementations in this area. Furthermore, we comprehensively analyze the entries of the stiffness matrix and derive representations which can be efficiently implemented. Since throughout the whole analysis we start from an arbitrary dimension, we can almost generically implement the code; we mainly have to adapt the implementation of the quadrature rules, the element basis functions as well as the index sets.
An important extension of this work is to incorporate the case where the interaction horizon is smaller than the diameter of the domain. This complicates the integration and with it the assembling procedure, but the stiffness matrix is no longer fully populated and its structure still remains multilevel Toeplitz. With that in place, one can model the transition to local diffusion and access a greater range of kernels. The resulting code for 2d would then present a fast and efficient implementation, which could be used for example in image processing. Moreover, since certain kernel functions allow for solutions with jump discontinuities, discontinuous Galerkin methods are also conforming [@wellposedness]. In this case, one has to carefully analyze the structure of the resulting stiffness matrix, which might differ from a multilevel Toeplitz one. Another aspect concerning the solving procedure, which is not examined above, is that of an efficient preconditioner for the discretized Galerkin system. A multigrid method might be a reasonable candidate due to the simple structure of the grid (see also [@chen2016convergence; @PANG2012693]). In general, a lot of effort has been put into the research on preconditioning structured matrices (see e.g., [@olshans; @serra2007toeplitz; @chan1988optimal; @Capizzano2000AnyCP]). Since we observe a moderate number of CG iterations in our numerical examples, a preconditioner has not been implemented yet.
The main drawback of our approach lies in the fact that the code is strictly limited to regular grids and is thus not applicable to more complicated domains. It is crucial that each element has the same geometry in order to achieve the multilevel Toeplitz structure of the stiffness matrix, meaning that only rectangular domains are feasible.
However, one could think about a coupling strategy which allows one to decompose a general domain into rectangular parts and a remainder. This is beyond the scope of this paper and left to future work. In contrast to that, the restriction to translation invariant kernels appears to be rather weak, since many kernels treated in the literature are even radial.
Acknowledgement {#acknowledgement .unnumbered}
===============
The first author has been supported by the German Research Foundation (DFG) within the Research Training Group 2126: ‘Algorithmic Optimization’.
[^1]: Universitaet Trier, D-54286 Trier, Germany, Email: vollmann@uni-trier.de, volker.schulz@uni-trier.de
---
author:
- |
$^{1}$Romeo Orsolino, $^{1}$Michele Focchi, $^{1}$Carlos Mastalli,\
$^{2}$Hongkai Dai, $^{1}$Darwin G. Caldwell and $^{1}$Claudio Semini\
$^{1}$Department of Advanced Robotics, Istituto Italiano di Tecnologia (IIT), Genova, Italy\
$^{2}$Toyota Research Institute (TRI), Los Altos, USA
bibliography:
- 'references/refs\_abstract.bib'
title: '**The Actuation-consistent Wrench Polytope (AWP) and the Feasible Wrench Polytope (FWP)**'
---
Motivation
==========
The motivation of our current research is to devise motion planners for legged locomotion that are able to exploit the robot’s actuation capabilities. This means minimizing joint torques when possible, or propelling as hard as admissible when required. For this reason we define two new 6-dimensional bounded polytopes that we name Actuation-consistent Wrench Polytope (AWP) and Feasible Wrench Polytope (FWP).
Introduction
============
We define the former polytope (the AWP) as the set of all wrenches that a robot can generate on its own center of mass (CoM). This considers the contact forces that the robot can generate given its current configuration and actuation capabilities. Unlike the Contact Wrench Cone (CWC) [@Hirukawa2006; @Dai2016], the AWP is a bounded polytope in ${\rm I\!R}^6$.\
The intersection of the AWP with the CWC results in a second convex bounded polytope in ${\rm I\!R}^6$ that we define as the Feasible Wrench Polytope (FWP). This contains all the feasible wrenches that can be realized on the robot’s CoM, given both the *internal* properties of the robot (its posture and its actuation limits) and the *external* properties coming from the contact with the environment (unilaterality, normal direction, friction coefficient).
![3D force polytopes (blue) and linear friction cones (pink). The intersection of these two sets forms 3D *bounded friction polytopes* that contain all the feasible contact forces that can be generated at each foot.[]{data-label="fig:fws_2"}](figs/fws_11 "fig:") ![3D force polytopes (blue) and linear friction cones (pink). The intersection of these two sets forms 3D *bounded friction polytopes* that contain all the feasible contact forces that can be generated at each foot.[]{data-label="fig:fws_2"}](figs/fws_10 "fig:")
![Actuation Wrench Polytope (AWP) (left) and Contact Wrench Cone (CWC) (right) in the case of a planar model. The FWP is the intersection of the AWP and the CWC.[]{data-label="fig:tls_2"}](figs/awp_3D "fig:") ![Actuation Wrench Polytope (AWP) (left) and Contact Wrench Cone (CWC) (right) in the case of a planar model. The FWP is the intersection of the AWP and the CWC.[]{data-label="fig:tls_2"}](figs/cwc_3D "fig:")
The Actuation Wrench Polytope (AWP)
===================================
The AWP is a bounded convex polytope in ${\rm I\!R}^6$ that contains all the wrenches that a robot can apply given its torque limits and its current configuration.\
Let us consider a tree-structured robot with $n_b$ branches: the first step for computing the AWP is to compute the upper and lower bounds of the contact forces $\mathbf{f}^{max} \in {\rm I\!R}^{3}$ that each end effector of the robot can realize on the environment. This can be performed using the following relationship: $$\label{eq:1}
\mathbf{f}^{\max}_i = \mathbf{J}_i^{T^{\#}} \big(\mathbf{M}_{bi}^T
\mathbf{\ddot{q}}_b + \mathbf{M}_i \mathbf{\ddot{q}}_i + \mathbf{c}_i + \mathbf{g}_i -
\mathbf{\tau}_i^{\max}\big),$$ where $\mathbf{\tau}^{max}_i \in {\rm I\!R}^{n}$ is a vector containing the torques generated by the $n$ actuated joints $\mathbf{q}_i$ of the robotic limb and $\mathbf{q}_b$ represents the degrees of freedom of the base. $\mathbf{g}_i$ accounts for the gravity acting on the limbs and $\mathbf{c}_i$ is the centrifugal/Coriolis term. $\mathbf{J}^{\#}(q)$ is the Moore-Penrose pseudoinverse of the transposed Jacobian matrix of the leg: $\mathbf{J}^{\#} = (\mathbf{J}(\mathbf{q}) \mathbf{J}(\mathbf{q})^T)^{-1} \mathbf{J}(\mathbf{q})$.\
Since each element of $\mathbf{\tau}^{max}$ can take on two values (either the maximum or the minimum torque), there are $2^n$ combinations that give rise to $2^n$ different values of $\mathbf{f}^{max}_i$ with $i = 1,2,\dots, 2^n$. Each $\mathbf{f}^{max}_i \in {\rm I\!R}^3$ is a vertex of a 3D force polytope [@Chiacchio1997] that represents the set of all possible contact forces that can be generated by the limb in that configuration. Fig. \[fig:fws\_2\] shows such 3D force polytopes arising from the torque limits (blue) together with the linear friction cones (pink) for a quadruped robot.\
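A minimal NumPy sketch of this first step (vertex enumeration of the 3D force polytope from Eq. (1)) could look as follows; the dynamic terms and the contact Jacobian are assumed to be given for the current configuration, and all names are illustrative rather than part of any released code.

```python
import numpy as np
from itertools import product

def force_polytope_vertices(J, tau_min, tau_max, h_dyn):
    """Vertices f_i of the 3D force polytope of one limb.
    J        : (3, n) contact Jacobian of the limb
    tau_min, tau_max : (n,) joint torque limits
    h_dyn    : (n,) lumped dynamic terms (M_bi^T qdd_b + M_i qdd_i + c_i + g_i)"""
    JT_pinv = np.linalg.pinv(J.T)                    # Moore-Penrose pseudoinverse of J^T
    vertices = []
    for combo in product(*zip(tau_min, tau_max)):    # 2^n torque-limit combinations
        tau = np.array(combo)
        vertices.append(JT_pinv @ (h_dyn - tau))     # Eq. (1) for this torque corner
    return np.array(vertices)                        # shape (2^n, 3)
```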
The second step for the computation of the AWP is to add three more dimensions to each of these 3D points $\mathbf{f}^{max}_i$: $$\mathbf{w}_i = \left(
\begin{bmatrix}
\mathbf{f}^{max}_i \\
\mathbf{p}_k \times \mathbf{f}^{max}_i
\end{bmatrix} \right) \in {\rm I\!R}^6 \hspace{.5cm} with: i = 1,2,\dots, 2^n$$ with $\mathbf{p}_k \in {\rm I\!R}^3$ being the position of the $k^{th}$ end-effector.\
The new dimensions add the corresponding torque that the robot can generate on its own CoM.\
In this way it is possible to obtain a 6D bounded polytope for each of the $n_b$ limbs of the tree-structured robot: $\mathcal{W}_{k} = ConvHull(\mathbf{w}_1^k,\mathbf{w}_2^k,\dots,\mathbf{w}_{2^n}^k)$ with $k = 1,2,\dots n_b$.\
The final step to compute the $AWP$ is then to perform the Minkowski sum of the 6D wrench polytopes $\mathcal{W}_k$ of each limb: $$AWP = \mathcal{W}_1 \bigoplus \mathcal{W}_2 \bigoplus \dots \mathcal{W}_{n_b}$$ Fig. \[fig:tls\_2\] shows the AWP and the CWC in the case of a planar model, whose full wrench space is described by the variables $F_x, F_z$ and $\tau_y$ and can thus be represented in 3D.
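The remaining two steps admit an equally compact sketch, again with illustrative names only; it relies on SciPy's ConvexHull and a brute-force vertex-description Minkowski sum, which is practical only for a modest number of vertices and is not meant as an efficient implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def limb_wrench_polytope(force_vertices, p_k):
    """Lift the 3D force vertices of the k-th limb to 6D wrenches w = [f; p_k x f]."""
    wrenches = np.array([np.hstack([f, np.cross(p_k, f)]) for f in force_vertices])
    return wrenches[ConvexHull(wrenches).vertices]      # keep only the extreme points

def minkowski_sum(V1, V2):
    """Vertex-description Minkowski sum of two polytopes (brute force)."""
    sums = np.array([v1 + v2 for v1 in V1 for v2 in V2])
    return sums[ConvexHull(sums).vertices]

def awp_vertices(limb_polytopes):
    """AWP = W_1 (+) W_2 (+) ... (+) W_nb, returned as a list of 6D vertices."""
    V = limb_polytopes[0]
    for W in limb_polytopes[1:]:
        V = minkowski_sum(V, W)
    return V
```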
The Feasible Wrench Polytope (FWP)
==================================
The goal of the FWP is to remove from the AWP all the contact forces that are not feasible because of the constraints given by the environment. These constraints are the fact that the leg can only push and not pull (unilaterality) and the fact that, to avoid slippage, the contact force should stay within the friction cone. The computation of these constraints requires the knowledge of the surface normal and of the friction coefficient at each contact point.\
The FWP can be obtained by intersecting the AWP with the CWC. This is computationally expensive because of the high cardinality of the two sets (high number of half-spaces). For this reason we analyze possible approaches (e.g., exploiting the $V$-description, the $H$-description or the double description [@Caron2015]) that allow the use of the AWP and the FWP for *online* motion planning.
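One simple way to realize this intersection numerically, once both sets are available in half-space form, is sketched below. It assumes the AWP vertices from the previous sketch, a CWC already given as stacked half-spaces, and a strictly interior wrench `w0` of the intersection (for a robot in static equilibrium the gravity-compensating wrench is a natural candidate); it is an illustration, not the efficient online procedure discussed above.

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

def fwp_vertices(awp_verts, cwc_halfspaces, w0):
    """FWP = AWP ∩ CWC.
    awp_verts      : (m, 6) vertex description of the AWP
    cwc_halfspaces : (p, 7) rows [A | b] with A w + b <= 0 describing the CWC
    w0             : (6,) strictly interior point of the intersection"""
    awp_halfspaces = ConvexHull(awp_verts).equations      # rows [A | b] with A w + b <= 0
    halfspaces = np.vstack([awp_halfspaces, cwc_halfspaces])
    hs = HalfspaceIntersection(halfspaces, w0)
    return hs.intersections                               # vertices of the FWP
```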
Conclusion
==========
The purpose of the AWP and of the FWP is to have a description as precise as possible of the admissible states that a legged robot can reach. The FWP can be used to devise controllers that exploit the full actuation range of a robot in order to, for instance, reject external disturbances on terrains of arbitrary roughness.\
The FWP can also be exploited to design motion planners that minimize the actuation torques of each joint while still using a simplified centroidal model of the robot.\
Besides that, the concept of the AWP and FWP, described so far for the specific case of legged robots, can be extended to other types of platforms that share a similar complexity of interaction with the environment (such as robotic manipulators for grasping).\
Our current research focuses on assessing the performance of *online* motion planners that only use the vertex-description of the 6D sets [@Delos2015].
---
abstract: 'Kernel methods are used extensively in classical machine learning, especially in pattern recognition. Here we propose a kernel-based quantum machine learning algorithm which can be implemented on a near-term, intermediate-scale quantum device. Our method estimates classically intractable kernel functions, using a restricted quantum model known as “deterministic quantum computing with one qubit”. Our work provides a framework for studying the role of quantum correlations other than quantum entanglement, for machine learning applications.'
author:
- Roohollah Ghobadi
- 'Jaspreet S. Oberoi'
- Ehsan Zahedinejhad
title: The Power of One Qubit in Machine Learning
---
[^1]
[*Introduction.*]{} Noisy, intermediate-scale quantum (NISQ) devices, consisting of up to a few hundred qubits, have been presented as a new frontier for achieving “quantum supremacy”, that is, surpassing the performance of classical computing devices [@preskill]. This is plausible due to quantum advantages offered by non-universal quantum computational models such as Boson sampling [@aaronson], instantaneous quantum polynomial-time (IQP) sampling [@bremner2016average], and deterministic quantum computing with one qubit (DQC1) [@Knill]. Recent experiments seeking quantum supremacy have involved sampling from the output distribution of such non-universal models.
The potential applications of NISQ devices are a subject of extensive investigation in various fields, including quantum chemistry [@lloyd1996universal; @aspuru2005simulated; @lanyon2010towards; @KMT+17; @matsuura2018vanqver], quantum optimization [@FGG+14], and machine learning. Machine learning could benefit from NISQ devices due to the favourable exponential scaling of the Hilbert space and the potential capability of quantum correlations present in these devices to unveil hidden correlations in big data [@Biamonte; @perdomo2018opportunities].
Proposals for using NISQ devices for machine learning include quantum Boltzmann machines [@AAR+18], quantum clustering algorithms [@OMA+17], and quantum neural networks [@FN18]. Very recently, a kernel-based supervised machine learning method—one based on a similarity measure between data points—has been proposed as an alternative route toward achieving a quantum advantage [@schuld2019quantum; @havlivcek2019supervised]. In the mentioned approach, a quantum processing unit is used to estimate a computationally expensive kernel function which can then be used as an input to a classical machine learning algorithm.
The DQC1 model is a non-universal quantum computing model which provides an exponential speedup in estimating the normalized trace of a unitary matrix, independent of the size of the matrix, over classical computing methods [@Knill]. The quantum speedup achieved by the DQC1 model is attributed to a non-classical correlation known as quantum discord [@Datta2; @Datta], which is robust against noise. It has also been shown that the DQC1 model cannot be efficiently simulated using classical devices unless the polynomial-time hierarchy collapses to the second level [@fujii2018impossibility].
In this paper, we propose a QML algorithm that can be implemented on a NISQ device. Our scheme utilizes and redirects the computational advantage offered by the DQC1 model for estimating the kernels that are classically intractable to compute. We also provide a necessary condition for a kernel to be classically tractable. We then present simulation results for two synthesized datasets to demonstrate the efficacy of our method.
[*The kernel method.*]{} To set the stage for our proposal, we introduce support vector machines (SVMs) and the kernel method in the context of supervised machine learning. Let us assume a set of training ($X_\text{train}$) and test ($X_\text{test}$) datasets, where $X_\text{train}, X_\text{test}\subset{X}$. Each data point $\vec{x}\in{X}$ is assigned a label through a mapping $s:X\rightarrow{\{+1,-1\}}$. Classifying the data involves using the training (i.e., labelled) data $X_\text{train}\rightarrow\{+1,-1 \}$ to find a classifier $f$ which can, with high probability, predict the correct label of the unseen (i.e., test) data points $X_\text{test}$ (i.e., $f:X_\text{test}\rightarrow\{+1,-1\}$).
![(a) A support vector machine (SVM) with a two-dimensional linearly separable dataset. Circles in green and orange represent samples of two classes in the dataset. The support vectors (on the dotted red line) are those samples from each class that are closest to the decision boundary, represented by the blue line, with maximized margin. The variable $b$ is the offset and $\vec{w}$ is a normal vector to the decision boundary. (b) The kernel method: two non-linearly separable classes (left) can become linearly separable when mapped to a higher-dimensional space (right). []{data-label="kernel"}](svm-kernel.pdf){width="46.00000%"}
For the simple case of a linearly separable dataset, as shown in Fig. \[kernel\](a), one can find a separating hyperplane $\vec{w}\cdot\vec{x}+b=0$, where $\vec{w}$ and $b$ are the hyperplane normal vector and offset, respectively, both of which can be determined using the training data.
The classification problem can be reduced to maximizing the margin (which is proportional to $||\vec{w}||^{-2}$) between the hyperplane and the nearest data points, known as support vectors, subject to the condition that every training point is classified correctly with a margin of at least one (see Fig. \[kernel\](a)). We can express the classifier in terms of Lagrange multipliers $\alpha_{i}\in\mathbb{R}$ as a linear combination of inner products with the training points [@hofmann2008kernel]. The classifier function depends on the inner product of the data points, which is the basis of the kernel method and the generalization of SVMs to nonlinear classifiers. To generalize the SVMs, one can define the feature map that transforms the original data points into vectors in a higher-dimensional space, as shown in Fig. \[kernel\](b). Formally, we define the feature map $\Phi:X\rightarrow\mathcal{H}$, where $\mathcal{H}$ is a Hilbert space. The kernel function, a similarity measure between data points $\vec{x},\vec{x}'\in X$, can be defined as $K(\vec{x},\vec{x}')=\braket{\Phi(\vec{x})|\Phi(\vec{x}')}$, where the bra–ket notation shows the inner product on the Hilbert space $\mathcal{H}$.
The link between the kernel function and machine learning has been established by the “representer theorem” [@hofmann2008kernel], which guarantees that for a positive semi-definite kernel [^2] the classifier can be expressed as $$f(\vec{x})=\sum_{i}\alpha_{i}K(\vec{x},\vec{x}_{i}),
\label{representer}$$ where $\vec{x}\in{X}_{\text{test}}$, and $\vec{x}_i\in{X}_{\text{train}}$ are support vectors.
The kernel method can be extended to the quantum domain [@havlivcek2019supervised; @schuld2019quantum]. To do so, one can define the feature map $|\Phi(\vec{x})\rangle=U_{\Phi(\vec{x})}|0\rangle^{\otimes{n}}$, where $\vec{x}$ is encoded in the quantum circuit $U_{\Phi(\vec{x})}$, and $|0\rangle^{\otimes{n}}$ is the initial state of $n$ qubits. The kernel function is then defined according to $K(\vec{x},\vec{x}')=\braket{\Phi(\vec{x})|\Phi(\vec{x}')}$. So long as it is possible to efficiently estimate a kernel using classical means, one cannot expect to attain a quantum advantage [@schuld2019quantum; @havlivcek2019supervised]. In other words, a necessary condition for achieving a quantum advantage in the kernel method is to realize a kernel function that is highly inefficient or intractable for classical devices to estimate [@havlivcek2019supervised; @van2006quantum].
[*DQC1.*]{} The deterministic quantum computing with one qubit (DQC1) model is a non-universal quantum computing model [@Knill] which provides an exponential speedup in estimating the normalized trace of a unitary matrix, independent of the size of the matrix, over classical computing resources [@Datta; @fujii2018impossibility]. The model defies the common notion that achieving a quantum advantage in computation requires pure quantum states and quantum entanglement as resources. The DQC1 circuit is depicted in Fig. 2(a), where the initial state $|0\rangle\langle0|\otimes\rho_{n}$ evolves under the unitary interaction $$U=|0\rangle\langle0|\otimes \mathbb{1}_{n}+|1\rangle\langle1|\otimes U_{n},
\label{unitary}$$ where $\mathbb{1}_n$ is the $N\times{N}$ ($N=2^n$) identity matrix and $U_n$ is a unitary operator acting on the $n$-qubit register. The final state $\rho_f$ of the control qubit, as in [@Knill], becomes $$\rho_{f}=\frac{1}{2}\begin{pmatrix}
1&\text{Tr}(\rho_{n}U_{n})\\
\text{Tr}(\rho_{n}U_{n}^{\dagger})&1
\end{pmatrix},
\label{rho1}$$ where $\text{Tr}(\cdot)$ refers to the trace of the matrix. In the special case where $\rho_n=\frac{\mathbb{1}_n}{N}$, the off-diagonal terms in Eq. (\[rho1\]) become $\frac{1}{N}\text{Tr}(U_{n})$, suggesting that one can estimate the trace of an arbitrarily large matrix $U_{n}$ by measuring the decoherence of the control qubit. By measuring the Pauli operators of the control qubit, one obtains $\langle\sigma_{x}\rangle=\frac{1}{N}\text{Re}[\text{Tr}(U_{n})]$ and $\langle\sigma_{y}\rangle=\frac{1}{N}\text{Im}[\text{Tr}(U_{n})]$. The number of measurements required to estimate the trace within a distance $\epsilon$ with accuracy $\delta$ is given by $O(\log(1/\delta)/\epsilon^{2})$, independent of the number of register qubits [@Knill]. Note that one can obtain the same ($\epsilon$, $\delta$) scaling for a special case where $U_{n}$ is a real, positive semi-definite matrix using a classical randomized algorithm [@avron2011randomized]. Importantly, replacing the pure control qubit with $\frac{\mathbb{1}_{1}+\beta\sigma_{z}}{2}$ (where $\beta\in\mathbb{R}$, and $\sigma_z$ denotes the Pauli-Z operator) adds an overhead of $\beta^{-2}$, suggesting that the quantum advantage of the model is robust against imperfect preparations of the control qubit.
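As a small sanity check, the normalized-trace readout above can be reproduced by brute-force simulation for a small register; the qubit count and the random unitary below are arbitrary choices, and this is of course not an efficient implementation of DQC1.

```python
import numpy as np
from scipy.stats import unitary_group

n = 3                                    # register qubits
N = 2 ** n
U_n = unitary_group.rvs(N)               # random register unitary

# control qubit after the Hadamard (|+><+|), register in the maximally mixed state
rho = np.kron(0.5 * np.ones((2, 2)), np.eye(N) / N)
# controlled-U_n:  |0><0| (x) 1  +  |1><1| (x) U_n
C_U = np.block([[np.eye(N), np.zeros((N, N))], [np.zeros((N, N)), U_n]])
rho_f = C_U @ rho @ C_U.conj().T

# reduced state of the control qubit and its Pauli expectation values
rho_c = np.trace(rho_f.reshape(2, N, 2, N), axis1=1, axis2=3)
sx = np.real(np.trace(rho_c @ np.array([[0, 1], [1, 0]])))
sy = np.real(np.trace(rho_c @ np.array([[0, -1j], [1j, 0]])))

assert np.isclose(sx + 1j * sy, np.trace(U_n) / N)   # recovers Tr(U_n)/N
```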
It is worth noting that the DQC1 model has been experimentally realized for optical [@lanyon2008experimental], nuclear magnetic resonance [@passante2011experimental], and superconducting [@Wang] qubits. In addition, a cold-atom-based scheme using the Rydberg interaction with 100 qubits has been proposed recently for DQC1 [@Mansell].
![(a) The DQC1 circuit. The control qubit is initialized in the state $\rho{_a}=\ket{0}\bra{0}$, and the $n$-qubit register (indicated by a “/”) is initialized in the state $\rho{_n}$. $H$ denotes the Hadamard gate and $U_n$ is a unitary operator acting on the register. (b) Our implementation of a circuit with decomposition $U_{n}=\mathcal{U}^{r}(\vec{x})\,\mathcal{U}^{r^\dagger}(\vec{x}')$, where $\mathcal{U}^{r}$ is the encoding circuit with a depth of $r$. (c) The circuit structure of the unitary operator $\mathcal{U}$ adapted from [@havlivcek2019supervised] to construct the kernel function for the two samples $\{\vec{x},\vec{x}'\}\in{X}$. We have chosen $r=3$ for our simulation. Here $\phi(\vec{x})$ is the feature encoding (defined in the text).[]{data-label="DQC1"}](circuit_processed){width="48.00000%"}
[*Method.*]{} Here, we explain how we employ the DQC1 circuit to construct the kernel function. We consider a decomposition of $U_{n}$ in Eq. (\[unitary\]) as $U_{n}=\mathcal{U}^{r}(\vec{x})\mathcal{U}^{r^\dagger}(\vec{x}')$, where $\mathcal{U}^{r}(\vec{x})$ and $\mathcal{U}^{r^\dagger}(\vec{x}')$ represent the encoding of two data points $\vec{x}$ and $\vec{x}'$, respectively, and $r$ is the depth of the quantum circuit. We define the kernel function as
$$K(\vec{x},\vec{x}')=\text{Tr}(\rho_{n}\mathcal{U}^{r}(\vec{x})\mathcal{U}^{r^\dagger}(\vec{x}')).
\label{kernel2}$$
Using and , we obtain $$\rho_{f}=\frac{1}{2}\begin{pmatrix}
1&K(\vec{x},\vec{x}')\\
K^{*}(\vec{x},\vec{x}')&1
\end{pmatrix},
\label{rho2}$$ which is the main result of this work. Once the kernel has been obtained, one can use it in any kernel-based machine learning algorithm.
The flexibility in choosing $\rho_{n}$ and $\mathcal{U}^{r}$ in Eq. (\[kernel2\]) allows one to adapt this method to cater to different kernels, depending on the dataset. Our scheme can be applied to both discrete and continuous variable systems [@lau2017quantum; @liu2016power]. For example, using $\rho_{n}=|0\rangle\langle 0|^{\otimes n}$ and $\mathcal{U}^{r}(\vec{x})=D(\vec{x})$ in Eq. (\[kernel2\]) with $D$ as the displacement operator [@scully1999quantum], one obtains the well-known, shift-invariant radial basis function (RBF) kernel $K(\vec{x},\vec{x}')=e^{-|\vec{x}-\vec{x}'|^{2}}$ (see also [@chatterjee2016generalized]). As another example, for $\rho_{n}=\frac{\mathbb{1}_n}{N}$, the resulting kernel is proportional to $\delta(\vec{x}-\vec{x}')$, where $\delta$ is the Dirac delta function.
Please note that shift-invariant kernels, such as the RBF kernel, can be efficiently estimated classically. To show this, one can use Bochner’s theorem to write a shift-invariant kernel as the Fourier transform of a probability distribution $p(\omega)$ [@rudin2017fourier], that is,
$$K(\vec{x}-\vec{x}')=\int d\omega p(\omega)e^{i\omega.(\vec{x}-\vec{x}')}.
\label{kshift}$$
Since $|e^{i\omega.(\vec{x}-\vec{x}')}|^{2} = 1$, Hoeffding’s inequality guarantees an efficient estimation of Eq. (\[kshift\]) with a maximum error of $\epsilon$ by drawing $O(\epsilon^{-2})$ samples from $p(\omega)$ (see also [@rahimi2008random]). This argument applies to rotationally invariant kernels as well [@rotation].
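The sampling argument above is essentially the random Fourier features construction. A minimal sketch for the Gaussian kernel $K(\vec{x}-\vec{x}')=e^{-|\vec{x}-\vec{x}'|^{2}}$, whose spectral measure $p(\omega)$ is a Gaussian with covariance $2\mathbb{1}$, could read as follows (the sample size and test points are arbitrary choices):

```python
import numpy as np

def rff_kernel_estimate(x, xp, n_samples=10000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of K(x, x') = exp(-||x - x'||^2) via Bochner's theorem.
    The spectral density p(omega) is N(0, 2*I); O(eps^-2) samples give error ~ eps."""
    d = len(x)
    omega = rng.normal(scale=np.sqrt(2.0), size=(n_samples, d))   # omega ~ N(0, 2I)
    # the kernel is real, so E[exp(i omega.(x - x'))] = E[cos(omega.(x - x'))]
    return np.mean(np.cos(omega @ (np.asarray(x) - np.asarray(xp))))

x, xp = np.array([0.3, -0.2]), np.array([0.1, 0.5])
print(rff_kernel_estimate(x, xp), np.exp(-np.sum((x - xp) ** 2)))
```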
*Simulation.* We now provide a proof-of-principle example, in which a particular DQC1 quantum circuit performs the classification task on two datasets. In [@havlivcek2019supervised], a quantum circuit is proposed, which has been conjectured to lead to a kernel that is intractable for a classical device (see also [@van2006quantum]). We consider a circuit that has the same feature map as [@havlivcek2019supervised], $\mathcal{U}^{r}(\vec{x})=\big(\mathcal{U}_{\phi(\vec{x})}H^{\otimes n}\big)^{r}$, where $H$ denotes the Hadamard gate, and $\mathcal{U}_{\phi(\vec{x})}=\exp\big(i\sum_{S\subseteq\{1,2\}}\phi_{S}(\vec{x})\prod_{k\in S}\sigma_{z}^{(k)}\big)$, with $\phi_{i}(\vec{x})=x_{i}$ for $i\in\{1,2\}$ and $\phi_{\{1,2\}}(\vec{x})=(\pi-x_{1})(\pi-x_{2})$. The requirement for obtaining a kernel that is non-translationally invariant imposes a lower bound on the depth of the circuit. For example, in the case of $r=1$, from Eq. (\[kernel2\]) we obtain $K(\vec{x}-\vec{x}')=\text{Tr}(\rho_{n}\mathcal{U}_{\phi(\vec{x}-\vec{x}')})$ and, therefore, the resulting kernel is classically simulatable (see also [@havlivcek2019supervised]). For this reason, in our simulation, we use $r=3$ to ensure that the kernel is not shift invariant.
We run our experiments on two two-dimensional datasets with binary labels. The “make\_moons” and “make\_circles” methods in the “scikit-learn” datasets module are used to generate these datasets. We consider three levels of noise to generate three datasets for both the make\_moons and make\_circles methods. For the moons dataset, we use the noise values $\zeta=0.0$, $0.1$, $0.15$ (see Fig. \[fig:moon\_dataset\](a)), and for the circles dataset, we use $\zeta=0.0$, $0.05$, $0.1$ (see Fig. \[fig:circle\_dataset\](a)). Consistently, across all six datasets, a total of $2000$ points are generated and each dataset is split into $1600$ training and $400$ test samples.
We run the quantum circuit in Fig. \[DQC1\](a)–(c) for each pair of training data and estimate the kernel $K(\vec{x},\vec{x}')$ by directly calculating the trace of the $U_n$ operator in Fig. \[DQC1\](a). We then use the absolute value of the resultant kernel to find the support vectors for the classifier. Finally, we use the classifier to predict the labels for the test data.
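Schematically, the classical part of this pipeline (dataset generation and an SVM on a precomputed kernel matrix) looks as follows. The helper `dqc1_kernel` stands for the quantum (or simulated) kernel evaluation of Eq. (4) and is only a placeholder here; the sample sizes are reduced to keep the sketch fast, whereas the experiments above use $2000$ samples split into $1600$/$400$.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def dqc1_kernel(a, b):
    # placeholder for the quantum kernel Tr(rho_n U^r(a) U^r(b)^dagger);
    # swap in the DQC1 estimate here
    return np.exp(-np.sum((a - b) ** 2))

def gram(A, B):
    """Gram matrix of |K(x, x')| between the rows of A and B."""
    return np.array([[abs(dqc1_kernel(a, b)) for b in B] for a in A])

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=160, test_size=40, random_state=0)

clf = SVC(kernel="precomputed")
clf.fit(gram(X_tr, X_tr), y_tr)                  # train on the training-set kernel matrix
test_score = clf.score(gram(X_te, X_tr), y_te)   # rows: test points, columns: training points
print(test_score)
```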
Figures \[fig:moon\_dataset\](b)–(c) and \[fig:circle\_dataset\](b)–(c) show the performance of our kernel approach on the “moons” and “circles” datasets. For each dataset, we report the classification accuracy on the “training/test” datasets. Figure \[fig:moon\_dataset\](a) shows the moons dataset, generated using three levels of noise. Figure \[fig:moon\_dataset\](b) shows the results of our quantum classifier for the fully mixed state $\rho_{n}=\frac{\mathbb{1}_n}{N}$, and Fig. \[fig:moon\_dataset\](c) shows the results when the initial state is the pure state $\rho_{n}=|0\rangle\langle0|^{\otimes n}$. It is interesting to note that the classification accuracy is reduced when we use the pure state initialization instead of the mixed state initialization. The performance of our algorithm is consistent across the two datasets. In both cases, we observe that the quantum circuit learns to classify well (see Tables \[table:moons\] and \[table:circles\]), and also that the circuit prepared at the mixed state outperforms the one prepared at the pure state.
We also compare the performance of our quantum kernel approach with that of the classical RBF kernel. We consider the same training/test sizes as those used for the quantum case. Unlike the quantum kernel, for training the SVM using the classical RBF kernel, we have used five-fold cross-validation to ensure that we obtain the best results for the classical RBF kernel. Tables \[table:moons\] and \[table:circles\] provide a summary of the performance of the SVM using quantum and classical kernels, respectively. For both the moons and circles datasets, the classification results of the SVM using the quantum kernel with the mixed state are comparable to those of the SVM using the classical RBF kernel.
![(a) Plots of the synthesized moons datasets, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_moons method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.[]{data-label="fig:moon_dataset"}](moons_datasets.jpg "fig:"){width="45.00000%"}\
![(a) Plots of the synthesized moons datasets, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_moons method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.[]{data-label="fig:moon_dataset"}](moons_mixed_state.jpg "fig:"){width="45.00000%"}\
![(a) Plots of the synthesized moons datasets, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_moons method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.[]{data-label="fig:moon_dataset"}](moons_pure_state.jpg "fig:"){width="45.00000%"}\
![[(a) Plots of the synthesized circles dataset, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_circles method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.]{}[]{data-label="fig:circle_dataset"}](circles_datasets.jpg "fig:"){width="45.00000%"}\
![[(a) Plots of the synthesized circles dataset, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_circles method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.]{}[]{data-label="fig:circle_dataset"}](circles_mixed_state.jpg "fig:"){width="45.00000%"}\
![[(a) Plots of the synthesized circles dataset, with $2000$ samples, for a given noise (in the dataset) value $\zeta$. We use the make\_circles method in the dataset module to generate each dataset. Shown are the training/test scores (i.e., accuracy) for (b) registers in the mixed state and (c) registers in the pure state.]{}[]{data-label="fig:circle_dataset"}](circles_pure_state.jpg "fig:"){width="45.00000%"}\
------------- ------------- ------ ------------- ------ -------------- ------
                $\zeta=0.0$          $\zeta=0.1$          $\zeta=0.15$
               training       test   training      test   training       test
mixed state    1.0            1.0    1.0           1.0    0.99           0.98
pure state     0.96           0.96   0.95          0.92   0.93           0.90
RBF kernel     1.0            1.0    1.0           1.0    1.0            0.98
------------- ------------- ------ ------------- ------ -------------- ------
: A summary of the performance of the SVM algorithm using the quantum and classical kernels on the moons datasets (see Fig. $\ref{fig:moon_dataset}$). We report the training/test score of the classifier when the register’s qubits are initialized at mixed and pure states. The classical kernel is an RBF kernel. Here $\zeta$ denotes the noise values used in generating the datasets.
\[table:moons\]
------------- ------------- ------ ------------- ------ -------------- ------
                $\zeta=0.0$          $\zeta=0.05$         $\zeta=0.1$
               training       test   training      test   training       test
mixed state    1.0            1.0    0.98          0.97   0.84           0.82
pure state     0.89           0.88   0.83          0.83   0.73           0.73
RBF kernel     1.0            1.0    0.98          0.97   0.85           0.81
------------- ------------- ------ ------------- ------ -------------- ------
: A summary of the performance of the SVM algorithm using the quantum and classical kernels on the circles datasets (see Fig. $\ref{fig:circle_dataset}$). We report the training/test score of the classifier when the register’s qubits are initialized at mixed and pure states. The classical kernel is an RBF kernel. Here $\zeta$ denotes the noise values used in generating the datasets.
\[table:circles\]
[*The effect of noise.*]{} As a final remark, we wish to comment on how the effect of noise inherent to quantum gates may be characterized in our scheme. Note that in the absence of noise, we have $K(\vec{x},\vec{x})=1$. In practice, however, to take the noise into account, one must modify Eq. (\[kernel2\]) into the equation $\widetilde{K}(\vec{x},\vec{x}')=\text{Tr}(\rho_{n}\mathcal{\widetilde{U}}(\vec{x})\mathcal{\widetilde{U}}^{\dagger}(\vec{x}'))$, where $\mathcal{\widetilde{U}}$ denotes the noisy experimental implementation of $\mathcal{U}$. Note that, in general, $\widetilde{K}(\vec{x},\vec{x})\neq 1$. Having access to $\widetilde{K}(\vec{x},\vec{x})$, by measuring the control qubit, one can efficiently estimate the average fidelity, a measure of the impact of noise, by using $F(\vec{x})=\frac{|\widetilde{K}(\vec{x},\vec{x})|^{2}+N}{N^{2}+N}$ [@nielsen2002simple; @Poulin].
*Conclusion.* We have proposed a kernel-based scheme for QML, based on the DQC1 model. We have numerically tested our method to classify data points of two-dimensional synthesized datasets using a two-qubit circuit. Our work highlights the role of quantum correlations such as quantum discord in machine learning and benefits from the relationship between the fidelity of the process and the kernel function, to assess the effect of noise. Our method provides a framework for exploring the possibility of achieving quantum supremacy in machine learning, as it provides a means to efficiently estimate classically intractable kernels using NISQ devices.
Acknowledgement
===============
We thank Mark Schmidt for insightful discussions and for reviewing a draft of the paper. R. Ghobadi appreciates the multiple discussions he had with Artur Scherer. We greatly thank Marko Bucyk for reviewing and editing the manuscript. Partial funding for this work was provided by a Mitacs Elevate fellowship.
J. Preskill, Quantum **2** 330 (2018).
S. Aaronson and A. Arkhipov, *The computational complexity of linear optics*, Proceedings of the 43rd annual ACM symposium on Theory of computing, 2011, San Jose (ACM, New York, 2011), p. 333.
Michael J. Bremner, Ashley Montanaro, Dan J. Shepherd, Phys. Rev. Lett. **117**, 080501 (2016).
E. Knill, R. Laflamme, Phys. Rev. Lett. **81**, 5672 (1998).
S. Lloyd, Science, **273** 1073-1078 (1996).
A. Aspuru-Guzik, A. Dutoi, P. Love and M. Head-Gordon, Science, **309**, 1704(2005).
B. P. Lanyon, J. D. Whitfield, G. G. Gillett, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri [*et al.*]{}, Nat. Chem. **2**, 106 (2010).
A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, J. M. Gambetta, Nature **549**, 242 (2017).
S. Matsuura, T. Yamazaki, V. Senicourt and A, Zaribafiyan, arXiv:1810.11511 (2018).
E. Farhi, J. Goldstone, S. Gutmann, arXiv:1411.4028 (2014).
J. Biamonte, [*et al*]{}. Nature **549**, 195 (2017).
A. Perdomo-Ortiz, M. Benedetti, J. Realpe-Gómez, R. Biswas, Quantum Science and Technology **3**, 030502 (2018).
M. H. Amin, E. Andriyash, J. Rolfe, B. Kulchytskyy, R. Melko, Phys. Rev. X 8, 021050 (2018).
J. Otterbach, R. Manenti, N. Alidoust, A. Bestwick, M. Block, B. Bloom, S. Caldwell, N. Didier, E. S. Fried, S. Hong, [*et.al.*]{} arXiv:1712.05771 (2017).
E. Farhi, H. Neven, arXiv:1802.06002 (2018).
M. Schuld and N. Killoran, Phys. Rev. Lett. **122**, 040504 (2019).
V. Havl[í]{}[č]{}ek,A.D. C[ó]{}rcoles, Antonio, K. Temme, A.W. Harrow, A. Kandala, J. M. Chow and J. M. Gambetta Nature **567**, 209 (2019).
A. Datta, A. Shaji, C. M. Caves, Phys. Rev. Lett. **100**, 050502 (2008).
A. Datta, G. Vidal, Phys. Rev. A **75**, 042310 (2007).
K. Fujii, H. Kobayashi, T. Morimae, H. Nishimura, S. Tamate, and S. Tani, Phys. Rev. Lett. 120, 200502 (2018).
T. Hofmann, B. Sch[ö]{}lkopf, and A.J. Smola, The annals of statistics 1171 (2008).
Van Dam, W., Hallgren, S. and Ip, L. Quantum algorithms for some hidden shift problems. SIAM J. on Computing **36**, 763 (2006).
H.Avron and S. Toledo, Journal of ACM (JACM) **58**, 8 (2011).
B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White, Phys. Rev. Lett. **101**, 200501 (2008).
G. Passante, O. Moussa, D. A. Trottier, and R. Laflamme, Phys. Rev. A **84**, 044302 (2011).
W. Wang, B. Yadin, Y. Ma, J. Ma, Y. Xu, L. Hu, H. Wang, Y. P. Song, Mile Gu, L. Sun, arXiv:1806.05543 (2018).
C. Mansell, S. Bergamini, New Journal of Physics, **16**, 053045 (2014).
H. K. Lau, R. Pooser, G. Siopsis, and C. Weedbrook, Phys. Rev. Lett. **118**, 080501 (2017).
N. Liu, J. Thompson, C. Weedbrook, S. Lloyd, V. Vedral, M. Gu, K. Modi, Phys. Rev. A **93**, 052304 (2016).
R. Chatterjee and T. Yu, Quantum Information and Communication **17**, 1292 (2017).
M. O. Scully and M. S. Zubairy, Quantum optics (1999).
W. Rudin, [*Fourier analysis on groups*]{} (Courier Dover Publications 2017).
A. Rahimi, and R. Benjamin, [*Advances in neural information processing systems*]{} (2008) pp.1177-1184.
B. Bullins, C. Zhang, Yi Zhang, arXiv:1710.10230 , (2018).
M. A. Nielsen, Phys. Lett. A **303** (4): 249 (2002).
D. Poulin, R. Blume-Kohout, R. Laflamme, H. Ollivier, Phys. Rev. Lett. **92**, 177906 (2004).
[^1]: Corresponding author: farid.ghobadi@1qbit;\
jaspreet.oberoi@1qbit.com, ehsan.zahedinejad@1qbit.com
[^2]: The kernel is positive semi-definite if $\forall c_{i},c_{j}\in\mathbb{C}$, and $\forall \vec{x}_{i},\vec{x}_{j}\in X$, we have $\sum_{i,j}c_{i}c^{*}_{j}K(\vec{x}_{i},\vec{x}_{j})\geq0$
---
abstract: 'New graded modules for the current algebra of ${\mathfrak{sl}}_n$ are introduced. Relating these modules to the fusion product of simple ${\mathfrak{sl}}_n$-modules and local Weyl modules of truncated current algebras shows their expected impact on several outstanding conjectures. We further generalize results on PBW filtrations of simple ${\mathfrak{sl}}_n$-modules and use them to provide decomposition formulas for these new modules in important cases.'
address:
- 'Mathematisches Institut, Universität zu Köln, Germany'
- 'School of Mathematics and Statistics, University of Glasgow, UK'
author:
- Ghislain Fourier
bibliography:
- 'biblist-fusion.bib'
title: 'New homogeneous ideals for current algebras: filtrations, fusion products and Pieri rules'
---
[^1]
Introduction
============
We consider the simple complex Lie algebra ${\mathfrak{sl}}_n = {\mathfrak{b}} \oplus {\mathfrak{n}}^-$ and its current algebra ${\mathfrak{sl}}_n \otimes \bc[t]$. We fix a pair $(\lambda_1, \lambda_2)$ of dominant integral ${\mathfrak{sl}}_n$-weights. $F_{\lambda_1, \lambda_2}$ will be introduced as the cyclic ${\mathfrak{sl}}_n \otimes \bc[t]$-module defined by the homogeneous ideal generated by the kernel of an evaluation map of ${\mathfrak{b}} \otimes \bc[t]$ and certain monomials in $U({\mathfrak{n}}^- \otimes \bc[t])$. $F_{\lambda_1, \lambda_2}$ decomposes into simple, finite-dimensional ${\mathfrak{sl}}_n$-modules: $$F_{\lambda_1, \lambda_2} =_{{\mathfrak{sl}}_n} \bigoplus_{\tau \in P^+} V(\tau)^{\oplus a_{\lambda_1, \lambda_2}^{\tau}}.$$ As $F_{\lambda_1, \lambda_2}$ is a highest weight module, we have $a_{\lambda_1, \lambda_2}^{\lambda_1 + \lambda_2} = 1 \text{ and } a_{\lambda_1, \lambda_2}^{\tau} = 0 \text{ if } \tau \nleq \lambda_1 + \lambda_2.$ Moreover, ${\mathfrak{sl}}_n \otimes t^2 \bc[t].F_{\lambda_1, \lambda_2} = 0$ and hence $$F_{\lambda_1, \lambda_2} = U({\mathfrak{n}}^- \otimes \bc[t]/(t^2)).\mathbbm{1} \cong U({\mathfrak{n}}^-) S({\mathfrak{n}}^-).\mathbbm{1}.$$ Due to this observation, the ${\mathfrak{sl}}_n$-highest weight vectors and therefore the multiplicities $ a_{\lambda_1, \lambda_2}^{\tau}$ should be “controlled” by $S({\mathfrak{n}}^-).\mathbbm{1}$. This provides a close relation to the framework of PBW filtrations ([@FFoL11a; @FFoL13a]). By construction, $F_{\lambda_1, \lambda_2}$ is a quotient of $S({\mathfrak{n}}^-)/\mathcal{I}(\lambda_1, \lambda_2)$ with an induced ${\mathfrak{n}}^+$-action $\circ$, where the ideal is generated by $$U({\mathfrak{n}}^+) \circ \langle f_\alpha^{a_\alpha + 1}\, | \, \text{ for all positive roots } \alpha \rangle \subset S({\mathfrak{n}}^-)$$ for some $a_\alpha$ depending on $\lambda_1, \lambda_2$. Generalizing the results from [@FFoL11a Theorem and Theorem B] we see that a spanning set of $S({\mathfrak{n}}^-).\mathbbm{1}$ can be parameterized by integer points in a polytope defined through Dyck path conditions (Corollary \[span-m\]). This leads to the question whether one can give a polytope parametrizing the highest weight vectors. We give a positive answer in certain important cases:
\[intro-thm\] Suppose $\lambda_1, \lambda_2 $ satisfy one of the following:
1. $\lambda_1, \lambda_2$ are both rectangular weights, e.g. multiples of some fundamental weights $\omega_i, \omega_j$,
2. $\lambda_1$ is arbitrary and $\lambda_2$ is either $\omega_j$ or $k \omega_1$,
3. $\lambda_1 + w(\lambda_2)$ is dominant for all Weyl group elements,
then for all dominant weights $\tau$: $$a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}, \text{ the Littlewood-Richardson coefficients.}$$
Part (2) might be seen as a Pieri rule while part (3) covers $\lambda_1 \gg \lambda_2$. So for fixed $\lambda_2$ we cover the minimal case, e.g. $\lambda_1$ being a fundamental weight, and the large case, e.g. $\lambda_1 \gg \lambda_2$. Note that the results from [@CV13], [@V13] imply $a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$ for all $\tau \in P^+$ if $\lambda_1 = m \omega_i$ and the height of $\lambda_2$ is less than $m+1$. This covers of course part (1) of the theorem but we provide here a different proof using the relation to the PBW filtration.\
The paper is motivated by the search for homogeneous ideals in $U({\mathfrak{sl}}_n \otimes \bc[t])$ defining the fusion product $V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}$ of two simple ${\mathfrak{sl}}_n$-modules. This is the associated graded module of the tensor product of corresponding evaluation modules ([@FL99] and also Section \[fusion\]). These ideals can be deduced straightforwardly for ${\mathfrak{sl}}_2$ (Lemma \[fusion-sl2\]), and we generalize this to obtain generators for every ${\mathfrak{sl}}_2$-triple. The theorem implies that in the considered cases (Lemma \[whatever\]) $$F_{\lambda_1, \lambda_2} \cong V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}$$ and we conjecture that this is true for all pairs of dominant integral weights.\
Let us briefly explain why these modules $F_{\lambda_1, \lambda_2}$ and especially the conjectured isomorphism to the fusion product is of special interest. In fact, this is closely related to several important conjectures:\
The first one is the conjecture that the fusion product of finitely many tensor factors is independent of the evaluation parameter ([@FL99]). This independence has been proved for some classes of modules but so far not for arbitrary tuples of dominant integral weights. Note that the two-factor case can be deduced from straightforward calculations.\
The second conjecture is on Schur positivity of certain symmetric functions. In [@CFS14] (see also [@DP2007]) a partial order on pairs of dominant weights has been introduced. It is conjectured that along with the partial order, the difference of the products of the corresponding Schur functions is a non-negative linear combination of Schur functions (this has been conjectured also independently by Lam, Postnikov and Pylyavskyy), hence *Schur positive*. Note here, that this generalizes a conjecture on Schur positivity along row shuffles ([@Oko97; @FFLP05]), proved in [@LPP07].\
The third related conjecture is on local Weyl modules for truncated current algebras. Local Weyl modules for generalized current algebras, ${\mathfrak{sl}}_n \otimes A$, where $A$ is a commutative, associative, unital $\bc$-algebra, have gained much attention in the last two decades. Due to their homological properties they play an important role in the category of finite-dimensional ${\mathfrak{sl}}_n \otimes A$-modules, which is not semi-simple in general (for more see [@CFK10]).\
Although quite a lot of research has been done on local Weyl modules, their explicit character is known for a few algebras only. For $A = \bc[t^{\pm 1}], \bc[t]$ their character is given by the tensor product of fundamental modules for ${\mathfrak{sl}}_n$ ([@CL06; @FoL07]); for semi-simple, finite-dimensional $A$, the character is given by $\dim A$ copies of a simple ${\mathfrak{sl}}_n$-module. Besides these cases the character is not known for general local Weyl modules, not even for the “smallest” non-semi-simple algebra $A = \bc[t]/(t^2)$.\
It is conjectured that for $A = \bc[t]/(t^K)$ this character is also, similar to $\bc[t]$, given by the tensor product of simple ${\mathfrak{sl}}_n$-modules. We investigate here the $K=2$ case and prove that in this case the local Weyl modules are isomorphic to certain $F_{\lambda_1, \lambda_2}$; more precisely: $(\lambda_1, \lambda_2)$ is the unique maximal element in the aforementioned poset of pairs of dominant weights (adding up to a fixed $\lambda$).\
A proof that the $a_{\lambda_1, \lambda_2}^{\tau}$ are in fact the Littlewood-Richardson coefficients would imply the conjectures on Schur positivity and on local Weyl modules immediately (Lemma \[fusion-sp\], Lemma \[weyl-f\]) and would give another proof of the two-factor case of the independence conjecture (Lemma \[whatever\]).
The paper is organized as follows: In Section \[one\] we recall the basic definitions and in Section \[graded\] we introduce the modules $F_{\lambda_1, \lambda_2}$, proving first properties. In Section \[PBW\] we recall the PBW filtration and work out the relation to our new modules, while in Section \[fusion\] we recall the fusion products and work out their relation to our modules. In Section \[firstproof\] we give the proof for the ${\mathfrak{sl}}_2$-case and part (3) of Theorem \[intro-thm\]. Section \[KR\] contains the proofs of part (1) of Theorem \[intro-thm\] and Section \[pieri\] the proof of part (2). Section \[poset\] recalls the partial order on pairs of dominant weights and also local Weyl modules, and relates these constructions to the new modules.
**Acknowledgement:** The author would like to thank Evgeny Feigin for various discussions on these modules and explaining the calculations for the independence conjecture in the two-factor case, and further Christian Korff for asking about Pieri rules.
Preliminaries {#one}
=============
Let ${\mathfrak{g}} = {\mathfrak{sl}}_{n}(\bc)$, the special linear Lie algebra. We fix a triangular decomposition ${\mathfrak{sl}}_n = {\mathfrak{n}}^+ \oplus {\mathfrak{h}} \oplus {\mathfrak{n}}^-$ and denote a fixed set of simple roots $\Pi = \{ \alpha_1, \ldots, \alpha_{n-1}\}$; here we use the numbering from [@Bou02]. Further, we set $I =\{ 1, \ldots, n-1 \}$. The set of roots is denoted $R$, the set of positive roots $R^+$. Every root $\beta \in R^+$ can be expressed uniquely as $\alpha_i + \alpha_{i+1} + \ldots + \alpha_j$ for some $i \leq j$; we denote this root by $\alpha_{i,j}$. For $\alpha \in R$, we denote the root space $${\mathfrak{g}}_{\alpha} = \{ x \in {\mathfrak{sl}}_n \, | \, [h,x] = \alpha(h) x \; \forall \; h \, \in {\mathfrak{h}} \} = \langle x_\alpha^+ \, | \, \text{ if } \alpha \in R^+ \rangle.$$ Further, for $\alpha \in R^+$, we fix a ${\mathfrak{sl}}_2$-triple $\{ x_\alpha^+, x_{\alpha}^-, h_{\alpha} \}$. Denote by $P \subset {\mathfrak{h}}^*$, respectively $P^+$, the integral weights, respectively the dominant integral weights, and by $\{ \omega_1, \ldots, \omega_{n-1}\}$ the set of fundamental weights.\
We recall some notations and facts from representation theory. Let $V$ be a finite-dimensional ${\mathfrak{sl}}_{n}$-module; then $V$ decomposes into its weight spaces with respect to the ${\mathfrak{h}}$-action $$V = \bigoplus_{ \tau \in P } V_\tau = \bigoplus_{ \tau \in P } \{ v \in V \; | \; h.v = \tau(h).v \text{ for all } h \in {\mathfrak{h}}\}.$$ The set $P^+$ parameterizes the simple finite-dimensional modules. For $\lambda \in P^+$ we denote the simple, finite-dimensional ${\mathfrak{sl}}_n$-module of highest weight $\lambda$ by $V(\lambda)$. Further we denote by $v_{\lambda}$ a highest weight vector of $V(\lambda)$.
The category of finite-dimensional ${\mathfrak{sl}}_n$-modules is semi-simple, hence the tensor product of two simple modules decomposes into the direct sum of simple modules, so for $\lambda_1, \lambda_2 \in P^+$ $$V(\lambda_1) \otimes V(\lambda_2) \cong_{{\mathfrak{sl}}_n} \bigoplus_{\tau \in P^+} V(\tau)^{c_{\lambda_1, \lambda_2}^\tau}.$$ Here, $c_{\lambda_1, \lambda_2}^\tau$ denotes the multiplicity of the simple module $V(\tau)$ in a decomposition of the tensor product. These numbers are known as Littlewood-Richardson coefficients and there are several known formulas to compute them ([@Kli68; @N2002; @Lit94] to name but a few).
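For ${\mathfrak{sl}}_2$, for instance, these coefficients are made explicit by the classical Clebsch-Gordan formula, which we record only as an illustration:
$$V(a\omega_1)\otimes V(b\omega_1) \;\cong_{{\mathfrak{sl}}_2}\; \bigoplus_{j=0}^{\min\{a,b\}} V\big((a+b-2j)\omega_1\big),$$
so $c_{a\omega_1, b\omega_1}^{\tau} = 1$ for $\tau = (a+b-2j)\omega_1$ with $0\leq j \leq \min\{a,b\}$, and $c_{a\omega_1, b\omega_1}^{\tau} = 0$ otherwise.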
The vector space ${\mathfrak{sl}}_n \otimes \bc[t]$ equipped with the bracket $$[x \otimes p(t) , y \otimes q(t)] = [x,y] \otimes p(t)q(t) \; \; \forall \, x,y \in {\mathfrak{sl}}_n, p(t), q(t) \in \bc[t]$$ is a Lie algebra and called the *current algebra* of ${\mathfrak{sl}}_n$. One may also view this as the Lie algebra of regular functions on $\bc$ with values in ${\mathfrak{sl}}_n$ (see [@NSS12]). The natural grading on $\bc[t]$ induces a grading on $U({\mathfrak{sl}}_n \otimes \bc[t])$, where the component of degree $0$ is $U({\mathfrak{sl}}_n \otimes 1)$.
For a fixed $K \geq 1$, the *truncated current algebra* is the graded quotient of the current algebra $${\mathfrak{sl}}_n \otimes \bc[t] / ({\mathfrak{sl}}_n \otimes t^K \bc[t]) \cong {\mathfrak{sl}}_n \otimes \bc[t]/(t^K).$$ In this paper we will be dealing mainly with the $K=2$ case. Then $U({\mathfrak{sl}}_n \otimes \bc[t]/(t^2))$ can be seen as the smash product of $U({\mathfrak{sl}}_n)$ and the polynomial ring $S({\mathfrak{n}}^-)$ ([@Hag13]).
The representation theory of ${\mathfrak{sl}}_n \otimes \bc[t]$ has been subject to a lot of research during the last 25 years ([@CP01; @FF02; @CM04; @CL06; @FoL06; @FoL07; @Nao12] to name but a few). The most important property we should mention is that the category of finite-dimensional ${\mathfrak{sl}}_n \otimes \bc[t]$-modules is not semi-simple.\
Every simple, finite-dimensional module is the tensor product of evaluation modules ([@Rao93]). This is still true if we replace $\bc[t]$ by a commutative, finitely generated algebra $A$, where the evaluations are taken at pairwise distinct maximal ideals instead of at complex numbers [@CFK10].\
Although the simple, finite-dimensional modules are therefore easily described and quite well understood, the task of understanding the indecomposable modules is still unsolved (besides the case $A = \bc[t], \bc[t^{\pm}]$ and the cases where $A$ is finite-dimensional and semi-simple).\
Even in the case where $A$ is the two-dimensional truncated polynomial ring, $A = \bc[t]/(t^2)$, the category of finite-dimensional modules is far from being well understood. While the simple modules are in one-to-one correspondence to simple modules of ${\mathfrak{sl}}_{n}$ (by using the evaluation at the unique maximal ideal of $ \bc[t]/(t^2)$, [@CFK10]), there is not much known about indecomposables, projectives etc. We will return to this point in Section \[poset\].
Some new graded module {#graded}
======================
We introduce new graded modules for ${\mathfrak{sl}}_{n} \otimes \bc[t]$ as follows. For fixed $\lambda_1, \lambda_2 \in P^+$, let $$(\lambda_1 + \lambda_2) : {\mathfrak{h}} \longrightarrow \bc_{\lambda_1 + \lambda_2}$$ be the one-dimensional ${\mathfrak{h}}$-module. We extend this trivially to an action of ${\mathfrak{b}} = {\mathfrak{n}}^+ \oplus {\mathfrak{h}}$ on $\bc_{\lambda_1 + \lambda_2}$. And further, by evaluation at $t = 0$, we obtain a one-dimensional module $${\mathfrak{b}} \otimes \bc[t] \longrightarrow {\mathfrak{b}} \longrightarrow {\mathfrak{h}} \longrightarrow \bc_{\lambda_1 + \lambda_2}.$$ We consider the induced module for the subalgebra $({\mathfrak{b}} \otimes 1) \oplus ({\mathfrak{sl}}_n \otimes t\bc[t]) \subset {\mathfrak{sl}}_n \otimes \bc[t]$ $$\Ind_{{\mathfrak{b}} \otimes \bc[t]}^{{\mathfrak{b}} \otimes 1 \oplus {\mathfrak{sl}}_n \otimes t\bc[t]} \bc_{\lambda_1 + \lambda_2}$$ and denote by $M_{\lambda_1, \lambda_2}$ the quotient by the left ideal generated by $${\mathfrak{n}}^- \otimes t^2\bc[t] \text{ and } (f_{\alpha} \otimes t)^{\min \{\lambda_1(h_\alpha), \lambda_2(h_\alpha) \} + 1 }, \, \forall \; \alpha \in R^+.$$ We introduce the ${\mathfrak{sl}}_n \otimes \bc[t]$-module $F_{\lambda_1, \lambda_2}$ as the maximal integrable (as a ${\mathfrak{sl}}_n$-module) quotient $$F_{\lambda_1, \lambda_2} = \overline{\Ind_{{\mathfrak{b}} \otimes 1 \oplus {\mathfrak{sl}}_n \otimes t\bc[t]}^{{\mathfrak{sl}}_n \otimes \bc[t]} M_{\lambda_1, \lambda_2}}.$$
Due to the construction, we can give defining relations on a generator of $F_{\lambda_1, \lambda_2}$.
\[relation\] Let $\lambda_1, \lambda_2 \in P^+$ and $\lambda = \lambda_1 + \lambda_2$. Then $F_{\lambda_1, \lambda_2}$ is the ${\mathfrak{sl}}_n \otimes \bc[t]$-module generated by $w$ with relations $${\mathfrak{n}}^+ \otimes \bc[t].w = 0 , \ {\mathfrak{h}} \otimes t \bc[t].w = 0 , \ {\mathfrak{n}}^- \otimes t^2\bc[t].w = 0$$ and for all $\alpha \in R^+$ and $h \in {\mathfrak{h}}$: $$0 = (f_{\alpha} \otimes 1)^{\lambda(h_\alpha) + 1}.w = (f_\alpha \otimes t)^{\min\{\lambda_1(h_\alpha), \lambda_2(h_\alpha) \}+1}.w = (h\otimes 1 - \lambda(h)).w.$$
We have to deal with the ${\mathfrak{sl}}_n$-relation only. But since $F_{\lambda_1, \lambda_2}$ is integrable we have immediately $(f_\alpha \otimes 1)^{\lambda(h_\alpha) + 1}.\mathbbm{1} = 0$. Therefore $F_{\lambda_1, \lambda_2}$ is a quotient of the module given by the relations in the proposition. On the other hand, every module satisfying the relations is an integrable quotient of $\Ind_{{\mathfrak{b}} \otimes 1 \oplus {\mathfrak{sl}}_n \otimes t\bc[t]}^{{\mathfrak{sl}}_n \otimes \bc[t]} M_{\lambda_1, \lambda_2}$.
\[firstprop\] Let $\lambda_1, \lambda_2 \in P^+$. Then
1. $F_{\lambda_1, \lambda_2}$ is a non-negatively graded ${\mathfrak{sl}}_{n} \otimes \bc[t]$-module.
2. $F_{\lambda_1, \lambda_2}$ is finite-dimensional.
3. $F_{\lambda_1, \lambda_2}= \bigoplus_{s \geq 0} F_{\lambda_1, \lambda_2}^s$, and $ F_{\lambda_1, \lambda_2}^s$ is a ${\mathfrak{sl}}_n$-module.
4. $F_{\lambda_1, \lambda_2}$ has a unique simple quotient isomorphic to $V(\lambda_1 + \lambda_2)_0$.
5. ${\mathfrak{sl}}_{n} \otimes t^2 \bc[t]. F_{\lambda_1, \lambda_2} = 0$, and hence $F_{\lambda_1, \lambda_2}$ is a ${\mathfrak{sl}}_n \otimes \bc[t]/(t^2)$-module.
Part $(1)$ is clear, since the defining relations of $F_{\lambda_1, \lambda_2}$ are homogeneous and $U({\mathfrak{sl}}_n \otimes \bc[t])$ is non-negatively graded. Due to the defining relations, $F_{\lambda_1, \lambda_2}$ is a quotient of the local graded Weyl module for $U({\mathfrak{sl}}_n \otimes \bc[t])$ of highest weight $\lambda_1 + \lambda_2$, $W_{\bc[t]}(0, \lambda_1 + \lambda_2)$ (see Proposition \[f-weyl\] or [@CP01]; the details are not needed here). In [@CP01] it is shown that this local graded Weyl module is finite-dimensional, which implies $(2)$.\
Now, as ${\mathfrak{sl}}_n \cong {\mathfrak{sl}}_n \otimes 1 \hookrightarrow {\mathfrak{sl}}_n \otimes \bc[t]$, $F_{\lambda_1, \lambda_2}$ is also a finite-dimensional ${\mathfrak{sl}}_n$-module, hence decomposes into a direct sum of simple finite-dimensional ${\mathfrak{sl}}_n$-modules. Moreover, as $U({\mathfrak{sl}}_n)$ is the degree $0$ part of $U({\mathfrak{sl}}_n \otimes \bc[t])$, each graded component $F^s_{\lambda_1, \lambda_2}$ is a ${\mathfrak{sl}}_n$-module and $F_{\lambda_1, \lambda_2}$ is the direct sum of these components. This implies $(3)$.\
The degree $0$ component of $F_{\lambda_1, \lambda_2}$ is obviously isomorphic to $V(\lambda_1 + \lambda_2)$ as a ${\mathfrak{sl}}_n$-module. A standard argument shows that $$U({\mathfrak{sl}}_n \otimes \bc[t])({\mathfrak{sl}}_n \otimes t\bc[t]).\mathbbm{1}$$ is the maximal proper submodule not containing $\mathbbm{1}$. The quotient by this submodule is isomorphic to the graded evaluation module $V(\lambda_1 + \lambda_2)_0$, which gives $(4)$. Part $(5)$ follows again immediately from the defining relations.
Since $F_{\lambda_1, \lambda_2}^s$ is a ${\mathfrak{sl}}_{n}$-module, it has a decomposition into a direct sum of simple ${\mathfrak{sl}}_{n}$-modules $$F_{\lambda_1, \lambda_2}^s \cong_{{\mathfrak{sl}}_{n}} \bigoplus_{\tau \in P^+} V(\tau)^{\oplus a_{\lambda_1, \lambda_2}^\tau(s)}, \text{ for some } a_{\lambda_1, \lambda_2}^{\tau}(s) \geq 0.$$ We set $$a_{\lambda_1, \lambda_2}^{\tau} := \sum\limits_{s \geq 0} a_{\lambda_1, \lambda_2}^{\tau}(s),$$ so we have $$\dim \Hom_{{\mathfrak{sl}}_{n}}( F_{\lambda_1, \lambda_2}, V(\tau)) = a_{\lambda_1, \lambda_2}^{\tau} .$$
We see immediately from Proposition \[relation\]:
\[cor-hw\] Let $\lambda_1, \lambda_2 \in P^+$, then $$a_{\lambda_1, \lambda_2}^{\lambda_1 + \lambda_2} = 1 \, ; \, a_{\lambda_1, \lambda_2}^{\tau} = 0 \text { for } \tau \nleq \lambda_1 + \lambda_2.$$
The main theorem of the paper is the following:
\[main-thm\] Let $\lambda_1, \lambda_2\in P^+$, then we have $ a_{\lambda_1, \lambda_2}^{\tau} \geq c_{\lambda_1, \lambda_2}^{\tau}$, $\forall \; \tau \in P^+$.\
Moreover:
1. (Pieri rules) Let $\lambda_1 \in P^+$, $\lambda_2 \in \{ \omega_j, k \omega_1\}$ for some $j \in I$ or $k \geq 1$, then: $a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$, $\, \forall \, \tau \in P^+$.
2. Let $\lambda_1 = m_i \omega_i, \lambda_2 = m_j \omega_j$ for some $i, j \in I, m_i, m_j \geq 0$, then: $a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$, $\, \forall \, \tau \in P^+$.
3. If $\lambda_1 \gg \lambda_2$, then: $a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$, $\, \forall \, \tau \in P^+$.
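To illustrate the theorem in the smallest case: for ${\mathfrak{sl}}_2$ and $\lambda_1 = \lambda_2 = \omega_1$ the Clebsch-Gordan formula gives $V(\omega_1) \otimes V(\omega_1) \cong V(2\omega_1) \oplus V(0)$, so $c_{\omega_1, \omega_1}^{2\omega_1} = c_{\omega_1, \omega_1}^{0} = 1$ and $c_{\omega_1, \omega_1}^{\tau} = 0$ for all other $\tau$. Part $(1)$ then asserts $a_{\omega_1, \omega_1}^{\tau} = c_{\omega_1, \omega_1}^{\tau}$, i.e. $F_{\omega_1, \omega_1} \cong_{{\mathfrak{sl}}_2} V(2\omega_1) \oplus V(0)$, in particular $\dim F_{\omega_1, \omega_1} = 4$.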
The proofs will be given in the following sections, but we should note the following here:
\[first-rem\] In the proof we will see that $\lambda_1 \gg \lambda_2$ can be made precise by requiring $$c_{\lambda_1, \lambda_2}^{\tau} = \dim V(\lambda_2)_{\tau - \lambda_1}$$ for all $\tau \in P^+$. Note that this is equivalent to $\lambda_1 + w(\lambda_2) \in P^+$ for all $w \in W$, the Weyl group of ${\mathfrak{sl}}_n$.
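For example, for ${\mathfrak{sl}}_2$ the condition reads $m_1 \omega_1 \gg m_2 \omega_1 \Leftrightarrow m_1 \geq m_2$, since the $W$-orbit of $m_2\omega_1$ is $\{ \pm m_2 \omega_1 \}$. For ${\mathfrak{sl}}_3$ one has for instance $a\omega_1 + b\omega_2 \gg \omega_1$ if and only if $a \geq 1$ and $b \geq 1$, since the $W$-orbit of $\omega_1$ is $\{ \omega_1, \omega_2 - \omega_1, -\omega_2 \}$.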
From the work [@CV13; @V13] one can deduce further that $a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$ if $\lambda_1 = m \omega_i$ and $\lambda_2(h_\theta) \leq m$ (where $\theta$ is the highest root of ${\mathfrak{sl}}_n$). The authors use relations on Demazure modules and their fusion products, generalizing an approach presented in [@FoL06]. This of course includes $(2)$ of the theorem, but we give a new proof here that might be generalizable beyond rectangular weights.
PBW filtration and polytopes {#PBW}
============================
In this section we recall the PBW filtration and we will see how the results from [@FFoL11a] can be adapted here in order to understand the ${\mathfrak{sl}}_{n}$-structure on $F_{\lambda_1, \lambda_2}$.\
By the PBW theorem and the construction of $F_{\lambda_1, \lambda_2}$ as an induced module we know that $$F_{\lambda_1, \lambda_2} = U({\mathfrak{n^-}})U({\mathfrak{n^-}} \otimes t). \mathbbm{1}.$$ In order to understand the ${\mathfrak{sl}}_{n}$-decomposition of $ F_{\lambda_1, \lambda_2}$ it would be sufficient to parametrize all ${\mathfrak{sl}}_{n}$-highest weight vectors. The equation above suggests that this set of highest weight vectors should be controlled by $U({\mathfrak{n^-}} \otimes t). \mathbbm{1}$. We start by analyzing this.
We recall the notion of Dyck path from [@FFoL11a]:\
A Dyck path of length $s$ is a sequence of positive roots $$\mathbf{p} = (\beta_1, \ldots, \beta_s)$$ with $s\geq 1$ and such that if $\beta_i = \alpha_{k,\ell}$ then $\beta_{i+1} \in \{ \alpha_{k+1, \ell}, \alpha_{k, \ell +1} \}$. If $\beta_1 = \alpha_{k_1, \ell_1}$ and $\beta_s = \alpha_{k_s, \ell_s}$, then we call $$\alpha_{k_1, \ell_s} \text{ the base root of the path } \mathbf{p}, \text{ denoted by } \beta(\mathbf{p})$$ Denote the set of all Dyck paths by $\mathbb{D}$.
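To illustrate the definition (writing as usual $\alpha_{k, \ell} = \alpha_k + \alpha_{k+1} + \ldots + \alpha_\ell$ for $k \leq \ell$): for $n = 4$ the sequence $$\mathbf{p} = (\alpha_{1,1}, \alpha_{1,2}, \alpha_{2,2}, \alpha_{2,3})$$ is a Dyck path of length $4$, since in each step either the first or the second index increases by one. Here $\beta_1 = \alpha_{1,1}$ and $\beta_4 = \alpha_{2,3}$, so the base root is $\beta(\mathbf{p}) = \alpha_{1,3}$.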
The PBW filtration on $U({\mathfrak{n}}^-)$ is given as follows: $$U({\mathfrak{n}}^-)^{\leq s} = \left\langle x_1 \cdots x_r \, | \, 0 \leq r\leq s \text{ and } x_i \in {\mathfrak{n}}^- \right\rangle_{\bc}$$ The associated graded algebra is a commutative algebra isomorphic to $S({\mathfrak{n}}^-)$, the polynomial ring in ${\mathfrak{n}}^-$. The adjoint action of ${\mathfrak{n}}^+$ on ${\mathfrak{sl}}_n$ induces an action $\circ$ on $S({\mathfrak{n}}^-)$.\
We fix a tuple of non-negative integers $$\mathbf{a} := (a_\alpha) \in \bz_{\geq 0}^{n(n-1)/2}$$ and consider the ideal $\mathcal{I}(\mathbf{a}) \subset S({\mathfrak{n}}^-)$ given by $$\mathcal{I}(\mathbf{a}) = S({\mathfrak{n}}^-)\left\langle \sum_{\alpha \in R^+} U({\mathfrak{n}}^+) \circ f_\alpha^{a_{\alpha}+1} \right\rangle.$$
We fix $\mathbf{a} = (a_\alpha)$ and define a polytope in $ \mathbb{R}^{n(n-1)/2}$: $$\mathcal{P}(\mathbf{a}) = \left\{ (x_\alpha) \in \mathbb{R}^{n(n-1)/2} \, | \, \forall \; \mathbf{p} \in \mathbb{D}: \sum_{ \alpha \in \mathbf{p}} x_\alpha \leq a_{\beta(\mathbf{p})}\right\} .$$ We denote by $$S(\mathbf{a}) = \mathcal{P}(\mathbf{a}) \cap \mathbb{Z}_{\geq 0}^{n(n-1)/2}$$ the set of integer points in $ \mathcal{P}(\mathbf{a})$.\
This construction of the polytope covers the cases considered in [@FFoL11a; @FFoL11b; @FFoL13; @FFoL13a; @G11; @BD14], where $a_\alpha := \lambda(h_\alpha)$ for some fixed $\lambda \in P^+$.\
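For instance, for $n = 3$ the positive roots are $\alpha_{1,1}, \alpha_{1,2}, \alpha_{2,2}$, and the Dyck paths are the one-element paths together with $(\alpha_{1,1}, \alpha_{1,2})$, $(\alpha_{1,2}, \alpha_{2,2})$ and $(\alpha_{1,1}, \alpha_{1,2}, \alpha_{2,2})$, all three with base root $\alpha_{1,2}$. Hence $$S(\mathbf{a}) = \{ (x_{\alpha_{1,1}}, x_{\alpha_{1,2}}, x_{\alpha_{2,2}}) \in \mathbb{Z}_{\geq 0}^{3} \, | \, x_{\alpha_{1,1}} \leq a_{\alpha_{1,1}}, \; x_{\alpha_{2,2}} \leq a_{\alpha_{2,2}}, \; x_{\alpha_{1,1}} + x_{\alpha_{1,2}} + x_{\alpha_{2,2}} \leq a_{\alpha_{1,2}} \},$$ the remaining Dyck path inequalities being redundant on $\mathbb{Z}_{\geq 0}^{3}$. For $a_\alpha = \lambda(h_\alpha)$ these are exactly the lattice points of the polytope of [@FFoL11a], which parametrize a basis of $V(\lambda)$.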
We define further the degree and the weight of an integer point: Let $\mathbf{s} = (s_\alpha) \in \mathbb{Z}_{\geq 0}^{n(n-1)/2}$, then $$\deg (\mathbf{s}) = \sum_{\alpha \in R^+} s_\alpha \text{ and } {\operatorname{wt}}(\mathbf{s}) = \sum_{\alpha \in R^+} s_\alpha \alpha \in P.$$
Although our approach generalizes the construction provided in [@FFoL11a], we still obtain a similar result on a spanning set of $S({\mathfrak{n}}^-)/ \mathcal{I}(\mathbf{a})$ (see [@FFoL11a Theorem 2]). For this denote $$\mathbf{f}^{\mathbf{t}} = \prod_{\alpha \in R^+} f_\alpha^{t_\alpha} \in S({\mathfrak{n}}^-) \text{ where } \mathbf{t} = (t_\alpha) \in \mathbb{Z}_{\geq 0}^{n(n-1)/2}.$$
\[span-set\] We fix $\mathbf{a} = (a_\alpha) \in \mathbb{Z}_{\geq 0}^{n(n-1)/2}$, then $$\{ \overline{\mathbf{f}^{\mathbf{s}}} \; | \;\mathbf{s} \in S(\mathbf{a}) \}$$ is a spanning set of $S({\mathfrak{n}}^-)/ \mathcal{I}(\mathbf{a})$.
Here we follow the idea in [@FFoL11a]. ${\mathfrak{n}}^+$ acts by differential operators on $S({\mathfrak{n}}^-)$, namely $e_\alpha \circ f_\beta = f_{\beta - \alpha}$ if $\beta - \alpha$ is a positive root and $e_\alpha \circ f_\beta = 0$ otherwise. Using these differential operators and an appropriate total order $\prec$ on the monomials in $S({\mathfrak{n}}^-)$, we can prove in exactly the same way as [@FFoL11a Proposition 1] a straightening law. Namely if $\mathbf{s} \notin S(\mathbf{a})$, then $$\overline{\mathbf{f}^{\mathbf{s}}} = \overline{\sum_ { \mathbf{t} \prec \mathbf{s} } c_{\mathbf{t}} \mathbf{f}^{\mathbf{t}}}.$$ This implies now the lemma. For more details we refer to [@FFoL11a].
In [@FFoL11a], in the case $a_\alpha := \lambda(h_\alpha)$ for some fixed $\lambda \in P^+$, it was further proved that this set is in fact a basis. We cannot prove this here, although we conjecture that it is also true in our generality.
By construction $M_{\lambda_1, \lambda_2}$ is a cyclic $U({\mathfrak{n^-}} \otimes t)$-module. So there exists an ideal $\mathcal{I}_{\lambda_1, \lambda_2}$ such that $$M_{\lambda_1, \lambda_2} \cong U({\mathfrak{n^-}} \otimes t)/\mathcal{I}_{\lambda_1, \lambda_2}.$$ Since $M_{\lambda_1, \lambda_2}$ is a ${\mathfrak{n}}^+$-module, the ideal $\mathcal{I}_{\lambda_1, \lambda_2}$ is stable under the adjoint action of ${\mathfrak{n}}^+$ (on $U({\mathfrak{n}}^- \otimes t)$). Moreover the action is a graded action (where ${\mathfrak{n}}^+$ has degree $0$). Note that we have the identification $$S({\mathfrak{n}}^-) \cong U({\mathfrak{n}}^- \otimes t) \subset U({\mathfrak{sl}}_n \otimes \bc[t]/(t^2)) \text{ via } \mathbf{f}^{\mathbf{t}} \mapsto \prod_{\alpha} (f_\alpha \otimes t)^{t_\alpha}.$$ Then we have the obvious proposition:
\[prop-maps\] For $\lambda_1, \lambda_2 \in P^+$ we set $$\mathbf{a} := (a_\alpha) \text{ where } a_\alpha = \min\{\lambda_1(h_\alpha), \lambda_2(h_\alpha)\}.$$ Then we have maps of $S({\mathfrak{n}}^-)$-modules $$S({\mathfrak{n}}^-) / \mathcal{I}(\mathbf{a}) \twoheadrightarrow M_{\lambda_1, \lambda_2} \twoheadrightarrow U({\mathfrak{n}}^- \otimes t).\mathbbm{1} \subset F_{\lambda_1, \lambda_2}.$$
To emphasize the dependence on $\lambda_1, \lambda_2$, we denote the set of integer points $S(\mathbf{a})$ in this case by $S(\lambda_1, \lambda_2)$. Then, combining Proposition \[prop-maps\] and Lemma \[span-set\], we have:
\[span-m\] Fix $\lambda_1, \lambda_2 \in P^+$, then $$\{ \mathbf{f}^{\mathbf{s}}.\mathbbm{1} \; | \; \mathbf{s} \in S(\lambda_1, \lambda_2) \}$$ is, via the identification, a spanning set for $M_{\lambda_1, \lambda_2}$ and hence for $U({\mathfrak{n}}^- \otimes t).\mathbbm{1} \subset F_{\lambda_1, \lambda_2}$.
In order to identify the ${\mathfrak{sl}}_n$-highest weight vectors in $F_{\lambda_1, \lambda_2}$ with images of $\prod_{\alpha} (f_\alpha \otimes t)^{s_\alpha} .\mathbbm{1}$ for some $\mathbf{s} \in S(\lambda_1, \lambda_2)$, we introduce an appropriate filtration of $U({\mathfrak{n}}^- \otimes t)$. First we filter by the degree of $t$ and then further by the height of the weights. Finally, we filter further by a total order on the monomials.\
Recall that $U({\mathfrak{n}}^- \otimes t) \cong S({\mathfrak{n}}^-)$ if considered as the subalgebra in $U({\mathfrak{sl}}_n \otimes \bc[t]/(t^2))$, as we continue to do. Therefore $U({\mathfrak{n}}^- \otimes t)$ is naturally graded by $t$ and we denote the graded components by $U({\mathfrak{n}}^- \otimes t)^s$. For $\tau \in P$, we denote $$U({\mathfrak{n}}^- \otimes t)_\tau = \{ v \in U({\mathfrak{n}}^- \otimes t) \ | \ {\operatorname{wt}}(v) = \tau \}.$$ All weights of $U({\mathfrak{n^-}} \otimes t)$ are in $\bigoplus_{i \in I} \bz_{\leq 0} \alpha_i$. Let $\tau = \sum_{i \in I} a_ i \alpha_i \in \bigoplus_{i \in I} \bz_{\leq 0} \alpha_i$. Then we denote the height of $\tau$ by $$\hei(\tau) := \sum_{i \in I} - a_i.$$ So we have a filtration of the graded components $$U({\mathfrak{n}}^- \otimes t)^{s, \leq \ell} = \left\langle u \in U({\mathfrak{n}}^- \otimes t)^s_\tau \ | \ \hei(\tau) \leq \ell \right\rangle_{\bc}.$$ This is spanned by monomials of total degree $s$ whose weights have height less than or equal to $\ell$.\
On the other hand, $U({\mathfrak{n}}^- \otimes t)$ is $\bz_{\geq 0}^{n(n-1)/2}$ graded. Each graded component is one-dimensional, spanned by $\prod_{\alpha \in R^+} (f_\alpha \otimes t)^{s_\alpha}$ for some $\mathbf{s} \in \bz_{\geq 0}^{n(n-1)/2}$. We order the $n(n-1)/2$-tuples by first ordering the positive roots $$\alpha_{i,j} \leq \alpha_{k,\ell} :\Leftrightarrow i < k \text{ or } i = k \text{ and } j \leq \ell .$$ Using the lexicographic order $\leq$ we obtain an order on the monomials spanning $U({\mathfrak{n}}^- \otimes t)$.\
Combining this we introduce a finer filtration on $U({\mathfrak{n}}^- \otimes t)^s$. So given $s, \ell \geq 0$ and $\mathbf{n}\in \bz_{\geq 0}^{n(n-1)/2}$ with $\deg (\mathbf{n}) = s$, $\hei(-{\operatorname{wt}}(\mathbf{n})) = \ell$, we have $$U({\mathfrak{n}}^- \otimes t)^{s, \leq \ell}_{\leq \mathbf{n}} = U({\mathfrak{n}}^- \otimes t)^{s, < \ell} + \left\langle \prod_{\alpha \in R^+} (f_\alpha \otimes t)^{m_\alpha} \ | \ \deg(\mathbf{m}) = s \, , \, \hei( -{\operatorname{wt}}\mathbf{m}) = \ell \, , \, \mathbf{m} < \mathbf{n} \right\rangle_{\bc}.$$
We turn back to the module $F_{\lambda_1, \lambda_2}$ and recall its graded components $F^s_{\lambda_1, \lambda_2}$. We define $$\mathcal{F}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) := U({\mathfrak{sl}}_n) U({\mathfrak{n}}^- \otimes t)^{s, \leq \ell} .\mathbbm{1} \subseteq F_{\lambda_1, \lambda_2}^{s}.$$ By construction $$\mathcal{F}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) /\mathcal{F}^{ < \ell}(F_{\lambda_1, \lambda_2}^s)$$ is a ${\mathfrak{sl}}_n$-module and we have the following
Let $\mathbf{s} \in S(\lambda_1, \lambda_2)$, then the image of $$\mathbf{f}^{\mathbf{s}}.\mathbbm{1} \in \mathcal{F}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) /\mathcal{F}^{< \ell}(F_{\lambda_1, \lambda_2}^s)$$ is either $0$ or a ${\mathfrak{sl}}_n$-highest weight vector of weight $\lambda_1 + \lambda_2 - {\operatorname{wt}}(\mathbf{s} )$.
Since $e_{\beta}$ has positive weight, we see using the commutator relations that $$e_{\beta} \prod_{\alpha}(f_\alpha \otimes t)^{s_{\alpha}} \in U({\mathfrak{n}}^- \otimes t)^{s, \leq \ell} U({\mathfrak{n}}^+)_+ + U({\mathfrak{n}}^- \otimes t)^{s, < \ell} U({\mathfrak{n}}^+).$$ This implies that $$e_{\beta} \prod_{\alpha}(f_\alpha \otimes t)^{s_{\alpha}}.\mathbbm{1} = 0 \in \mathcal{F}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) /\mathcal{F}^{< \ell}(F_{\lambda_1, \lambda_2}^s).$$
We see that, by choosing this appropriate filtration, the highest weight vectors (for the ${\mathfrak{sl}}_n$-action) of the associated graded module of $F_{\lambda_1, \lambda_2}$ are of the form $\mathbf{f}^{\mathbf{s}}.\mathbbm{1}$ for some $\mathbf{s}$.\
By using the refinement of the filtration we can say even more. So given $s, \ell \geq 0$ and $\mathbf{n}\in \bz_{\geq 0}^{n(n-1)/2}$ with $\deg (\mathbf{n}) = s$, $\hei(-{\operatorname{wt}}(\mathbf{n})) = \ell$, we have $$\mathcal{F}_{\leq \mathbf{n}}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) := U({\mathfrak{sl}}_n) U({\mathfrak{n}}^- \otimes t)_{\leq \mathbf{n}}^{s, \leq \ell} .\mathbbm{1} \subset F_{\lambda_1, \lambda_2}^{s}.$$ Then the graded components $$\mathfrak{G}_{\mathbf{n}}^{s, \ell}(F_{\lambda_1, \lambda_2}) := \mathcal{F}_{\leq \mathbf{n}}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) / \left( \mathcal{F}^{< \ell}(F_{\lambda_1, \lambda_2}^s) + \sum_{ \mathbf{m} < \mathbf{n} } \mathcal{F}_{\leq \mathbf{m}}^{\leq \ell}(F_{\lambda_1, \lambda_2}^s) \right)$$ are simple ${\mathfrak{sl}}_n$-modules.
We have seen in Corollary \[span-m\] that the monomials corresponding to points in $S(\lambda_1, \lambda_2)$ are a spanning set of $U({\mathfrak{n}}^{-} \otimes t). \mathbbm{1}$.
We say $\mathbf{n} \in S(\lambda_1, \lambda_2)$ is a *highest weight point* if $\mathfrak{G}_{ \mathbf{n}}^{s, \ell}(F_{\lambda_1, \lambda_2}) $ is non-zero for $s = \deg(\mathbf{n})$ and $\ell = \hei(-{\operatorname{wt}}(\mathbf{n}))$. The set of highest weight points is denoted $S_{hw}(\lambda_1, \lambda_2)$.
Note that, since $F_{\lambda_1, \lambda_2}$ is an integrable ${\mathfrak{sl}}_n$-module, we have for all $\mathbf{s} \in S_{hw}(\lambda_1, \lambda_2)$ $$\lambda_1 + \lambda _ 2 - {\operatorname{wt}}(\mathbf{s}) \in P^+.$$
\[cor-upper\] For $\lambda_1, \lambda_2, \tau \in P^+$ we have $$\dim \Hom_{{\mathfrak{sl}}_n} (F_{\lambda_1, \lambda_2}, V(\tau)) = \sharp \{ \mathbf{s} \in S_{hw}(\lambda_1, \lambda_2) \, | \, {\operatorname{wt}}(\mathbf{s}) = \lambda_1 + \lambda_2 - \tau \}.$$ Moreover $$\dim \Hom_{{\mathfrak{sl}}_n} (F^s_{\lambda_1, \lambda_2}, V(\tau)) = \sharp \{ \mathbf{s} \in S_{hw}(\lambda_1, \lambda_2) \, | \, {\operatorname{wt}}(\mathbf{s}) = \lambda_1 + \lambda_2 - \tau \text{ and } \deg( \mathbf{s}) = s \}.$$
Fusion products {#fusion}
===============
In this section we recall the fusion product of two simple ${\mathfrak{sl}}_{n}$-modules and work out the relation to the modules $F_{\lambda_1, \lambda_2}$.
The following construction is due to [@FL99]. Recall the grading on $U({\mathfrak{sl}}_{n} \otimes \bc[t])$ given by the degree function on $\bc[t]$ and the induced filtration $$U({\mathfrak{sl}}_n \otimes \bc[t])^{\leq r} = \{ u \in U({\mathfrak{sl}}_n \otimes \bc[t]) \; | \; \deg(u) \leq r \}.$$ Then $U({\mathfrak{sl}}_n \otimes \bc[t])^{\leq 0} = U({\mathfrak{sl}}_n)$ and we set $U({\mathfrak{sl}}_n \otimes \bc[t])^{\leq -1} = 0$.\
Let $V(\lambda_1), \ldots, V(\lambda_k)$ be simple ${\mathfrak{sl}}_n$-modules of highest weights $\lambda_1, \ldots, \lambda_k$. Further let $c_1, \ldots, c_k$ be pairwise distinct complex numbers. Then $V(\lambda_i)$ can be endowed with the structure of a ${\mathfrak{sl}}_n \otimes \bc[t]$-module via $$x \otimes p(t).v = p(c_i)x.v \; \; \text{ for all } x \in {\mathfrak{sl}}_n, p(t) \in \bc[t], v \in V(\lambda_i),$$ we denote this module $V(\lambda_i)_{c_i}$. Then $$V(\lambda_1)_{c_1} \otimes \cdots \otimes V(\lambda_k)_{c_k}$$ is cyclic, generated by the tensor product of highest weight vectors $v_{\lambda_1} \otimes \cdots \otimes v_{\lambda_k}$ (moreover, it is simple [@Rao93; @CFK10]). The grading on $U({\mathfrak{sl}}_n \otimes \bc[t])$ induces a filtration on $V(\lambda_1)_{c_1} \otimes \cdots \otimes V(\lambda_k)_{c_k}$ $$U({\mathfrak{sl}}_n \otimes \bc[t])^{\leq r}. v_{\lambda_1} \otimes \cdots \otimes v_{\lambda_k}.$$ Since $U({\mathfrak{sl}}_n \otimes \bc[t])$ is graded, the associated graded is again a module for $U({\mathfrak{sl}}_n \otimes \bc[t])$, denoted usually by $$V(\lambda_1)_{c_1} \ast \cdots \ast V(\lambda_k)_{c_k},$$ and is called the *fusion product*. Recall that the graded components are $U({\mathfrak{sl}}_n)$-modules, since $U({\mathfrak{sl}}_n \otimes 1)$ is the degree $0$ component of $U({\mathfrak{sl}}_n \otimes \bc[t])$. Further, since we have not changed the ${\mathfrak{sl}}_n$-structure in this construction:
\[fusion-decom\] Let $\lambda_1, \lambda_2 \in P^+$, $c_1 \neq c_2 \in \bc$, then for all $\tau \in P^+$ $$\dim \Hom_{{\mathfrak{sl}}_n} (V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}, V(\tau)) = c_{\lambda_1, \lambda_2}^{\tau}.$$
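As an example, take ${\mathfrak{sl}}_2$, $\lambda_1 = \lambda_2 = \omega_1$ and $c_1 \neq c_2$. The tensor product $V(\omega_1)_{c_1} \otimes V(\omega_1)_{c_2}$ is four-dimensional, and $U({\mathfrak{sl}}_2 \otimes \bc[t])^{\leq 0}.v_{\omega_1} \otimes v_{\omega_1} = U({\mathfrak{sl}}_2).(v_{\omega_1} \otimes v_{\omega_1}) \cong V(2\omega_1)$. Hence the fusion product $V(\omega_1)_{c_1} \ast V(\omega_1)_{c_2}$ has graded components $V(2\omega_1)$ in degree $0$ and $V(0)$ in degree $1$, in accordance with the corollary.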
\[whatever\] For $\lambda_1, \lambda_2 \in P^+$, $c_1 \neq c_2 \in \bc$ we have a surjective map of ${\mathfrak{sl}}_{n} \otimes \bc[t]$-modules: $$F_{\lambda_1, \lambda_2} \twoheadrightarrow V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2},$$ moreover $a_{\lambda_1, \lambda_2}^{\tau} \geq c_{\lambda_1, \lambda_2}^{\tau}$, $ \forall \, \tau \in P^+$.
We prove the ${\mathfrak{sl}}_2$-case first. Here dominant integral weights are parameterized by $\bz_{\geq 0}$, and for $k \geq 0$ let $V(k) = \operatorname{Sym}^k \bc^2$. Fix $k \geq m \geq 0$. Then $$\dim (V(k)_{c_1} \otimes V(m)_{c_2})_{k + m - 2 \ell} = \begin{cases} \ell + 1 \text{ for } 0 \leq \ell \leq m \\ m + 1 \text{ for } m \leq \ell \leq k \\ k + m + 1- \ell \text{ for } k \leq \ell \leq k + m \end{cases}$$ Since $c_1 \neq c_2$, we see, using the Vandermonde determinant, that $$(f_\alpha \otimes t)^{m+1} v_k \otimes v_m \in \langle (f_\alpha \otimes 1)^{m+1} v_k \otimes v_m, \ldots ,(f_\alpha \otimes 1) (f_\alpha \otimes t)^m v_k \otimes v_m \rangle_{\bc}$$ since the weight space of weight $k + m - 2(m+1)$ is at most $(m+1)$-dimensional. This implies that $(f_\alpha \otimes t)^{m+1} v_k \otimes v_m$ is $0$ in the associated graded module.\
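To see this relation explicitly in the smallest instance $k = m = 1$: in $V(1)_{c_1} \otimes V(1)_{c_2}$ one computes $$(f_\alpha \otimes t)^{2}\, v_1 \otimes v_1 = 2 c_1 c_2 \, (f_\alpha v_1) \otimes (f_\alpha v_1) = c_1 c_2 \, (f_\alpha \otimes 1)^{2}\, v_1 \otimes v_1,$$ so $(f_\alpha \otimes t)^{2} v_1 \otimes v_1$ already lies in filtration degree $0$ and hence vanishes in the associated graded module.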
We see further that the weight space of weight $k+m -2$ is two-dimensional, spanned by the vectors $ (f_\alpha \otimes 1) v_k \otimes v_m$ and $(f_\alpha \otimes t) v_k \otimes v_m$. This implies that for $\ell \geq 2$, $(f_\alpha \otimes t^\ell) v_k \otimes v_m = 0$ in the fusion product; similarly we see that for all $\ell \geq 1$, $(h \otimes t^\ell) v_k \otimes v_m = 0$ in the fusion product.\
This implies that there is a surjective map of ${\mathfrak{sl}}_2 \otimes \bc[t]$-modules $$F_{k\omega,m\omega} \twoheadrightarrow V(k)_{c_1} \ast V(m)_{c_2}.$$ Let us turn to the general case. Let $\lambda_1, \lambda_2 \in P^+$, $c_1 \neq c_2 \in \bc$, $\alpha \in R^+$, and let $m = \min \{ \lambda_1(h_\alpha), \lambda_2(h_\alpha) \}$. By considering the ${\mathfrak{sl}}_2$-triple $\{ e_\alpha, h_\alpha, f_\alpha\}$ we see with the same argument as above that $$(f_\alpha \otimes t)^{m+1} v_{\lambda_1} \otimes v_{\lambda_2} \in \operatorname{ span } \{ (f_\alpha \otimes 1)^{m+1} v_{\lambda_1} \otimes v_{\lambda_2}, \ldots , (f_\alpha \otimes 1)(f_\alpha \otimes t)^m v_{\lambda_1} \otimes v_{\lambda_2} \}.$$ This implies that $ (f_\alpha \otimes t)^{m+1} v_{\lambda_1} \otimes v_{\lambda_2} = 0$ in the associated graded module. The remaining defining relations for $F_{\lambda_1, \lambda_2}$ are easily verified.
Using this lemma we have the following very interesting consequence:
If $\forall \, \tau \in P^+$: $a^{\tau}_{\lambda_1, \lambda_2} = c^{\tau}_{\lambda_1, \lambda_2}$, then for all $c_1 \neq c_2 \in \bc:$ $$V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2} \cong_{{\mathfrak{sl}}_{n} \otimes \bc[t] } F_{\lambda_1, \lambda_2}.$$ Moreover, the fusion product in this case is independent of the parameters $c_1, c_2$, providing another proof of a conjecture by B. Feigin and S. Loktev ([@FL99]).
By Lemma \[whatever\] we have for all $\lambda_1, \lambda_2 \in P^+$ and $c_1 \neq c_2 \in \bc$ a surjective map of ${\mathfrak{sl}}_n \otimes \bc[t]$-modules $$F_{\lambda_1, \lambda_2} \twoheadrightarrow V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}.$$ With Corollary \[fusion-decom\] we know that the multiplicity of $V(\tau)$ in the fusion product is $c_{\lambda_1, \lambda_2}^\tau$. By assumption, this is equal to $a_{\lambda_1, \lambda_2}^\tau$, which is the multiplicity of $V(\tau)$ in $F_{\lambda_1, \lambda_2}$. So the modules are isomorphic as ${\mathfrak{sl}}_n$-modules and hence by a dimension argument also as ${\mathfrak{sl}}_n \otimes \bc[t]$-modules.\
Since $F_{\lambda_1, \lambda_2}$ is a graded module and independent of any evaluation parameter, the same is true for the fusion product $V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}$.
First proofs for parts of the main theorem {#firstproof}
==========================================
We prove here the ${\mathfrak{sl}}_2$-case, namely $a_{m \omega_1, k \omega_1}^{\tau} = c_{m \omega_1, k \omega_1}^{\tau}$ for all $m, k \geq 0$ and $\tau \in P^+$. In the following section we prove the $\lambda_1 \gg \lambda_2$-case.
In this section we consider the ${\mathfrak{sl}}_2$-case. In this case, dominant integral weights are parametrized by non-negative integers.
\[fusion-sl2\] Let $m_1, m_2 \geq 0$, then for all $c_1 \neq c_2 \in \bc$ $$F_{m_1 \omega_1, m_2 \omega_1} \cong_{{\mathfrak{sl}}_2 \otimes \bc[t]} V(m_1 \omega_1)_{c_1} \ast V(m_2 \omega_1)_{c_2}.$$ Moreover, $a_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} = c_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1}$.
This proves Theorem \[main-thm\](1) for $A_1$.
Let $m_1, m_2 \in \mathbb{Z}_{\geq 0}$. By Lemma \[whatever\] it suffices to prove that for all $k \geq 0$ $$a_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} \leq c_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1}.$$ Suppose $m_1 \geq m_2$. Then the relations of $F_{m_1 \omega_1, m_2 \omega_1}$ can be rewritten as $$(h \otimes 1).\mathbbm{1} = (m_1 + m_2). \mathbbm{1} \; ; \; (f \otimes 1)^{m_1 + m_2 + 1}. \mathbbm{1} = 0 \; ; \; (f \otimes t)^{m_2 + 1}. \mathbbm{1}= 0,$$ while $({\mathfrak{n}}^- \otimes t^2 \bc[t] \oplus {\mathfrak{b}} \otimes t\bc[t] \oplus {\mathfrak{n}}^+ \otimes 1). \mathbbm{1} = 0$. By considering $F_{m_1 \omega_1, m_2 \omega_1}$ as an ${\mathfrak{sl}}_2$-module we see from the relations that it is generated by $$\{ \mathbbm{1}, (f \otimes t).\mathbbm{1}, (f \otimes t)^{2}.\mathbbm{1}, \ldots, (f \otimes t)^{m_2}.\mathbbm{1} \} .$$ This implies that $F_{m_1 \omega_1, m_2 \omega_1}$ is multiplicity free and moreover that $$a_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} = 1 \Rightarrow k = m_1 + m_2 - 2 \ell \text{ for some } \ell \in \{0, \ldots, m_2 \}.$$ The classical Clebsch-Gordan formula gives $$c_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} =1 \text{ for } k = m_1 + m_2 - 2 \ell \text{ with } \ell \in \{0, \ldots, m_2 \}, \text{ and } c_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} = 0 \text{ otherwise}.$$ This implies (with Lemma \[whatever\]) $$a_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} \leq c_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1} \leq a_{m_1 \omega_1, m_2 \omega_1}^{k \omega_1}.$$
Note here, that this elementary result follows also from [@FF02] and [@CV13].
Let $\lambda_1, \lambda_2 \in P^+$. We say $$\lambda_1 \gg \lambda_2 \; :\Leftrightarrow \; \lambda_1 + w(\lambda_2) \in P^+ \; \forall \, w \in W \; \Leftrightarrow \; c_{\lambda_1, \lambda_2}^{\tau} = \dim V(\lambda_2)_{ \tau - \lambda_1}
\, \forall \, \tau \in P^+.$$ This is certainly satisfied if $\lambda_1(h_\alpha) \gg \lambda_2(h_\alpha)$ for all $\alpha \in R^+$.\
Suppose now $\lambda_1 \gg \lambda_2$; then $\min\{ \lambda_1(h_\alpha), \lambda_2(h_\alpha) \} = \lambda_2(h_\alpha)$ for all $\alpha \in R^+$. This implies that if we define $$\mathbf{a} \in \bz^{n(n-1)/2} \text{ via } a_\alpha = \min \{ \lambda_1(h_\alpha), \lambda_2(h_\alpha) \},$$ then $a_\alpha = \lambda_2(h_\alpha)$. Let us denote by $V(\lambda_2)^{a}$ the associated graded module obtained from the PBW filtration of $U({\mathfrak{n}}^-)$ applied to the highest weight vector $v_{\lambda_2} \in V(\lambda_2)$ (see [@FFoL11a] for more details). This is a module for $S({\mathfrak{n}}^-)$, the associated graded algebra of $U({\mathfrak{n}}^-)$.
\[classicalpbw\] If $\lambda_1 \gg \lambda_2$, then $$S({\mathfrak{n}}^-)/\mathcal{I}(\mathbf{a}) \cong V(\lambda_2)^{a}.$$
This is nothing but [@FFoL11a Theorem A].
We are ready to prove:
If $\lambda_1 \gg \lambda_2$, then $$a_{\lambda_1, \lambda_2}^{\tau} = c_{\lambda_1, \lambda_2}^{\tau}$$ and $$F_{\lambda_1, \lambda_2} \cong_{{\mathfrak{sl}}_n \otimes \bc[t]} V(\lambda_1)_{c_1} \ast V(\lambda_2)_{c_2}$$ for all $c_1 \neq c_2 \in \bc$.
With Corollary \[cor-upper\] we see that $$a_{\lambda_1, \lambda_2}^{\tau} \leq \sharp \{ \mathbf{s} \in S(\lambda_1, \lambda_2) \, | \, {\operatorname{wt}}(\mathbf{s}) = \lambda_1 + \lambda_2 - \tau \}.$$ On the other hand, by Lemma \[whatever\], we have $$a_{\lambda_1, \lambda_2}^{\tau} \geq c_{\lambda_1, \lambda_2}^{\tau}.$$ By assumption $\lambda_1 \gg \lambda_2$, which implies (Remark \[first-rem\]) $$c_{\lambda_1, \lambda_2}^{\tau} = \dim V(\lambda_2)_{\tau - \lambda_1}.$$ Now [@FFoL11a Theorem B] gives in this case a parametrization of a basis of $V(\lambda_2)$ in terms of (in our notation) $S(\lambda_1, \lambda_2)$, namely $$\dim V(\lambda_2)_{\tau - \lambda_1} = \sharp \{ \mathbf{s} \in S(\lambda_1, \lambda_2) \, | \, {\operatorname{wt}}(\mathbf{s}) = \lambda_2 - (\tau -\lambda_1)\}.$$ This implies also $$a_{\lambda_1, \lambda_2}^{\tau} \leq c_{\lambda_1, \lambda_2}^{\tau},$$ hence the equality follows.
Rectangular weights {#KR}
===================
In this section we prove generators and relations for the fusion product of two arbitrary Kirillov-Reshetikhin modules. These modules are defined in the context of simple, finite-dimensional modules for the quantum affine algebra. They are indexed by a node $i \in I$, a level $m$ and an evaluation parameter $a \in \bc(q)^*$ and denoted $KR( m \omega_i, a)$. For more on their importance we refer here to the survey [@CH10].\
In this paper we consider the non-quantum analog (obtained through the $q\mapsto 1$ limit). In the ${\mathfrak{sl}}_n$-case, they are isomorphic to evaluation modules $V(m \omega_i)_{c}$ for some $c \in \bc$.\
We have seen in Lemma \[whatever\] that $$F_{m_i \omega_i, m_j \omega_j} \twoheadrightarrow V(m_i \omega_i)_{c_1} \ast V(m_j \omega_j)_{c_2}$$ for all $c_1 \neq c_2$. We want to prove that this map is in fact an isomorphism, so we have to show that for all $\tau \in P^+$ $$a_{m_i \omega_i, m_j \omega_j}^{\tau} = c_{m_i \omega_i, m_j \omega_j}^{\tau} .$$
First, we will give formulas for the right hand side. We refer here to [@N2002], where the decomposition of a tensor product is computed using the combinatorics of Young tableaux. A formula for the tensor product of $V(\lambda_1)$ with $V(\omega_1)$ is given explicitly there, as well as an induction procedure for general $V(\lambda_2)$. In the special case $\lambda_1 = m_i \omega_i$ and $\lambda_2 = m_j \omega_j$ one deduces in a straightforward way that for all $\tau \in P^+$: $$c_{m_i \omega_i, m_j \omega_j}^{\tau} \in \{ 0,1\}.$$ Moreover
For $i \leq j$, $c_{m_i \omega_i, m_j \omega_j}^{\tau}=1$ if and only if (setting $\omega_n = \omega_0 = 0$) $$\tau = m_i \omega_i + m_j \omega_j + \sum_{q = 0}^{ \min\{ i, j+i, n-j\}} b_q(\omega_{i -q} + \omega_{j + q} - \omega_i - \omega_j)$$ for some integers $b_q \geq 0$ with $$\sum_{q = 0}^{ \min\{ i, j+i, n-j\}} b_q \leq \min\{ m_i, m_j\}.$$
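As an illustration take $n = 3$, $i = 1$, $j = 2$ and $m_1 = m_2 = 1$. The $q = 0$ summand vanishes and the only other contributing term is $q = 1$, with $\omega_{0} + \omega_{3} - \omega_1 - \omega_2 = -\omega_1 - \omega_2$ and $b_1 \in \{0, 1\}$. Hence $c_{\omega_1, \omega_2}^{\tau} = 1$ exactly for $\tau \in \{ \omega_1 + \omega_2, 0 \}$, recovering $V(\omega_1) \otimes V(\omega_2) \cong V(\omega_1 + \omega_2) \oplus V(0)$.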
Second, we will compute $a_{m_i \omega_i, m_j \omega_j}^{\tau}$. For this we identify again $$\mathbf{f}^{\mathbf{s}} \leftrightarrow \prod_{\alpha } (f_{\alpha} \otimes t)^{s_\alpha}.$$ Recall from Section \[PBW\] (and [@FFoL11a]) that ${\mathfrak{n}}^+$ acts by differential operators on $S({\mathfrak{n}}^-)$. Here, we introduce a new class of operators as follows. Let $R^+_{\lambda_1, \lambda_2} = \{ \alpha \in R^+ \, | \, \lambda_1(h_\alpha) = \lambda_2(h_\alpha) = 0 \}$. Then ${\mathfrak{n}}^-_{\lambda_1, \lambda_2} = \langle f_\alpha \, | \, \alpha \in R^+_{\lambda_1, \lambda_2}\rangle $ is a subalgebra. We define for $\alpha \in R^+_{\lambda_1, \lambda_2}, \beta \in R^+$: $$f_\alpha \circ (f_\beta \otimes t) = \begin{cases} f_{\alpha + \beta} \otimes t \text{ if } \alpha + \beta \in R^+ \\ 0 \; \; \; \text{ else } \end{cases}$$ This is induced by the adjoint action of ${\mathfrak{n}}^-$ on ${\mathfrak{n}}^- \otimes t$ (we normalize if necessary here). Moreover
This action induces an action of differential operators on $U({\mathfrak{n}}^- \otimes t).\mathbbm{1} \subset F_{\lambda_1, \lambda_2}$.
This follows easily from the fact that ${\mathfrak{n}}^-_{\lambda_1, \lambda_2}.\mathbbm{1} = 0 \in F_{\lambda_1, \lambda_2}$.
In the following we will abbreviate $f_{\alpha_{k, \ell}}$ by $f_{k, \ell}$ and $s_{\alpha_{k, \ell}}$ by $s_{k, \ell}$. Denote further by $\mathbf{e}_{k,\ell}$ the basis vector of $\mathbb{R}^{n(n-1)/2}$ having $1$ in the coordinate corresponding to $\alpha_{k, \ell}$ and $0$ elsewhere. So let $\alpha \in R^+_{\lambda_1, \lambda_2}$ and $\gamma = \alpha + \beta \in R^+$, then $$f_\alpha \circ \mathbf{f}^{\mathbf{e}_\beta} = \mathbf{f}^{\mathbf{e}_\gamma}.$$
We turn to the case $\lambda_1 = m_i \omega_i, \lambda_2 = m_j \omega_j$. Let $\mathbf{s} \in S(\lambda_1, \lambda_2)$, then $s_{k, \ell} = 0$ for $\ell < j$ or $k > i$. The following is the crucial lemma, which gives an upper bound for the set of highest weight points.
\[diagonal\] Let $i\leq j \in I, m_i, m_j \geq 0$, and $p := \min \{i-1, n-1-j \}$, then $$U({\mathfrak{n}}^- \otimes t).\mathbbm{1} \subset U({\mathfrak{n}}^-) \langle (f_{i,j} \otimes t)^{a_0} (f_{i-1, j+1} \otimes t)^{a_1} \cdots (f_{i- p, j+p} \otimes t)^{a_p}. \mathbbm{1} \, | \, a_q \geq 0, \forall \,q \rangle.$$ Moreover we have $$S_{hw}(\lambda_1, \lambda_2) \subseteq \{ \mathbf{s} \in S(m_i \omega_i, m_j \omega_j) \, | \, s_{k, \ell} = 0 \text{ if } (k, \ell) \text{ is not of the form } (i-q, j+q) \text{ for some } q \}.$$
We have seen in Corollary \[span-m\] that $$\{ \mathbf{f}^{\mathbf{s}}.\mathbbm{1} \, | \, \mathbf{s} \in S(\lambda_1, \lambda_2) \}$$ generates $F_{\lambda_1, \lambda_2}$ as a $U({\mathfrak{n}}^{-})$-module.\
In our case $\lambda_1 = m_i \omega_i, \lambda_2 = m_j \omega_j$; let $\mathbf{s} \in \bz_{\geq 0}^{n(n-1)/2}$ with $s_{p, q} = 0$ for $q < j$ or $p > i$. Let $k, \ell$ be such that $ i-k > \ell - j$, $s_{k, \ell} \neq 0$ and the following two conditions hold: $$\begin{array}{ll}
\text{Condition }(1):& s_{r, \ell} = 0, \; \forall \, r=1, \ldots , k-1 \\
\text{Condition }(2): & s_{r,s} = 0 \text{ if } r < k \text{ and } s< j + i -r.
\end{array}$$ So $\mathbf{s}$ is of the form: $$\left(
\begin{array}{cccccccccccc}
0 & \ldots & & 0 & 0 & \ldots & & & \ldots & 0 & s_{i,j}\\
& & & & & \ldots & & & \ldots & s_{i-1, j +1} & s_{i,j+1}\\
\vdots & \ddots & & \vdots & \vdots & & & & \iddots & & \vdots\\
0& & & 0 & 0 & \ldots &0 & s_{i+j - \ell+1, \ell -1} & \ldots & & s_{i, \ell-1}\\
0 & \ldots & 0 & s_{k, \ell} & s_{k+1, \ell} & \ldots & s_{i+j - \ell, \ell} & & \ldots&& s_{i, \ell}\\
s_{1, \ell+1} & \ldots & & & & \ldots & & & \ldots&& s_{i, \ell+1}\\
\vdots & \ddots & & & & \ddots& & &\ddots &&\vdots\\
s_{1, n-1} & \ldots & & & & \ldots & & & \ldots&& s_{i, n-1}
\end{array}
\right)$$ We consider $$f_{k,k} \circ \left( \mathbf{f}^{\mathbf{s} - \mathbf{e}_{k, \ell} + \mathbf{e}_{k+1,\ell}}\right).\mathbbm{1}$$ By expanding this we see that $$(s_{k+1, \ell}+1)\mathbf{f}^{\mathbf{s}}. \mathbbm{1} = \left[ f_{k,k} \circ \left( \mathbf{f}^{\mathbf{s} - \mathbf{e}_{{k, \ell}} + \mathbf{e}_{{k+1,\ell}}}\right) -
\sum\limits_{z = \ell +1}^{n-1} s_{k+1, z}
\left( \mathbf{f}^{\mathbf{s} - \mathbf{e}_{k,\ell} + \mathbf{e}_{k+1, \ell} + \mathbf{e}_{k, z} - \mathbf{e}_{k+1, z} } \right)\right]. \mathbbm{1}.$$ By iterating this we see that $$\mathbf{f}^{\mathbf{s}}. \mathbbm{1} \in \sum_{ \mathbf{n}} U({\mathfrak{n}}^-) \mathbf{f}^{\mathbf{n}}. \mathbbm{1}$$ where the sum is over all $\mathbf{n} \in \bz_{\geq 0}^{n(n-1)/2}$ satisfying Conditions (1) and (2) and moreover $n_{k,\ell}=0$.\
Using induction along the first row, then along the second row etc, we see that $$\mathbf{f}^{\mathbf{s}}. \mathbbm{1} \in \sum_{ \mathbf{n}} U({\mathfrak{n}}^-) \mathbf{f}^{\mathbf{n}}. \mathbbm{1}$$ where $n_{{k,\ell}} = 0 $ for all $k,\ell$ with $i - k > \ell - j$.\
A similar computation for the roots below the diagonal shows that we can assume also $n_{{ k , \ell}} = 0$ for all $(k, \ell)$ not of the form $(i - q, j + q)$ for some $q$. This proves the first part of the lemma. The claim on highest weight points follows now from the definition of $F_{\lambda_1, \lambda_2}$, namely $$(f_{i - q, j+ q} \otimes t)^K. \mathbbm{1} = 0 \text{ for } K > \min\{ m_i , m_j \}.$$
The following gives a stricter upper bound for the set of highest weight points.
Let $i\leq j \in I, m_i, m_j \geq 0$, $p = \min \{i-1, n-1-j \}$, then $$S_{hw}(\lambda_1, \lambda_2) \subseteq \{ a_0 \mathbf{e}_{i,j} + a_1 \mathbf{e}_{i-1, j+1} + \ldots + a_p \mathbf{e}_{i-p, j+p} \, | \, \min\{ m_i, m_j \} \geq a_0 \geq a_1 \geq \ldots \geq a_ p \geq 0 \}.$$
We just have to check that these are the only points among the ones described in Lemma \[diagonal\] whose monomials applied to $\mathbbm{1}$ give vectors of dominant weight. The weight of the vector $$(f_{i,j} \otimes t)^{a_0} (f_{i-1, j+1} \otimes t)^{a_1} \cdots (f_{i- p, j+p} \otimes t)^{a_p}. \mathbbm{1}$$ is equal to $$m_i \omega_i + m_j \omega_j + \sum_{q = 0}^{p} a_q (\omega_{i-q-1} + \omega_{j+q+1} - \omega_{i-q} - \omega_{j+q} ).$$ This is equal to $$(m_i - a_0) \omega_i + (a_0 - a_1) \omega_{i-1} + \ldots + (a_{p-1} - a_p) \omega_{i-p} + (m_j - a_0) \omega_j + \ldots + (a_{p-1} - a_p)\omega_{j+p},$$ which is dominant if and only if $a_0 \geq a_1 \geq \ldots \geq a_p$.
Keep the notation from the proof and set $b_q := a_q - a_{q+1} \geq 0$ (with $a_{p+1} := 0$), then the weight of $$(f_{i,j} \otimes t)^{a_0} (f_{i-1, j+1} \otimes t)^{a_1} \cdots (f_{i- p, j+p} \otimes t)^{a_p}. \mathbbm{1}$$ is equal to $$m_i\omega_i + m_j \omega_j + \sum_{q = 0}^{p} b_q(\omega_{i-q-1} + \omega_{j+q+1} - \omega_{i} - \omega_{j})$$ with $$\sum_{q = 0}^{p} b_q = a_0 \leq \min\{m_i, m_j\}.$$ Comparing with the parametrization of the tensor product decomposition above, this implies
Let $i, j\in I$, $m_i, m_j \geq 0$, then $$a_{m_i \omega_i, m_j \omega_j}^{\tau} = c_{m_i \omega_i, m_j \omega_j}^{\tau}$$ for all $\tau \in P^+$ and hence $$F_{m_i \omega_i, m_j \omega_j} \cong_{{\mathfrak{sl}}_n \otimes \bc[t]} V(m_i \omega_i)_{c_1} \ast V(m_j \omega_j)_{c_2}$$ for all $c_1 \neq c_2 \in \bc$.
The Pieri rules {#pieri}
===============
In this section we want to compute the ${\mathfrak{sl}}_n$ decomposition of $F_{\lambda, \omega_j}$ and $F_{\lambda, k \omega_1}$. In fact, we want to identify them with the fusion products of $V(\lambda)$ with $V(\omega_j)$ (resp. $V(k \omega_1)$). As for the Kirillov-Reshetikhin modules we will show that $a_{\lambda, \omega_j}^{\tau} = c_{\lambda, \omega_j}^{\tau}$ for all $\tau$, and similarly for $k \omega_1$. Let us start with the latter case.\
On one hand, using again the Young tableaux combinatorics from [@N2002], we see that the highest weight vectors of $V(\lambda) \otimes V(k \omega_1)$ are parameterized by the set $$T_{\lambda, k \omega_1} := \{ ( b_1, \ldots, b_{n} ) \in \mathbb{Z}^{n}_{\geq 0} \, | \, b_1 + \ldots + b_{n} = k \text{ and } b_j \leq m_{j-1} \; \forall \; j = 2, \ldots, n \}$$ where $\lambda = \sum m_i \omega_i$.\
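For instance, for $n = 3$, $\lambda = \omega_1 + \omega_2$ and $k = 2$ we obtain $$T_{\lambda, 2\omega_1} = \{ (2,0,0), (1,1,0), (1,0,1), (0,1,1) \},$$ corresponding to the classical Pieri decomposition $$V(\omega_1 + \omega_2) \otimes V(2\omega_1) \cong V(3\omega_1 + \omega_2) \oplus V(\omega_1 + 2\omega_2) \oplus V(2\omega_1) \oplus V(\omega_2).$$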
Let $\mathbf{s} \in S_{hw}(\lambda, k\omega_1) \subseteq S(\lambda, k \omega_1)$, then $s_{i,j} = 0$ if $i \neq 1$. So there is no confusion if we write in the following $s_j$ for $s_{1,j}$. We have $s_{1} + \ldots + s_{n-1} \leq k$. Suppose now $s_{j} > m_j$ for some $j > 1$, then by the definition of $F_{\lambda, k \omega_1}$ $$f_{j}^{s_{j}}. \mathbbm{1} = 0.$$ This implies, recalling the notation of Section \[KR\], $$\mathbf{f}^{\mathbf{s} + s_j \mathbf{e}_{1, j-1} - s_j \mathbf{e}_{1, j}}f_{j}^{s_{j}}. \mathbbm{1} = 0.$$ Using commutator relations we have for some constants $c_k$ $$\sum_{k = 0}^{s_j}c_k f_{j}^{k} \mathbf{f}^{\mathbf{s} + k ( \mathbf{e}_{1, j-1} - \mathbf{e}_{1, j})}. \mathbbm{1} = 0.$$ This implies that $$\mathbf{f}^{\mathbf{s}}.\mathbbm{1} \in \sum_{\mathbf{n}} U({\mathfrak{n}}^-) \mathbf{f}^{\mathbf{n}}.\mathbbm{1}$$ for some $\mathbf{n}$ with $n_\ell = s_\ell$ for $\ell > j$ and $ n_j < s_j$. But this contradicts $\mathbf{s}\in S_{hw}(\lambda, k\omega_1)$.\
This implies that for $\mathbf{s} \in S_{hw}(\lambda, k\omega_1)$ we have $$s_{i,j} = 0 \text{ for } i \neq 1 \; , \; s_{j} \leq m_j \; , \; s_{1} + \ldots + s_{n-1} \leq k.$$ This implies $|S_{hw}(\lambda, k\omega_1) | \leq |T_{\lambda, k \omega_1} |$. Using Lemma \[whatever\] we now obtain equality, and so
For $\lambda \in P^+$, $k \geq 0$, we have $$a_{\lambda, k \omega_1}^{\tau} = c_{\lambda, k\omega_1}^{\tau} \text{ for all } \tau \in P^+$$ and so for all $c_1 \neq c_2 \in \bc$ $$F_{\lambda, k \omega_1} \cong V(\lambda)_{c_1}\ast V(k\omega_1)_{c_2}.$$
We consider here the $\omega_j$-case. As before, using the Young tableaux combinatorics from [@N2002], we have that the highest weight vectors of $V(\lambda) \otimes V(\omega_j)$ are parameterized by the set ($\lambda = \sum m_i \omega_i$) $$T_{\lambda, \omega_j} := \{ ( b_1 < \ldots < b_{j} ) \, | \, b_i \in \{1, \ldots, n\} \text{ such that } b_{i-1} \neq b_{i} - 1 \Rightarrow m_{b_i-1} \neq 0\}.$$ Let $\mathbf{s} \in S_{hw}(\lambda, \omega_j) \subseteq S(\lambda, \omega_j)$, then $s_{k, \ell} = 0$ if $\ell < j $ or $k > j$. For every Dyck path $\mathbf{p} = (\beta_1, \ldots, \beta_s)$ we have $s_{\beta_1} + \ldots + s_{\beta_s} \leq 1$. This implies that $s_\beta \in \{ 0, 1\}$ for all $\beta$ and, even more, that the support of $\mathbf{s}$ is of the form $$\{ \alpha_{i_1, j_1} , \ldots, \alpha_{i_\ell, j_\ell} \} \text{ with } i_1 < i_2 < \ldots < i_\ell \leq j \leq j_\ell < \ldots < j_1 .$$ Let us parametrize this set as follows. Let $ \alpha_{i_1, j_1} , \ldots, \alpha_{i_\ell, j_\ell}$ be such a support and denote $$\{ p_1 < \ldots < p_{j-\ell} \} := \{1, \ldots, j\}\setminus \{ i_1, \ldots, i_\ell \} .$$ Then we associate $$\alpha_{i_1, j_1} , \ldots, \alpha_{i_\ell, j_\ell} \leftrightarrow (p_1 < p_2 < \ldots < p_{j-\ell} < j_{\ell} + 1 < \ldots < j_{1} + 1).$$ This gives a one-to-one correspondence with $j$-tuples of strictly increasing integers less than or equal to $n$, hence parametrizes a basis of $V(\omega_j)$.\
Since we are interested in the highest weight vectors, we can exclude those tuples corresponding to vectors in $F_{\lambda, \omega_j}$ of non-dominant weight. The weight of the vector corresponding to $(p_1 < p_2 < \ldots < p_j)$ is given by $$\lambda + (\omega_{p_1} - \omega_{p_1-1}) + (\omega_{p_2} - \omega_{p_2-1}) + \ldots + (\omega_{p_j} - \omega_{p_j-1}).$$ With a short calculation one sees that this is dominant if and only if $p_i \neq p_{i+1} - 1 \Rightarrow m_{p_{i+1}-1} > 0$.\
This implies that $a_{\lambda, \omega_j}^{\tau} \leq c_{\lambda, \omega_j}^{\tau}$ for all $\tau \in P^+$. Together with Lemma \[whatever\] this gives equality for all $\tau$, which proves
For $\lambda \in P^+$, $j \in \{ 1, \ldots, n-1\} = I$, we have $$a_{\lambda, \omega_j}^{\tau} = c_{\lambda, \omega_j}^{\tau} \text{ for all } \tau \in P^+$$ and so for all $c_1 \neq c_2 \in \bc$ $$F_{\lambda, \omega_j} \cong V(\lambda)_{c_1}\ast V(\omega_j)_{c_2}.$$
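For instance, for $n = 3$, $j = 2$ and $\lambda = m_1 \omega_1 + m_2 \omega_2$ the tuples $(1,2)$, $(1,3)$ and $(2,3)$ give the weights $\lambda + \omega_2$, $\lambda + \omega_1 - \omega_2$ and $\lambda - \omega_1$, the latter two being dominant only if $m_2 \geq 1$, respectively $m_1 \geq 1$. This recovers the classical Pieri rule $$V(\lambda) \otimes V(\omega_2) \cong V(\lambda + \omega_2) \oplus V(\lambda + \omega_1 - \omega_2) \oplus V(\lambda - \omega_1),$$ where the last two summands occur only under the stated conditions.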
Partial order and Weyl modules {#poset}
==============================
In [@CFS14] a partial order on pairs of dominant weights has been introduced. Let us recall here briefly the construction. Fix $\lambda \in P^+$ and consider the partitions of $\lambda$ with two parts $$P(\lambda,2) = \{ (\lambda_1, \lambda_2) \in P^+ \times P^+ \, | \, \lambda_1 + \lambda_2 = \lambda \}.$$ By abuse of notation we denote by $P(\lambda,2)$ the orbits of the natural $S_2$ action on $P(\lambda,2)$. In [@CFS14], the following partial order has been introduced on $P(\lambda,2)$: Let $\blambda = ( \lambda_1, \lambda - \lambda_1), \bmu = (\mu_1, \lambda - \mu_1) \in P(\lambda,2)$, then $$\blambda \preceq \bmu : \Leftrightarrow \; \forall \, \alpha \in R^+ : \min\{ \lambda_1(h_\alpha), (\lambda - \lambda_1)(h_\alpha)\} \leq \min \{ \mu_1(h_\alpha), (\lambda - \mu_1)(h_\alpha) \}.$$ Certain properties of this poset were proved in [@CFS14] (and [@F14b]), e.g. there exists a smallest element in $P(\lambda,2)$, the orbit of $(\lambda, 0)$. It is less obvious that there exists also a unique maximal element: let $\lambda = \sum_{i = 1}^{n-1} m_i \omega_i$, and let $\{1\leq i_1 < \ldots < i_k \} = I_{odd}$ be the indices such that $m_i$ is odd. Then $\blambda^{\max} = (\lambda_1^{\max}, \lambda_2^{\max})$, given by $$\lambda_1^{\max}=
\sum_{j=1}^k ((m_{i_j}+ (-1)^j)/2)\omega_{i_j}+\sum_{i\in I \setminus I_{odd}}(m_i/2)\omega_i, \qquad
\lambda_2^{\max}=\lambda-\lambda_1^{\max},$$ is the unique maximal orbit in $P(\lambda,2)$, [@CFS14 Proposition 5.3].\
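For example, for $n = 4$ and $\lambda = 3\omega_1 + 2\omega_2 + \omega_3$ we have $I_{odd} = \{1, 3\}$, and the formula gives $$\lambda_1^{\max} = \omega_1 + \omega_2 + \omega_3, \qquad \lambda_2^{\max} = 2\omega_1 + \omega_2,$$ so $\lambda_1^{\max}(h_{\alpha_i})$ and $\lambda_2^{\max}(h_{\alpha_i})$ differ by at most $1$ for every simple root $\alpha_i$. For ${\mathfrak{sl}}_2$ and $\lambda = m\omega_1$ one obtains the pair $(\lfloor m/2 \rfloor \omega_1, \lceil m/2 \rceil \omega_1)$, which reappears in the proof of Theorem \[weyl-f\] below.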
It was further shown that the cover relation of $\preceq$ on $P(\lambda,2)$ is determined by the Weyl group action [@CFS14 Proposition 6.1].
We want to relate the partial order and the modules $F_{\lambda_1, \lambda_2}$. Namely, we want to prove the following lemma:
\[fusion-sp\] Suppose $(\lambda_1, \lambda - \lambda_1) \preceq (\mu_1, \lambda - \mu_1) \in P(\lambda,2)$, then there exists a canonical surjective map of ${\mathfrak{sl}}_n \otimes \bc[t]$-modules $$F_{\mu_1, \lambda - \mu_1} \twoheadrightarrow F_{\lambda_1, \lambda - \lambda_1}.$$
We have to compare the defining relations only. So let $\alpha \in R^+$, then on both modules we have $$(f_\alpha \otimes 1)^{\lambda(h_\alpha) + 1}.\mathbbm{1} = 0$$ and also the highest weight is in both cases $\lambda$. Let $M_1 = \min\{ \mu_1(h_\alpha), (\lambda - \mu_1)(h_\alpha) \}$ and $M_2 = \min\{ \lambda_1(h_\alpha), (\lambda - \lambda_1)(h_\alpha) \}$, then by assumption $M_1 \geq M_2$. By the defining relations of $F_{\lambda_1, \lambda - \lambda_1}$ we have $$(f_\alpha \otimes t)^{M_2 + 1}. \mathbbm{1} = 0 \in F_{\lambda_1, \lambda - \lambda_1},$$ so in particular $$(f_\alpha \otimes t)^{M_1 + 1}. \mathbbm{1} = 0 \in F_{\lambda_1, \lambda - \lambda_1}.$$ This implies the lemma.
We turn to the unique maximal element in $P(\lambda,2)$, $\blambda^{\max} = (\lambda_1^{\max}, \lambda_2^{\max})$. In fact we want to identify $F_{ \lambda_1^{\max}, \lambda_2^{\max}}$ as the unique graded local Weyl module of ${\mathfrak{sl}}_n \otimes \bc[t]/(t^2)$ of highest weight $\lambda$. For this we recall the definition of a local Weyl module briefly in the following.\
Let $A$ be a commutative, finitely generated unital algebra over $\bc$. Then ${\mathfrak{sl}}_n \otimes A$ is a Lie algebra with bracket given by $$[x \otimes p, y \otimes q]= [x,y] \otimes pq,$$ and it is called the generalized current algebra. We fix $\lambda \in P^+$; this induces a one-dimensional ${\mathfrak{h}}$-module, which we denote $\bc_{\lambda}$. Let $\xi: ({\mathfrak{n^+}} \oplus {\mathfrak{h}}) \otimes A \longrightarrow {\mathfrak{h}}$ be a Lie algebra homomorphism. Then we can extend the structure on $\bc_{\lambda}$ via $\xi$ to a $ ({\mathfrak{n^+}} \oplus {\mathfrak{h}}) \otimes A$-module structure, and we denote this one-dimensional module by $\bc_{\lambda, \xi}$.
The local Weyl module $W_A(\xi, \lambda)$ is the maximal integrable (as a ${\mathfrak{sl}}_n$-module) quotient of the ${\mathfrak{sl}}_n \otimes A$-module $$U({\mathfrak{sl}}_n \otimes A) \otimes_{ ({\mathfrak{n}}^+ \oplus {\mathfrak{h}} ) \otimes A} \bc_{\lambda, \xi}.$$
These modules have been introduced for $A= \bc[t^{\pm 1}]$ in [@CP01] and further generalized in [@FL04] and [@CFK10] to arbitrary commutative associative algebras over $\bc$. It has been shown in [@CFK10] that if $A$ is finitely generated, $W_A(\xi, \lambda)$ is finite-dimensional and further that these modules are parameterized by maximal ideals in a tensor product of symmetric powers of $A$.\
These modules play an important role in the representation theory of ${\mathfrak{sl}}_n \otimes A$, the interested reader is here referred to [@CFK10].\
As they are integrable as ${\mathfrak{sl}}_n$-modules, they decompose into direct sums of finite-dimensional simple ${\mathfrak{sl}}_n$-modules. Unfortunately, these decompositions are known in special cases only. Namely, for $A= \bc[t], \bc[t^{\pm 1}]$ they are computed in a series of papers [@CP01], [@CL06], [@FoL07]. If $A$ is semi-simple, then the local Weyl module obviously decomposes into a direct sum of local Weyl modules for ${\mathfrak{sl}}_n \otimes \bc = {\mathfrak{sl}}_n$, so into a direct sum of simple ${\mathfrak{sl}}_n$-modules.\
But outside of these cases, even for the “smallest” non-semi-simple algebra $A = \bc[t]/(t^2)$, the ${\mathfrak{sl}}_n$ decomposition is unknown.\
Let us rewrite the defining relations for the local Weyl modules for $A = \bc[t]/(t^K)$. In fact, for each $\lambda \in P^+$ and $K\geq 1$, there exists a unique local Weyl module. This follows since there exists a unique non-trivial map $\lambda \circ \xi$, namely $\xi$ is the evaluation map at $t = 0$, so we denote $\xi$ by $0$.
\[localweyl\] Let $\lambda \in P^+$, then the *graded local Weyl module* $W_{\bc[t]/(t^K)}(0,\lambda)$ is generated by $w \neq 0$ with relations $$({\mathfrak{n}}^+ \oplus {\mathfrak{h}}) \otimes t\bc[t].w = 0 \;, \; (h - \lambda(h)). w = 0 \; , \; {\mathfrak{n}}^+.w = 0 \; , \; (f_{\alpha} \otimes 1)^{\lambda(h_\alpha) + 1}.w = 0$$ for all $h \in {\mathfrak{h}}$ and $\alpha \in R^+$.
Since the relations are homogeneous, we see that $W_{\bc[t]/(t^K)}(0,\lambda)$ is a graded ${\mathfrak{sl}}_n \otimes \bc[t]/(t^K)$-module. Even more, we have immediately from the defining relations
\[f-weyl\] Let $\lambda_1 + \lambda _2 = \lambda \in P^+$ and $K \geq 2$, then there exists a surjective map of ${\mathfrak{sl}}_n \otimes \bc[t]$-modules $$W_{\bc[t]/(t^K)}(0,\lambda) \twoheadrightarrow F_{\lambda_1, \lambda_2}.$$
In fact $ F_{\lambda_1, \lambda_2}$ is the quotient obtained by factoring out the $U( {\mathfrak{sl}}_n \otimes \bc[t])$-submodule generated by $$\{ (f_\alpha \otimes t)^{\min\{ \lambda_1(h_\alpha), \lambda_2(h_\alpha) \} +1 }.\mathbbm{1} \, | \, \alpha \in R^+ \} \cup \{ (f_\alpha \otimes t^{\ell}).\mathbbm{1} \, | \, \ell \geq 2, \alpha \in R^+ \}.$$
In this subsection we are restricting ourselves to the case of the second truncated current algebra, and we denote $A = \bc[t]/(t^2)$. We will prove
\[weyl-f\] Let $\lambda \in P^+$ and $\blambda^{\max} = (\lambda_1^{\max}, \lambda_2^{\max})$ be the unique maximal element in $P(\lambda, 2)$. Then we have an isomorphism of ${\mathfrak{sl}}_n \otimes A$-modules (and by extending an isomorphism of ${\mathfrak{sl}}_n \otimes \bc[t]$-modules): $$W_A(0,\lambda) \cong F_{ \lambda_1^{\max}, \lambda_2^{\max}}.$$
We consider the ${\mathfrak{sl}}_2$-case first. Then $\lambda = m \omega$, and because $e, e \otimes t, h \otimes t$ act trivially on the generator $w$, $$W_A(0,\lambda) = \operatorname{span} \{ f^K (f \otimes t)^L .w \, | \, K, L \geq 0 \}.$$ So if we restrict to elements in degree $L$ (recall that $W_A(0,\lambda)$ is graded by the degree of $t$), then this is spanned by $$\{ f^K (f \otimes t)^L.w \, | \, K \geq 0 \}.$$ The weights in degree $L$ are therefore of the form $m - 2L - 2K$ with $K \geq 0$. Every graded component is a finite-dimensional ${\mathfrak{sl}}_2$-module, since ${\mathfrak{sl}}_2$ acts in degree $0$ and $W_A(0,\lambda)$ is finite-dimensional. This implies that the component of degree $L$ in $W_A(0,\lambda)$ is $0$ if there is no vector of dominant weight in degree $L$. So for $L > \lfloor m/2 \rfloor$ we have $(f \otimes t)^L.w = 0$.\
On the other hand, $\blambda^{\max} = ( \lfloor m/2 \rfloor \omega, \lceil m/2 \rceil \omega)$, which implies that $ F_{\lambda_1^{\max}, \lambda_2^{\max} }$ is the quotient of $W_A(0,\lambda)$ by the submodule generated by $$(f \otimes t)^L.w \text{ with } L > \lfloor m/2 \rfloor.$$ This implies that $W_A(0,\lambda) \cong F_{\lambda_1^{\max}, \lambda_2^{\max} }$.\
Let us turn to the general case. We have $$\min \{ \lambda_1^{\max}(h_\alpha), \lambda_2^{\max}(h_\alpha) \} = \lfloor \lambda(h_\alpha)/2 \rfloor.$$ It is enough to show that $(f_\alpha \otimes t)^{ \lfloor \lambda(h_\alpha)/2 \rfloor + 1}.\mathbbm{1} = 0 \in W_A(0, \lambda)$ for all $\alpha$.\
Fix $\alpha > 0$ and consider the Lie subalgebra ${\mathfrak{sl}}(\alpha) \otimes A = \langle e_\alpha, h_\alpha, f_\alpha, e_\alpha \otimes t, h_\alpha \otimes t, f_\alpha \otimes t \rangle$ which is isomorphic to ${\mathfrak{sl}}_2 \otimes A$.\
We consider the submodule $M = U({\mathfrak{sl}}(\alpha) \otimes A).\mathbbm{1} \subseteq W_A(0,\lambda)$. Then this is a quotient of the ${\mathfrak{sl}}_2 \otimes A$ local Weyl module $W_A(0, \lambda(h_\alpha) \omega)$ (since the defining relations are satisfied on the highest weight vector).\
The considerations above for the ${\mathfrak{sl}}_2$-case imply now that $$(f_\alpha \otimes t)^{\lfloor \lambda(h_\alpha)/2 \rfloor + 1}.\mathbbm{1} = 0 \in M \subseteq W_A(0,\lambda).$$ This implies that $W_A(0, \lambda)$ is a quotient of $F_{\lambda_1^{\max}, \lambda_2^{\max}}$ and hence they are isomorphic.
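For instance, for ${\mathfrak{sl}}_2$ and $\lambda = 3\omega$ the theorem gives $W_A(0, 3\omega) \cong F_{\omega, 2\omega}$, which by Theorem \[fusion-sl2\] is isomorphic to $V(\omega)_{c_1} \ast V(2\omega)_{c_2}$ for any $c_1 \neq c_2$; as a ${\mathfrak{sl}}_2$-module it decomposes as $V(3\omega) \oplus V(\omega)$ and has dimension $6$.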
[^1]: The author was partially supported by the DFG priority program 1388 “Representation Theory“